Article

HAPPY: Hip Arthroscopy Portal Placement Using Augmented Reality

1 Chair for Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, 85748 Munich, Germany
2 Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21205, USA
3 Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
J. Imaging 2022, 8(11), 302; https://doi.org/10.3390/jimaging8110302
Submission received: 6 October 2022 / Revised: 31 October 2022 / Accepted: 3 November 2022 / Published: 6 November 2022

Abstract

Correct positioning of the endoscope is crucial for successful hip arthroscopy. Only with adequate alignment can the anatomical target area be visualized and the procedure be successfully performed. Conventionally, surgeons rely on anatomical landmarks such as bone structure, and on intraoperative X-ray imaging, to correctly place the surgical trocar and insert the endoscope to gain access to the surgical site. One factor complicating the placement is deformable soft tissue, as it can obscure important anatomical landmarks. In addition, the commonly used endoscopes with an angled camera complicate hand–eye coordination and, thus, navigation to the target area. Adjusting for an incorrectly positioned endoscope prolongs surgery time, requires a further incision and increases the radiation exposure as well as the risk of infection. In this work, we propose an augmented reality system to support endoscope placement during arthroscopy. Our method comprises the augmentation of a tracked endoscope with a virtual augmented frustum to indicate the reachable working volume. This is further combined with an in situ visualization of the patient anatomy to improve perception of the target area. For this purpose, we highlight the anatomy that is visible in the endoscopic camera frustum and use an automatic colorization method to improve spatial perception. Our system was implemented and visualized on a head-mounted display. The results of our user study indicate the benefit of the proposed system compared to baseline positioning without additional support, such as an increased alignment speed, improved positioning error and reduced mental effort. The proposed approach might aid in the positioning of an angled endoscope, and may result in better access to the surgical area, reduced surgery time, less patient trauma, and less X-ray exposure during surgery.

1. Introduction

Arthroscopy is a mostly outpatient procedure used to diagnose or treat abnormalities and disorders in growing joints. It can enable the diagnosis and treatment of a broad variety of non-inflammatory, inflammatory, and infectious types of arthritis, as well as various injuries within the joint. To perform an arthroscopic procedure, orthopedic surgeons insert an arthroscope, a tube-like viewing instrument, through a small incision. The incision size depends on the size of the examined joint, but can typically be approximated by the size of a buttonhole. The arthroscope includes a small tube that contains optical fibers and lenses, which is connected to a video camera. In a typical setup, the camera view visualizing the interior of the examined joint is displayed on an external monitor.
Minimal access techniques through surgical trocars seek to perform surgical procedures while avoiding the morbidity associated with conventional surgical wounds [1]. Although hip arthroscopy is becoming more common for a growing array of indications due to improvements in instrumentation and techniques, it still involves considerable challenges. Due to the hip joint’s overall structure and difficulties with direct access and visualization, several portal-placement-related complications can occur. In obese patients, the greater soft-tissue envelope can make patient positioning, portal placement, and instrument triangulation significantly more difficult. The learning curve for hip arthroscopy among surgeons is typically referred to as “steep”, meaning that the necessary skills and experience are difficult to attain while minimizing complications. These issues are associated with the challenges of the procedure, such as the limited maneuverability, the depth of the joint, and the distance of the hands from the tip of the operating instruments [2]. In particular, the viewing direction of the endoscopic camera is often angled and not aligned with the axis of the instrument. Many conventional arthroscopic systems are angled by 30° [3]. This drastically increases the complexity of the hand–eye coordination required to navigate the endoscope to the target anatomy.
In this work, we propose an augmented reality (AR) system to support endoscope positioning for arthroscopy. Tracking the endoscopic instrument as well as the patient’s anatomy facilitates the augmentation of the angled endoscope frustum and in situ visualization of patient anatomy from pre-operative CT data. To evaluate the effectiveness of our system, we conduct a user study with a phantom setup, comparing the performance of our AR system to baseline insertions without additional support. Finally, we report and discuss our results, confirming the effectiveness of our AR system.

2. Related Work

To date, there are only a few works that integrate components related to our proposed system. To overcome the challenges of portal placement, Traub et al. [4] introduced an AR application that provides portal placement planning as well as in situ visualization for minimally invasive cardiovascular surgery. However, in their system, the portal placement planning was performed offline before the procedure and visualized on a see-through 2D video screen.
In recent years, AR has become more accessible with the use of commercially available head-mounted displays (HMDs). Applications of AR include robotics, manufacturing, product assembly, medical training, and image-guided procedures [5,6,7,8]. Similar to the augmentation of the endoscope frustum in our work, Fotouhi et al. [9] introduced the concept of Interactive Flying Frustums, which store spatially aware surgical data for C-arm re-orientation and alignment. For each acquired X-ray, the frustum was augmented in space and the X-ray image was visualized within it, allowing a comparison of various C-arm poses. The proposed AR solution showed clear benefits in reducing the operating time and radiation dose while maintaining similar accuracy [10] for K-wire insertion and the placement of the acetabular cup. In our system, we leveraged a similar frustum augmentation strategy. While, in [10], the augmentations were used to visualize X-ray images after acquisition with a C-arm, we use the frustum for real-time endoscope navigation support with additional in situ visualization.
Furthermore, Qian et al. [11] proposed an AR assistance system for minimally invasive robotic surgery. To estimate the orientation and position of the endoscope, the camera frustum was augmented on an HMD and the surgical environment was reconstructed from a point cloud stream within the camera frustum.
To visualize anatomical structures inside the body in an in situ AR visualization, Bichlmeier et al. [12] proposed a method of contextual anatomic mimesis. The augmented anatomy inside a focus area was indicated by an outline, visually separating the real patient anatomy from the virtual content, and transparency levels were adjusted to support perceptual understanding. A similar approach was later integrated into intra-operative settings [13]. Spatial understanding and depth perception play an important role in AR applications for the correct perception of the relationship between real and virtual content. Many works [14,15,16,17] have addressed techniques that could further improve the spatial understanding of virtual augmentations in an AR system. In domains that render purely virtual volumetric data, colorization in combination with shadowing has been shown to provide useful perceptual cues for understanding depth [18,19].
In this paper, we propose a method that combines several state-of-the-art AR techniques to enhance hip arthroscopy and to overcome the challenges of the current endoscope positioning procedure. To the best of our knowledge, this is the first time an HMD-based AR prototype has been introduced for optimal endoscope positioning in hip arthroscopy.

3. Methodology

This section describes the components of the proposed AR system. Those include the calibration of the endoscopic instrument, described in Section 3.1, the calibration between an external tracking system and the HMD space, described in Section 3.2, and the augmentation of the endoscope frustum, as well as the in situ visualization of the patient anatomy, described in Section 3.3 and Section 3.4.

3.1. Tool Calibration

Since the marker is rigidly attached to the endoscope, the calibration problem can be formulated as a pivot calibration. The resulting system of equations can be solved if we rotate the endoscope around a fixed point and collect the tracking poses of the marker. For every tracked pose, the homogeneous rigid body transformation $^{\text{tool}}T_{\text{tracker}}^{(i)}$ is defined as:

$$^{\text{tool}}T_{\text{tracker}}^{(i)} = \begin{bmatrix} R_i & t_i \\ 0 & 1 \end{bmatrix} \tag{1}$$
Therefore, for all transformations, we have the following formulation:
$$\begin{bmatrix} p_{WCS} \\ 1 \end{bmatrix} = \begin{bmatrix} R_i & t_i \\ 0 & 1 \end{bmatrix} \begin{bmatrix} p_{TCS} \\ 1 \end{bmatrix} \tag{2}$$
or
$$R_i \, p_{TCS} - p_{WCS} = -t_i \tag{3}$$
where $p_{WCS}$ is the translation from the tracker’s frame to the pivoting point and $p_{TCS}$ is the translation from the marker to the tip. We can re-write and stack Equation (3) to form the overdetermined linear equation system:
$$\begin{bmatrix} R_1 & -I \\ \vdots & \vdots \\ R_n & -I \end{bmatrix} \begin{bmatrix} p_{TCS} \\ p_{WCS} \end{bmatrix} = \begin{bmatrix} -t_1 \\ \vdots \\ -t_n \end{bmatrix} \tag{4}$$
Finally, to approximate the best solution, we adopted least-squares minimization using the Moore–Penrose inverse [20].
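For illustration, a minimal NumPy sketch of this pivot calibration (our own code, not the authors’ implementation; it assumes the rotations $R_i$ and translations $t_i$ have already been extracted from the tracked marker poses) could look as follows:

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """Solve the stacked system [R_i, -I] [p_TCS; p_WCS] = -t_i (Eq. 4)
    in the least-squares sense via the Moore-Penrose pseudoinverse.

    rotations:    list of n 3x3 rotation matrices R_i
    translations: list of n 3-vectors t_i
    returns:      (p_TCS, p_WCS) -- tool tip in the marker frame and
                  the pivot point in the tracker frame
    """
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, t) in enumerate(zip(rotations, translations)):
        A[3 * i:3 * i + 3, 0:3] = R            # coefficient of p_TCS
        A[3 * i:3 * i + 3, 3:6] = -np.eye(3)   # coefficient of p_WCS
        b[3 * i:3 * i + 3] = -np.asarray(t)
    x = np.linalg.pinv(A) @ b                  # Moore-Penrose least-squares solution
    return x[:3], x[3:]
```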

3.2. Tracking Space to HMD Space Calibration

Due to the limitations of image-based tracking accuracy and tracking volume on the HMD, an outside-in tracking system was employed. However, the introduction of an external tracking system brought a new coordinate frame to the setup. Therefore, a calibration procedure to register the HMD to the tracking system needed to be performed, as shown in Figure 1. We first defined five unique points $x_{HMD}^{(1\ldots5)}$ on an image marker that can be tracked by the HMD. Then, we used a tool tracked by the tracking system to mark the same point set, obtaining the corresponding points $p_{tracker}^{(1\ldots5)}$ in the tracker coordinate frame. We could therefore use a point-set registration method to solve for the transformation between the two point sets, which essentially solves the following equation:
$$^{W}T_{tracker} = \underset{^{W}\hat{T}_{tracker}}{\arg\min} \sum_{i} d\!\left( ^{W}\hat{T}_{tracker} \, p_{tracker}^{(i)}, \; x_{HMD}^{(i)} \right) \tag{5}$$
where $^{W}T_{tracker}$ is the estimated transformation between the two point sets, i.e., the calibration matrix between the tracker and HMD space. To solve for this transformation, we decomposed it into a rotation component $^{W}R_{tracker}$ and a translation component $^{W}t_{tracker}$ and estimated these sequentially. The detailed algorithm is given in Algorithm 1.
Algorithm 1 Tracking Space to HMD Space Calibration
$\overline{(\,\cdot\,)}$: expected value
procedure PointSetsRegistration(M pairs)         ▹ M = 5
      $\overline{x_{HMD}} = \frac{1}{M} \sum_{i=1}^{M} x_{HMD}^{(i)}$
      $\overline{p_{tracker}} = \frac{1}{M} \sum_{i=1}^{M} p_{tracker}^{(i)}$
      for N = 1 to M do
            $x_{HMD}^{*(N)} = x_{HMD}^{(N)} - \overline{x_{HMD}}$
            $p_{tracker}^{*(N)} = p_{tracker}^{(N)} - \overline{p_{tracker}}$
      end for
      $^{W}R_{tracker} = \underset{R}{\arg\min} \sum_{i=1}^{M} \left\| R \, p_{tracker}^{*(i)} - x_{HMD}^{*(i)} \right\|^{2}$
      $^{W}t_{tracker} = \overline{x_{HMD}} - {}^{W}R_{tracker} \, \overline{p_{tracker}}$
      $^{W}T_{tracker} = \begin{bmatrix} ^{W}R_{tracker} & ^{W}t_{tracker} \\ 0 & 1 \end{bmatrix}$
end procedure
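As an illustrative companion to Algorithm 1, the following sketch (our own, assuming the five corresponding points are available as NumPy arrays) solves the rotation step with an SVD, a common choice for this centered least-squares problem:

```python
import numpy as np

def register_point_sets(p_tracker, x_hmd):
    """Estimate W_T_tracker mapping tracker-frame points to HMD-frame points.

    p_tracker, x_hmd: (M, 3) arrays of corresponding points (here M = 5)
    returns: 4x4 homogeneous transformation W_T_tracker
    """
    p_bar = p_tracker.mean(axis=0)              # centroids
    x_bar = x_hmd.mean(axis=0)
    P = p_tracker - p_bar                       # centered point sets
    X = x_hmd - x_bar
    H = P.T @ X                                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                          # rotation minimizing the squared error
    t = x_bar - R @ p_bar                       # translation from the centroid offset
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

The reflection-correction matrix D guards against a mirrored solution when the point configuration is nearly planar, which can easily happen with only five calibration points.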

3.3. Frustum Generation and Endoscopic Streaming

To visualize the anatomy that is located along the view of the angled endoscope, we attached a virtual camera, angled at 30 degrees, to the tracked endoscope model. The camera view frustum was visualized based on the standard 11-DoF camera parameters. The 6-DoF extrinsic parameters were calculated using the tracking information, and the 5-DoF intrinsic parameters were derived from the simulated virtual camera. We configured the virtual camera to only render the patient anatomy of the registered pre-operative data. To aid navigation of the endoscope, we augmented the camera frustum on the HMD, while the virtual camera’s view was projected onto its far clipping plane. This setup is illustrated in Figure 2. Additionally, the 512 × 512 endoscopic view was encoded and streamed to an external monitor via a TCP socket connection.
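To make the frustum geometry concrete, the sketch below (our own simplification with assumed parameter values; an actual Unity implementation would rely on the engine’s camera and tracking components) derives the far-plane corners of a 30-degree-angled virtual camera from the tracked endoscope pose and pinhole intrinsics:

```python
import numpy as np

def frustum_far_corners(T_world_endoscope, fov_y_deg=60.0, aspect=1.0,
                        d_far=0.15, tilt_deg=30.0):
    """Return the four far-plane corners of the angled virtual camera in world space.

    T_world_endoscope: 4x4 pose of the tracked endoscope shaft (world <- tool)
    fov_y_deg, aspect: assumed intrinsics of the virtual camera
    d_far:             far clipping distance in meters
    tilt_deg:          angle between the shaft axis and the camera viewing axis
    """
    a = np.deg2rad(tilt_deg)
    # rotate the viewing direction by the tilt angle about the tool x-axis
    R_tilt = np.array([[1.0, 0.0, 0.0],
                       [0.0, np.cos(a), -np.sin(a)],
                       [0.0, np.sin(a),  np.cos(a)]])
    T_tool_cam = np.eye(4)
    T_tool_cam[:3, :3] = R_tilt
    T_world_cam = T_world_endoscope @ T_tool_cam

    h = d_far * np.tan(np.deg2rad(fov_y_deg) / 2.0)   # half height at the far plane
    w = h * aspect                                     # half width at the far plane
    corners_cam = np.array([[ w,  h, d_far, 1.0],
                            [-w,  h, d_far, 1.0],
                            [-w, -h, d_far, 1.0],
                            [ w, -h, d_far, 1.0]])
    return (T_world_cam @ corners_cam.T).T[:, :3]
```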

3.4. In-Situ Visualization

In order to provide a better spatial perception of the anatomical structures in the endoscope field of view, we augmented the part of the registered pre-operative anatomical bone model that lies in the field of view of the endoscope. To reduce visual clutter in AR, the peripheral anatomy was rendered transparent, with transparency increasing with distance to the focus area. To highlight the focus area containing the endoscopic camera frustum, we integrated an outline, adapted from the virtual window technique [12], for in situ visualization. We dynamically defined the size of the focus area based on the width of the frustum cone at the intersection with the virtual anatomical model. During rendering, we tested for each vertex of the anatomical model whether it was located within the cone frustum of the endoscopic camera, i.e., whether its distance to the cone center line was smaller than the cone radius at that depth. The cone radius at the depth of a given vertex position is defined by:
$$r_{cone} = \left| \frac{r_{far}}{d_{far}} \right| \cdot d_{vertex} \tag{6}$$
where $r_{far}$ is the cone radius at the far clipping plane, $d_{far}$ is the distance between the virtual camera center and the far clipping plane along the cone center line and $d_{vertex}$ is the distance between the virtual camera center and the vertex position projected onto the cone center line. To further improve spatial perception, the colorization of the anatomical structures in the in situ visualization was based on the distance from the model vertices to the endoscope tip:
$$C_{final} = \left( 1 - \frac{d_{vertex}}{d_{far}} \right) \cdot C_{near} + \frac{d_{vertex}}{d_{far}} \cdot C_{far} \tag{7}$$
$C_{near}$ and $C_{far}$ are the RGB color values associated with the camera center and the far clipping plane, and were set to (0.05, 0.9, 0.05) and (0.05, 0.05, 0.9), respectively, so that the color of the bone vertices in the frustum is interpolated between a green tone at the near clipping plane and a blue tone at the far clipping plane. The final visualization of the anatomical area and the integration of the endoscope frustum are illustrated in Figure 3. All visualizations were implemented in a fragment shader program on the GPU.
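The following per-vertex sketch (a CPU-side approximation of what the fragment shader computes; function and variable names are ours) combines the cone test of Equation (6) with the distance-based colorization of Equation (7):

```python
import numpy as np

C_NEAR = np.array([0.05, 0.9, 0.05])   # green tone at the near clipping plane
C_FAR  = np.array([0.05, 0.05, 0.9])   # blue tone at the far clipping plane

def shade_vertex(v, cam_center, cam_dir, r_far, d_far):
    """Return an RGBA color for a bone vertex: colorized inside the frustum cone,
    fully transparent outside the focus area.

    v:          vertex position (3,)
    cam_center: virtual camera center (3,)
    cam_dir:    unit viewing direction of the angled camera (3,)
    r_far:      cone radius at the far clipping plane
    d_far:      distance from the camera center to the far clipping plane
    """
    d_vertex = np.dot(v - cam_center, cam_dir)              # depth along the cone axis
    r_cone = abs(r_far / d_far) * d_vertex                   # Eq. (6): radius at that depth
    dist_to_axis = np.linalg.norm((v - cam_center) - d_vertex * cam_dir)
    if d_vertex < 0 or d_vertex > d_far or dist_to_axis > r_cone:
        return np.array([0.0, 0.0, 0.0, 0.0])                # outside the focus area
    w = d_vertex / d_far
    color = (1.0 - w) * C_NEAR + w * C_FAR                    # Eq. (7): distance-based color
    return np.append(color, 1.0)
```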

4. Experiments and Results

In the following sections, we describe the user study setup (Section 4.1), the experimental variables (Section 4.2) and the hypotheses (Section 4.3). Afterwards, the information about the participants (Section 4.4) and the procedure (Section 4.5) is specified. We conclude this section by reporting our findings from the quantitative assessments and survey analysis (Section 4.6).

4.1. User Study Setup

For our physical phantom setup, we segmented the hip and femur bones from the open-source 3D human anatomy model z-anatomy (https://www.z-anatomy.com/ (accessed on 5 October 2022)) and designed a marker attachment to the bone for localization. To evaluate whether our proposed system provides adequate guidance and could improve the placement of an angled endoscope for hip arthroscopy, we developed a solution using Unity3D (Unity Technologies, San Francisco, CA, USA), which was deployed to an optical see-through HMD (OST-HMD), the Microsoft HoloLens 2 (Microsoft, Redmond, WA, USA). To evaluate the effectiveness of our system, we conducted a user study in which participants were asked to navigate an angled endoscope to predefined target locations for hip arthroscopy. For this reason, we designed a study setup consisting of a hip phantom model, an angled endoscope and an external monitor, illustrated in Figure 4. The phantom bone models, as well as the endoscope, were tracked with an external ART Dtrack2 tracking system (Advanced Realtime Tracking, Weilheim, Oberbayern, Germany). Furthermore, a Vuforia visual marker (PTC, Boston, MA, USA) was attached to the hip phantom to facilitate the correct augmentation of the virtual objects in the HoloLens 2. The initial calibration between the HMD coordinate system and the ART tracking system was performed using the calibration procedure depicted in Figure 5. Once the tracking system was set up, the calibration procedure for every participant took no longer than two minutes.
The external monitor illustrated in Figure 4 shows both the virtual endoscopic view and the view from the desired target pose. Participants were asked to navigate the endoscope to find target areas given the endoscopic view of the target area. As shown in Figure 4, the target view, as well as the endoscopic live view, was displayed on an external monitor. For the baseline task of navigating without an AR system, the users were asked to position the endoscope using only the streamed endoscopic live view on the external monitor by aligning it with the target view. However, during the AR-supported positioning, users could see the virtual endoscopic frustum and in situ visualization described in Section 3. In addition, we augmented the frustum of the target endoscope pose, as well as the focus outline of the target pose on the anatomy, as illustrated in Figure 6.

4.2. Experimental Variables and Measures

The distance and orientation accuracy of the alignment, as well as the time to completion, were selected as dependent variables. The distance error was computed as the Euclidean distance between the gravity centers of the real and virtual objects, and the orientation error was calculated as the angle, in degrees, between the objects using an axis-angle representation. Finally, the time to completion was measured from the first change in the pose of the virtual object until alignment confirmation by the user. In addition, we collected qualitative scores using a 7-point Likert scale adapted from the NASA task load index questionnaire (TLX) [21].
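As a reference for how such pose errors can be computed, a small sketch is given below (our own; it assumes the achieved and target endoscope poses are available as 4 × 4 homogeneous matrices):

```python
import numpy as np

def pose_errors(T_achieved, T_target):
    """Translational error (Euclidean distance of the positions) and
    rotational error (axis-angle magnitude, in degrees) between two poses."""
    trans_err = np.linalg.norm(T_achieved[:3, 3] - T_target[:3, 3])
    R_delta = T_achieved[:3, :3].T @ T_target[:3, :3]         # relative rotation
    cos_angle = np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_angle))             # axis-angle representation
    return trans_err, rot_err_deg
```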

4.3. Hypotheses

Our principal assumption is that the use of our AR system would benefit the overall alignment accuracy without significantly affecting the time required to complete the alignment task. Thus, we present the following hypotheses:
Hypothesis 1 (H1).
Participants can achieve significantly better alignment scores when using AR visualization.
Hypothesis 2 (H2).
Participants can complete the alignment task with significantly less time when using AR visualization.
Hypothesis 3 (H3).
Participants can complete the alignment task with significantly less mental effort when using AR visualization.

4.4. Participants

A total of seven users were recruited using mailing lists and campus announcements. The participants were aged between 25 and 29, with a mean age of 27.0 ± 2.2 years. The participants had different levels of experience with augmented reality systems, ranging between zero and eight years, with an average of 3.7 years of experience. Participation in the study was voluntary, could be aborted at any time, and was performed in accordance with the Declaration of Helsinki. Symptoms of, or exposure to, COVID-19 were a hard exclusion criterion. All data collected from the study were anonymized.

4.5. Procedure

The participants were informed about the study procedure and were provided with a consent form. Upon completion of this form, participants were asked to perform a set of visual tests, including the Ishihara, Landolt, and stereo tests, to ensure normal or corrected-to-normal vision. Our exclusion criteria included color vision deficiency, impaired stereopsis (>140 arcsec angle of stereopsis at 40 cm) or a visual acuity below 63% (20/32); all participants were eligible for the study. Before starting the study, participants were provided with the HMDs and asked to perform a device calibration to adjust the interpupillary distance (IPD), after which they received a short introduction to augmented reality and the HMD. After completing the introduction, participants were asked to stand in front of the phantom and a large monitor. Participants were presented with 10 alignment tasks, split into 5 non-AR tasks and 5 tasks with our AR system. The alignment order was pseudo-randomly assigned using a Latin square. The poses of the target endoscope were predefined by a medical expert to ensure surgical relevance. Upon successful completion of the alignment tasks, participants were asked to complete a Raw-TLX questionnaire.
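For illustration, a balanced Latin square is one common way to realize such pseudo-random counterbalancing; a generic sketch is shown below (the concrete ordering matrix used in the study is not reproduced here):

```python
def balanced_latin_square(n):
    """Balanced Latin square for an even number of conditions n.
    Each row is one participant's presentation order; each condition
    appears once per row and once per column, and every condition
    precedes every other condition equally often across rows."""
    # base sequence [0, 1, n-1, 2, n-2, ...], shifted cyclically per row
    seq = [0]
    for i in range(1, n):
        seq.append((i + 1) // 2 if i % 2 else n - i // 2)
    return [[(r + s) % n for s in seq] for r in range(n)]
```

Rows of such a square (over the task instances) can then be assigned to participants in order to balance learning effects.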

4.6. Results

In the following sections, we report the results of our user study. We collected quantitative measurements of the targeting accuracy with our AR system compared to the baseline method, as well as results from the survey analysis.

4.6.1. Accuracy

Comparing the poses of the endoscope after the positioning task to the respective ground-truth endoscope poses, we evaluated the positional error of the endoscope tip and the rotational error individually. As shown in Figure 7, we observed a mean translational error of 7.9 ± 3.7 cm for the baseline positioning and a mean error of 6.6 ± 3.3 cm for AR-aided positioning. The average rotational error during baseline positioning was estimated to be 65.2 ± 44.5 degrees, while, for the AR-aided positioning, we estimated an average error of 51.3 ± 45.4 degrees. These results suggest a trend towards higher accuracy when using the AR system compared to the baseline method. To further evaluate our findings, we conducted a two-sample t-test, with the threshold for statistical significance set at p = 0.05. The results showed that the improvement in alignment accuracy with our AR system was not statistically significant for either translation (p = 0.1202) or orientation (p = 0.2056) in the conducted study.
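For reference, such a comparison can be run with a standard two-sample t-test, e.g. using SciPy (a sketch with the per-trial errors passed in as plain sequences; this is not the analysis script used in the study):

```python
from scipy import stats

def compare_conditions(errors_baseline, errors_ar, alpha=0.05):
    """Two-sample t-test on per-trial alignment errors (independent samples).

    errors_baseline, errors_ar: sequences of per-trial errors for each condition
    returns: t statistic, p-value, and whether the difference is significant at alpha
    """
    t_stat, p_value = stats.ttest_ind(errors_baseline, errors_ar)
    return t_stat, p_value, p_value < alpha
```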

4.6.2. Temporal Performance

As shown in Figure 8, performing the positioning tasks took, on average, 83 ± 70 s for the baseline positioning, while, for the AR-guided system, an average elapsed time of 70 ± 48 s was measured. In addition, the t-test did not reveal statistical significance for the time to complete the alignment task (p = 0.3466).

4.6.3. Subjective Ratings

We used NASA-TLX to subjectively evaluate the task load of the alignment task with and without AR visualization, and the results are shown in Figure 9. As demonstrated in the chart, the mental demand, physical demand, temporal demand, and effort level were perceived to be lower in the study with AR. In addition, on average, the alignment performance was perceived to be better with AR visualization. Considering the non-normal distribution of the task load index data, we applied a one-sided Wilcoxon signed rank test with α = 0.05 for the subjective measures. The statistical results showed that the mental demand (p = 0.0156), physical demand (p = 0.0156), temporal demand (p = 0.0156), effort level (p = 0.0078) and frustration (p = 0.0078) were significantly reduced, while the perceived performance significantly increased (p = 0.0078).
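Analogously, the paired, one-sided Wilcoxon signed-rank test can be computed with SciPy (again a sketch under the assumption of one rating per participant and condition; for the performance scale the direction of the alternative would be flipped):

```python
from scipy import stats

def compare_tlx(scores_baseline, scores_ar):
    """One-sided Wilcoxon signed-rank test on paired Likert ratings.

    alternative='greater' tests whether the baseline ratings tend to be
    higher than the AR ratings, i.e. whether the demand is reduced with AR.
    """
    stat, p_value = stats.wilcoxon(scores_baseline, scores_ar, alternative='greater')
    return stat, p_value
```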

5. Discussion

The results of our user study comparing these positioning schemes using an AR HMD show that the AR-guided solution has advantages in terms of usability, related to a decrease in physical demand, mental effort and frustration. Overall, participants achieved lower distance and orientation errors when using the proposed AR solution compared to the baseline. However, statistical significance could not be shown to support our hypothesis H1. We noticed a few large orientation errors close to 180 degrees, while the associated translational errors were small in both cases. This shows that it is still very challenging for users to find the correct orientation with an angled endoscope, even with the help of AR, which might be related to the steep learning curve associated with the alignment of an angled endoscope. Nevertheless, on average, participants in the study took less time to complete the alignment tasks using AR-aided endoscope positioning compared to the baseline. However, similar to the pose error, statistical significance could not be proven; thus, hypothesis H2 could not be supported. While we believe this trend could become more pronounced with more users, further studies need to be conducted in this regard. In terms of the subjective measures, the scores reported by the participants show significant differences in every category between the baseline and our proposed AR solution. This implies that our system significantly improves usability over the baseline method, supporting hypothesis H3. This finding further suggests that, compared to the baseline method, the cognitive load of positioning an angled endoscope could be reduced when using our AR system.
Despite the proposed AR solution achieving significantly better results in the qualitative evaluation, with similar quantitative results to the baseline method, there are still a few limitations. The setup of the tracking system and the calibration before the procedure are complicated and time-consuming compared to the baseline method. While we tried to mitigate potential third-variable biases that could have impacted the user study outcome, some limitations are inherent to the study design. First, the tracking space to HMD space calibration is subject to calibration errors and can thus affect the user alignment. Recent works have explored the possibility of accurately tracking IR markers using the sensors of the HMD [22,23]. Martin-Gomez et al. [24] report tracking accuracy comparable to that of an external tracking system. Without the need to calibrate an external tracking system, the error propagated from the pivot calibration of the pointer tool and the manual point-set registration could be reduced. Additionally, the rendering of the hip and femur model could not provide sufficient realism due to a lack of simulation of the flesh, ligaments, etc., which are present in the real anatomy. Fewer landmarks were therefore visible to the users in the study, which may have affected the alignment outcome. One aspect that is necessary for the integration of such an AR system into surgical setups is the registration of the pre-operative hip model to the patient anatomy. To incorporate this into our setup, available methods for CT-to-X-ray registration [25,26] could be employed to enable augmentation of the bone model in the patient. In future work, we could also use such imaging to integrate the method into a robotic setup for improved guidance of endoscopic tools.

6. Conclusions

In this work, we proposed a novel method for AR assistance during the positioning of an angled endoscope for hip arthroscopy. An HMD-based application was developed and evaluated with a mock 3D-printed setup. Our system integrates features of frustum visualization, augmentation of the anatomical focus area and distance-based colorization. We conducted a user study, in which we compared the endoscope positioning with our AR system to baseline positioning without additional support. The results from our user study suggest that the average endoscope positioning accuracy could be improved using our AR system. Furthermore, the speed at which endoscope positioning was performed could be increased. In addition, statistical significance was demonstrated in terms of usability, indicating a lower cognitive load when aligning an angled endoscope with our AR system compared to baseline positioning. We believe that such a method could improve the overall operation quality, decrease the cognitive effort of aligning an angled endoscope, and might reduce the required X-ray dose during surgery.

Author Contributions

Conceptualization, T.S., M.S. (Michael Sommersperger), T.A.B., M.S. (Matthias Seibold) and N.N.; methodology, T.S. and M.S. (Michael Sommersperger); software, T.S. and M.S. (Michael Sommersperger); validation, T.S. and M.S. (Michael Sommersperger); formal analysis, T.S. and M.S. (Michael Sommersperger); investigation, T.S. and M.S. (Michael Sommersperger); resources, T.S. and M.S. (Michael Sommersperger); data curation, T.S. and M.S. (Michael Sommersperger); writing—original draft preparation, T.S. and M.S. (Michael Sommersperger); writing—review and editing, T.S. and M.S. (Michael Sommersperger); visualization, T.S. and M.S. (Michael Sommersperger); supervision, M.S. (Matthias Seibold) and N.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study, because this study did not involve a prospective evaluation, did not involve laboratory animals and only involved non-invasive procedures.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All the reported data can be found at https://nextcloud.in.tum.de/index.php/s/dGcAc7kBjKddkzD (accessed on 6 October 2022).

Acknowledgments

We would like to thank the NARVIS lab for the support with equipment and tools for the user study, and we appreciate the comments from the medical experts of the Medical Augmented Reality Summer School.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jaffray, B. Minimally invasive surgery. Arch. Dis. Child. 2005, 90, 537–542.
  2. Pietrzak, J.; Donaldson, M.; Kayani, B.; Rowan, F.; Haddad, F. Musculoskeletal Disorders and Treatment. 2018. Available online: https://clinmedjournals.org/articles/jmdt/journal-of-musculoskeletal-disorders-and-treatment-jmdt-4-057.php?jid=jmdt (accessed on 31 October 2022).
  3. Jung, K.; Kang, D.J.; Kekatpure, A.L.; Adikrishna, A.; Hong, J.; Jeon, I.H. A new wide-angle arthroscopic system: A comparative study with a conventional 30° arthroscopic system. Knee Surg. Sports Traumatol. Arthrosc. 2016, 24, 1722–1729.
  4. Traub, J.; Feuerstein, M.; Bauer, M.; Schirmbeck, E.U.; Najafi, H.; Bauernschmitt, R.; Klinker, G. Augmented reality for port placement and navigation in robotically assisted minimally invasive cardiovascular surgery. In International Congress Series; Elsevier: Amsterdam, The Netherlands, 2004; Volume 1268, pp. 735–740.
  5. Fotouhi, J.; Song, T.; Mehrfard, A.; Taylor, G.; Wang, Q.; Xian, F.; Martin-Gomez, A.; Fuerst, B.; Armand, M.; Unberath, M.; et al. Reflective-AR display: An interaction methodology for virtual-to-real alignment in medical robotics. IEEE Robot. Autom. Lett. 2020, 5, 2722–2729.
  6. Barsom, E.Z.; Graafland, M.; Schijven, M.P. Systematic review on the effectiveness of augmented reality applications in medical training. Surg. Endosc. 2016, 30, 4174–4183.
  7. Mekni, M.; Lemieux, A. Augmented reality: Applications, challenges and future trends. Appl. Comput. Sci. 2014, 20, 205–214.
  8. Palmarini, R.; Erkoyuncu, J.A.; Roy, R.; Torabmostaedi, H. A systematic review of augmented reality applications in maintenance. Robot. Comput.-Integr. Manuf. 2018, 49, 215–228.
  9. Fotouhi, J.; Unberath, M.; Song, T.; Gu, W.; Johnson, A.; Osgood, G.; Armand, M.; Navab, N. Interactive flying frustums (IFFs): Spatially aware surgical data visualization. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 913–922.
  10. Fotouhi, J.; Mehrfard, A.; Song, T.; Johnson, A.; Osgood, G.; Unberath, M.; Armand, M.; Navab, N. Development and pre-clinical analysis of spatiotemporal-aware augmented reality in orthopedic interventions. IEEE Trans. Med. Imaging 2020, 40, 765–778.
  11. Qian, L.; Zhang, X.; Deguet, A.; Kazanzides, P. ARAMIS: Augmented reality assistance for minimally invasive surgery using a head-mounted display. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2019; pp. 74–82.
  12. Bichlmeier, C.; Wimmer, F.; Heining, S.M.; Navab, N. Contextual anatomic mimesis hybrid in-situ visualization method for improving multi-sensory depth perception in medical augmented reality. In Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 13–16 November 2007; pp. 129–138.
  13. Bichlmeier, C.; Kipot, M.; Holdstock, S.; Heining, S.M.; Euler, E.; Navab, N. A practical approach for intraoperative contextual in-situ visualization. In International Workshop on Augmented Environments for Medical Imaging including Augmented Reality in Computer-Aided Surgery (AMI-ARCS 2009); MICCAI Society: New York, NY, USA, 2009.
  14. Bichlmeier, C.; Navab, N. Virtual window for improved depth perception in medical AR. In International Workshop on Augmented Reality Environments for Medical Imaging and Computer-Aided Surgery (AMI-ARCS); Citeseer, 2006; pp. 1–5. Available online: https://campar.in.tum.de/pub/bichlmeier2006window/bichlmeier2006window.pdf (accessed on 6 October 2022).
  15. Bichlmeier, C.; Sielhorst, T.; Heining, S.M.; Navab, N. Improving depth perception in medical AR. In Bildverarbeitung für die Medizin 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 217–221.
  16. Diaz, C.; Walker, M.; Szafir, D.A.; Szafir, D. Designing for depth perceptions in augmented reality. In Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Nantes, France, 9–13 October 2017; pp. 111–122.
  17. Ping, J.; Weng, D.; Liu, Y.; Wang, Y. Depth perception in shuffleboard: Depth cues effect on depth perception in virtual and augmented reality system. J. Soc. Inf. Disp. 2020, 28, 164–176.
  18. Weiss, J.; Eck, U.; Nasseri, M.A.; Maier, M.; Eslami, A.; Navab, N. Layer-aware iOCT volume rendering for retinal surgery. In Proceedings of the VCBM 2019, Brno, Czech Republic, 4–6 September 2019; pp. 123–127.
  19. Bleicher, I.D.; Jackson-Atogi, M.; Viehland, C.; Gabr, H.; Izatt, J.A.; Toth, C.A. Depth-based, motion-stabilized colorization of microscope-integrated optical coherence tomography volumes for microscope-independent microsurgery. Transl. Vis. Sci. Technol. 2018, 7, 1.
  20. Penrose, R. On best approximate solutions of linear matrix equations. In Mathematical Proceedings of the Cambridge Philosophical Society; Cambridge University Press: Cambridge, UK, 1956; Volume 52, pp. 17–19.
  21. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology; Elsevier: Amsterdam, The Netherlands, 1988; Volume 52, pp. 139–183.
  22. Kunz, C.; Maurer, P.; Kees, F.; Henrich, P.; Marzi, C.; Hlaváč, M.; Schneider, M.; Mathis-Ullrich, F. Infrared marker tracking with the HoloLens for neurosurgical interventions. Curr. Dir. Biomed. Eng. 2020, 6, 20200027.
  23. Gsaxner, C.; Li, J.; Pepe, A.; Schmalstieg, D.; Egger, J. Inside-out instrument tracking for surgical navigation in augmented reality. In Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, Osaka, Japan, 8–10 December 2021; pp. 1–11.
  24. Martin-Gomez, A.; Li, H.; Song, T.; Yang, S.; Wang, G.; Ding, H.; Navab, N.; Zhao, Z.; Armand, M. STTAR: Surgical Tool Tracking using Off-the-Shelf Augmented Reality Head-Mounted Displays. arXiv 2022, arXiv:2208.08880.
  25. Esteban, J.; Grimm, M.; Unberath, M.; Zahnd, G.; Navab, N. Towards fully automatic X-ray to CT registration. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2019; pp. 631–639.
  26. Häger, S.; Lange, A.; Heldmann, S.; Modersitzki, J.; Petersik, A.; Schröder, M.; Gottschling, H.; Lieth, T.; Zähringer, E.; Moltz, J.H. Robust intensity-based initialization for 2D-3D pelvis registration (RobIn). In Bildverarbeitung für die Medizin 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 69–74.
Figure 1. The calibration transformation chain. Transformations shown as solid arrows are directly estimated, while transformations shown as dashed arrows are derived. The blue color indicates the transformations in the HMD coordinate frame, and the orange color indicates the transformations in the tracking system. The goal of the tracking space to HMD space calibration is to find $^{W}T_{tracker}$, shown in red.
Figure 2. Visualization of the virtual frustum of the angled endoscope. The rendered image was projected onto the far clipping plane of the virtual camera.
Figure 3. Visualization of the augmented target anatomy (left). The red outline marks the anatomical area in focus, while colorization encodes the distance to the endoscope. The anatomical visualization is combined with the frustum visualization (right).
Figure 4. The simulation setup for our user study consisted of a tracked hip phantom and a tracked angled endoscope. The live view of the virtual endoscope, as well as the target view, were displayed on an external monitor.
Figure 5. Calibration procedure between HMD and NDI tracking space.
Figure 6. The orange frustum and depth-based colorization of the anatomical focus area show the current endoscopic pose, while the blue frustum and the in situ visualization without colorization show the desired target pose.
Figure 7. The pose error of the alignment task for both baseline and AR-aided endoscope positioning. We separately evaluated the position and rotation error of the poses.
Figure 8. The duration of the alignment task per trial, comparing baseline and AR-guided positioning.
Figure 9. Subjective task load rating for the alignment.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

