Communication

Remote Training for Medical Staff in Low-Resource Environments Using Augmented Reality

Austin Hale, Marc Fischer, Laura Schütz, Henry Fuchs and Christoph Leuze

1 UNC Graphics and Virtual Reality Group, Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
2 Nakamir Inc., Menlo Park, CA 94025, USA
3 School of Engineering, Stanford University, Stanford, CA 94305, USA
4 Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Informatics, Technical University of Munich, 80333 Munich, Germany
* Author to whom correspondence should be addressed.
J. Imaging 2022, 8(12), 319; https://doi.org/10.3390/jimaging8120319
Submission received: 16 September 2022 / Revised: 18 November 2022 / Accepted: 25 November 2022 / Published: 29 November 2022

Abstract

This work aims to leverage medical augmented reality (AR) technology to counter the shortage of medical experts in low-resource environments. We present a complete and cross-platform proof-of-concept AR system that enables remote users to teach and train medical procedures without expensive medical equipment or external sensors. By seeing the 3D viewpoint and head movements of the teacher, the student can follow the teacher’s actions on the real patient. Alternatively, it is possible to stream the 3D view of the patient from the student to the teacher, allowing the teacher to guide the student during the remote session. A pilot study of our system shows that it is easy to transfer detailed instructions through this remote teaching system and that the interface is easily accessible and intuitive for users. We provide a performant pipeline that synchronizes, compresses, and streams sensor data and exploits parallel processing for efficiency.

1. Introduction

1.1. Clinical Background

Medical augmented reality (AR) is a highly scalable technology that improves the capabilities of medical caregivers by guiding them through diagnostic and treatment protocols and projecting relevant information directly into their field of view [1,2]. Due to the rapid development and falling cost of AR hardware, medical AR can also play an important role in training and extending medical capabilities in resource-limited settings. Limited access to essential health services is a major problem in the developing world. According to World Health Organization (WHO) estimates from 2013, there was a global shortage of 7.2 million qualified healthcare workers, a number projected to grow to 12.9 million by 2035 [3]. There is also a wide gap between high- and low-income countries: countries such as the USA have more than 2.5 physicians per 1000 people, while countries such as Kenya have only 0.15 physicians per 1000 people, a number that is even lower in rural areas [3]. Although medical training and medical schools require expensive infrastructure that cannot be easily deployed in these areas, most healthcare workers have access to cell phone service [4]. In Sub-Saharan Africa, cell phone ownership increased from around 10% to 80% between 2002 and 2014 [5].
AR headsets are similar to cell phones in many respects. Like cell phones, they are essentially mobile computers with a display and do not require expensive infrastructure, which means that they can also be deployed in remote locations and small hospitals without modern medical equipment. See-through AR head-mounted displays (HMDs) such as Microsoft’s HoloLens 2 provide sensors and dedicated processors for simultaneous localization and mapping (SLAM) and hand tracking, which still makes them considerably more expensive than cell phones. However, AR headset technology is constantly improving and prices are falling, making this technology more and more accessible. Provided there are sufficiently useful applications, AR HMDs may see growth in low-resource areas similar to that of cell phones. For this reason, our aim is to leverage medical AR technology to counter the shortage of medical experts in low-income countries by providing a remote training process for medical procedures.

1.2. Related Works

Remote collaboration and guidance is an important application for AR [6]. The Microsoft HoloLens has been tested as a telemedicine platform for remote procedural training [7]. Due to their small form factor and their ability to be deployed in far-forward environments, AR headsets have also been tested as remote guidance tools for battlefield care [8]. Researchers at Purdue University have developed the System for Telementoring with Augmented Reality (STAR) to create a portable and self-contained telementoring platform for critical surgeries such as cricothyrotomies [9]. The ability of medical experts to remotely support clinical point-of-care providers through AR has been tested as a tool for damage control surgery even in remote settings and on the battlefield [10,11].
Previous work on remote medical training in augmented reality has explored a training setup in which the first person, the streamer, wears an HMD and the second person, the receiver, sees a rendering of the first person’s view on a screen or an HMD [12]. Researchers at UC San Diego developed a system called ARTEMIS, which enables experienced surgeons and novices to share the same virtual space: Expert surgeons at remote sites use virtual reality (VR) to access a 3D reconstruction of the patient’s body and instruct novice surgeons on complex procedures as if they were together in the operating room [13].
Similarly, the ARTEKMED research project from the Technical University of Munich is a remote surgical application that scans the entire operating room, allowing a remote physician to visualize the scanned scene through VR and advise attending physicians with AR HMDs [14]. Another study compared representing the remote user in AR as either an avatar or a point cloud reconstruction. Its results showed that the point cloud reconstruction-based representation of the remote medical expert was rated higher in social interaction and copresence than the virtual avatar [15]. Hence, our AR remote training system uses point cloud reconstruction to visualize the remote patient. The ARTEKMED research group has also evaluated teleconsultation for the use case of a preclinical emergency in which a remote expert instructs a paramedic on emergency procedures [16].
Like Strak et al., we aim to develop an application that allows a medical expert to remotely instruct a student, specifically for medical training in low-resource environments [16]. To provide a solution that can be used with minimal infrastructure, we eliminate the need for the extensive camera equipment used in other studies to scan the room of the person seeking training or consultation [14,16]. Instead, we present a standalone system that uses only one HMD in each location to realize a remote training application for low-resource environments.

2. Materials & Methods

We used Microsoft’s HoloLens 2 (v22H1) and Epic Games’ Unreal Engine (v4.27) to develop a remote teaching solution for medical applications consisting of two components, a teacher application and a student application (Figure 1). Our AR application includes two modes: a streamer mode that streams the Microsoft HoloLens 2 depth and color data, and a receiver mode that receives the data from the streamer and combines color and depth to visualize a color point cloud of the streamer’s field of view on any OpenXR-compatible device.

2.1. Streaming Application

In our application, a HoloLens 2 user streams the color and depth camera data, as well as the HoloLens 2 pose (position and orientation) and audio to a remote AR device. The HoloLens 2 provides information about the real-world pose of the user, while the depth camera provides information about the real-world environment in front of the user, such as the patient during a remote medical procedure.
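For illustration, the data bundled in each update from the streamer could be grouped as in the following Python sketch. The field names and structure are our assumption for clarity and do not reflect the authors’ actual wire format.

```python
from dataclasses import dataclass


@dataclass
class StreamerUpdate:
    """One update sent from the streamer to the remote AR device (assumed layout)."""
    depth_timestamp: float    # capture time of the depth frame (seconds)
    color_timestamp: float    # capture time of the paired RGB frame (seconds)
    depth_image: bytes        # depth camera frame
    infrared_image: bytes     # IR frame captured alongside the depth frame
    color_image: bytes        # RGB camera frame
    head_pose: tuple          # HoloLens 2 position (x, y, z) and orientation quaternion
    audio_chunk: bytes        # microphone samples for voice communication
```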
We created an application based on the HoloLens 2 Research Mode API for easy access to the depth camera sensor [17,18]. First, we calibrate the HoloLens 2 cameras to undistort the camera images. Since the Research Mode API only exposes the camera pixel data mapped to the unit plane instead of each camera sensor’s intrinsics and distortion coefficients, we iterate through each pixel and pre-compute the depth mappings for the streaming device. We then serialize the camera mappings and save them to the device’s local documents folder for future retrieval. This caching saves startup time, since the app deserializes the saved mappings instead of recomputing them on every launch on the same device. The depth image stream from one device must carry this mapping metadata so that the 3D point cloud can be reconstructed accurately on another device. Additionally, by storing these mappings beforehand, we save computation time when constructing the point cloud, since the undistorted points are readily available for each depth frame.
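The following minimal Python sketch illustrates this pre-computation and caching idea. It is not the authors’ Unreal C++ implementation: the mapping function, the cache file name, and the assumed 320 × 288 long-throw depth resolution are illustrative assumptions, with a simple pinhole model standing in for the Research Mode unit-plane mapping.

```python
import os
import numpy as np

WIDTH, HEIGHT = 320, 288                  # assumed long-throw depth resolution
CACHE_PATH = "depth_unit_plane_lut.npy"   # hypothetical cache file name


def map_image_point_to_unit_plane(u, v):
    # Stand-in for the Research Mode mapping of a depth pixel (u, v) to the
    # unit plane one meter in front of the camera; a simple pinhole model
    # with made-up intrinsics is used here for illustration only.
    fx, fy, cx, cy = 200.0, 200.0, WIDTH / 2.0, HEIGHT / 2.0
    return (u - cx) / fx, (v - cy) / fy


def load_or_build_depth_lut():
    """Return a (HEIGHT, WIDTH, 2) lookup table of unit-plane coordinates."""
    if os.path.exists(CACHE_PATH):
        return np.load(CACHE_PATH)        # fast path: deserialize cached mappings
    lut = np.empty((HEIGHT, WIDTH, 2), dtype=np.float32)
    for v in range(HEIGHT):
        for u in range(WIDTH):
            lut[v, u] = map_image_point_to_unit_plane(u, v)
    np.save(CACHE_PATH, lut)              # serialize once for future launches
    return lut
```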
We temporally and spatially align the latest color camera pose with the depth camera pose for the most accurate depiction of a color point cloud. The device’s long-range depth sensor mode produces depth frames at a maximum rate of 5 frames per second (FPS), and for the color camera we specify a frame rate of 15 FPS so that a nearby red, green, and blue (RGB) image is available for each depth frame. We create two thread-safe queues to buffer multiple frames from each camera and then, based on the timestamp of each frame and the order in which the frames arrive, we grab the currently available pair with the lowest timestamp difference. We compress the depth and color images with RVL and JPEG compression, respectively, and then send them, together with the head pose and audio of the streamer, to the remote server [19].
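A minimal Python sketch of this nearest-timestamp pairing is shown below. The on-device pipeline is written for Unreal Engine; the queue sizes, frame tuples, and the send callback here are hypothetical and only illustrate the pairing logic.

```python
import queue

# Each entry is a (timestamp_seconds, frame_bytes) tuple pushed by a sensor thread.
depth_queue = queue.Queue(maxsize=8)   # ~5 FPS long-range depth frames
color_queue = queue.Queue(maxsize=8)   # ~15 FPS RGB frames


def nearest_color_frame(depth_ts, candidates):
    """Pick the color frame whose timestamp is closest to the depth timestamp."""
    return min(candidates, key=lambda frame: abs(frame[0] - depth_ts))


def pairing_loop(send):
    """Pair each depth frame with the closest buffered color frame and ship it."""
    while True:
        depth_ts, depth_frame = depth_queue.get()        # blocks until a depth frame arrives
        candidates = []
        while not color_queue.empty():
            candidates.append(color_queue.get_nowait())  # drain buffered color frames
        if not candidates:
            continue                                     # no color frame yet; skip this depth frame
        color_ts, color_frame = nearest_color_frame(depth_ts, candidates)
        send(depth_ts, depth_frame, color_ts, color_frame)  # compress and transmit downstream
```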

2.2. Server

We implemented a client-server model in Python to transfer data between streamers and receivers. Since emergency situations and training scenarios often involve sensitive information about patients from the streamer’s point of view, we use the Transmission Control Protocol (TCP) instead of the User Datagram Protocol (UDP) for secure and reliable communication. From the streamer, the server receives a stream of bytes consisting of the undistorted depth and infrared (IR) images, an RGB image of the user’s point of view, and their respective timestamps. We decompress the depth and IR images from the streamer and feed them into the Azure Body Tracking software development kit (SDK) v1.1.0 to segment the color point cloud of the patient out of the environment [20]. In addition to the compressed depth and RGB image pair, the server sends the user’s head pose, audio, and depth mappings to the receiving application. The server also relays the receiver’s hand tracking data, head pose, and audio to show their virtual embodiment to the streamer.
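The sketch below shows the basic shape of such a relay in Python, the language the server is written in. It is a simplified assumption of the setup: the port number, the connection order, and the plain byte relay are illustrative, and it omits message framing, encryption, multi-client handling, and the Azure Body Tracking segmentation step.

```python
import socket
import threading

HOST, PORT = "0.0.0.0", 9000   # assumed bind address and port


def relay(src, dst):
    """Forward raw bytes from one TCP connection to the other until it closes."""
    while True:
        data = src.recv(65536)
        if not data:
            break
        dst.sendall(data)


def main():
    with socket.create_server((HOST, PORT)) as server:
        # For simplicity, assume the streamer connects first, then the receiver.
        streamer_conn, _ = server.accept()
        receiver_conn, _ = server.accept()
        # Sensor data flows streamer -> receiver; hand/head pose and audio flow back.
        threading.Thread(target=relay, args=(streamer_conn, receiver_conn), daemon=True).start()
        relay(receiver_conn, streamer_conn)


if __name__ == "__main__":
    main()
```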

2.3. Receiving Application

The receiving application primarily allows (1) students to visualize a 3D view of the teacher’s interaction with a patient or (2) teachers to provide instructions to the student. These scenarios are interchangeable within the same application session, where streamers and receivers can switch roles and stream the alternative 3D perspective. We tested the system on a HoloLens 2 and a PC. However, since it is based on OpenXR, the receiver application should also run on HoloLens 2, Quest Pro, Windows Mixed Reality, SteamVR, and any other OpenXR-compatible device with only minor modifications. The receiver visualizes streams of data from the remote server, including head pose, audio, and the depth and color cameras, and sends back its own head pose, hand pose, and audio for immersive telepresence. The receiving application obtains compressed depth and color images that are decompressed and projected as a color point cloud stream of the patient.
For each depth image that arrives, we iterate through the pixels in parallel and validate them through data masking. Each depth image is paired with an RGB image to map color pixel data to depth points. Using Research Mode’s exposed functionality to map image points onto a unit plane one meter from the user, we deproject depth pixels in camera space to 3D points in world space. Then we project these world-mapped depth points into RGB camera space as pixel coordinates. If the projected pixel coordinates fall within the dimensions of the RGB image, we keep the corresponding depth points and assign them the color at that pixel; otherwise, they are ignored. Once we obtain the arrays of location and color information, we use Unreal Engine’s Niagara visual effects system to efficiently render the point cloud as a particle system. The receiver sees this virtual representation of the patient in addition to a 3D model of the streamer’s head with respect to the virtual patient.
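To make the deproject-then-reproject step concrete, here is a vectorized Python/NumPy sketch of constructing one frame of the color point cloud. The actual system does this in Unreal Engine C++ and renders with Niagara; the transform and intrinsics names below (depth_to_world, world_to_rgb, rgb_K) are assumptions for illustration, not the authors’ API.

```python
import numpy as np


def build_color_point_cloud(depth_mm, lut, depth_to_world, world_to_rgb, rgb_img, rgb_K):
    """Sketch: turn one depth frame plus its paired RGB frame into colored 3D points.

    depth_mm:       (H, W) depth image in millimeters
    lut:            (H, W, 2) precomputed unit-plane lookup table
    depth_to_world: 4x4 depth-camera-to-world transform for this frame
    world_to_rgb:   4x4 world-to-RGB-camera transform for the paired frame
    rgb_img:        (Hc, Wc, 3) color image
    rgb_K:          3x3 RGB camera intrinsics (assumed available)
    """
    z = depth_mm.astype(np.float32).ravel() / 1000.0   # depth in meters
    valid = z > 0                                       # mask out invalid depth pixels
    xy = lut.reshape(-1, 2)
    pts_cam = np.column_stack((xy[:, 0] * z, xy[:, 1] * z, z))[valid]

    # Depth-camera space -> world space (homogeneous coordinates).
    ones = np.ones((len(pts_cam), 1), dtype=np.float32)
    pts_world = (depth_to_world @ np.hstack((pts_cam, ones)).T).T[:, :3]

    # World space -> RGB camera space -> pixel coordinates.
    rgb_cam = (world_to_rgb @ np.hstack((pts_world, ones)).T).T[:, :3]
    uv = (rgb_K @ (rgb_cam / rgb_cam[:, 2:3]).T).T[:, :2]

    # Keep only points that land inside the RGB image and in front of the camera.
    h_c, w_c, _ = rgb_img.shape
    inside = (
        (rgb_cam[:, 2] > 0)
        & (uv[:, 0] >= 0) & (uv[:, 0] < w_c)
        & (uv[:, 1] >= 0) & (uv[:, 1] < h_c)
    )
    colors = rgb_img[uv[inside, 1].astype(int), uv[inside, 0].astype(int)]
    return pts_world[inside], colors
```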
Together, these components enable the receiver to view a live 3D representation of the streamer interacting in real time with a patient while in separate locations. By seeing the hand and head movements of the teacher next to the virtual patient, the student can follow the teacher’s actions on the real patient and vice versa. Communication between the teacher and the student is done through the AR HMD microphone and speaker. Since the interaction between the teacher and the student is in real time, whenever the student has questions, they can ask the teacher to perform the action again, mimicking in-person instruction.

2.4. Experimental Design

To evaluate the system, we conducted a pilot study with five users. A trainer (streamer) wore the HoloLens 2 and looked at a person lying on the ground. The trainer performed ten actions on the person on the ground, such as lifting the left arm, poking the nose, simulating CPR, or touching an ear. The student (receiver) viewed the point cloud stream of the trainer’s actions on their own AR device. After each of the trainer’s actions, the student confirmed that they had correctly perceived the action by describing it in their own words, e.g., “you lifted the left arm” or “you poked the nose”. The trainer counted how many of these actions the student identified correctly. After the experiment, each student was asked 16 questions about the complexity, applicability, realism, and enjoyment of the application.

3. Results & Discussion

We successfully developed the system; an overview of the final system is shown in Figure 2. The application streams the streamer’s first-person 3D perspective to the receiver in real time and allows the receiver to view a 3D point cloud of the streamer’s field of view.

3.1. Technical Evaluation

Our high-performance system maintains a frame rate of 60 Hz while accessing all sensor streams on the device. The user experiences stable holograms while the system partitions the processing of large amounts of data for synchronization and rendering. Depth sensor frames are streamed at a steady 5 FPS in the long-range mode and are temporally aligned with incoming RGB frames within a 20-millisecond window. JPEG compression of color images achieves a 10:1 ratio, and RVL compression of depth and IR images achieves a 4:1 ratio. The receiver’s hands, rendered as 3D models in the streamer’s view, are lightweight enough to sustain a smooth 45 FPS as well. Over 1000 captured depth frames, we measured the improvement from parallelizing the construction of the color point cloud: running the algorithm in parallel is 2.4 times faster than an iterative approach, at 17 milliseconds per depth frame.

3.2. Pilot Study

Each student was able to correctly identify all actions that the trainer performed on the person lying on the ground. One student had to ask the trainer to repeat two actions because they were not visible within the field of view of the 3D point cloud. Figure 3 shows the results of the questionnaire. Users rated the complexity of the application at 5.8 ± 1.0 (1 difficult, 7 easy), the applicability at 5.9 ± 1.5 (1 impractical, 7 practical), the realism at 5.1 ± 1.3 (1 not realistic, 7 very realistic), and the enjoyment at 5.2 ± 1.2 (1 boring, 7 entertaining), with all values given as the average score ± one standard deviation.

3.3. Remote Teaching

Our application enables teachers and students to interact in the same 3D environment while in separate locations. By seeing the 3D perspective and head movements of the teacher, the student can follow the teacher’s actions on the real patient. Alternatively, it is possible to stream the 3D view of the patient from the student to the teacher, allowing the teacher to guide the student. This is a critical use case because healthcare experts are extremely scarce and unevenly distributed around the world, which highly constrains access to them, especially for trainees in low-resource environments. Our application can potentially help increase access to these experts, since it does not require students to be collocated with their teachers.

3.4. Patching in a Remote Specialist

Another potential use case of our application is patching in a remote specialist while treating a patient in a low-resource environment, similar to what Strak et al. proposed [16]. For example, a paramedic could use our application to quickly and accurately communicate the status of a patient to the supervising physician under whose license they operate, in order to receive guidance or permission to modify their protocol. Because our system allows the physician to see exactly what the paramedic sees, it could help the physician diagnose the problem, determine how to help, and suggest a modified procedure.

3.5. Limitations

One limitation of our application is that the receiver cannot choose what they view; they only see what the streamer is looking at, one RGB-D frame at a time. A potential solution would be a pointer arrow controlled by the receiver that indicates to the streamer where to look. Accumulating the points from the streamed RGB-D frames could also create a more complete virtual representation of the streamer’s environment, which the receiver could see even when the streamer looks elsewhere.
Furthermore, the current price of AR devices is another limiting factor. However, as alluded to in our clinical background discussion, we expect this cost to continue to decrease. As AR technology and network coverage continue to improve, we are hopeful that remote AR training and guidance can be leveraged to counter the shortage of medical experts. We believe that our work contributes to this development and will improve the quality and quantity of medical training in low-resource areas in the long term.

4. Conclusions

We successfully implemented a complete and cross-platform proof-of-concept AR system that enables remote users to teach and train medical procedures. The highlight of the work presented here lies in the high scalability of our standalone approach, which can be easily set up in remote locations and low-resource settings without expensive medical equipment or external sensors.

Author Contributions

Conceptualization, A.H., M.F., L.S. and C.L.; software, A.H. and M.F.; resources, C.L.; writing—original draft preparation, A.H., M.F. and L.S.; writing—review and editing, A.H. and C.L.; visualization, L.S.; supervision, C.L. and H.F. All authors have read and agreed to the published version of the manuscript.

Funding

The AR headsets used for the study were provided by Nakamir Inc.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank the organizers of the MARSS Medical Augmented Reality Summer School 2021 for their provided resources and feedback. Furthermore, we wish to thank Nisal Udawatta and Melinda Xu for their contribution to this project during MARSS.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AR      Augmented reality
FPS     Frames per second
HMD     Head-mounted display
IR      Infrared
RGB     Red, green, blue
SDK     Software development kit
SLAM    Simultaneous localization and mapping
VR      Virtual reality
WHO     World Health Organization

References

  1. Azimi, E.; Winkler, A.; Tucker, E.; Qian, L.; Sharma, M.; Doswell, J.; Navab, N.; Kazanzides, P. Evaluation of Optical See-Through Head-Mounted Displays in Training for Critical Care and Trauma. In Proceedings of the 25th IEEE Conference on Virtual Reality and 3D User Interfaces, Reutlingen, Germany, 18–22 March 2018; pp. 511–512.
  2. Leuze, C.; Zoellner, A.; Schmidt, A.R.; Cushing, R.E.; Fischer, M.J.; Joltes, K.; Zientara, G.P. Augmented reality visualization tool for the future of tactical combat casualty care. J. Trauma Acute Care Surg. 2021, 91, S40–S45.
  3. Ellard, D.R.; Shemdoe, A.; Mazuguni, F.; Mbaruku, G.; Davies, D.; Kihaile, P.; Pemba, S.; Bergström, S.; Nyamtema, A.; Mohamed, H.M.; et al. Can training non-physician clinicians/associate clinicians (NPCs/ACs) in emergency obstetric, neonatal care and clinical leadership make a difference to practice and help towards reductions in maternal and neonatal mortality in rural Tanzania? The ETATMBA project. BMJ Open 2016, 6, e008999.
  4. Snowden, J.M.; Muoto, I. Strengthening the health care workforce in Fragile States: Considerations in the Health Care Sector and Beyond. Health Serv. Res. 2018, 53, 1308.
  5. Silver, L.; Johnson, C. Internet Connectivity Seen as Having Positive Impact on Life in Sub-Saharan Africa. 2018. Available online: https://www.pewresearch.org/global/2018/10/09/internet-connectivity-seen-as-having-positive-impact-on-life-in-sub-saharan-africa/ (accessed on 24 February 2022).
  6. Kato, H.; Billinghurst, M. Marker tracking and HMD calibration for a video-based augmented reality conferencing system. In Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR’99), San Francisco, CA, USA, 20–21 October 1999; pp. 85–94.
  7. Wang, S.; Parsons, M.; Stone-McLean, J.; Rogers, P.; Boyd, S.; Hoover, K.; Meruvia-Pastor, O.; Gong, M.; Smith, A. Augmented reality as a telemedicine platform for remote procedural training. Sensors 2017, 17, 2294.
  8. Harris, T.E.; Delellis, S.F.; Heneghan, J.S.; Buckman, R.F.; Miller, G.T.; Magee, J.H.; Vasios, W.N.; Nelson, K.J.; Kane, S.F.; Choi, Y.S. Augmented Reality Forward Damage Control Procedures for Nonsurgeons: A Feasibility Demonstration. Mil. Med. 2020, 185, 521–525.
  9. Rojas-Muñoz, E.; Lin, C.; Sanchez-Tamayo, N.; Cabrera, M.E.; Andersen, D.; Popescu, V.; Barragan, J.A.; Zarzaur, B.; Murphy, P.; Anderson, K.; et al. Evaluation of an augmented reality platform for austere surgical telementoring: A randomized controlled crossover study in cricothyroidotomies. npj Digit. Med. 2020, 3, 1–9.
  10. Kirkpatrick, A.W.; Tien, H.; Laporta, A.T.; Lavell, K.; Keillor, J.; Wright Beatty, H.E.; McKee, J.L.; Brien, S.; Roberts, D.J.; Wong, J.; et al. The marriage of surgical simulation and telementoring for damage-control surgical training of operational first responders: A pilot study. J. Trauma Acute Care Surg. 2015, 79, 741–747.
  11. Kirkpatrick, A.W.; LaPorta, A.; Brien, S.; Leslie, T.; Glassberg, E.; McKee, J.; Ball, C.G.; Wright Beatty, H.E.; Keillor, J.; Roberts, D.J.; et al. Technical innovations that may facilitate real-time telementoring of damage control surgery in austere environments: A proof of concept comparative evaluation of the importance of surgical experience, telepresence, gravity and mentoring in the conduct of damage control laparotomies. Can. J. Surg. 2015, 58, S88–S90.
  12. Lin, C.; Andersen, D.; Popescu, V.; Rojas-Munoz, E.; Cabrera, M.E.; Mullis, B.; Zarzaur, B.; Anderson, K.; Marley, S.; Wachs, J. A first-person mentee second-person mentor AR interface for surgical telementoring. In Proceedings of the 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Munich, Germany, 16–20 October 2018; pp. 3–8.
  13. Gasques, D.; Johnson, J.G.; Sharkey, T.; Feng, Y.; Wang, R.; Xu, Z.R.; Zavala, E.; Zhang, Y.; Xie, W.; Zhang, X.; et al. ARTEMIS: A collaborative mixed-reality system for immersive surgical telementoring. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–14.
  14. Roth, D.; Yu, K.; Pankratz, F.; Gorbachev, G.; Keller, A.; Lazarovici, M.; Wilhelm, D.; Weidert, S.; Navab, N.; Eck, U. Real-time mixed reality teleconsultation for intensive care units in pandemic situations. In Proceedings of the 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Lisbon, Portugal, 27 March–1 April 2021; pp. 693–694.
  15. Yu, K.; Gorbachev, G.; Eck, U.; Pankratz, F.; Navab, N.; Roth, D. Avatars for teleconsultation: Effects of avatar embodiment techniques on user perception in 3D asymmetric telepresence. IEEE Trans. Vis. Comput. Graph. 2021, 27, 4129–4139.
  16. Strak, R.; Yu, K.; Pankratz, F.; Lazarovici, M.; Sandmeyer, B.; Reichling, J.; Weidert, S.; Kraetsch, C.; Roegele, B.; Navab, N.; et al. Comparison Between Video-Mediated and Asymmetric 3D Teleconsultation During a Preclinical Scenario. In Proceedings of the Mensch und Computer 2021, Ingolstadt, Germany, 4–7 September 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 227–235.
  17. Ungureanu, D.; Bogo, F.; Galliani, S.; Sama, P.; Duan, X.; Meekhof, C.; Stühmer, J.; Cashman, T.J.; Tekin, B.; Schönberger, J.L.; et al. HoloLens 2 research mode as a tool for computer vision research. arXiv 2020, arXiv:2008.11239.
  18. Microsoft HoloLens 2 Research Mode for Unreal Engine. Available online: https://github.com/microsoft/HoloLens-ResearchMode-Unreal (accessed on 24 February 2022).
  19. Wilson, A. Fast Lossless Depth Image Compression. In Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces (ISS ’17), Brighton, UK, 17–20 October 2017.
  20. 3D Skeletal Tracking on Azure Kinect. Available online: https://www.microsoft.com/en-us/research/uploads/prod/2020/01/AKBTSDK.pdf (accessed on 24 February 2022).
Figure 1. Project architecture and breakdown of the streamer and receiver into their components. By seeing the 3D patient with which the streamer is interacting, the receiver can shadow the interaction or provide feedback.
Figure 2. Small image, top left: a trainer (streamer) wearing a HoloLens 2 and looking at a patient. Large image, right: a mixed reality capture of the student’s (receiver’s) first-person view, showing the color point cloud of the patient from the perspective of the trainer wearing the HoloLens 2. The white head model represents the head pose of the trainer. A full video demo (https://youtu.be/oo6A1yKrJFY, accessed on 22 February 2022) also shows the segmentation of the patient.
Figure 3. Average ± standard deviation of the questions asked to five users during a preliminary study to rate the complexity (1 difficult, 7 easy), applicability (1 impractical, 7 practical), realism (1 not realistic, 7 very realistic), and enjoyment (1 boring, 7 entertaining) of the application.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

