Article

NextGen Training for Medical First Responders: Advancing Mass-Casualty Incident Preparedness through Mixed Reality Technology

1 Center for Technology Experience, Austrian Institute of Technology, 1210 Vienna, Austria
2 Department for Artificial Intelligence and Human Interfaces, University of Salzburg, 5020 Salzburg, Austria
3 Energy & Industry 5.0 Division, IDENER, 41300 Sevilla, Spain
4 Department of Nursing & Department of Surgical and Perioperative Sciences, Umeå University, 901 87 Umeå, Sweden
5 Police Education Unit, Umeå University, 901 87 Umeå, Sweden
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2023, 7(12), 113; https://doi.org/10.3390/mti7120113
Submission received: 23 October 2023 / Revised: 20 November 2023 / Accepted: 23 November 2023 / Published: 1 December 2023

Abstract

Mixed reality (MR) technology has the potential to enhance the disaster preparedness of medical first responders in mass-casualty incidents through new training methods. In this manuscript, we present an MR training solution based on requirements collected from experienced medical first responders and technical experts, regular end-user feedback received throughout the iterative design process used to develop a prototype, and feedback from two initial field trials. We discuss key features essential for an effective MR training system, including flexible scenario design, added realism through patient simulator manikins and objective performance assessment. Current technological challenges, such as the responsiveness of avatars and the complexity of smart scenario control, are also addressed, along with the future potential for integrating artificial intelligence. Furthermore, we present an advanced analytics and statistics tool that incorporates complex data integration, machine learning for data analysis and visualization techniques for performance evaluation.

1. Introduction

Medical first responders (MFRs) function as the vanguard in emergency healthcare, frequently serving as the pivotal factor between patient survival and fatality. Their timely interventions play a critical role in emergency situations such as cardiac arrests, car accidents or mass-casualty incidents (MCIs). MCIs are incidents that produce more casualties than the available emergency response services can handle and therefore demand particular organization and management of resources to maintain an adequate level of care. The initial responding team is responsible for conducting triage. In the pre-hospital emergency setting, triage typically involves a rapid assessment for immediately life-threatening conditions based on patients’ respiratory, circulatory and cognitive status and generally takes no longer than one minute per individual. A color-coded triage tag is attached to each casualty to place them in one of the following categories:
  • Category 1 (red): acute, life-threatening condition;
  • Category 2 (yellow): non-life-threatening injuries but urgent treatment required;
  • Category 3 (green): minor or no injuries that do not require ambulance transport;
  • Category 4 (black): deceased or expectant.
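As an illustration, the color-coded categories above can be sketched as a small triage function. This is a hypothetical, START-style sketch; the thresholds and field names are our assumptions, not the exact protocol trained in MED1stMR:

```python
from dataclasses import dataclass
from enum import Enum

class TriageCategory(Enum):
    RED = 1     # acute, life-threatening condition
    YELLOW = 2  # urgent treatment required, not immediately life-threatening
    GREEN = 3   # minor or no injuries
    BLACK = 4   # deceased or expectant

@dataclass
class Casualty:
    breathing: bool
    respiratory_rate: int      # breaths per minute
    capillary_refill_s: float  # seconds
    obeys_commands: bool
    can_walk: bool

def triage(c: Casualty) -> TriageCategory:
    """Assign a color-coded category from a rapid vital-sign assessment."""
    if c.can_walk:
        return TriageCategory.GREEN
    if not c.breathing:
        return TriageCategory.BLACK
    # Respiratory, circulatory or cognitive compromise -> immediate care
    if c.respiratory_rate > 30 or c.capillary_refill_s > 2 or not c.obeys_commands:
        return TriageCategory.RED
    return TriageCategory.YELLOW
```

In a real protocol, the non-breathing branch would include airway repositioning before tagging a casualty black; the sketch omits such steps for brevity.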
MCIs can arise from various sources, including natural disasters like forest fires, floods and earthquakes, which are increasing globally due to climate change [1,2]. Man-made threats such as terrorism; Chemical, Biological, Radiological, Nuclear and high-yield Explosives (CBRNE) threats; and transport accidents also contribute to the growing number of MCIs [3]. This increase in MCIs, the complexity of organizing real-life simulation training and the limited time ambulance personnel have for training due to high workloads call for a reevaluation of current training methods to ensure MFRs have the capacity to handle a broad range of incident types and to manage complex, high-stress situations that require skills beyond mere technical proficiency. The ability to make split-second decisions in highly stressful environments is not just an asset but a necessity in this profession. Any delay can have fatal consequences.
The objective of this research was to investigate how immersive technology can support and potentially enhance training for medical emergency response services. Data were collected through the European Union’s Horizon 2020 research project MED1stMR (https://www.med1stmr.eu/, accessed on 20 November 2023). The main goal of MED1stMR is to enhance the preparedness of MFRs for complex and stressful disaster scenarios by creating a next-generation MR training program that incorporates haptic feedback for increased realism.

1.1. Current Training Methods

Effective training is crucial for disaster preparedness among MFRs. Live full-scale scenario training, considered the gold standard, enables MFRs to practice key competencies for large-scale incidents. However, it presents challenges related to resources and evaluation [4]. Research reveals inadequate MCI scenario training, leading to MFRs feeling unprepared, potentially affecting their response to disasters and patient safety [5].
From a pedagogical perspective, emphasizing the alignment of the chosen method with specific learning objectives is crucial [6] (pp. 17–19). The design process of any training should start with a focus on what we want the trainees to be able to do after the training [7]. The key lies in the concept of alignment. According to the teaching and learning framework of Biggs and colleagues [8], constructive alignment starts with formulating clear learning outcomes (LOs), continues with the design of the teaching and learning activities (the content) that support the LOs, and ends with assessment methods that align with the previous two goals and include feedback and reflection. Put simply, selecting an appropriate method that aligns with the learning outcome is essential for effective learning. Clear learning objectives are pivotal for choosing the right tools and methods to successfully achieve these goals.
Training for MCI scenarios presents its unique challenges, including physical risks for MFRs, resource-intensive requirements, logistical complexities and high costs. To address these shortcomings, the adoption of immersive technologies, such as MR, is emerging as a promising solution. MR technologies provide immersive and realistic simulations, enabling responders to train in a safe and controlled environment. This reduces the risk of physical injury and offers cost-effective and scalable training options [9].
Baetzner et al. [5] highlight significant challenges in current MCI training, a finding supported by the feedback we received. Two critical areas of limitation have been identified: a lack of realism in training scenarios and inadequate post-training evaluation. The absence of realistic environmental hazards and convincing portrayals of casualties hampers effective risk assessment and response, as well as lowering the level of stress trainees experience compared to a serious MCI. Additionally, post-training evaluations are often rushed and lack depth, failing to provide individualized feedback. These limitations have informed the development of the MED1stMR training framework, which aims to address these gaps and better align training with the complexities of real-world MCIs.

1.2. Understanding Mixed Reality (MR)

Mixed Reality (MR) covers a broad spectrum that includes Virtual Reality (VR), Augmented Reality (AR) and various blends of both; it is often also referred to as Extended Reality (XR). To establish a common ground for discussion and terminology, we adhere to the MR framework put forth by Skarbez, Smith and Whitton [10], who span the MR space along the three dimensions of Immersion, Extent of World Knowledge (EWK) and Coherence. The first dimension, Immersion, gauges the system’s ability to reproduce sensory experiences analogous to real life; this concept aligns closely with Slater’s idea of the Place Illusion [11]. In contrast, the Extent of World Knowledge measures the system’s awareness and modeling of its immediate environment and the objects within it. Finally, the third dimension, Coherence, is closely tied to Slater’s Plausibility Illusion, emphasizing the internal consistency and believability of the simulation’s events.
This framework was recently applied and mapped to first responder training by Uhl et al. [12], who especially highlight MR with high immersion and high extent of world knowledge for future immersive skill training for MFRs. Put simply, this combination describes an MR setup for which the simulated experience feels real and the system knows a lot about its surroundings. This blend can be achieved using tools like haptic feedback with real objects and advanced virtual simulations. The resulting simulation enables hands-on skill training as the environment feels genuine and provides essential context for tasks.
The MR training presented in this work aims to achieve high immersion with high environmental realism combined with full-body tracking of multiple users. Additionally, EWK is achieved by integrating realistic medical training manikins into the virtual scenario to enable an embodied triage experience, including feeling a patient’s pulse or respiratory rate.
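A simple way to picture this embodied coupling is a lookup from a virtual patient’s scenario state to the vitals the physical manikin should render. This is a hypothetical sketch for illustration; the state names and numeric values are our assumptions, not the MED1stMR implementation:

```python
# Map a virtual patient's scenario state to the simulated vitals the
# manikin renders, so trainees can feel pulse and breathing during triage.
PATIENT_STATES = {
    "stable":        {"pulse_bpm": 75,  "breaths_per_min": 14},
    "shock":         {"pulse_bpm": 130, "breaths_per_min": 28},
    "deteriorating": {"pulse_bpm": 45,  "breaths_per_min": 8},
    "deceased":      {"pulse_bpm": 0,   "breaths_per_min": 0},
}

def manikin_vitals(state: str) -> dict:
    """Look up the vitals the manikin should render for a given state."""
    try:
        return PATIENT_STATES[state]
    except KeyError:
        raise ValueError(f"unknown patient state: {state!r}")
```

In practice, the scenario engine would update the state over time (e.g., a sudden deterioration triggered by the trainer), and the manikin hardware would animate pulse and chest movement accordingly.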

1.3. Related Work

MFRs are trained by different training methods: for example, lectures, real-life scenario training, discussion-based learning, practical skill training, field visits, debriefings and computer-based learning, including VR and MR training, as highlighted by [5].
In this work, we focus on related work that uses VR, AR or MR for training MFRs. Overall, VR training has recently been used more and more to train the wide range of skills that first responders need: technical skills, psychological skills, communication skills and decision-making skills.
VR provides opportunities for training MCIs and triage situations in a realistic context that are otherwise difficult to train for due to the high resource needs for setting up realistic large-scale exercises (a large number of actors, logistics, etc.). For example, Andretta et al. [13] investigated performance in virtual reality triage training and could not find significant differences in performance between training in VR and in reality. The authors conclude that “Virtual reality can provide a feasible alternative for training EM personnel in mass disaster triage a[...]” while having the benefits of providing a stable, repeatable, flexible on-demand training option.
Schneeberger et al. and Paletta et al. [14,15] compared VR training against a real-world condition. The training was focused on a formalized reporting schema and was targeted towards MFRs and firefighters. The results indicate that VR training induces similar stress as real-world training and thus supports the usage of VR training for stress-resilient decision-making. They used the VROnSite system [16] that combines realistic VR scenarios and a multidimensional treadmill to walk in the environment. Biophysical data were captured using a BIOPAC BioNomadix® system. Zechner et al. [17] describe how stress can be measured during VR training using heart rate and heart rate variability (HRV) with an end-user-friendly chest strap from Zephyr (https://www.zephyranywhere.com/, accessed on 20 November 2023). In our research, bio-signal acquisition tools from PLUX (https://www.pluxbiosignals.com/, accessed on 20 November 2023) were used [18].
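As a minimal illustration of the kind of processing such sensors enable, heart rate and heart rate variability can be derived from the RR intervals (time between successive heartbeats) streamed by a chest strap. The sketch below uses the common RMSSD metric; the sensor-integration details are omitted and the function names are ours:

```python
import math

def rmssd(rr_intervals_ms: list) -> float:
    """Root mean square of successive differences between RR intervals,
    a widely used time-domain HRV metric (lower RMSSD ~ higher stress)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def mean_heart_rate(rr_intervals_ms: list) -> float:
    """Average heart rate in beats per minute from RR intervals in ms."""
    mean_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
    return 60000.0 / mean_rr
```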
According to Baetzner et al. [5], practical skill training has never been the sole objective in disaster response training. However, multiple works have investigated training for concrete practical skills. Koutitas et al. noted the importance of training for muscle memory [19]. They investigated this by using a VR system to train the handling of equipment and tools for EMS first responders. In their study, they analyzed first responder training using AR (with the Microsoft HoloLens) and VR (with an Oculus Rift VR headset). The authors state that AR/VR training outperformed traditional training. Buchner et al. [20] describe VR first aid and reanimation training using the HTC Vive headset; however, their work focused mainly on learning the procedural steps, and the HTC Vive controllers are used for interacting with the virtual manikin. Similarly, Al-Hiyari and Jusoh [21] describe VR training for first aid for patients with seizures using the Oculus Quest headset. Again, no physical manikin is used; instead, virtual persons and objects are manipulated using the Oculus Touch controllers.
To make immersive training more realistic, different approaches have been described that use physical manikins and thus allow realistic, hands-on experience. Scherfgen et al. [22] describe a system that uses the front camera of the VR headset; in their study, they used the Valve Index Head-Mounted Display (HMD) to detect the position of a physical manikin and map that physical position in the immersive training environment. Thus, trainees are able to physically touch a virtual patient. Similarly, Uhl et al. [23] describe an MR system wherein a manikin is integrated into VR using the VARJO XR3 headset and chroma-keying. Their approach also allows the integration of realistic tools in the immersive scenario.
For the remainder of the paper, we discuss methods and materials (Section 2), including data collection in the form of end-user requirements, the agile product development process and finally testing the system in a field trial; the results and discussion section (Section 3) outlines training aspects and technical features that emerged as important through data analysis, including training objectives, scenario design, responsive worlds and performance monitoring; we provide a short outlook on future work (Section 4), where we identify areas requiring further development; finally, we conclude the article (Section 5).

2. Methods and Material

Agile end-user-centered research methodologies were used to collect end-user requirements at the beginning of the project and throughout the MR system’s development process, including several end-user trials and feedback loops.

2.1. End-User Requirements

Initial end-user requirements were collected through training observations with 123 MFRs (47 female, 76 male) and contextual interviews with 30 MFRs and trainers (16 female, 14 male).
To gain a common understanding of MFRs’ experiences and perceptions of current and future MCI disaster training, four end-user organizations demonstrated training sessions based on their distinct curricula and objectives. Servicio Madrileño de Salud (SERMAS) focused on large-scale MCI training emphasizing communication and triage skills. Hellenic Rescue Team’s (HRT) training centered on advanced life support in two different scenarios, including victim extraction from a car. Sanitätspolizei Bern’s (SANO) simulation training covered a range of medical procedures, and Johanniter Österreich Ausbildung und Forschung Gemeinnützige GmbH (JOAFG) offered a blend of case studies and hands-on skills training. These diverse training approaches highlight the multifaceted nature of MFR training needs.
Following the training observations, contextual interviews were conducted with instructors and trainees to gain further insights into individual perspectives on MCI disaster training. Contextual interviews combine classic in-depth interviews with ethnographic observations in the actual context of use [24] and are therefore a valuable tool for capturing processes in their natural settings [25].
Observation and interview guides were developed in collaboration between the researchers within the MED1stMR project. Each interview was recorded digitally and transcribed verbatim by the research team to capture the interview itself as well as the inputs from the translator. Inductive qualitative content analysis was used to process and analyze the gathered material [26,27]. For this article, themes related to “how technologies are used for training purposes” and “how trainees interact with the training materials” have been used.

2.2. Agile Prototype Development

Agile and user-centered design (UCD) methodologies were used to develop the MR solution, allowing for an iterative and flexible approach with a focus on end-user needs [28]. A modular design approach provided the opportunity to test different approaches and features without re-evaluating the overall system design. Individual components and interfaces were developed and refined through iterative integration, and the outcomes were tested with users at varying maturity levels.

The MR Base System

The MR system, provided by Refense (https://www.refense.com/, accessed on 20 November 2023), integrates a wireless headset (HTC Vive Focus 3 (https://www.vive.com/eu/product/vive-focus3/overview/, accessed on 20 November 2023)) and full-body tracking (including sensors on hands, feet and back—see Figure 1) to facilitate a highly immersive training environment. The tracking system accurately mirrors user movements in the virtual world, while the wireless design eliminates physical constraints, enabling trainees to engage in realistic, dynamic learning experiences.
The training field requires an empty space (10 by 10 m) with minimal direct sunlight exposure to not interfere with the tracking cameras securely mounted on the truss surrounding the training field (Figure 2). The training can be observed by both the operator and spectators through designated links: the operator’s station on the left and the spectator’s screen on the right. The system’s easy and efficient setup allows for a potentially mobile training solution.
The operator station for training execution is a powerful control center that enables trainers to manage and control immersive MR training scenarios. It features a high-performance computer system, a specialized MR headset and intuitive controls. Trainers can manipulate virtual environments (VEs), provide real-time guidance to trainees and monitor their progress from different views (top view, perspective from a trainee or free camera position). With hands-on interactions and seamless collaboration, the MR operator station empowers trainers to create dynamic and effective training experiences, enhancing skill development in a safe and engaging mixed reality setting.
Modules developed as part of the MED1stMR research project (patient simulator manikin integration, trainee stress monitoring and performance dashboard application) are discussed in the results section (see Section 3).

2.3. Field Trials

To assess user experience as part of the agile development process, two field trials were organized to receive initial feedback on the current development status. Data were collected through training observations (17 participants: 3 female, 14 male), interviews with 5 trainers (1 female, 4 male), trainer questionnaires (4 participants, all male) and qualitative feedback from 73 trainees (14 female, 59 male). Thematic analysis of observations focused on MR features and interaction with trainees (interaction with avatars and manikins, engagement with the virtual environment, usage of medical equipment (tourniquet) and triage card, and communication with the dispatch center as well as amongst trainees). Interview guides focused on receiving feedback related to realism and immersion, scenario design, stressors, smart scenario control, real-time performance and stress monitoring, and the de-briefing process and tools. Feedback was analyzed, grouped according to features and translated into a backlog, providing actionable insights and technical requirements to further refine the prototype.

3. Results and Discussion

3.1. Training Objectives in XR

Simulation-based training offers immersive experiences designed to replicate real-world scenarios for effective learning. This training is grounded in educational theories like behaviorism, cognitivism and constructivism and aims to transition learners from dependent to self-directed learning [29]. It is particularly beneficial in fields requiring high error sensitivity and complexity and incorporates adult education standards like self-reflective learning, human factors and crew resource management [30]. Each training unit typically consists of an introduction, performance and debriefing, with the latter being crucial for learning outcomes [6].
The integration of technologies like MR into simulation training opens up new avenues for modern didactic methods. The key to maximizing learning outcomes lies in clearly defined learning objectives, which guide the selection of appropriate training methods and tools. Taxonomies like Bloom’s taxonomy [31] are often employed to formulate these objectives, distinguishing different levels of competence and thereby aiding in the customization and evaluation of the training program.
The MED1stMR training framework addresses the following key elements:
1. Objectives and Outcomes: Enhancing situational awareness, resilience and MCI performance.
2. Audience Proficiency: Tailoring to MFRs’ experience levels.
3. Scenario and Tasks: Defining MCI specifics and required actions.
4. Technology and Devices: Selecting the virtual environment, patient simulation manikin, feedback systems and sensors.
5. Feedback and Guidance: Providing real-time cues and post-training assessment.
6. Evaluation Metrics: Assessing performance, cognitive states and satisfaction.
The results from the requirements analysis show that the learning objectives encompass a range of critical skills and competencies necessary for effective response and management in MCIs. They emphasize the importance of communication, coordination and adaptability in handling complex emergency situations. The following core learning objectives were identified:
1. Initial Scene Assessment and Coordination:
  • Identify as the commander.
  • Conduct a scene safety assessment.
  • Establish communication channels.
  • Coordinate with internal and external teams.
  • Organize and categorize the scene into zones (hot, warm and cold).
  • Make informed decisions based on the initial assessment.
  • Reiterate scene safety assessment to adapt to changing circumstances.
MR offers the opportunity to realistically simulate the chaos of an MCI where trainees can practice assuming command roles, establishing communication channels (via virtual radios) and organizing the scene into different zones. This allows for training decision-making in stressful situations based on initial assessments.
2. Triage, Evacuation and Resource Management:
  • Arrange and perform the initial triage.
  • Assign evacuation priority to triaged victims.
  • Continue communication and coordination.
  • Prioritize care and evacuation.
  • Organize logistics for care and transport of uninjured individuals.
  • Efficiently manage internal and external resources.
  • Communicate the arrival of additional resources.
  • Identify access and egress points.
  • Oversee ambulance parking for optimal access.
  • Determine correct hospital destinations and transport modes.
  • Continuously adapt to resource availability and patient needs.
MR provides a controlled setting for practicing initial triage of virtual patients with various injury types. Compared to real-world training, where injuries are often represented through written cards and/or make-up, VEs offer an enhanced level of realism and adaptability in terms of injury progression. Trainees can practice making decisions regarding prioritization of care and evacuation, managing resources and identifying crucial access and exit points in a simulated but realistic environment.
3. Medical Assessment, Treatment and Psychological Support:
  • Rapidly evaluate and categorize injured individuals based on injury severity.
  • Use triage tags effectively for labeling and identifying victims.
  • Communicate critical information to the triage commander.
  • Follow the systematic ABCDE approach for medical assessment and treatment (Airway, Breathing, Circulation, Disability and Exposure).
  • Understand the importance of continuous communication, especially during patient evacuation.
  • Provide psychological first aid to individuals in emotional distress.
  • Closely monitor patient conditions and promptly report deterioration.
  • Explore the effective utilization of “Healthy Green” victims as resources for various essential tasks.
By incorporating technological solutions into the training framework, MFRs can effectively master the core learning objectives, improving their preparedness and response capabilities in challenging MCI scenarios. The technological solutions can also collect data on trainees’ actions and performance in the simulated scenarios in relation to the learning objectives, thus supporting the trainers’ feedback and the joint reflections performed in the debriefing activity of the simulation training.
In MR, trainees can practice medical assessments using the ABCDE approach on virtual patients exhibiting a range of medical conditions. They can also work on providing psychological first aid to virtual agents programmed to show signs of emotional distress. However, it is worth noting that the current state of MR technology presents limitations in the interaction and responsiveness of these virtual patients, as discussed in Section 3.3.1.
To address the specific learning objectives and the technological constraints of the current MED1stMR system, such as the 10 × 10 m movement space for trainees, the training is designed to be modular and is divided into five distinct sequences: Incident Commander Post, Incident Site, Assembly Areas for the Injured, Assembly Area for the Uninjured and Transportation. While these sequences occur almost simultaneously in a real MCI, they will be executed sequentially in the training. Despite this separation, the sequences are interdependent and require automated information sharing, potentially facilitated by instructors, to achieve the learning objectives. The feasibility of running multiple or all sequences concurrently is still under consideration and would necessitate additional trainees and space.
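The sequential execution with information sharing between sequences can be pictured as a simple pipeline in which each sequence enriches a shared context before handing it to the next. This is an illustrative sketch; the handler structure and context keys are our assumptions, not the MED1stMR implementation:

```python
# The five modular training sequences, executed in order; a shared context
# dict stands in for the automated information sharing between them.
SEQUENCES = [
    "Incident Commander Post",
    "Incident Site",
    "Assembly Areas for the Injured",
    "Assembly Area for the Uninjured",
    "Transportation",
]

def run_training(handlers: dict) -> dict:
    """Run each sequence in order, threading a shared context through.

    `handlers` maps sequence names to callables taking and returning the
    context; sequences without a handler are skipped in this sketch.
    """
    context = {"triaged": [], "resources": {}}
    for name in SEQUENCES:
        handler = handlers.get(name)
        if handler is not None:
            context = handler(context)  # each sequence may enrich the context
    return context
```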

3.2. Scenario Design

The creation of training content for the MR training involves close collaboration between medical experts, trainers, designers, developers and content creators. These synergies ensure that the content is accurate, relevant and aligned with the desired learning outcomes. Two MR scenarios have been developed based on the requirements and guidelines collected from end-users, with the overarching goal to (a) prepare trainees to handle triage at the scene of an MCI (e.g., Figure 3) and to (b) test the possibilities and limitations of the Med1stMR training system. These scenarios serve as illustrations of the extensive array of possibilities available in the development of virtual training environments (e.g., earthquakes, flooding, wildfires, train derailment, plane crash, construction site collapse, etc.).

3.2.1. Scenario 1—Country Road Bus Crash

The first scenario is a bus crash MCI (Figure 4), which was chosen as it was frequently mentioned as relevant by end-users as well as in the literature. It allows all relevant learning objectives to be trained and focuses on the tasks of MFRs at the incident site (other first responders such as the police and fire brigade are usually also involved in MCIs). In this training scenario, trainees face a simulated bus crash incident with approximately 20 victims. The location is a country road, and the scenario takes place during daytime. The weather conditions on the scene are cloudy with no rain. To enhance the immersion, sound effects of environmental noises recreate the atmosphere of an actual accident site. The scenario includes a life-size virtual bus; the rescue services have already assessed the intervention zone as safe, and traffic on the road has come to a complete stop to ensure the safety of the trainees and the overall experience. The scenario is designed to challenge trainees to make critical decisions under pressure, assess risks and prioritize their actions to save lives efficiently.

3.2.2. Scenario 2—Tunnel Accident

In this training scenario, trainees are immersed in a simulated bus crash incident that occurred in a tunnel (Figure 5). The visuals and sound effects accurately depict the accident site, including environmental sounds that create the sense of urgency characteristic of a tunnel setting. Trainees face the complexities of rescuing victims and managing resources effectively within the confined space of the tunnel. It is worth mentioning that a tunnel scenario also demonstrates how easily training can take place in environments that are otherwise difficult to make available for training, as they would have to be closed off separately.

3.2.3. Scenario Editor

A feature that was frequently requested by end-users was a user-friendly scenario editor, giving trainers the opportunity to edit scenarios before or during training sessions without the need to involve Unity developers. This feature was regarded as high priority when it comes to technology adoption and can potentially lead to broader and more successful implementation within an organization or first responder domain. In response, a scenario editor was developed for the MED1stMR project, including an asset library and drag-and-drop functionality to adapt training scenarios. This feature gives organizations the opportunity to simulate a wide variety of situations, adapt scenarios to suit the experience level of individual teams or trainees and consider regional requirements.
Two perspectives are available to edit the virtual world from different viewpoints: the top view provides an elevated position to observe the entire world, and a freely movable camera allows trainers to explore the VE from different angles (Figure 5). Once the virtual world is created, it can be saved and revisited for further editing or to utilize it for training purposes.
The scenario editor embeds logic directly into integrated objects—be it vehicles, architectural elements or non-player characters (NPCs)—allowing for user-specified injury assessments and treatments. As a cornerstone of the MED1stMR training system, the editor offers content customization, quick development and user-centered adaptability. It shortens the learning curve necessary for innovative technology for trainers and enables cost-effective, iterative enhancements. Its versatility and efficiency are key to its broad adoption and ultimately contribute to successful training and organizational outcomes.
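One way to support the save-and-revisit workflow described above is to serialize an edited scenario to a plain file. The sketch below is a hedged illustration of such a data model; the schema (object types, injury fields) is an assumption for explanatory purposes, not the editor’s actual format:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class SceneObject:
    asset: str        # e.g. "bus_wreck", "npc_casualty" (illustrative names)
    position: tuple   # (x, y, z) in metres within the virtual environment
    injuries: list = field(default_factory=list)  # only used for NPCs

@dataclass
class Scenario:
    name: str
    objects: list = field(default_factory=list)

    def save(self, path: str) -> None:
        """Serialize the scenario to JSON for later editing or training."""
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)

    @staticmethod
    def load(path: str) -> "Scenario":
        """Reload a previously saved scenario from disk."""
        with open(path) as f:
            data = json.load(f)
        objs = [SceneObject(**o) for o in data["objects"]]
        return Scenario(name=data["name"], objects=objs)
```

Note that JSON round-tripping turns tuples into lists; a production editor would likely use a richer schema with versioning and validation.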

3.2.4. Stress Factors in Scenarios for MFRs

A critical aspect of simulation training, particularly in high-stakes fields like emergency response, is stress exposure. MCIs are inherently stressful environments [32]. However, certain events and environmental factors have the potential to amplify the amount of stress MFRs experience during both real deployment as well as simulated training scenarios.
For the scenarios developed within this project, the following stressors were identified by MFRs:
Environmental hazards:
  • Electricity (e.g., broken lamp post);
  • Unidentifiable liquids at the site (e.g., oily liquid on the floor);
  • Changes in weather (e.g., thunderstorm clouds approaching);
  • Electrical light changes (e.g., flickering light in tunnel or building).
Patients:
  • Increased number of casualties (e.g., newly arriving at the site);
  • Sudden deterioration of a patient.
Distractions:
  • Loud noises (e.g., alarms, ambulance noise);
  • Healthy “green” persons, bystanders or children and other family members at the incident site or assembly area for the injured.
Determining the ideal stress level to enhance learning performance in simulation training is challenging, given that it differs among individuals. As a result, tailored stress monitoring and scenario adjustment are crucial steps towards more-individualized training.

3.2.5. Smart Scenario Control

Smart scenario control is an advanced feature that allows for dynamic adjustments to the difficulty and complexity of training scenarios based on real-time data. These data can include physiological markers such as stress or performance metrics such as triage accuracy.
For instance, if a trainee exhibits low stress, the system might suggest or automatically increase the complexity of the scenario to present a more challenging environment. In the MED1stMR system, a library of stress cues (see Section 3.2.4) has been developed, comprising audio-only cues (e.g., a dog barking as a moving sound source) and audio–visual cues (e.g., a lost child wandering around the incident site). The integration of wearable technology provides a continuous stream of data, allowing trainers to gain insight into trainees' physiological responses during simulations. The ultimate goal of the smart scenario control feature is to create a tailored training experience that optimally prepares MFRs for high-pressure, real-world situations, ensuring that trainees are consistently stimulated without being overwhelmed.
Conducting simulation training in MR settings places a significant cognitive load on trainers because of the multitude of tasks they must manage simultaneously. This complexity is further amplified when multiple trainees are involved in a single training session, requiring the trainer to juggle various roles, from monitoring performance metrics to providing real-time feedback and ensuring trainees' safety.
Given these challenges, there is a need for features that assist trainers in tailoring the VEs, either through suggested adaptations or full automation. Notably, end-users were skeptical about fully automating VE adaptations, citing the risk of inadvertently overexposing trainees to stress due to errors such as faulty signals or inaccurate baseline measurements. However, they appreciated the semi-automation of certain processes and system recommendations that alleviate the cognitive burden of selecting stressors from a library.
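As an illustration of the semi-automated support end-users favored, the following sketch recommends, rather than automatically applies, a stress cue from a library whenever the measured stress score drifts out of a target band. All names, thresholds and the cue library here are hypothetical and purely illustrative; they are not the MED1stMR implementation.

```python
# Hypothetical smart scenario control sketch: recommend a stress cue when the
# trainee's stress score leaves a target band; the trainer makes the final call.
STRESS_CUE_LIBRARY = [
    {"name": "dog barking (moving sound source)", "modality": "audio", "intensity": 1},
    {"name": "flickering tunnel light", "modality": "visual", "intensity": 2},
    {"name": "lost child at incident site", "modality": "audio-visual", "intensity": 3},
]

def recommend_cue(stress_score, target_low=0.4, target_high=0.7, active_cues=()):
    """Return a recommendation dict, or None if stress is within the target band."""
    if stress_score < target_low:
        # Under-stimulated: suggest the mildest cue that is not already active.
        for cue in sorted(STRESS_CUE_LIBRARY, key=lambda c: c["intensity"]):
            if cue["name"] not in active_cues:
                return {"action": "add", "cue": cue["name"]}
    elif stress_score > target_high and active_cues:
        # Over-stimulated: suggest removing the most intense active cue.
        active = [c for c in STRESS_CUE_LIBRARY if c["name"] in active_cues]
        strongest = max(active, key=lambda c: c["intensity"])
        return {"action": "remove", "cue": strongest["name"]}
    return None
```

Keeping the return value a suggestion rather than a direct scenario mutation mirrors the end-users' preference for semi-automation over full automation.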

3.3. Responsive Worlds

The responsiveness of the VE is pivotal in shaping the trainee's perception of realism. Adherence to real-world rules and dynamics within the simulation not only enhances immersion but also contributes to the effectiveness of the training. Whether it is the time required to complete certain tasks or the spatial dimensions within the scenario, these elements collectively make the training experience more authentic and, consequently, more valuable for skill development. While features like 'teleporting' in MR offer convenient navigation, they can also distort the trainee's perception of the time and effort required to reach a specific location in the scenario, thereby affecting the training's realism [33]. In our field trials, the virtual space used for the scenario was constrained to the real space available (10 × 10 m), and therefore no teleportation was needed.
Responsive elements are another vital component of an immersive and effective training experience [34]. Whether actual equipment or virtual tools are used in MR, the time it takes to apply them (e.g., a tourniquet) and the time it takes to observe the appropriate response (e.g., the bleeding stops) should mirror real-world scenarios. This ensures trainees are prepared for the time-sensitive nature of such critical medical interventions.
In the field trials, the responsiveness of elements, including virtual patient avatars, was one of the most discussed features, reflecting its importance for MFR training. In our prototype, for example, a wound stopped bleeding immediately after a tourniquet was applied, which does not resemble reality, and trainers requested a delay before the bleeding stops. The portrayal of vital signs and the possibility to check them was another important aspect. Although the pulse and breathing rate, for example, could be felt on the avatars represented by manikins, some avatars were only virtual. Field trial participants emphasized the importance of receiving feedback when checking the pulse or breathing, even if it is only displayed as a pop-up.

3.3.1. Responsive Agents

When MFRs approach a patient, particularly one with head injuries, the patient’s verbal response serves as a critical component of the medical assessment and is essential to determine the severity of the injury and the appropriate course of action. A timely and realistic response to a question can provide valuable insight into the patient’s cognitive function and overall condition.
In VEs, realistic communications have so far been achieved through role-playing, where the trainer or a dedicated role-player acts as the avatar [35]. However, in an MCI scenario with more than twenty patient avatars, this becomes resource-intensive, as a single trainer or even a dedicated role-player cannot voice-act so many different avatars at the same time. Creating solutions that are more efficient than real simulation exercises is one of the major goals of mixed reality training; therefore, alternative solutions to role-playing have been explored.
In our current prototype, each patient avatar has a one-minute looped audio file, which provides clues to its individual condition. This was reported as one of the major limitations of the current version of the system. Participants reported limited interaction capabilities, which do not yet match the experience of interacting with real humans acting as patients. In several instances, there was a notable lack of response from the virtual patients to the actions of the participants, emphasizing the need for more-refined interactivity.
With recent advancements in artificial intelligence (AI), there is significant potential to address these limitations and enhance the realism and interactivity of virtual patients. AI-driven conversational agents could be programmed to provide dynamic, context-sensitive responses that adapt to the actions and queries of the MFRs. These virtual patients could be designed to exhibit a range of symptoms and responses that are medically accurate and situationally appropriate, thereby offering a more authentic training experience. Alternatively, AI algorithms could be employed to manage multiple patient avatars simultaneously, thereby reducing the resource burden associated with role-playing in large-scale MCI scenarios. This would allow for a more efficient and scalable training environment.

3.3.2. Realism and Haptics

Haptics play a crucial role in enhancing realism in MR training, particularly for MFRs, who often rely on tactile feedback in real-world scenarios. The incorporation of haptic technology allows trainees to experience physical sensations that mimic the textures, temperatures and resistances they would encounter during actual medical procedures. For example, the sensation of applying a tourniquet, performing cardiopulmonary resuscitation (CPR) or even suturing a wound can be realistically simulated with the help of specialized technologies such as patient simulator manikins.
Medical Patient Simulator Manikin: In the MED1stMR prototype, the patient simulator manikin ADAM-X (https://medical-x.com/product/adam-x/, accessed on 20 November 2023) enhances the immersive training experience, offering a hands-on tool for practicing skills that benefit from tactile feedback, such as checking vital signs, intubation or resuscitation. This manikin is a detailed replica of human skeletal and anatomical features. It reacts to treatments and displays medical data like blood pressure, heart rate, respiratory patterns, chest movements, facial and finger cyanosis, pulse, electrocardiogram (ECG), CPR metrics and various physiological reactions like sweating and tearing.
In the field trial, the manikin's presence in MR added a layer of realism for many users. Being able to touch the manikin while engaging in an MR scenario helped bridge the gap between the virtual and real worlds. However, the feedback also suggested that the use of a manikin, particularly one as sophisticated as ADAM-X, is not necessary for all training modules. For instance, when the focus is on practicing first triage and communication skills, the manikin may not add significant value. Its importance becomes evident, however, when training specific medical procedures such as intubation or resuscitation, where tactile feedback and realistic anatomical features are crucial for skill acquisition and retention. This underscores the importance of identifying clear training goals prior to setting up training scenarios to ensure the technology employed aligns with the skills being taught.
Medical Tools for Treatment: Our prototype includes virtual medical tools such as tourniquets and hemostatic dressings, allowing trainees to practice medical procedures in a controlled setting and thus enhancing their proficiency and efficiency. Field trial participants appreciated the easy-to-use tools in the current prototype but also identified some evident gaps: they requested tools unrelated to medical treatment that would help them document information on individual patients and make notes as they would in the real world. In a related study [23], the use of real medical tools in MR was tested and found to increase the level of immersion.
Triage Card System: The inclusion of a virtual triage card system (see Figure 6) allows trainees to prioritize patients based on the severity of their injuries, mimicking real-world triage procedures. This feature not only adds an additional layer of realism but also helps trainees practice critical decision-making skills.

3.4. Performance Monitoring and Management

Performance monitoring and management are integral components of modern training systems. These elements not only enhance the learning experience but also provide valuable data for continuous improvement.

3.4.1. Real-Time Performance Monitoring

Real-time performance monitoring enables immediate feedback and adaptation. In MR training environments, sensors and software analytics can track a trainee's actions, decision-making processes and physiological responses. These real-time data can be displayed to instructors, allowing them to intervene or provide guidance as scenarios unfold. In our prototype, the real-time metrics focus on displaying stress levels and the timing and accuracy of triage card application, as these metrics can be used to interact with trainees during the scenario. For instance, if the trainer sees that trainees are not challenged enough (moving smoothly through the scenario, triaging each patient efficiently and accurately, and displaying no signs of stress), the trainer can apply pressure by demanding status updates or test their situational awareness by inserting distractions.

3.4.2. After Action Review

The After Action Review (AAR) is an essential follow-up to any training session. It provides a structured format for trainees and instructors to discuss what happened, why it happened and how it can be improved [30]. In the context of MR training, the debriefing process can be significantly enhanced through the use of recorded data and analytics. Objective performance indicators, such as time taken for triage or accuracy in medical procedures, can be reviewed and discussed. This data-driven approach adds an extra layer of depth to the review process, allowing for more targeted and effective feedback.
The assessment criteria for the field trials focused on several key competencies. These include the ability to efficiently organize and coordinate tasks at an MCI scene and to carry out triage based on agreed-upon algorithms within an appropriate time frame. MFRs were also expected to accurately identify patients' vital parameters and make correct medical decisions, such as applying tourniquets or placing patients in the recovery position. Communication skills were assessed based on the MFRs' ability to convey a clear picture of the incident scene to the medical commander, share crucial information within the team and interact effectively with injured individuals. Continuous risk assessment is another critical competency, requiring MFRs to regularly evaluate the intervention zone for potential risks and to take appropriate actions.
The MED1stMR debriefing system consists of two major parts: (a) the MR After Action Review system, which is further enhanced by (b) the MED1stMR Analytics and Statistics (MAS) Tool, described in Section 3.4.3.
In the MR AAR system, previous scenarios can be loaded and replayed in a three-dimensional view. System navigation is facilitated through an intuitive interface employing standard keyboard and mouse inputs for user-friendly operation. The “w” and “s” keys enable seamless zooming capabilities, while lateral movement is controlled via the “a” and “d” keys. Camera angles (top view, trainee view and free movement) can be adjusted with the right mouse button. Performance indication metrics and analytics are processed in the MAS tool.

3.4.3. MED1stMR Analytics and Statistics Tool (MAS)

The MAS User Interface provides access to the following training performance metrics visualized on a dashboard display as seen in Figure 7 and Figure 8:
  • Total Triages reflects the sum of triage processes executed.
  • Minimum, Maximum and Average Triage Duration (s) portray the shortest, longest and mean time, respectively, taken for triage processes.
  • Triage Accuracy assesses the precision in identifying pre-assigned colors of NPCs and aims to gauge simulation clarity over trainee performance.
  • Total Training Time denotes the entire duration spent in the scenario.
  • Identified Stressors table delineates high-stress periods with details like timestamps, anomaly confidence and severity, identified stressor and stressor confidence, which evaluate the certainty and significance of stress alterations and their causes.
  • Main Stressor highlights the primary source of severe stress.
  • Stress During Training quantifies stress levels at each second, derived from heart rate and heart rate variability versus baseline measurements.
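A minimal sketch of how the triage metrics listed above could be derived from a flat list of triage event records; the field names and data layout are assumptions for illustration, not the actual MAS schema.

```python
# Illustrative derivation of dashboard triage metrics from event records.
# Each event is assumed to carry the triage duration in seconds, the colour
# assigned by the trainee and the colour pre-assigned to the NPC.
def triage_statistics(events):
    """events: list of dicts with 'duration_s', 'assigned' and 'expected' keys."""
    durations = [e["duration_s"] for e in events]
    correct = sum(1 for e in events if e["assigned"] == e["expected"])
    return {
        "total_triages": len(events),
        "min_duration_s": min(durations),
        "max_duration_s": max(durations),
        "avg_duration_s": sum(durations) / len(durations),
        "triage_accuracy": correct / len(events),
    }
```

In the real pipeline, such aggregates would be computed by the Python-based processing container and written to Apache Druid for dashboard queries.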
The MAS tool is a key component for next-generation MR training systems and encapsulates machine learning model development and data processing pipelines within the MED1stMR framework, fostering an advanced, adaptive environment. The MAS Tool’s data architecture adheres to the Lambda Architecture [36], a robust data management system tailored for handling extensive data. It is segmented into three core layers: the batch layer for historical data, the speed layer for near-real-time processing and analytics and the serving layer for data access. This architecture, central to the integration framework’s storage, aids training session debriefing and performance analysis and is supported by security and monitoring services to ensure accurate, up-to-date data views with essential flexibility and fault tolerance. See Figure 9 for a detailed depiction of the architecture services and connections.
The MAS data architecture features a batch layer dedicated to managing historical data, using batch processing techniques to ensure data accuracy and completeness. This layer integrates the Hadoop Distributed File System (HDFS) (https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html, accessed on 20 November 2023) for robust storage and high-throughput access, Jupyter Notebook (https://jupyter.org/, accessed on 20 November 2023) for scripting and data extraction from HDFS using Apache Spark (https://spark.apache.org/, accessed on 20 November 2023), and Apache ZooKeeper (https://zookeeper.apache.org/, accessed on 20 November 2023), deployed in three instances within the MED1stMR architecture for fault tolerance and distributed system coordination. Through these integrations, the batch layer facilitates efficient data retrieval, processing and analytics, significantly contributing to the data management system.
The Stream services layer within the MAS Tool data architecture handles near-real-time data processing and updates through a variety of tools. Apache Kafka (https://kafka.apache.org/, accessed on 20 November 2023) facilitates high-throughput streaming and complex stream processing with three brokers for resilience. Kafka Connect, deployed twice, mediates data streams between Kafka and other systems, while Kafka REST provides remote cluster access. KSQL aids real-time analytics through a stream processing SQL engine, and AKHQ offers a web-based interface for managing Apache Kafka.
The Serving services layer in the MED1stMR data architecture indexes and serves processed data from both the batch and stream layers using a variety of tools. Apache Druid (https://druid.apache.org/, accessed on 20 November 2023) facilitates real-time analytics and organizes data into segments for swift queries. Trino (https://trino.io/, accessed on 20 November 2023), a distributed SQL query engine, manages big data processing and analytics by distributing query tasks across a cluster. Apache Superset allows data visualization and exploration from various sources and supports dashboard creation and sharing. Apache ZooKeeper manages distributed applications within the Druid cluster. Together, these tools enhance the querying, analysis and visualization of large data volumes, augmenting system efficiency and versatility.
As illustrated in Figure 9, the automated processing pipeline activates upon training data upload to the MAS framework (A). Data are uploaded to Apache Kafka, the data platform's communication hub, and are then processed by a Python-based container (B) to generate training statistics and identify stressors via AI-driven anomaly detection tools. Following this, both raw and processed data are housed in HDFS (C), leveraging its native replicability features, before automated tasks structure these data within Apache Druid (D) for efficient querying. The platform yields multiple outputs:
  • Grafana (E): Produces graphical visualizations of training statistics, biosignals and stressor analyses using processed data.
  • Trino (F): Grants fine-grained data access to authorized users, aiding researchers in downloading both raw and processed data.
These outputs are accessible through dedicated sections in the MAS User Interface, which simplifies research activities and debriefing for first responders by enabling visualization and direct querying of various raw and processed data tables like ClientData, NPCData, training statistics and identified stressors, among others. Output samples can be found in the Supplementary Materials Tables S1–S9.
The Security and Monitoring services enhance data integrity, control and insight within the system in order to augment reliability, efficiency and transparency. Security is bolstered by Apache Ranger (https://ranger.apache.org/, accessed on 20 November 2023), which centralizes administration, refines authorization and provides auditing capabilities. Monitoring is facilitated through Prometheus (https://prometheus.io/, accessed on 20 November 2023), which delivers real-time insights and alerts, alongside Grafana (https://grafana.com/, accessed on 20 November 2023), a platform for data visualization and analytics that interfaces seamlessly with Prometheus for enhanced metrics analysis and alerting.

3.4.4. Data Management and Integration

Within the MED1stMR project, proficient data management, leveraging the MAS Tool and a Protected SharePoint Server hosted by the Austrian Institute of Technology (AIT), is crucial for integrity and efficiency. This section outlines the data flow and storage structure; the ensuing sections address data privacy and security protocols. The MED1stMR training solution, comprising the Refense MR training system, the manikin ADAM-X and the PLUX biosignal sensor system (https://www.pluxbiosignals.com/, accessed on 20 November 2023), generates essential debriefing data. The coordination of these subsystems, each with its own data formats and temporal resolutions, is detailed below with a focus on data flow.
  • Refense MR training system: Provides data on motion of tracked objects and trainees, including position, rotation, status, audio recordings, voice clips, and event records.
  • Manikin ADAM-X: Coordinates patient simulation, transmitting medical data like blood pressure, heart rate, respiratory rate and CPR metrics to the VR system.
  • Wearable PLUX biosignal sensor system: Supplies live streams of raw sensor data and processed features like stress score, encompassing ECG, heart rate (HR) and electrodermal activity (EDA).
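Because the three subsystems deliver data at different rates and temporal resolutions, their streams must be brought onto a common timeline before joint analysis. A minimal, purely illustrative sketch (the actual Refense/MAS data handling may differ) aligns a sparse stream of timestamped samples to a per-second grid by carrying the latest value forward:

```python
# Align a sparse stream of (timestamp_s, value) samples to a one-value-per-
# second timeline, carrying the last observed value forward between samples.
def align_to_seconds(stream, t_end):
    """stream: list of (timestamp_s, value) tuples sorted by time.
    Returns one value per second for t = 0 .. t_end inclusive."""
    aligned, last, i = [], None, 0
    for t in range(t_end + 1):
        # Consume all samples that occurred at or before second t.
        while i < len(stream) and stream[i][0] <= t:
            last = stream[i][1]
            i += 1
        aligned.append(last)
    return aligned
```

Once each stream (manikin vitals, biosignals, event records) shares this per-second grid, they can be joined by timestamp, which is the precondition for the per-second stress metric and the stressor tables described in Section 3.4.3.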
The Refense Framework, acting as the central generation hub, collects and exports training data, prioritizing low latency for real-time virtual reality training. Post-scenario, data are retained in the Refense system for the After Action Review; then, structured data are moved to the MAS Tool for long-term storage and AI analytics, while non-structured data are transferred to the AIT’s (https://www.ait.ac.at/, accessed on 20 November 2023) Protected Sharepoint Server (PSS). The data flow is depicted in Figure 10.
Idener’s (https://idener.ai/, accessed on 20 November 2023) MAS Tool acts as the integration framework for structured training scenario data and aims to enhance debriefing and intelligent scenario management through AI-powered analytics. To identify unusual patterns among the manikin, trainee, and event data within simulations, the MAS tool employs Multi-Scale Convolutional Recurrent Encoder–Decoder (MSCRED), which is a robust machine learning model designed for anomaly detection in time-series data [37]. MSCRED’s multi-scale feature enables data analysis at various scales, which is crucial for medical scenarios to uncover anomalies. Its convolutional aspect, derived from Convolutional Neural Networks (CNNs), processes time-series data akin to one-dimensional “images”. The recurrent encoder–decoder structure, leveraging Recurrent Neural Networks (RNNs), processes input data to predict anomalies and aptly handles sequential time-series data.
MSCRED learns 'normal' interaction patterns between data streams during the training phase, after which it flags deviations in new data as anomalies. Its proficiency in minimizing false positives stems from a reconstruction-based approach, which recreates interactions based on 'normal' patterns for reliable detection. The architecture encompasses an encoding phase with 1D convolutional layers, a temporal modeling phase in which the encoded outputs are processed by a gated recurrent unit (GRU) layer, and a decoding phase that reverses the convolutional process through 1D deconvolutional layers. Preliminary results, shown in Figure 11, depict matrix reconstruction errors over time, suggesting root causes for the detected anomalies. The challenge of threshold optimization in real scenarios is acknowledged; future model fine-tuning may incorporate trainers' criteria and ignore minor anomalies to reduce noise. MSCRED represents a notable advancement in anomaly detection, with its detailed neural network design and methodological approach laying a foundation for further refinement and cross-domain adaptability.
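The core idea behind MSCRED's reconstruction-based approach can be illustrated in a drastically simplified form: pairwise "signature matrices" capture how data streams co-vary within a time window, and deviation from the signature observed under normal conditions is scored as an anomaly. The sketch below replaces the convolutional recurrent encoder-decoder with a direct matrix comparison and is illustrative only, not the MSCRED model itself:

```python
# Simplified illustration of reconstruction-based anomaly scoring on
# signature matrices (pairwise inner products between data streams).
def signature_matrix(window):
    """window: list of equal-length series; returns the pairwise inner-product
    matrix normalized by window length."""
    n = len(window)
    return [[sum(a * b for a, b in zip(window[i], window[j])) / len(window[i])
             for j in range(n)] for i in range(n)]

def anomaly_score(normal_window, test_window):
    """Sum of squared element-wise differences between the signature matrix of
    a known-normal window and that of a new window (a Frobenius-style error)."""
    ref, cur = signature_matrix(normal_window), signature_matrix(test_window)
    return sum((ref[i][j] - cur[i][j]) ** 2
               for i in range(len(ref)) for j in range(len(ref)))
```

In MSCRED proper, the reference signature is not a stored matrix but the output of the learned encoder-decoder; a window scores as anomalous when the model fails to reconstruct its signature matrices accurately.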
The MAS Tool also incorporates multiple security measures to ensure data confidentiality, integrity, and availability. It is hosted on a remote server with restricted access to authorized Idener team members. The server limits exposure to potential attack vectors by keeping all ports closed except for the Web User Interface (Web UI). User access control is defined across different profiles, with varying levels of access for administrators, trainees, trainers and researchers managed by Apache Ranger, which also monitors access through auditing. Secure credential transmission is employed using multi-channel authentication methods, reinforcing security within this data integration framework in the project’s scope.

4. Future Work

The initial feedback collected from this study provides valuable insights into the areas requiring further development and refinement in MR training for MFRs. In future work, we plan to pursue the following avenues.
Enhanced Responsiveness through AI Integration: One of the primary challenges that emerged from the field trials was the limited interactivity and responsiveness of the virtual agents. To address this, we plan to integrate AI algorithms to create more responsive and realistic virtual agents. This will be assessed in subsequent field trials.
Comparative Analysis of Different Realities: Another area of interest is to dive deeper into understanding the user experience across different training environments: VR, MR and traditional real-world simulation training. Comparative studies will be conducted to evaluate the efficacy and user engagement in these various settings.
Smart Scenario Control and Performance Metrics: The current smart scenario control feature is limited in its automation capabilities. Future work will focus on enhancing this feature by incorporating enriched data and performance metrics to provide a more comprehensive and automated training environment.
Longitudinal Studies: Lastly, there is a need for longitudinal studies to compare the performance of MFRs trained with and without MR systems to gain a more robust understanding of the long-term benefits and effectiveness of MR-based training in preparing MFRs for MCIs.

5. Conclusions

In this article, we describe an innovative MR system for MCI training with MFRs based on end-user requirements, feedback collected through the agile development process of a prototype and two field trials. To form a foundation for the development of technical features, we investigate current training methods, different forms of immersive technology and the current status of VR/MR training for MFRs. Based on feedback from end-user organizations, common training objectives are established for this study, and we discuss how MR can support these objectives.
We discuss scenario design with two specific MCI examples, potential stress factors within these scenarios, and how a scenario editor and smart scenario control feature can make scenarios more flexible and personalized to individual trainees. Furthermore, we discuss the importance of responsive worlds, especially in MFR training, and highlight major contributing factors, such as communication with patient avatars, the realistic response of medical tools and patient simulator manikins that increase trainees' immersion through haptics. Finally, the importance of objective performance assessment is highlighted, and we discuss the debriefing advantages of MR features over real-world simulation training based on the integration of analytical tools, including the highly complex data management and integration that such tools entail.
MR technology is gaining prominence in the training industry, creating a need for research on end-user requirements and features. With this article, we aim to provide a framework for development based on insights from industry professionals, transferred into a user-centered system design approach.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/mti7120113/s1, Table S1: Client Data (position data); Table S2: Event Data; Table S3: Event Types enumeration; Table S4: NPC Data; Table S5: Triage Colour enumeration; Table S6: Biosignals Data; Table S7: Manikin Medical Data; Table S8: Training Statistics; Table S9: Identified Stressors.

Author Contributions

Conceptualization, O.Z., H.S.-F. and D.G.G.; methodology, O.Z., L.G. and D.S.; software, O.Z., H.S.-F. and D.G.G.; validation, O.Z., L.G., D.S., J.C.U. and G.R.; formal analysis, O.Z., J.C.U., L.G. and D.S.; resources, O.Z., H.S.-F. and D.G.G.; data curation, O.Z., H.S.-F., J.C.U. and L.G.; writing—original draft preparation, O.Z., D.G.G., L.G. and D.S.; writing—review and editing, H.S.-F., G.R. and M.T.; visualization, O.Z.; supervision, M.T.; project administration, H.S.-F.; funding acquisition, M.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the European Union's Horizon 2020 Research and Innovation Programme (grant number: 101021775). The content reflects only the authors' view. The Research Executive Agency and the European Commission are not liable for any use that may be made of the information contained herein.

Institutional Review Board Statement

All the studies within the MED1stMR project were approved by the Ethics Committee of the University of Heidelberg.

Informed Consent Statement

Informed consent was obtained from all subjects involved in all studies conducted within the MED1stMR project.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We would like to express our special thanks to our academic partners (Austrian Institute of Technology, Umeå University, Ruprecht-Karls-Universität Heidelberg, and University of Bern), technology partners (Refense and IDENER) and, most importantly, end-user organizations: SERMAS—Servicio Madrileño de Salud, HRT—Hellenic Rescue Team, SANO—Sanitätspolizei Bern, JOAFG—Johanniter Österreich Ausbildung und Forschung Gemeinnützige GmbH, Region Jämtland Härjedalen and Campus Vesta, who made the project and the research studies possible.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AAR	After Action Review
AI	Artificial Intelligence
AR	Augmented Reality
AIT	Austrian Institute of Technology
CBRNE	Chemical, Biological, Radiological, Nuclear, and high-yield Explosives
CPR	Cardiopulmonary Resuscitation
CNN	Convolutional Neural Networks
ECG	Electrocardiogram
EDA	Electrodermal Activity
ER	Extended Reality
EWK	Extent of World Knowledge
HDFS	Hadoop Distributed File System
HMD	Head-Mounted Display
HR	Heart Rate
KPI	Key Performance Indicator
ML	Machine Learning
MCI	Mass Casualty Incident
MAS	MED1stMR Analytics and Statistics Tool
MFRs	Medical First Responders
MSCRED	Multi-Scale Convolutional Recurrent Encoder-Decoder
MR	Mixed Reality
NPC	Non-Player Character
PSS	Protected SharePoint Server
RNN	Recurrent Neural Network
UCD	User-Centered Design
VAS	Visual Analogue Scale
VE	Virtual Environment
VR	Virtual Reality
Web UI	Web User Interface

References

  1. European Commission. Overview of Natural and Man-Made Disaster Risks the European Union May Face; Technical Report; Publications Office of the European Union: Luxembourg, 2021. [Google Scholar]
  2. Feyen, L.; Ciscar, J.; Gosling, S.; Ibarreta, D.; Soria, A. Climate Change Impacts and Adaptation in Europe: JRC PESETA IV Final Report; Technical Report 30180; Publications Office of the European Union: Luxembourg, 2020. [Google Scholar]
  3. Europol. European Union Terrorism Situation and Trend Report; Technical Report; Publications Office of the European Union: Luxembourg, 2021. [Google Scholar]
  4. Shubeck, K.T.; Craig, S.D.; Hu, X. Live-action mass-casualty training and virtual world training: A comparison. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2016, 60, 2103–2107. [Google Scholar] [CrossRef]
  5. Baetzner, A.S.; Wespi, R.; Hill, Y.; Gyllencreutz, L.; Sauter, T.C.; Saveman, B.I.; Mohr, S.; Regal, G.; Wrzus, C.; Frenkel, M.O. Preparing medical first responders for crises: A systematic literature review of disaster training programs and their effectiveness. Scand. J. Trauma, Resusc. Emerg. Med. 2022, 30, 76. [Google Scholar] [CrossRef] [PubMed]
  6. Rall, M.; Oberfrank, S. Simulation-Was ist das überhaupt? In Handbuch Simulation; Bürger-Verlag GmbH: Edewecht, Germany, 2016; pp. 17–32. [Google Scholar]
  7. Biggs, J.; Tang, C. Train-the-Trainers: Implementing Outcomes-based Teaching and Learning in Malaysian Higher Education. Malays. J. Learn. Instr. 2011, 8, 1–19. [Google Scholar] [CrossRef]
  8. Biggs, J.; Tang, C.; Kennedy, G. Teaching for Quality Learning at University 5e, 5th ed.; McGraw-Hill Education (UK): Maidenhead, UK, 2022. [Google Scholar]
  9. Gout, L.; Hart, A.; Houze-Cerfon, C.H.; Sarin, R.; Ciottone, G.R.; Bounes, V. Creating a Novel Disaster Medicine Virtual Reality Training Environment. Prehospital Disaster Med. 2020, 35, 225–228. [Google Scholar] [CrossRef] [PubMed]
  10. Skarbez, R.; Smith, M.; Whitton, M.C. Revisiting milgram and kishino’s reality-virtuality continuum. Front. Virtual Real. 2021, 2, 647997. [Google Scholar] [CrossRef]
  11. Slater, M. Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philos. Trans. R. Soc. Biol. Sci. 2009, 364, 3549–3557. [Google Scholar] [CrossRef] [PubMed]
  12. Uhl, J.C.; Regal, G.; Schrom-Feiertag, H.; Murtinger, M.; Tscheligi, M. XR for First Responders: Concepts, Challenges and Future Potential. In Proceedings of the Virtual Reality and Mixed Reality: 20th EuroXR International Conference, EuroXR 2023, Rotterdam, The Netherlands, 29 November–1 December 2023; Proceedings. Springer Nature: Berlin/Heidelberg, Germany, 2023. [Google Scholar]
  13. Andreatta, P.B.; Maslowski, E.; Petty, S.; Shim, W.; Marsh, M.; Hall, T.; Stern, S.; Frankel, J. Virtual reality triage training provides a viable solution for disaster-preparedness. Acad. Emerg. Med. 2010, 17, 870–876. [Google Scholar] [CrossRef] [PubMed]
  14. Schneeberger, M.; Paletta, L.; Wolfgang Kallus, K.; Reim, L.; Schönauer, C.; Peer, A.; Feischl, R.; Aumayr, G.; Pszeida, M.; Dini, A.; et al. First Responder Situation Reporting in Virtual Reality Training with Evaluation of Cognitive-emotional Stress using Psychophysiological Measures. Cogn. Comput. Internet Things 2022, 43. [Google Scholar] [CrossRef]
  15. Paletta, L.; Schneeberger, M.; Reim, L.; Kallus, W.; Peer, A.; Ladstätter, S.; Schönauer, C.; Weber, A.; Feischl, I.R. Work-in-Progress—Digital Human Factors Measurements in First Responder Virtual Reality-Based Skill Training; IEEE: Vienna, Austria, 2022; pp. 9–11. [Google Scholar]
  16. Mossel, A.; Schoenauer, C.; Froeschl, M.; Peer, A.; Goellner, J.; Kaufmann, H. Immersive training of first responder squad leaders in untethered virtual reality. Virtual Real. 2021, 25, 745–759. [Google Scholar] [CrossRef]
  17. Zechner, O.; Schrom-feiertag, H.; Uhl, J. Mind the Heart: Designing a Stress Dashboard Based on Physiological Data for Training Highly Stressful Situations; Springer Nature: Cham, Switzerland, 2023; Volume 2, pp. 209–230. [Google Scholar] [CrossRef]
  18. Lima, R.; Os, D. Heart Rate Variability and Electrodermal Activity in Mental Stress Aloud: Predicting the Outcome; SciTePress: Setúbal, Portugal, 2019; pp. 42–51. [Google Scholar] [CrossRef]
  19. Koutitas, G.; Smith, S.; Lawrence, G. Performance evaluation of AR/VR training technologies for EMS first responders. Virtual Real. 2021, 25, 83–94. [Google Scholar] [CrossRef]
  20. Bucher, K.; Blome, T.; Rudolph, S.; von Mammen, S. VReanimate II: Training first aid and reanimation in virtual reality. J. Comput. Educ. 2019, 6, 53–78. [Google Scholar] [CrossRef]
  21. Al-Hiyari, N.N.; Jusoh, S.S. Healthcare Training Application: 3D First Aid Virtual Reality. In Proceedings of the International Conference on Data Science, E-Learning and Information Systems 2021, Ma’an, Jordan, 5–7 April 2021; pp. 107–116. [Google Scholar]
  22. Scherfgen, D.; Schild, J. Estimating the Pose of a Medical Manikin for Haptic Augmentation of a Virtual Patient in Mixed Reality Training. In Proceedings of the Symposium on Virtual and Augmented Reality, Virtual Event, Brazil, 18–21 October 2021; pp. 33–41. [Google Scholar] [CrossRef]
  23. Uhl, J.C.; Regal, G.; Gallhuber, K.; Tscheligi, M. Tangible Immersive Trauma Simulation: Is Mixed Re-ality the next level of medical skills training. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI’23), Hamburg, Germany, 23–28 April 2023; Volume 1. [Google Scholar] [CrossRef]
  24. Duda, S.; Warburton, C.; Black, N. Contextual Research. In Human-Computer Interaction. Design and User Experience. HCII 2020; Kurosu, M., Ed.; Springer: Cham, Switzerland, 2020; Volume 12181. [Google Scholar]
  25. Liaqat, A.; Axtell, B.; Munteanu, C.; Epp, C.D. Contextual Inquiry, Participatory Design, and Learning Analytics: An Example. In Proceedings of the Companion Proceedings 8th International Conference on Learning Analytics & Knowledge, Sydney, Australia, 7–9 March 2018. [Google Scholar]
  26. Graneheim, U.H.; Lundman, B. Qualitative content analysis in nursing research: Concepts, procedures and measures to achieve trustworthiness. Nurse Educ. Today 2004, 24, 105–112. [Google Scholar] [CrossRef] [PubMed]
  27. Graneheim, U.H.; Lindgren, B.M.; Lundman, B. Methodological challenges in qualitative content analysis: A discussion paper. Nurse Educ. Today 2017, 56, 29–34. [Google Scholar] [CrossRef] [PubMed]
  28. Blomkvist, S. Towards a Model for Bridging Agile Development and User-Centered Design. In Human-Centered Software Engineering—Integrating Usability in the Software Development Lifecycle; Seffah, A., Gulliksen, J., Desmarais, M.C., Eds.; Springer: Dordrecht, The Netherlands, 2005; pp. 219–244. [Google Scholar] [CrossRef]
  29. Erlam, G.; Smythe, L.; Clair, V. Simulation Is Not a Pedagogy. Open J. Nurs. 2017, 7, 779–787. [Google Scholar] [CrossRef]
  30. Hackstein, A.; Hagemann, V.; von Kaufmann, F.; Regner, H. Handbuch Simulation; Bürger-Verlag GmbH: Edewecht, Germany, 2016. [Google Scholar]
  31. Orgill, B.; Nolin, J. Learning Taxonomies in Medical Simulation; StatPearls Publishing: Treasure Island, FL, USA, 2020. [Google Scholar]
  32. Hugelius, K.; Becker, J.; Adolfsson, A. Five Challenges When Managing Mass Casualty or Disaster Situations: A Review Study. Int. J. Environ. Res. Public Health 2020, 17, 3068. [Google Scholar] [CrossRef]
  33. Schnack, A.; Wright, M.J.; Holdershaw, J.L. Does the locomotion technique matter in an immersive virtual store environment?—Comparing motion-tracked walking and instant teleportation. J. Retail. Consum. Serv. 2021, 58, 102266. [Google Scholar] [CrossRef]
  34. Curran, N. Factors of immersion. Wiley Handb. Hum. Comput. Interact. 2018, 1, 239–254. [Google Scholar]
  35. Zechner, O.; Kleygrewe, L.; Jaspaert, E.; Schrom-Feiertag, H.; Hutter, R.I.V.; Tscheligi, M. Enhancing Operational Police Training in High Stress Situations with Virtual Reality: Experiences, Tools and Guidelines. Multimodal Technol. Interact. 2023, 7, 14. [Google Scholar] [CrossRef]
  36. Kiran, M.; Murphy, P.; Monga, I.; Dugan, J.; Baveja, S.S. Lambda architecture for cost-effective batch and speed big data processing. In Proceedings of the 2015 IEEE International Conference on Big Data (Big Data), Santa Clara, CA, USA, 29 October–1 November 2015; pp. 2785–2792. [Google Scholar] [CrossRef]
  37. Sunny, J.S.; Patro, C.P.K.; Karnani, K.; Pingle, S.C.; Lin, F.; Anekoji, M.; Jones, L.D.; Kesari, S.; Ashili, S. Anomaly Detection Framework for Wearables Data: A Perspective Review on Data Concepts, Data Analysis Algorithms and Prospects. Sensors 2022, 22, 756. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Trainee with MR gear.
Figure 2. Training environment setup.
Figure 3. MCI site areas.
Figure 4. Scenario 1: trainer view.
Figure 5. Scenario 2: scenario editor view.
Figure 6. Triage card system.
Figure 7. MAS tool UI.
Figure 8. MAS tool UI.
Figure 9. MAS data architecture components.
Figure 10. Data generation and flow.
Figure 11. Anomalies (red line) detected using MSCRED on synthetic datasets.
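The detection shown in Figure 11 rests on a common idea: MSCRED reconstructs multi-scale signature matrices of the input signals, and time steps with unusually large reconstruction error are flagged as anomalous. As a minimal, hedged illustration of that final thresholding step (not the MSCRED model itself), the sketch below flags points whose residual exceeds a few standard deviations above the mean error on a synthetic signal with an injected spike; all names and the threshold value are illustrative assumptions.

```python
import numpy as np

def detect_anomalies(signal, recon, threshold=3.0):
    """Flag time steps whose reconstruction error exceeds
    `threshold` standard deviations above the mean error.
    Illustrative only: MSCRED itself reconstructs multi-scale
    signature matrices with a convolutional recurrent
    encoder-decoder; here we assume a reconstruction `recon`
    is already available."""
    err = np.abs(signal - recon)
    cutoff = err.mean() + threshold * err.std()
    return np.where(err > cutoff)[0]

# Synthetic example: low-amplitude noise with an injected spike.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.1, 500)
signal[250:255] += 3.0                # injected anomaly
recon = np.zeros_like(signal)         # a well-fitted model predicts ~0 here
anomalies = detect_anomalies(signal, recon)
print(anomalies)                      # indices of the injected spike
```

In practice the cutoff would be calibrated on anomaly-free validation data rather than on the test signal itself.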
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Zechner, O.; García Guirao, D.; Schrom-Feiertag, H.; Regal, G.; Uhl, J.C.; Gyllencreutz, L.; Sjöberg, D.; Tscheligi, M. NextGen Training for Medical First Responders: Advancing Mass-Casualty Incident Preparedness through Mixed Reality Technology. Multimodal Technol. Interact. 2023, 7, 113. https://doi.org/10.3390/mti7120113

