Article

The Development and Usability Assessment of an Augmented Reality Decision Support System to Address Burn Patient Management

1 U.S. Army Institute of Surgical Research, Fort Sam Houston, TX 78234, USA
2 Bondwell Technologies, College Station, TX 77845, USA
* Author to whom correspondence should be addressed.
BioMedInformatics 2024, 4(1), 709-720; https://doi.org/10.3390/biomedinformatics4010039
Submission received: 27 December 2023 / Revised: 13 February 2024 / Accepted: 21 February 2024 / Published: 1 March 2024
(This article belongs to the Special Issue Feature Papers in Computational Biology and Medicine)

Abstract

Critical care injuries, such as burn trauma, require specialized skillsets and knowledge. A clinical decision support system to aid clinicians in providing burn patient management can increase proficiency and provide knowledge content for specific interventions. In austere environments, decision support tools can aid decision making and task guidance when skilled personnel or resources are limited. Therefore, we developed a novel software system that utilizes augmented reality (AR) capabilities to provide enhanced step-by-step instructions based on best practices for managing burn patients. To better understand how new technologies, such as AR, can be used for burn care management, we developed a burn care application for use on a heads-up display. We developed four sub-set applications for documenting and conducting burn wound mapping, fluid resuscitation, medication calculations, and an escharotomy. After development, we conducted a usability study utilizing the System Usability Scale, pre- and post-simulation surveys, and after-action reviews to evaluate the AR-based software application in a simulation scenario. Results of the study indicate that the decision support tool has generalized usability and that subjects were able to use the software as intended. Here we present the first use case of a comprehensive burn management system utilizing augmented reality capabilities to deliver care.

Graphical Abstract

1. Introduction

Burn care is challenging to manage; metabolic, immune, and inflammatory responses can quickly lead to multiple organ failure and pose substantial risk of morbidity and mortality [1]. Deployed military personnel face additional unique challenges, including limited burn-specific training, clinical experience, and/or exposure to acute burn patients. In the absence of experience, tools to assist with critical burn care tasks become vitally important. A comprehensive clinical decision support (CDS) system could facilitate timely and accurate intervention that would improve long-term patient survivability, especially in austere or remote locations [2].
CDS systems have been in use since the 1990s and provide “a process for enhancing health-related decisions and actions with pertinent, organized, clinical knowledge, and patient information to improve health and healthcare delivery” [3]. CDS platforms for burn care do exist, but they rely on mobile phones, record-sharing services, and video-based telemedicine [4,5,6]. These platforms depend on network connectivity, which may not be readily available in remote or conflict areas due to intermittent communications or no-communications protocols.
In this work, we posit that a CDS based on augmented reality (AR) technology could overcome many of the challenges posed by prior telemedicine and CDS systems. AR is an emerging technology that enables the overlay of interactive digital information onto the physical world. AR has been clinically tested in fields including orthopedics [7], oncology [8], neurosurgery [9], and angiology [10]. Most AR technologies used by practicing clinicians are designed for surgical navigation, providing visualization of target structures and/or highlighting areas to avoid [11,12]. An advantage of using AR in medicine is that the technology can reduce distractions by keeping the focus of a procedure within the user’s direct visual field, adding overlaid information such as procedural steps, checklists, and orientation cues [13]. Unlike reference manuals or other devices that require direct physical contact, AR user interface controls are provided directly within the AR virtual environment, mitigating potential cross-contamination. To our knowledge, no AR-based technology is currently available to provide comprehensive burn management and critical task support. Furthermore, few AR technologies appear to provide non-procedural decision support features, especially for complex injuries such as burns [14].
For this project, the study team developed and assessed the usability of a novel, AR-based CDS application to assist with burn patient management during a simulated scenario. The application, named the Augmented Reality Burn Assist Manager (ARBAM), was designed to assist military personnel in providing prolonged burn care in a combat environment experiencing degraded communication networks. Specifically, ARBAM was created to help clinicians in austere environments to quickly access the information they need to estimate burn size, reduce complications of over/under resuscitation, effectively manage pain, and surgically treat compartment syndrome. While AR-enabled technology can be useful in reducing cognitive workload and improving task performance, effectiveness is influenced by multiple factors, including the device used, how information is presented, user characteristics, and task characteristics [15]. Therefore, as an initial step in the development of AR-based clinical decision support technology, this study was designed to evaluate overall usability of the ARBAM software and sub-applications during care of a simulated burn patient. Our goal was to understand how this technology would be used by deployed military clinicians and determine what improvements could be made to better assist users.

2. Materials and Methods

2.1. Augmented Reality Hardware and Development

The Microsoft® HoloLens™ 2 (HL) (Microsoft, Redmond, WA, USA) was selected because it met the requirements for localized application deployment: hands-free, untethered operation without internet access, voice or gesture control, and embedded holographic overlays. The main ARBAM software application was developed using Microsoft® Visual Studio™ 2019 Community Edition (Microsoft, Redmond, WA, USA). We used Microsoft’s Universal Windows Platform (UWP) (Microsoft, Redmond, WA, USA) and implemented a common application programming interface (API) for cross-platform functionality across Windows 10 devices. The sub-applications, listed in the next section, were built as 2D applications on the UWP platform.

2.2. Design and Content Development

We used agile software development approaches for ARBAM design and content development. During the design stage, we defined the color schema, fonts, and graphics. We asked burn clinicians and educators (i.e., clinical subject matter experts [SMEs]) to provide feedback on the prototype and implemented suggested modifications based on verbal consensus. The research team and SMEs curated the instructional content based on current USAISR burn center hospital guidelines, protocols, and the Joint Trauma System (JTS) clinical practice guidelines (CPGs) [16]. The referenced CPGs included Burn Care [17], Burn Wound Management in Prolonged Field Care [18], and Analgesia and Sedation Management in Prolonged Field Care [19]. We identified six core steps for broad management of a burn patient: Primary/Secondary Survey, Assess Burn Size, Fluid Resuscitation/Urine Output, Vital Signs Monitoring, Pain and Sedation Management, and Intervention. We developed the clinical content alongside relevant images to instruct the user through each step.

2.3. ARBAM Functionality

Upon execution of the ARBAM application, an initial screen requires demographics data, which are saved to a backend SQLite database [20]. Once patient information is entered, a home screen is displayed with the sequential steps of performance listed on the left side, textual content with media in the middle area, and sub-applications and reference materials along the bottom (Figure 1). The interactive buttons provided access to additional reference information and sub-applications. Certain steps asked users to deploy specific sub-applications for tasks that required calculations: the Assess Burn Size step asked the user to deploy the Burn Size Calculator sub-application, the Fluid Resuscitation/Urine Output step the Fluid Resuscitation sub-application, the Pain and Sedation Management step the Medication Calculator sub-application, and the Interventions step the WorkLink® Escharotomy application. Access to this last application was located outside of the ARBAM home screen. The escharotomy instructions and content were developed using the WorkLink® platform (ScopeAR, San Francisco, CA, USA), and Unity® scripts were written and deployed to the HL through Visual Studio. The application required manual spatial mapping to a Laerdal SimMan® 3G (Wappingers Falls, NY, USA) human patient simulator, which was outfitted with an escharotomy leg sleeve trainer by SynDaver (Tampa, FL, USA). The escharotomy application used pictures, video, and mixed-reality overlays to guide users through performing the procedure.
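Although ARBAM itself was built on UWP, the local-persistence pattern described above can be illustrated with a short, self-contained sketch. The ARBAM schema is not published, so the table and field names below (demographics, patient_id, age, weight_kg, injury_time) are hypothetical placeholders.

import sqlite3

# Minimal sketch only: field names are assumptions, not ARBAM's actual schema.
def save_demographics(db_path: str, patient: dict) -> None:
    """Persist patient demographics to a local SQLite file; no network is needed."""
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS demographics (
                   patient_id TEXT PRIMARY KEY,
                   age INTEGER,
                   weight_kg REAL,
                   injury_time TEXT)"""
        )
        conn.execute(
            "INSERT OR REPLACE INTO demographics VALUES "
            "(:patient_id, :age, :weight_kg, :injury_time)",
            patient,
        )
        conn.commit()
    finally:
        conn.close()

save_demographics("arbam.db", {"patient_id": "SIM-001", "age": 32,
                               "weight_kg": 80.0, "injury_time": "2024-03-01T10:30"})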

2.4. Sub-Application Functionality

The Burn Size Calculator sub-application provides a human patient diagram for documenting areas of injury. The injury color corresponds to the depth of burn, either partial or full thickness. Other functions include the ability to erase coloring errors, change brush size and shape, designate unburned skin, and use verbal commands. The total body surface area (TBSA) burned is calculated in real time based on the mapping. This data element is shared across other sub-applications that require TBSA. The Fluid Resuscitation sub-application can calculate and track hourly fluid administration, document fluid output, and monitor for under/over resuscitation. The Medication Calculator sub-application calculates drug administration rates and volumes, accounting for the ordered medication, route, patient weight, supply concentration, and tubing size. Finally, within WorkLink®, the Escharotomy sub-application provides step-by-step instructions for performing an escharotomy. It features written and verbal instruction, graphic media, and holographic overlays for patient landmark identification.
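The article does not publish the formulas behind these sub-applications, so the sketch below only illustrates the kind of calculations involved, using the “Rule of Tens” initial resuscitation rate from the JTS Burn Care CPG and a generic weight-based dose-to-volume computation. The function names and parameters are illustrative assumptions, not ARBAM’s actual logic or API.

def rule_of_tens_initial_rate(tbsa_pct: float, weight_kg: float) -> float:
    """Initial crystalloid rate (mL/hr) per the 'Rule of Tens' in the JTS Burn Care
    CPG (assumption: the article does not state which formula ARBAM implements).
    For adults, add 100 mL/hr for every full 10 kg of body weight above 80 kg."""
    rate = round(tbsa_pct) * 10.0
    if weight_kg > 80:
        rate += 100.0 * ((weight_kg - 80) // 10)
    return rate

def infusion_volume_ml(dose_mg_per_kg: float, weight_kg: float,
                       concentration_mg_per_ml: float) -> float:
    """Volume to draw up (mL) for a weight-based dose at a given supply
    concentration -- a generic calculation, not ARBAM's exact medication logic."""
    return dose_mg_per_kg * weight_kg / concentration_mg_per_ml

print(rule_of_tens_initial_rate(tbsa_pct=42, weight_kg=90))                 # 520.0 mL/hr
print(infusion_volume_ml(dose_mg_per_kg=0.1, weight_kg=90,
                         concentration_mg_per_ml=1.0))                      # 9.0 mL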

2.5. Usability Testing

Regulatory Approvals and Standards: This research study was conducted under an approved protocol (M-10892) by the U.S. Army Medical Research & Development Command Institutional Review Board.
Recruitment: Study participants were recruited from staff working at Brooke Army Medical Center (military, civilian, or off-duty contract employees) and at the United States Army Institute of Surgical Research (USAISR) by means of flyers, word of mouth, and email. Participants were required to be at least 18 years of age, have a clinical background/experience in direct patient care, and have no more than 1 year of critical care burn experience within the last 5 years.
Standardized Training: Prior to usability testing, participants individually completed 1.5–2 h of training on the HL and ARBAM. A research team member read aloud a standardized script to instruct each participant on how to use the device and its embedded software. For each ARBAM sub-application, users were provided a verbal description of the software, followed by a pre-recorded video demonstration of its use. For hands-on practice, participants were given similar tasks to those that would be used in testing such as looking up clinical information related to that task, using either the paper CPGs or the electronic CPGs available in ARBAM. At the completion of training, participants were given a pre-simulation survey, which included demographic information and questions about individual experience and medical training.

2.6. Testing

Participants were asked to perform 4 primary tasks: calculate the TBSA of a burn, calculate pain medication volumes/dosages to be administered, manage an intravenous fluid resuscitation of a burn patient, and perform an escharotomy on a simulated full thickness, circumferential burn. Tasks were completed twice each, once using paper copies of the JTS CPGs, and once using the ARBAM software (AR). Using block randomization, participants were assigned to complete the first attempt of each task with either the paper CPGs or AR. The entire workflow of study procedures is presented in Figure 2.
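For readers unfamiliar with block randomization, the brief sketch below shows one common way to balance the order of the two conditions (paper CPGs first vs. AR first). The study’s actual block size and assignment procedure are not reported, so those details are assumptions.

import random

def block_randomize(n_participants: int, conditions=("CPG-first", "AR-first"),
                    block_size: int = 4, seed: int = 1) -> list[str]:
    """Assign each participant's first-attempt condition using block randomization,
    keeping the two orders balanced within each block (generic sketch; block size
    and seed are illustrative assumptions)."""
    rng = random.Random(seed)
    assignments: list[str] = []
    while len(assignments) < n_participants:
        block = list(conditions) * (block_size // len(conditions))
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

print(block_randomize(11))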

2.7. Data Collection and Analysis

At the conclusion of each AR task, participants completed a System Usability Scale (SUS) survey [21]. This validated tool includes a mix of ten positively and negatively worded questions, each with five response options ranging from strongly disagree to strongly agree. SUS scores (0–100) were calculated using a three-step process. First, the user ratings were converted to points: for each odd-numbered question, 1 was subtracted from the rating; for each even-numbered question, the rating was subtracted from 5. Second, the points for all ten questions were summed. Third, the sum was multiplied by 2.5 to give the individual user’s score. For every task, the mean of the users’ scores was calculated along with the standard deviation. A mean SUS score of 68 or above is considered at least “average”, meaning that the software is passable and “usable”. Data collected from the surveys using continuous variables were expressed as mean ± standard deviation. Data were analyzed using JMP®, Version 16 (SAS Institute Inc., Cary, NC, USA).
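The scoring procedure above maps directly to a few lines of code; the following sketch reproduces the standard SUS conversion as described. It is illustrative, not the study’s analysis script.

def sus_score(responses: list[int]) -> float:
    """Compute one participant's SUS score (0-100) from ten 1-5 ratings:
    odd items contribute (rating - 1), even items contribute (5 - rating),
    and the sum of points is multiplied by 2.5."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten ratings between 1 and 5")
    points = sum((r - 1) if i % 2 == 0 else (5 - r)  # i = 0 is question 1 (odd)
                 for i, r in enumerate(responses))
    return points * 2.5

# Example: strongly agreeing with every positive item (5) and strongly
# disagreeing with every negative item (1) yields the maximum score.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0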

2.8. Post-Simulation Survey and After-Action Reviews

After completion of all tasks and usability surveys, participants completed a post-simulation survey which was developed in-house by the research team. The post-simulation survey included statements with a 5-point Likert scale. Questions included topics related to the importance of AR, adequacy of training, and overall system intuitiveness. An after-action review (AAR) discussion was also conducted at the end of each study session.

3. Results

A total of 11 participants were enrolled in the study with 7 female (64%) and 4 male (36%) participants. Table 1 shows the demographics of participant users, with varying age ranges from 18–24 to 50+. Each participant received standardized training, breaks, and completed all four required tasks. Total participation time lasted 4–5 h on average.
Prior to the usability assessment and evaluation of the software, each participant completed training to familiarize themselves with the main ARBAM application, the sub-applications within ARBAM, and the secondary WorkLink® application that was used exclusively for the escharotomy task. Given the overlap between video games and AR use, a pre-simulation survey asked about experience using video games and augmented reality technologies (Table 2). Four participants rated themselves as highly experienced (36%, n = 4), three had moderate experience (27%, n = 3), three had some experience (27%, n = 3), and only one had no experience (9%, n = 1). Since the application operates in first-person view, we also asked about experience with first-person shooter-type games. Only one participant rated themselves as highly experienced (9%, n = 1), two were moderately experienced (18%, n = 2), three had some experience (27%, n = 3), and five had no experience with first-person shooter games (45%, n = 5).
Our application utilizes voice commands; therefore, we asked if participants were familiar with this feature. Two participants rated themselves as highly experienced with voice active technologies like Alexa or Siri (18%, n = 2), three as moderately experienced (27%, n = 3), five with some experience (45%, n = 5), and one with no experience at all (9%, n = 1). We also asked if they were comfortable with technologies such as smart watches, mobile tablets, or smart phones. Five participants rated themselves as moderately experienced (45%, n = 5) and six as highly experienced (55%, n = 6).
Most participants also knew of AR prior to the study (82%, n = 9), with only two who did not (18%, n = 2). One participant had experience using the HL prior to the study (9%, n = 1), while the remaining participants had no prior HL experience (91%, n = 10). All participants reported being comfortable with the gestures used while wearing the HL (100%, n = 11). When asked about other head-mounted displays, about half of our participants (45%, n = 5) had used these types of devices, while the others (55%, n = 6) had not.
After each main burn task was completed, participants completed the 10 question SUS. The average SUS points were computed for each question and the final SUS score was calculated for each task (Table 3). Mean SUS score for the burn size calculation sub-application was 74.5 (SD = 13.8), medication dose calculation was 72.0 (SD = 17.9), fluid resuscitation was 75.0 (SD = 15.3), and escharotomy was 78.2 (SD = 12.4). Each task application exceeded SUS’s threshold score of 68 for generalizable usability.
At the completion of all tasks, participants completed a post-simulation survey. The first portion of the survey asked users to rate their agreement with the training using a scale of 1–5 (Table 4), with 1 being “Do not agree at all” to 5 “Extremely Agree”. Participants felt they were adequately trained on the equipment used (4.8 ± 0.4) and they believed they had adequate time to practice prior to the simulation (4.8 ± 0.6).
For the sub-applications, we asked about the intuitiveness of the software interface/layout and ease of navigation to understand whether participants were able to find the information they needed. Each sub-application’s interface/layout was rated: the Burn Size Calculator had a mean score of 4.0, the Medication Calculator 4.0, Fluid Resuscitation 4.1, and Escharotomy 4.2 (Table 4). Participants also rated each sub-application’s ease of navigation. They agreed that all sub-applications were easy to navigate: the Burn Size Calculator had a mean score of 4.1, the Medication Calculator 4.1, Fluid Resuscitation 4.1, and Escharotomy 4.4 (Table 4).

4. Discussion

We developed a novel, AR-based burn management application (ARBAM) that integrates the JTS CPGs with a standard-of-care workflow to provide the most salient information to users in a step-by-step manner. The design and interface of ARBAM were developed primarily for users in austere environments where telemedicine or additional resources are unavailable. Once the software is installed on the HL, it can be used without internet connectivity and has both gesture and voice control features. Holographic overlays can be placed, albeit manually, to visualize surgical cut lines.
Although there are AR-based applications currently available to assist military personnel in performing select medical procedures [22], to our knowledge this is the first AR application with comprehensive CDS for burn care. The software contains six essential steps for the care of a burn patient: primary and secondary surveys, assessing burn size, fluid resuscitation with urine output, vital signs monitoring, pain and sedation management, and intervention. Although burn care involves additional steps, such as airway management, burn wound management, transfusion strategies, nutrition, and transfer/transportation protocols [23], we focused on areas where automated calculations and decision support could best be utilized. We developed the software to be modular so that additional steps, such as those listed above, can be added to ARBAM in the future.
We initially anticipated at least 20 participants in our study but were only able to enroll and complete the study with 11 participants. However, we had at least one participant in each age range group, except for ages 45–50, and female participants slightly outnumbered males 7:4. Due to the small sample size, we did not conduct additional analysis to determine whether age or gender had any influence on overall usability.
Prior to the usability testing, each participant completed a custom survey about their overall experience and knowledge of current technologies because of the relevance of similar features in AR. Since AR technology is newer than virtual reality (VR) gaming systems, we hypothesized that most participants would be more familiar with VR than AR. Indeed, 45% of participants had used other VR headsets, compared to only 9% having used an AR headset, although 82% of participants knew of AR without having used it previously. We also asked about first-person shooter games because the HL is likewise operated in first-person view with hand controls; about half of the participants (55%, n = 6) had experience with such games. In ARBAM, voice commands can be used simultaneously with gestures, so we asked about experience with voice-activated technologies common in commercial products such as Alexa or Siri; most participants had used them (91%, n = 10). This may explain why participants were comfortable with both voice and gesture controls in ARBAM on the HL, and we observed them using the two features interchangeably throughout the study. Our intent for these questions was to describe our participants’ general use of video games and AR/VR technologies; because of our small sample size, we did not conduct a sub-analysis to determine whether participants with more gaming or AR/VR headset experience reported higher usability. Our results provide an initial understanding of user experience, and further research with larger samples is required to assess any correlation between previous AR/VR experience and overall comfort and usability with complex AR-enabled software.
We used the SUS survey because it is a validated tool for assessing the global usability of a system [24]. We partitioned ARBAM into the four main tasks (burn size calculation, fluid resuscitation, medication calculations, and escharotomy) to determine whether those sub-applications passed the usability threshold score of 68. All sub-applications passed, with overall mean scores ranging from 72.0 to 78.2. The standard deviations had a wide range, so individually there were some participants who did not find the sub-applications as usable. Although raw SUS scores can be converted into percentile rankings that order usability from ok, good, and excellent to best, we did not use percentile rankings because our sample size was small; we used only a threshold score for determining passable usability. Based on these results, the tools passed preliminary usability assessment, but the SUS does not convey specific areas for improvement. Further work is needed to assess specific software improvements and user satisfaction by adding custom questionnaires or other validated tools such as the Post-Study System Usability Questionnaire (PSSUQ) [25].
After testing, we performed AARs with participants using standardized questions (Supplemental Table S1) to discuss their overall experiences during the simulation and software testing, and to better understand issues and areas for software improvement that the custom survey and SUS did not capture. AAR comments were key to understanding other factors in ARBAM usability. Participants commented that it was easier to find the information needed to complete the tasks with ARBAM than with the paper CPGs. Some also found use of the technology to be fun. Concerns centered on the hardware device itself and its limitations, including battery life, fragility, sound, and lighting. From a platform standpoint, some study participants also reported concern that use of AR at the point of injury may obscure visibility [11] or distract users from what is happening with the patient in front of them. Even though participants felt positive about their ability to use gestures to control applications within AR, some felt uncomfortable, and shoulder fatigue was noted, although these data were not formally captured. It is of particular interest that no participants experienced dizziness or motion sickness when using AR during the study. From a software design perspective, comments were made regarding the level of trust that users would be required to place in the applications. Trust is an important factor in developing a relationship between the device and its users; it can be hindered if users do not agree with the guidelines, are uncertain about data or recommendation sources, or develop alert fatigue [26]. Some participants said that they would like to see more explanation behind the decision support output. Some also recommended minor software updates to improve intuitiveness. A few participants expressed that the simulation setup with the AR application and materials could be useful for training environments.
Additionally, we conducted qualitative analysis to determine themes about the software. We used the Technology Acceptance Model [27] as an analytic framework and went through three rounds of coding of AAR comments. Using external variables, perceived usefulness, and perceived ease of use to frame participant responses, qualitative results were similar to our observed findings in the AAR (Supplemental Table S2). Four themes emerged from participants. First, participants expressed confidence in obtaining the correct answers or providing the right care using ARBAM compared to paper CPGs. Second, the practicality of using ARBAM specifically in military combat had participants questioning its end-goal usefulness. Third, participants were concerned about over-reliance on the technology. Finally, participants thought ARBAM could be used as a training tool rather than a tool for enhancing care at point of injury because of initial concerns with it being used in a battlefield environment. The qualitative analysis shed light on the need for technology innovation to consider the user’s perspective when designing and implementing AR-based CDS into the care of patients outside of a simulated environment.

5. Conclusions

Use of an AR-based decision support tool is a novel approach to burn care management. We were able to develop a decision support system for burn care that aided clinical decision making and task guidance for four key burn tasks involving burn wound size, fluid resuscitation, medication calculations, and surgical escharotomy. The software resides on an AR-enabled device that can be used independently without internet connectivity and does not require integration into a patient monitoring system.
This study, based on a validated usability survey and qualitative analysis, showed that ARBAM is usable, intuitive, and easy to navigate when caring for a simulated burn patient. Based on the usability results and feedback from participants, we have a better understanding of how this software can be improved for future development with the end goal of creating a decision support system that could eventually improve speed and accuracy of interventions [28].
With further improvements in the software, it will be possible to integrate ARBAM into the U.S. Army’s Integrated Visual Augmentation System (IVAS), a specialized HL designed for military needs [29]. The IVAS system has been under continued development to improve qualities such as ruggedization, outdoor usage in sunlight, and battery life; a finalized version of the headset has yet to be made [29]. Although the US Army has continued its collaboration on IVAS devices, there are still efforts to address fundamental issues such as nausea, pain, and software reliability, which will delay the release of updated versions of the AR headset well beyond 2025 [30].
Our study is not without limitations. First, the study’s small sample size limits the broader generalizability of the results. Recruitment and completion of the study were difficult due to the ongoing COVID-19 pandemic, especially given our target participant population of hospital clinicians, which resulted in dropouts. Second, participants did have exposure to the hardware/software during training, which may have increased their learning and performance compared to first-time exposure. However, training was standardized and limited to the specific training tasks to normalize the exposure necessary to understand the unique device controls and software capabilities and features. Additionally, the usability study was simulated with mixed table-top exercises and manikin usage. Although table-top simulations are effective in evaluating medical approaches and education [31,32], a highly immersive simulation scenario would have been more ideal. Running realistic simulations is time-consuming and resource-intensive, especially when managing patient care from primary assessment to surgical intervention [33].
Considering the results of our study and its limitations, future work should consider additional usability assessments that focus on user satisfaction, effectiveness, and other heuristics to better understand user interaction and outcomes. Additional burn specialty care content, such as radiological, chemical, or electrical burns, would be of added value for the military community. It is also important to investigate how AR-based CDS for burn care can be effectively integrated into actual clinical care with burn patients. Translation of this work could have a major impact in bridging knowledge gaps in specialty burn care, support faster interventions to decrease morbidity and mortality, and serve as an educational tool, especially for just-in-time training.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/biomedinformatics4010039/s1. Table S1: After Action Review Questions; Table S2: Qualitative Analysis of Theoretical Framework Code.

Author Contributions

The opinions or assertions contained herein are the private views of the author and are not to be construed as official or as reflecting the views of the Department of the Army or the Department of Defense. The authors listed contributed to the work in the following ways: Conceptualization, S.V. and M.S.-M.; methodology, S.V., D.L., A.S., N.C., C.F. and M.S.-M.; software, D.L., S.V. and C.F.; validation, S.V. and A.S.; formal analysis, S.V.; investigation, M.S.-M., S.V., D.L., A.S., N.C. and A.M.; resources, S.V., D.L., P.C., A.M. and J.S.; data curation, S.V., D.L., A.S., N.C., A.M. and M.S.-M.; writing—original draft preparation, S.V., D.L., A.S. and N.C.; writing—review and editing, A.M., P.C., C.F., J.S., J.R., J.M. and M.S.-M.; visualization, S.V., D.L., J.R., J.M., and N.C.; supervision, J.S.; project administration, S.V. and M.S.-M.; funding acquisition, M.S.-M. and S.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Medical Simulation and Information Sciences Research Program (MSISRP), grant number XV5-2B_19_0001.

Institutional Review Board Statement

This study was conducted under a protocol reviewed and approved by the U.S. Army Medical Research and Development Command Institutional Review Board and in accordance with the approved protocol.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Jeschke, M.G.; van Baar, M.E.; Choudhry, M.A.; Chung, K.K.; Gibran, N.S.; Logsetty, S. Burn Injury. Nat. Rev. Dis. Primers 2020, 6, 11. [Google Scholar] [CrossRef]
  2. Nessen, S.C.; Gurney, J.; Rasmussen, T.E.; Cap, A.P.; Mann-Salinas, E.; Le, T.D.; Shackelford, S.; Remick, K.N.; Akers, K.S.; Eastridge, B.J.; et al. Unrealized Potential of the US Military Battlefield Trauma System: DOW Rate is Higher in Iraq and Afghanistan than in Vietnam, but CFR and KIA Rates are Lower. J. Trauma Acute Care Surg. 2018, 85, S4–S12. [Google Scholar] [CrossRef] [PubMed]
  3. Osheroff, J. Improving Outcomes with Clinical Decision Support: An Implementer’s Guide, 2nd ed.; Informa UK Limited: London, UK, 2012. [Google Scholar]
  4. Parvizi, D.; Giretzlehner, M.; Dirnberger, J.; Owen, R.; Haller, H.L.; Schintler, M.V.; Wurzer, P.; Lumenta, D.B.; Kamolz, L.P. The use of telemedicine in burn care: Development of a mobile system for TBSA documentation and remote assessment. Ann. Burn. Fire Disasters 2014, 27, 94–100. [Google Scholar]
  5. Wibbenmeyer, L.; Kluesner, K.; Wu, H.; Eid, A.; Heard, J.; Mann, B.; Pauley, A.; Peek-Asa, C. Video-Enhanced Telemedicine Improves the Care of Acutely Injured Burn Patients in a Rural State. J. Burn Care Res. 2016, 37, e531–e538. [Google Scholar] [CrossRef] [PubMed]
  6. Park, C.; Cho, Y.; Harvey, J.; Arnoldo, B.; Levi, B. Telehealth and Burn Care: From Faxes to Augmented Reality. Bioengineering 2022, 9, 211. [Google Scholar] [CrossRef]
  7. Farshad, M.; Furnstahl, P.; Spirig, J.M. First in Man In-situ Augmented Reality Pedicle Screw Navigation. N. Am. Spine Soc. J. 2021, 6, 100065. [Google Scholar] [CrossRef] [PubMed]
  8. Wierzbicki, R.; Pawlowicz, M.; Job, J.; Balawender, R.; Kostarczyk, W.; Stanuch, M.; Janc, K.; Skalski, A. 3D Mixed-Reality Visualization of Medical Imaging Data as a Supporting Tool for Innovative, Minimally Invasive Surgery for Gastrointestinal Tumors and Systemic Treatment as a New Path in Personalized Treatment of Advanced Cancer Diseases. J. Cancer Res. Clin. Oncol. 2022, 148, 237–243. [Google Scholar] [CrossRef] [PubMed]
  9. Bernard, F.; Haemmerli, J.; Zegarek, G.; Kiss-Bodolay, D.; Schaller, K.; Bijlenga, P. Augmented reality-assisted roadmaps during periventricular brain surgery. Neurosurg. Focus 2021, 51, E4. [Google Scholar] [CrossRef]
  10. Katayama, M.; Ueda, K.; Mitsuno, D.; Kino, H. Intraoperative 3-dimensional Projection of Blood Vessels on Body Surface Using an Augmented Reality System. Plast. Reconstr. Surg. Glob. Open 2020, 8, e3028. [Google Scholar] [CrossRef]
  11. Tsang, K.D.; Ottow, M.K.; van Heijst, A.F.J.; Antonius, T.A.J. Electronic Decision Support in the Delivery Room Using Augmented Reality to Improve Newborn Life Support Guideline Adherence: A Randomized Controlled Pilot Study. Simul. Healthc. 2022, 17, 293–298. [Google Scholar] [CrossRef]
  12. D’Urso, A.; Agnus, V.; Barberio, M.; Seeliger, B.; Marchegiani, F.; Charles, A.L.; Geny, B.; Marescaux, J.; Mutter, D.; Diana, M. Computer-assisted quantification and visualization of bowel perfusion using fluorescence-based enhanced reality in left-sided colonic resections. Surg. Endosc. 2021, 35, 4321–4331. [Google Scholar] [CrossRef]
  13. Andersen, D.; Popescu, V.; Cabrera, M.E.; Shanghavi, A.; Mullis, B.; Marley, S.; Gomez, G.; Wachs, J.P. An Augmented Reality-Based Approach for Surgical Telementoring in Austere Environments. Mil. Med. 2017, 182, 310–315. [Google Scholar] [CrossRef]
  14. Boissin, C. Clinical Decision-Support for Acute Burn Referral and Triage at Specialized Centres—Contribution from Routine and Digital Health Tools. Glob. Health Action 2022, 15, 2067389. [Google Scholar] [CrossRef]
  15. Jeffri, N.F.S.; Awang Rambli, D.R. A review of augmented reality systems and their effects on mental workload and task performance. Heliyon 2021, 7, e06277. [Google Scholar] [CrossRef] [PubMed]
  16. Clinical Practice Guidelines (CPGs). Available online: https://jts.health.mil/index.cfm/PI_CPGs/cpgs (accessed on 18 April 2023).
  17. Joint Trauma System Clinical Practice Guideline. Available online: https://jts.health.mil/assets/docs/cpgs/Burn_Care_11_May_2016_ID12.pdf (accessed on 30 August 2022).
  18. Joint Trauma System Clinical Practice Guideline. Available online: https://jts.health.mil/assets/docs/cpgs/Burn_Management_PFC_13_Jan_2017_ID57.pdf (accessed on 18 April 2023).
  19. Joint Trauma System Clinical Practice Guideline. Available online: https://jts.health.mil/assets/docs/cpgs/Analgesia_and_Sedation_Management_during_Prolonged_Field_Care_11_May_2017_ID61.pdf (accessed on 30 August 2022).
  20. Hipp, D.R. SQLite. Available online: https://www.sqlite.org/index.html (accessed on 18 April 2023).
  21. Lewis, J.R. The System Usability Scale: Past, Present, and Future. Int. J. Hum.-Comput. Interact. 2018, 34, 577–590. [Google Scholar] [CrossRef]
  22. Rojas-Muñoz, E.; Lin, C.; Sanchez-Tamayo, N.; Cabrera, M.E.; Andersen, D.; Popescu, V.; Barragan, J.A.; Zarzaur, B.; Murphy, P.; Anderson, K.; et al. Evaluation of an Augmented Reality Platform for Austere Surgical Telementoring: A Randomized Controlled Crossover Study in Cricothyroidotomies. npj Digit. Med. 2020, 3, 75. [Google Scholar] [CrossRef] [PubMed]
  23. Tejiram, S.; Romanowski, K.S.; Palmieri, T.L. Initial management of severe burn injury. Curr. Opin. Crit. Care 2019, 25, 647–652. [Google Scholar] [CrossRef]
  24. Brooke, J. SUS—A Quick and Dirty Usability Scale; Redhatch Consulting Ltd.: Reading, UK, 1996; pp. 189–194. [Google Scholar]
  25. Lewis, J.R. IBM Computer Usability Satisfaction Questionnaires: Psychometric Evaluation and Instructions for Use. Int. J. Hum.-Comput. Interact. 1995, 7, 57–78. [Google Scholar] [CrossRef]
  26. Sutton, R.T.; Pincock, D.; Baumgart, D.C.; Sadowski, D.C.; Fedorak, R.N.; Kroeker, K.I. An overview of clinical decision support systems: Benefits, risks, and strategies for success. npj Digit. Med. 2020, 3, 17. [Google Scholar] [CrossRef] [PubMed]
  27. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  28. Serio-Melvin, M.; Caldwell, N.; Luellen, D.; Samosorn, A.; Fenrich, C.; McGlasson, A.; Colston, P.; Scott, L.; Salinas, J.; Veazey, S. 87 An Augmented Reality Burn Management Application to Guide Care in Austere Environments. J. Burn Care Res. 2023, 44, S48–S49. [Google Scholar] [CrossRef]
  29. Shear, F. Army accepts prototypes of the most advanced version of IVAS. 2023. [Google Scholar]
  30. Shakir, U. US Army Orders More Microsoft AR Headsets Now that They No Longer Make Soldiers Want to Barf. The Verge. Available online: https://www.theverge.com/2023/9/13/23871859/us-army-microsoft-ivas-ar-goggles-success-new-contract-hololens (accessed on 13 September 2023).
  31. Guise, J.M.; Mladenovic, J. In situ simulation: Identification of systems issues. Semin. Perinatol. 2013, 37, 161–165. [Google Scholar] [CrossRef] [PubMed]
  32. Henriksen, K.; Battles, J.B.; Keyes, M.A.; Grady, M.L. (Eds.) Advances in Patient Safety. In Advances in Patient Safety: New Directions and Alternative Approaches (Vol. 1: Assessment); Agency for Healthcare Research and Quality: Rockville, MD, USA, 2008. [Google Scholar]
  33. Pamplin, J.C.; Veazey, S.R.; De Howitt, J.; Cohen, K.; Barczak, S.; Espinoza, M.; Luellen, D.; Ross, K.; Serio-Melvin, M.; McCarthy, M.; et al. Prolonged, High-Fidelity Simulation for Study of Patient Care in Resource-Limited Medical Contexts and for Technology Comparative Effectiveness Testing. Crit. Care Explor. 2021, 3, e0477. [Google Scholar] [CrossRef] [PubMed]
Figure 1. ARBAM and WorkLink® application workflow. Main applications, shown in blue, can be independently deployed from the menu. Upon deployment, there are input or instructional screens. Sub-applications can be deployed as additional screens from the main application. Each sub-application requires user interaction or data inputs to generate the outputs shown in the oval diagrams. BSC = Burn Size Calculator; FR = Fluid Resuscitation; MC = Medication Calculator; ES = Escharotomy; CPG = Clinical practice guideline; HTML = hypertext markup language.
Figure 2. Study workflow. The study workflow began with recruitment and enrollment, study instructions with reviews, hands-on training, the pre-simulation survey, participant testing and evaluation with surveys, the final post-simulation survey, and the simulation after-action review.
Table 1. Participant demographics.
Age range, % (n): 18–24: 9% (1); 25–29: 27% (3); 30–34: 9% (1); 35–39: 18% (2); 40–44: 9% (1); 45–50: 0% (0); 50+: 27% (3).
Gender, % (n): Female: 64% (7); Male: 36% (4).
Table 2. AR Experience Responses.
Video games and technology (responses % (n), ordered No Experience / Some Experience / Moderate Experience / Highly Experienced):
I play video games (of any kind on phone, PC, console): 9% (1) / 27% (3) / 27% (3) / 36% (4)
I play first-person shooter games: 45% (5) / 27% (3) / 18% (2) / 9% (1)
I use voice-activated technologies (i.e., Alexa or Siri): 9% (1) / 45% (5) / 27% (3) / 18% (2)
I am comfortable with advanced technologies (i.e., smart watch, mobile tablets, smart phone): 0% (0) / 0% (0) / 45% (5) / 55% (6)
Augmented reality technologies (responses % (n), ordered Yes / No):
I knew of AR prior to this study: 82% (9) / 18% (2)
I have used the Microsoft HoloLens™: 9% (1) / 91% (10)
I am comfortable with the gestures used while wearing the HoloLens™: 100% (11) / 0% (0)
I have used other head-mounted displays (i.e., Oculus Rift, Google Glass, Magic Leap, PlayStation VR, etc.): 45% (5) / 55% (6)
Table 3. Results of System Usability Scale (SUS) Survey. Values are mean ± std, ordered Burn Size / MedCalc / FluidResus / Eschar.
I think that I would like to use this application frequently: 4.1 ± 1.0 / 4.3 ± 0.9 / 4.1 ± 0.8 / 4.2 ± 0.8
I found this application unnecessarily complex: 2.1 ± 1.0 / 2.3 ± 1.0 / 2.0 ± 0.9 / 1.7 ± 0.6
I thought this application was easy to use: 4.0 ± 0.9 / 3.8 ± 0.8 / 4.1 ± 0.7 / 4.5 ± 0.5
I think that I would need assistance to be able to use this application: 2.3 ± 1.0 / 2.4 ± 1.3 / 2.0 ± 0.8 / 1.9 ± 0.7
I found the various functions in this application were well integrated: 4.0 ± 1.0 / 4.0 ± 0.9 / 3.8 ± 1.0 / 3.9 ± 0.9
I thought there was too much inconsistency in this application: 2.0 ± 0.9 / 2.0 ± 0.9 / 1.8 ± 0.6 / 1.8 ± 0.9
I would imagine that most people would learn to use this application very quickly: 3.9 ± 0.5 / 3.8 ± 0.8 / 4.0 ± 0.6 / 4.2 ± 0.4
I found this application very cumbersome/awkward to use: 2.4 ± 1.2 / 2.0 ± 0.8 / 2.0 ± 0.9 / 2.0 ± 1.1
I felt very confident using this application: 4.3 ± 0.6 / 3.9 ± 0.9 / 4.0 ± 0.8 / 4.1 ± 0.7
I needed to learn a lot of things before I could get going with this application: 1.7 ± 1.0 / 2.3 ± 1.1 / 2.2 ± 0.8 / 2.1 ± 0.7
Total SUS Score: 74.5 ± 13.8 / 72.0 ± 17.9 / 75.0 ± 15.3 / 78.2 ± 12.4
SUS scores: Each question used a 1–5 Likert scale, with 1 being “Strongly Disagree” and 5 being “Strongly Agree”. Final SUS scores were converted and averaged among all participants for each burn task. Burn Size = Burn Size Calculator application, MedCalc = Medication Calculator application, FluidResus = Fluid Resuscitation application, Eschar = Escharotomy application.
Table 4. Post-testing survey questions.
Training questions (overall mean ± std. dev.):
I was adequately oriented to the equipment used: 4.8 ± 0.4
I had adequate time to practice with the HoloLens™ prior to beginning the simulation: 4.8 ± 0.6
Application questions (mean ± std. dev., ordered Burn Size / MedCalc / Fluid Resus / Eschar):
The software interface/layout was intuitive: 4.0 ± 1.2 / 4.0 ± 1.3 / 4.1 ± 1.0 / 4.2 ± 1.0
The software was easy to navigate: 4.1 ± 0.9 / 4.1 ± 1.0 / 4.1 ± 1.1 / 4.4 ± 0.9
Scale 1–5 (1 = Do not agree at all, 2 = Slightly agree, 3 = Moderately Agree, 4 = Quite Agree, 5 = Extremely Agree).

Share and Cite

MDPI and ACS Style

Veazey, S.; Caldwell, N.; Luellen, D.; Samosorn, A.; McGlasson, A.; Colston, P.; Fenrich, C.; Salinas, J.; Mike, J.; Rivera, J.; et al. The Development and Usability Assessment of an Augmented Reality Decision Support System to Address Burn Patient Management. BioMedInformatics 2024, 4, 709-720. https://doi.org/10.3390/biomedinformatics4010039

