Article

Designing Procedure Execution Tools with Emerging Technologies for Future Astronauts

1 NASA Ames Research Center, Moffett Field, CA 94035, USA
2 San José State University Research Foundation, Moffett Field, CA 94035, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(4), 1607; https://doi.org/10.3390/app11041607
Submission received: 24 December 2020 / Revised: 4 February 2021 / Accepted: 4 February 2021 / Published: 10 February 2021
(This article belongs to the Special Issue User Experience for Advanced Human–Computer Interaction)

Abstract

NASA’s human spaceflight efforts are moving towards long-duration exploration missions requiring asynchronous communication between onboard crew and an increasingly remote ground support. In current missions aboard the International Space Station, there is a near real-time communication loop between Mission Control Center and astronauts. This communication is essential today to support operations, maintenance, and science requirements onboard, without which many tasks would no longer be feasible. As NASA takes the next leap into a new era of human space exploration, new methods and tools compensating for the lack of continuous, real-time communication must be explored. The Human-Computer Interaction Group at NASA Ames Research Center has been investigating emerging technologies and their applicability to increase crew autonomy in missions beyond low Earth orbit. Interactions using augmented reality and the Internet of Things have been researched as possibilities to facilitate usability within procedure execution operations. This paper outlines four research efforts that included technology demonstrations and usability studies with prototype procedure tools implementing emerging technologies. The studies address habitat feedback integration, analogous procedure testing, task completion management, and crew training. Through these technology demonstrations and usability studies, we find that low- to medium-fidelity prototypes, evaluated early in the design process, are both effective for garnering stakeholder buy-in and developing requirements for future systems. In this paper, we present the findings of the usability studies for each project and discuss ways in which these emerging technologies can be integrated into future human spaceflight operations.

1. Introduction

1.1. Motivation

In 2021, NASA will have spent sixty years conducting human spaceflight operations. From Mercury onwards, including the current International Space Station (ISS) programs, astronauts and flight controllers in Mission Control Center (MCC) have had near real-time communications. Even during the Apollo program, the landing sites selected enabled line-of-sight communication with Earth [1]. Space-to-Ground communication between these two teams is essential and allows for astronauts to have real-time access to the vast knowledge and resources available on Earth. Flight controllers can supervise and manage the various subsystems of the spacecraft, allowing astronauts to focus on other tasks. If something goes awry in space, astronauts and flight controllers can collaborate to diagnose, repair, and recover from the issue. Astronauts can also receive advice, training, and guidance to complete scientific or maintenance tasks.
The new challenge that NASA faces in human spaceflight operations for future long-duration exploration missions (LDEMs) stems from the dramatic change in communications between astronauts and ground. As missions explore farther afield from Earth, the one-way light time for communication transmissions increases. Communication latencies start from a few seconds on the Moon and grow to several minutes while in transit to Mars. Once on Mars, astronauts will encounter up to 44-min round trip communication delays [2]. Communication is unlikely to be constant, and MCC and astronauts will only be able to exchange data during certain times of the day. Depending on the length of the stay, astronauts will also contend with periods when there is no communication with Earth. As such, astronauts need to become more Earth-independent, completing assigned tasks and dealing with unexpected events more autonomously. This significant shift in operations requires innovative new methods, tools, and processes that enable astronauts to operate autonomously from MCC. At NASA Ames Research Center (ARC), the Human-Computer Interaction (HCI) Group has started to investigate the use of emerging technologies to make crew autonomy possible under high communication latencies. Through a series of prototypes and user testing, we aim to identify how best to map these emerging technologies to tasks conducted in human spaceflight operations in order to design future systems with high user adoption and acceptable usability.

1.2. Training and Task Execution Research

The future of training and procedure execution for NASA missions to the Moon (with Artemis targets in 2024 [3]) and Mars (targets in the 2030s and beyond [4]) is likely to include a combination of virtual and augmented reality to display information collected by a ubiquitous collection of Internet of Things (IoT) sensors within the spacecraft, habitat, and nearby robotic agents. NASA has been integrating advanced display technologies into training and procedure execution since the early 1990s. NASA Johnson Space Center’s Virtual Reality Lab was created in 1991, and astronauts have been using virtual reality (VR) there since 1993 to train and plan for spacewalks and operate robotic arms. More recently, astronauts onboard the ISS have used commercially available augmented reality (AR) headsets such as Google Glass and the Microsoft HoloLens to assist with scientific experiments and demonstrate potential uses for enabling astronauts to better complete procedures with the help of remote support from MCC. These opportunities demonstrate NASA’s desire to integrate advanced displays into future missions, but research is required to identify opportunities where VR/AR can provide the most benefit over traditional training. Care must also be taken to ensure that training and simulated task execution in VR/AR align accurately with reality; otherwise, training may transfer poorly to actual operations in space.
Astronauts onboard the Space Shuttle flew with large volumes containing detailed procedures of the tasks they were expected to conduct during their missions (see Figure 1). Though these procedures have largely been digitized to Portable Document Format (PDF) files, emergency procedures for the ISS continue to be available on paper in case of power failure. In order to advance the state of the art for spaceflight procedures, we have looked at analogous work processes embedded with technicians on Earth. Technicians often carry large manuals or tablets with reference material into the field with them, where a hands-free digital interface would greatly improve ground procedure execution. Like astronauts, technicians typically need their hands free to execute work. Heads-up displays are a demonstrated technology solution with successful adoption in the domain of technician field execution. Field service technician work execution technologies within industrial manufacturing domains are already heavily invested in hands-free mixed reality experiences. Additionally, aviation, aeronautics, and product manufacturing contexts have demonstrated the utility of checklists. However, much of the research around the use of procedures and checklists has focused on training.
Research investigating the effects of virtual and augmented reality training has previously shown improved performance compared to conventional types of training. This research in virtual and augmented reality training has primarily focused on surgery, assembly, and maintenance skills. All three of these tasks are analogous to the types of tasks astronauts are required to accomplish during the completion of procedures in their missions. Surgical training has seen particular focus by researchers due to the inherent high-risk nature of the task, as well as the expense and limited time of expert instructors [5,6,7,8].
The use of VR to simulate surgical tasks was first proposed by Satava [5] in the early 1990s. Satava used an off-the-shelf VR head-mounted display (HMD) and a “DataGlove”, which essentially acted as a joystick, for the user to interact with the scene. Satava [5] described five areas that must be addressed to provide a realistic simulation:
  • Fidelity: the graphics must have an acceptable level of resolution for the task
  • Object properties: the objects in the scene must behave with sufficient reality
  • Interactivity: the user must be able to interact with the virtual scene
  • Sensory input: the user must receive appropriate sensory feedback
  • Reactivity: the objects must behave appropriately when the user interacts with them
At that point in time, computers were only capable of meeting these standards at the most basic level. Despite this, Satava [5] noted the rapid advances in computing power and suggested that VR training could be particularly useful “in this era of animal-rights sensitivity and of fear of exposure to blood-borne diseases such as AIDS and hepatitis” [5].
Less than ten years later, Seymour et al. [7] showed that virtual reality training could improve operating room performance, presenting the results of a double-blind study demonstrating that “virtual reality training transfers technical skills to the operating room (OR) environment” [7]. In the study, surgical residents were split into either a non-VR-trained control group or a VR-trained group and trained to perform laparoscopic cholecystectomy. The authors showed that, while overall task performance time did not significantly decrease, the VR-trained group made significantly fewer errors. This result indicated that students could be trained to perform better without any risk to patients; in the authors’ words, the “validated transfer of training skills from VR to OR sets the stage for more sophisticated uses of VR in assessment, training, error reduction, and certification of surgeons” [7].
Around the same time, Boud et al. [9] found significantly improved assembly times for subjects who used VR or AR training over conventional 2D engineering drawings. In their task, subjects had to assemble a water pump after receiving instructions on paper “conventional” drawings, in VR, or in AR. Subjects in the VR and AR conditions wore HMDs to see their environment. Subjects who received training in VR or AR performed significantly better than subjects trained with conventional drawings, and subjects who received training with the AR system performed significantly better than the subjects trained with VR. More recently, Webel et al. [10] developed an augmented reality training platform for assembly and maintenance skills. The authors combined an augmented reality video aid with a vibrotactile bracelet to assist with augmented reality training. The video aid was displayed on a tablet computer that combined predefined augmented reality cues with a video feed of the real world. The bracelet had six vibration segments, which could be activated independently, allowing for both translational and rotational “channels” that guide the user. Webel et al. [10] ran a study with two groups of subjects to determine whether training with the AR system was more effective than traditional training. To measure the effects of the training, the authors investigated both the task completion time and the number of “unsolved errors.” The authors found that overall task completion times were not significantly different between the control and AR groups, but the AR group had significantly fewer unsolved errors. This result is consistent with other studies using VR, including the above-mentioned Seymour et al. [7] study.
This research has shown that measuring human performance and the effects of training can be challenging. Even in experiments with seemingly improved and subjectively preferred training techniques, the results of training rarely reflect performance changes. While overall task completion times generally do not change as a result of training in VR or AR, the number of task errors has been shown to be decreased [5,7]. Despite these challenges, VR has been used onboard the ISS when astronauts are planning for complicated spacewalks or conducting tasks with the robotic arms. Until recently, researchers have noted that the computational limits of the hardware used in these VR and AR experiments have reduced their effectiveness. Recent advances in hardware allow for fully mobile, head mounted augmented reality solutions which may ultimately prove to be more useful.
The integration of additional task execution data has the potential to increase the efficiency of training. Internet of Things sensors can provide additional sources of data about interactions with objects in the environment, which can then be provided back to the trainee in real-time or after the task is complete. These sensors and feedback strategies can similarly be used by astronauts during task execution in lieu of real-time communications with MCC. The recent integration of augmented reality and the Internet of Things has led to a large number of publications in the past decade, often between fields of research that previously had little interaction (see reviews by Norouzi et al. [11] and Kim [12]). The Internet of Things is a term which describes a “network of sensors and actuators within or attached to real-world objects” [13]. IoT sensors rarely have built-in displays to provide their information, but “AR has… been touted as one of the ideal interfaces for IoT” [14]. This prediction has manifested in a diverse collection of uses, including many topics essential to future space operations such as city planning and monitoring [15], precision farming [16], robotic movement planning and execution [17], and maintenance procedures for monitoring and safety systems [18]. While this breadth of research shows ongoing interest in the union of AR and IoT, many usability questions remain which have yet to be addressed in the literature.
Evaluating the usability of early prototypes is an essential part of the design and implementation of new tools. Low fidelity prototypes are usually required when considering features and capabilities of future software and hardware tools which have yet to be developed, and their evaluation can lead to meaningful requirements for these future systems. While some studies have found that higher fidelity prototypes offer benefits over lower fidelity prototypes, the majority of studies show that reduced fidelity prototypes provide results equivalent to their high-fidelity counterparts [19]. As lower fidelity prototypes are easier to create, evaluate, and iterate on, they can provide a useful tool for designing future systems. These benefits can prove particularly valuable early in the design process, when the designer is trying to “find the manifestation that, in its simplest form, will filter the qualities in which the designer is interested without distorting the understanding of the whole” [20]. This fundamental prototyping principle is valuable for evaluation and testing, especially when investigating complex user interactions with new technology.

1.3. Usability of Emerging Technologies

The process of integrating these emerging technologies into operations is not straightforward, particularly in safety-critical domains like human spaceflight. AR interactions and IoT sensors have expanded the possibilities for user-centered interaction with products, potentially enhancing user experience. Yet how exactly can these emerging technologies enhance, rather than detract from, procedure training and execution? The HCI Group has been building software for space mission control and operations for more than ten years (e.g., [21,22,23]). Fundamental to our software development approach is pairing it with HCI research and methods, emphasizing usability and user experience through thoughtful and purposeful designs. As such, NASA Ames Research Center’s HCI Group, in collaboration with the San José State University Research Foundation and HCI graduate students at Carnegie Mellon University, has studied the use of emerging technology to enhance user experience during procedure training and execution for future LDEMs.
The emerging technologies we have focused our prototypes and investigations on are AR and IoT. We believe these technologies can revolutionize LDEM human spaceflight operations, as they have the potential to significantly improve astronauts’ task performance without the constant assistance traditionally provided by MCC. Currently, astronauts onboard the ISS depend on procedures to complete assigned tasks and resolve unexpected issues. Procedures are digital written documents with images and, occasionally, embedded videos delineating step-by-step instructions. Procedures are developed and certified by a large team of ground controllers and are made available to crew when completing tasks. While astronauts are trained on the tasks they must complete in space, it is often years between training and execution; due to this long gap, concepts such as just-in-time training and refresher training may be needed for future LDEMs. Astronauts rely on these procedures, in addition to the support provided by MCC, to clarify any steps or fix missteps. We believe that AR and IoT can improve procedure execution by providing more contextual information, preventing missteps, and guiding astronauts, all without relying on real-time, continuous communication with Earth.
The HCI Group at NASA Ames Research Center, collaborating with the San José State University Research Foundation and Carnegie Mellon University, has conducted several research investigations to test and validate, at an early stage, whether these emerging technologies show promise for usable procedure tools. In this paper, we outline the different approaches that we have taken to test the interaction benefits that such tools provide. The goal of these approaches is to increase crew autonomy by providing crew members with additional sources of information, execution tools, or both, by leveraging emerging technologies. These tools help crew members act without constant real-time communication with MCC by better integrating them with data generated by the habitat, assisting with procedure execution, and providing just-in-time training. While we do not explicitly aim to replace MCC, these tools and techniques are designed to help crew members complete their tasks in missions with increased communication latencies. Four studies are outlined herein, describing the research conducted and the key findings for each usability topic, and illustrating the different ways in which the HCI team has incorporated emerging technology into procedure execution tools. These studies focus on habitat state feedback, integration of a digital procedure in an analogous domain, designing a prototype AR system, and crew training improvements.

2. Habitat Integration

2.1. Project Description

This project investigated the integration between crew and habitat systems in order to assist astronauts during procedure execution. The Smart Habitat Augmented Reality Procedure (SHARP) prototype was designed and constructed to demonstrate how real-time spacecraft state feedback can guide a user to diagnose and solve a system-specific issue. SHARP translates traditionally formatted procedures into a real-time step-by-step visualization displayed through a user interface on an iPad. The SHARP application aimed to decrease the response time and cognitive load users might face by clearly illustrating the steps necessary to resolve an incident. To walk the user through the procedure in an interactive manner, SHARP integrated IoT sensors and AR to visualize the actions required. The system automatically marked steps as completed when the prototype’s sensors identified that the expected action had been performed.

2.2. Research

In this demo, we aimed to reduce the cognitive load on the user by displaying procedure steps one at a time in sequence. The demo also tested technology-related features such as sensor detection and AR visual output. The prototype was internally validated by a naive user, with no prior knowledge of or experience with the system, who ran through an ISS-like procedure using the SHARP tool. The user was asked to execute a procedure to remove and replace a failed Fan Power Supply Unit. SHARP presented the task grouped into three sections: actions required before the procedure, actions required during the procedure, and actions required after the procedure was completed. In the demo, the user pointed the iPad’s camera at the locker prototype, which allowed the iOS application to determine the orientation of the object and display a border around the detected volume in AR. Once the object was captured, the interface provided the user with the first step of the procedure. Sensors were placed in key locations within the physical mock-up where the user’s actions would be detected. Once a sensor detected the expected motion for the procedure step shown, the interface displayed the next step for the user to perform. By representing sensor activity within the digital interface, the SHARP tool demonstrated how to provide users with real-time feedback on the progress of the procedure.

2.3. Prototype

For this project, both a physical and a software prototype were constructed. The physical prototype consisted of an ISS locker constructed using foam core and integrated with IoT sensors (see Figure 2a,b). These sensors, connected to an ESP8266 chip, detected changes in the state of the habitat, such as the locker door being opened or closed. Based on these changes, feedback was provided to the users which allowed them to react accordingly in the procedure. Simultaneously, an iOS application was designed and built to enable a user to walk through a given procedure with AR assistance. This application was built using ARKit, Apple’s AR platform for iOS devices, which enables developers to produce apps that interact with the world using the device’s cameras and sensors. ARKit is able to recognize 3D objects and anchor augmented objects using world orientation tracking. By graphically rendering 3D models, the application provides visual instruction on the steps the user should follow. The iOS application published information to the server about the user’s progress and the real-time information from each of the IoT sensors’ state. This prototype demo allowed users to complete several steps of the procedure successfully through the illustration of individual steps.
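As an illustration of the step-advance behavior described above, the following minimal Python sketch shows how a server might mark a step complete when a sensor reports the action the current step expects. The class names, event strings, and step wording are hypothetical and not taken from the SHARP implementation.

```python
# Hypothetical sketch (not the SHARP flight code): advance a procedure when an
# IoT sensor reports the action the current step expects.

from dataclasses import dataclass


@dataclass
class Step:
    instruction: str     # text shown in the AR overlay
    expected_event: str  # sensor event that marks this step complete, e.g. "locker_door:open"


class ProcedureTracker:
    def __init__(self, steps):
        self.steps = steps
        self.index = 0

    @property
    def current_step(self):
        return self.steps[self.index] if self.index < len(self.steps) else None

    def on_sensor_event(self, event: str) -> bool:
        """Mark the current step complete if the reported event matches it."""
        step = self.current_step
        if step is not None and event == step.expected_event:
            self.index += 1   # auto-advance; the UI re-renders the next step
            return True
        return False          # ignore events that do not match the current step


# Example: a fragment of a fan power supply remove-and-replace task (steps are illustrative)
procedure = ProcedureTracker([
    Step("Open the locker door", "locker_door:open"),
    Step("Disconnect the fan power supply connector", "fps_connector:removed"),
    Step("Close the locker door", "locker_door:closed"),
])

procedure.on_sensor_event("locker_door:open")     # True -> UI shows the next step
print(procedure.current_step.instruction)         # "Disconnect the fan power supply connector"
```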
In this demo, we were able to successfully demonstrate that users could complete procedure steps with guidance provided by the digital tool and its visualization overlaid on the real-life locker prototype. For this initial research, the user’s ability to complete the demo with the SHARP tool was the main evidence of the usability of AR technology displayed through an interface. This was the first step in demonstrating that an untrained user could follow a complex set of instructions such as those seen in spaceflight operations. The digital tool allowed for automated step-by-step guidance through the procedure. The IoT sensors placed within the prototype enabled the tool to acknowledge the step the crew member had just completed and to automatically reflect the changes in the procedure viewer. Augmented reality provided the opportunity to look beyond what the crew member could physically see, including systems placed behind covered panels in the prototype, which would have otherwise called for dismantling the panel just for diagnosis purposes.

2.4. Conclusions

SHARP focused on how IoT sensors and AR could provide real-time feedback of the vehicle state to the user. SHARP included both a physical and a digital prototype to demonstrate how a user would be guided through each step of an incident solution. The ability to successfully integrate both aspects of the prototype serves as a proof-of-concept that the technology can guide a user to the completion of tasks. The learnings from this demo about the capabilities and limitations of the technology allow for a more informed process in any further exploration of this tool. The IoT placement within the physical prototype allowed specific actions to be detected throughout the demo. Through AR, the digital prototype was able to visually communicate the next step to be taken by the participant (see Figure 3). Through the use of these technologies, we discovered that object detection precision within the prototype is a key factor in providing accurate feedback to the user. If the sensors do not capture the necessary action, the system cannot determine when to provide the next step to be taken. Another important factor is clarity in the visual feedback once a step is completed. The interface must provide clear cues when a step has been completed in order to inform the user when to move on to the following step. This demo showed that when these two emerging technologies are integrated effectively, users can make use of vehicle feedback to perform procedure steps.

3. Ground Procedures

3.1. Project Description

This project introduced a proof-of-concept mixed reality application to improve ground procedures for service technicians at NASA Kennedy Space Center (KSC). Future crew members in LDEMs will perform complex hardware maneuvering, handling, and maintenance without real-time communications support from MCC. In this project, a ground procedure for handling and transport of delicate and complex flight hardware was addressed within a mixed reality interface. Ground procedures are performed at KSC for NASA by the Test and Operations Support Contract (TOSC), which supports the handling and transport of flight hardware from original equipment manufacturers to the launchpad. Technician procedure execution occurs by way of a Work Authorization Document (WAD), which is required to perform specific and complex ground processing tasks and work steps. WAD procedures are distributed in a PDF format and printed on paper to be brought to the site where tasks are performed. The printed PDF procedures tend to be long, repetitive, and can potentially lead to missteps and mistakes. Our research indicated that there is substantial precedent for emerging technology solutions such as AR within other industrial manufacturing domains, and an emerging technology solution may improve collaborative task execution for stakeholders. The primary research aims of this project were to improve the efficiency and reliability of technician procedure execution and enhance situational awareness for collaborative task execution of ground procedures. A mixed reality application was proposed to enable a hands-free interface for technician work execution, introduce a checklist format improving buyoff of work-steps from all collaborators, provide improved foresight to later parts of the procedure document, and integrate supplementary resources or information within a single interface.
Future LDEMs will require astronauts to perform complex collaborative procedures with highly sensitive hardware. Different astronauts may possess different certifications for certain repair or transport tasks, and procedures must support efficient and intuitive collaborative execution. The prototype for this project focused on a task requiring personnel at NASA KSC to safely transport flight-ready hardware from one facility to another. The task presents highly targeted work steps demanding buyoff from a single ground service technician, with collaborative buyoff required from other ground service personnel. While our research indicated the collaborative nature of ground service and processing is a significant aspect of procedure execution, due to time constraints the prototype was designed for a single service technician. Additionally, while a proof-of-concept was derived for a heads-up display, a functional prototype was instead tested with multiple users on an iPad.

3.2. Research

For this project, we conducted a literature review of repair, maintenance, and servicing tasks at NASA, contextual inquiry within primary and analogous domains, subject matter interviews, needs identification and validation for a primary user group, and formulation of requirements for a proof-of-concept technology interface. Twelve subject matter interviews were conducted at NASA KSC with TOSC field service technicians, engineers who author procedures, and facility managers. Although a host of stakeholders are involved in the execution of ground procedures, technicians became the target users for the project based on an evaluation of challenges (“pain points”). Our research revealed the following challenges for technician execution of ground procedures: current procedure documents do not provide sufficient foresight into upcoming tasks for efficient team management, planning, or organization; team communications are not linked to specific work steps; comments are not captured within the execution workflow and are typically communicated among collaborators via text message; and supplementary materials (manuals and resource information) are not inherently linked to the PDF procedure (and may be missing from TOSC-specific workflows). Technicians typically need their hands free to execute work, yet our research observed that they often carry large manuals or tablets with reference materials. Our research also indicated that a significant challenge of collaborative ground procedure execution was that buyoff tended to be difficult using paper copies of procedures in the field. Ultimately, ensuring task safety, efficiency, reliability, and troubleshooting were identified as significant areas of concern and potential areas for improvement. A mixed reality interface was proposed to overlay a checklist within a hands-free interface. Our team hypothesized that a mixed reality interface would improve WAD execution efficiency, provide additional foresight into upcoming tasks, and enable the user to quickly access task-specific information.

3.3. Prototype

After identifying user needs and requirements, design iterations were developed through paper prototyping, low fidelity digital prototyping, and user testing with new and repeat participants. The primary design objectives of the prototype were to improve the efficiency and reliability of technician procedure execution. A proof-of-concept was created for the mixed reality interface; however, constraints related to sharing the Microsoft HoloLens with remote users at KSC led us to create and evaluate a functional prototype on an iPad (see Figure 4a and Figure 5). The proof-of-concept mixed reality interface focused on the selected task of transferring flight hardware from one facility to another and made use of car repair parts and a makeshift dolly as stand-ins. In the proof-of-concept, video footage of the performed procedure was overlaid with a design representing the procedure as a checklist, and information beyond the current work step was given limited visibility. The proof-of-concept was designed for our primary user group, technicians, and the procedure execution video was taken from the technician’s point-of-view. Both in-person and remote user testing of the prototype was conducted through the InVision web application. The features incorporated within the iPad prototype included: the capability to contact collaborators and other teammates over the course of the procedure; access to supplementary materials such as manuals, help, and troubleshooting information; a dropdown menu “hiding” information unrelated to the current work step; and a condensed “checklist” format providing improved foresight into later steps of the procedure and improved navigation during time-sensitive work steps.
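To make the checklist and collaborative buyoff concept concrete, here is a minimal sketch of one possible data model for a WAD-style work step with per-role buyoff and in-workflow comments. The field names, roles, and example step are illustrative assumptions, not the prototype’s actual schema.

```python
# Hypothetical data model for a checklist-style work step with digital buyoff.
# Roles, fields, and the example step are illustrative, not from the KSC prototype.

from dataclasses import dataclass, field


@dataclass
class WorkStep:
    step_id: str
    text: str
    required_roles: list                           # roles that must buy off, e.g. ["technician", "QA"]
    buyoffs: dict = field(default_factory=dict)    # role -> name of the person who signed off
    comments: list = field(default_factory=list)   # captured in the workflow instead of text messages

    def buy_off(self, role: str, name: str) -> None:
        """Record a buyoff for one of the required roles."""
        if role in self.required_roles:
            self.buyoffs[role] = name

    @property
    def complete(self) -> bool:
        """A step is complete only when every required role has bought off."""
        return all(role in self.buyoffs for role in self.required_roles)


step = WorkStep("3.2.1", "Secure flight hardware to the transport dolly",
                required_roles=["technician", "QA"])
step.buy_off("technician", "A. Smith")
step.comments.append("Torque value verified against manual, section 4.")
print(step.complete)   # False until QA also buys off
```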
To evaluate the prototype, subject matter experts in ground processing at KSC and researchers within the NASA Ames HCI Group were identified for user testing. Multiple rounds of usability testing were conducted using both heuristic evaluation and think-aloud evaluation methods [24,25]. Users were asked to share their thoughts with each interaction in the demo as a means of identifying what information the user pays attention to, how the user brings prior knowledge to bear, and what the predominant usability issues may be based on the user’s reasoning. Five iterations of the prototype were tested and evaluated using the above methods. The success metrics of the interface included intelligibility, ease of use, and usability.

3.4. Conclusions

This project investigated how to improve the execution of ground procedures for field service technicians at NASA KSC. A mixed reality proof-of-concept was proposed to improve procedure execution efficiency and reliability. Field service technician work execution technologies within industrial manufacturing domains are already heavily invested in hands-free mixed reality solutions, with the goal of improving the efficiency and ease of work execution tasks without additional training or instruction. Most subject matter experts from KSC noted positive improvements to the clarity and efficiency of procedure execution using the iPad but expressed concern over the implementation of new solutions due to limited Wi-Fi availability in certain facilities at KSC. The mixed reality proof-of-concept proved beneficial to work execution by empowering the user to work hands-free, able to use manual repair and maintenance tools while the procedure steps remained visible within the user’s field of vision, and by providing an intuitive interface for procedure execution that did not require additional instruction or training. While additional prototyping or development relative to ground procedures for TOSC-specific workflows is not planned in the near future, the research findings of the project present implications for future work in mixed reality applications for ground and field servicing activities.

4. Procedure Execution

4.1. Project Description

In current ISS missions, astronauts are tasked with conducting a variety of science experiments and maintenance activities requiring heavy memory recall. As such, we designed various fidelities of prototypes to support a complex procedure that involves restocking food in a rodent habitat aboard the ISS that has often been used to conduct long-term animal experiments in space. This procedure involves identifying and locating specific tools needed to open the habitat, restocking food supplies for animals inside the habitat, and following steps in a particular order to ensure the safety of the animals and the astronaut carrying out the procedure. We iterated the designs of the prototypes based on feedback from usability tests of low-, medium-, and high-fidelity prototypes.

4.2. Research

The research process started with a literature review about what levels of instruction are necessary to complete a task [26,27,28,29], as well as how sensors [30,31] and augmented reality [32,33,34] can help to complete a task. We performed contextual inquiries within primary and analogous domains and conducted subject matter interviews with engineers and technicians at various NASA centers as well as retired astronauts. Performing contextual inquiries at the Arc Jet Complex at ARC and the Thermal Protection System Facility at KSC allowed us to develop an awareness of the technical and environmental conditions analogous to those in which astronauts perform tasks aboard the space station. We also participated in three experiential learning sessions within domains analogous to astronauts’ work, performing the activities ourselves to gain hands-on knowledge of the experience. Our research findings revealed the particular importance of implementing a feedback loop that empowers workers to influence procedure improvements, being able to meaningfully track status within a procedure, providing spatial references to help guide workers through a procedure, and having the flexibility to adjust the granularity of instructions based on a worker’s skill level and experience. After several refinements of increasing technology fidelity, we leveraged various technologies to eventually develop an intelligent, connected device that can maximize astronauts’ mobility (particularly by displaying procedure instructions on the AR interface and freeing both hands to execute the procedure) and provide meaningful guidance as they perform unfamiliar procedures.
We conducted usability tests of three different physical forms and various methods of spatial navigation and cognitive orientation using low fidelity prototypes, which included augmented reality, a wearable with audio instructions, and a tablet. Four participants performed a simple but unfamiliar cooking procedure. We determined that the augmented reality form was the most appropriate for the astronaut context since it enables hands-free control that travels with the user while still allowing for visual elements and easy scannability. More specifically, a visual interface can also provide visual affordances for verbal commands to help lower the load on the user’s working memory.
Usability research using a mid fidelity prototype was conducted with five NASA users with backgrounds in mission planning and user experience design, and usability research on the final high fidelity version was performed with six people from the same backgrounds. None of the participants had prior familiarity with the procedure that the prototype asked them to execute, which was restocking food in a rodent habitat, as is performed on the ISS. At the end of each usability test, we solicited feedback on various aspects of the prototype. We received positive feedback on the use of images alongside procedure instructions for easier execution, the display of previews of the next step that allowed users to accelerate step completion or consolidate multiple steps, the display of directional arrows on the user interface to locate tools or objects needed to execute the procedure, and the helpfulness of the speech recognition implementation, as users found the use of verbal commands to be natural. Areas of improvement captured in user feedback included finer granularity in the directional arrows that guide users during asset tracking and making the overall progress bar of the procedure more prominent; users found the progress bar useful, but they had not noticed it until much later in the procedure.

4.3. Prototype

We designed and built a working prototype of an augmented reality headset that provides astronauts hands-free instructions for procedure execution. The prototype used three single-board computers, one Intel Edison and two Raspberry Pi devices (see Figure 6 for a networking diagram). The Edison ran a web server that stored the procedure and kept track of the user’s progress. One of the Raspberry Pi computers served as the server for the front-end of the web application whose user interface was displayed on an iPad mini, which was mounted face-down from a helmet worn by the user. The AR interface was achieved by reflecting the user interface on the iPad mini running the web application onto a sheet of teleprompter glass that hung in front of the user (see Figure 7a).
A second Raspberry Pi computer, which was worn by the user, served as a Bluetooth beacon reader to scan for Bluetooth Low Energy (BLE)-based beacons affixed to tools needed to execute the procedure. This Raspberry Pi examined the strength of the Bluetooth signal being broadcast from a tool to roughly determine its distance (in increments of tens of feet) from the user in a room. The Raspberry Pi passed this distance information to the Intel Edison over Wi-Fi and, combined with a user interface directional arrow controlled by an experimenter, the AR interface then displayed a visual indicator of distance and direction to help the user locate and retrieve the tool or object needed to perform the current procedure step (see Figure 7b). Hands-free operation of this prototype was achieved by incorporating speech recognition: a speech recognition engine on a Windows 8-based mobile phone, paired with a headset worn by the user, captured verbal commands (e.g., “Ready for next step”), translated them to text, and sent them to the Intel Edison. If the converted phrase corresponded to a valid voice command, the system updated the user interface accordingly.
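The following sketch illustrates, under stated assumptions, the two pieces of glue logic described above: converting a beacon’s received signal strength into a coarse distance bucket and mapping recognized speech to procedure commands. The HTTP endpoints, command phrases, and path-loss constants are hypothetical stand-ins; the prototype’s actual interfaces are not documented here.

```python
# Hypothetical glue logic, assuming made-up HTTP endpoints on the Edison web server.
# Uses the third-party "requests" library for HTTP calls.

import requests


def rssi_to_distance_ft(rssi_dbm: float, tx_power_dbm: float = -59.0,
                        path_loss_exponent: float = 2.0) -> float:
    """Log-distance path-loss estimate; only the coarse buckets below are trusted."""
    distance_m = 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))
    return distance_m * 3.281


def distance_bucket_ft(rssi_dbm: float) -> int:
    """Round to the nearest ten feet, matching the coarse feedback shown in the UI."""
    return int(round(rssi_to_distance_ft(rssi_dbm) / 10.0)) * 10


def report_beacon(tool_id: str, rssi_dbm: float) -> None:
    # Hypothetical endpoint name; the real server API is not documented in the paper.
    requests.post("http://edison.local/api/beacon",
                  json={"tool": tool_id, "distance_ft": distance_bucket_ft(rssi_dbm)})


# Illustrative mapping from recognized phrases to procedure commands.
VOICE_COMMANDS = {
    "ready for next step": "next",
    "go back": "previous",
    "repeat step": "repeat",
}


def handle_transcript(transcript: str) -> None:
    command = VOICE_COMMANDS.get(transcript.strip().lower())
    if command:  # ignore unrecognized phrases
        requests.post("http://edison.local/api/command", json={"action": command})
```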

4.4. Conclusions

Usability testing was conducted in person in a laboratory. We provided each participant with the same task to complete and asked them to think aloud (verbalizing their thoughts as they used the prototype) during the procedure. We measured whether the participant was able to complete the task successfully and the time needed to complete the task. Performing user tests of this solution with a number of NASA employees led to feedback confirming the usefulness of the augmented reality interface design for executing the procedure and successfully locating the assets needed to complete it. User tests also confirmed the intuitive nature of issuing verbal commands to navigate through the interface hands-free. Users found the distance information and directional arrows overlaid in their field of vision effective for asset tracking. Speech recognition felt natural for navigating between steps of the procedure, as both hands remained available to complete the tasks; previously, one hand would typically be occupied holding a device displaying the procedure instructions. The integration of all these elements suggests that this solution can be applied to astronauts, enabling increased autonomy during procedure execution by reducing the dependencies of current methods. Participants credited the augmented reality and speech recognition features with helping them successfully complete the given task using the prototype, and future versions of the application will continue to refine these aspects. Improving the speech interface to be more natural and reducing the weight of the augmented reality hardware so it can be more easily worn are two specific areas of focus for the next iteration of this prototype.

5. Just-in-Time Training

5.1. Project Description

Future long-duration spaceflight missions will require new training skills and tools to assist in intra-vehicular (IV) procedure execution. Skills which are currently taught before flight will need to be provided as just-in-time training in-flight due to expanding mission timelines and requirements. We designed and prototyped a mobile procedure viewer for future IV crew members using augmented reality with the goals of increasing crew autonomy, decreasing training time, and reducing procedure execution errors. This project used the Microsoft HoloLens, an augmented reality headset capable of displaying semi-transparent “holograms” to the wearer which can be fixed to the user’s viewpoint or to an aspect of the environment. The HoloLens also allows users to fix objects in a specific location within a room which is ideal for providing information in context. This project used a similar ISS rodent research procedure as the project described in the previous section, but instead used a commercial augmented reality headset. This prototype integrates with an analog for a future space habitat outfitted with an IoT sensor network. Our final prototype was created by iterating over three design cycles with sixteen usability participants from the NASA Ames HCI Group.

5.2. Research

Our work on developing a prototype was divided into two main tasks. The first task, which we called “path visualization,” focused on guiding the participant to a tool or location. We chose this task because tools and objects are frequently lost aboard the ISS, and AR could be used to guide future astronauts to their locations based on embedded sensor data [35]. The second task, procedure execution, focused on providing just-in-time training and real-time feedback to an astronaut performing a typical procedure onboard the station. Before participants began their first task, they were asked to complete a training exercise (referred to as a warmup) to become familiar with HoloLens interactions and selectors. Combined, these two aspects provided guidance which allowed users with no previous knowledge of the layout of the environment or details of the task to successfully navigate to a lost tool and complete a procedure execution task similar to what would be required onboard the ISS. We recruited sixteen participants across three versions of the prototype and asked them to complete the path visualization and procedure execution tasks. We ran six participants using version one of the prototype, five participants using version two, and another five participants using version three. After each usability study, we conducted a semi-structured interview where we asked participants to describe their experience using the HoloLens and any difficulties they encountered completing our tasks.
The focus of the path visualization task was to create a guidance system that would help astronauts find their desired tool or destination onboard the space station. We simulated the task of locating a missing tool by requiring users to retrieve tools from another room in the building. The guidance was presented as a continuous line of holographic arrows in the augmented reality display. These arrows led users to the room and then, within the room, guided them to the precise location of the tool. The arrows on the line were generated at fixed intervals, and the path was fixed within the environment such that users would always have an arrow indicating the direction they should head in their field of view. Initial prototypes placed individual holographic arrows on walls with no line to follow, which produced confusion and made it more challenging for users to begin walking towards their desired destination. We were able to identify these points of confusion based on interviews where we asked participants to describe their subjective experience out loud during the entirety of both tasks. This feedback was used to measure subjective usability and that data was used to improve the interface in subsequent versions. We also measured performance by counting the number of tools each participant could successfully identify. If participants routinely failed at finding a tool, that was a clear indication that we were not providing the appropriate cues and affordances.
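As a rough illustration of the arrow-placement idea (not the prototype’s actual algorithm), the sketch below spaces arrow holograms at fixed intervals along a path defined by 3D waypoints so that an arrow is always ahead of the user. The coordinates and spacing are illustrative assumptions.

```python
# Hypothetical arrow placement along a guidance path; numbers are illustrative only.

from typing import List, Tuple

Vec3 = Tuple[float, float, float]


def place_arrows(path: List[Vec3], spacing_m: float = 1.0) -> List[Tuple[Vec3, Vec3]]:
    """Return (position, direction) pairs for arrow holograms placed along the path."""
    arrows = []
    for (x0, y0, z0), (x1, y1, z1) in zip(path, path[1:]):
        dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
        length = (dx * dx + dy * dy + dz * dz) ** 0.5
        if length == 0:
            continue
        direction = (dx / length, dy / length, dz / length)
        steps = max(1, int(length // spacing_m))
        for i in range(steps):
            t = i * spacing_m / length   # fraction of the way along this segment
            arrows.append(((x0 + dx * t, y0 + dy * t, z0 + dz * t), direction))
    return arrows


# A short example path from a room door to a tool location (meters, made up for illustration)
arrows = place_arrows([(0, 0, 0), (4, 0, 0), (4, 0, 3)], spacing_m=1.0)
print(len(arrows))  # 7 arrows along a 7 m path
```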
During the procedure execution part of the usability studies, we focused on providing new features and functionality beyond what is currently available for astronauts with PDF or web-based procedures. Our prototype guided the user through the procedure by providing a holographic animation for each step within the procedure and provided a holographic user interface (UI) to display the procedure step in context. We developed a shortened procedure based on a complex research task onboard ISS. This task was selected not just because of its similarity to ISS activities, but also because it required objects to be in specific orientations, which is difficult to communicate properly to astronauts in microgravity. The procedure required the user to manipulate objects and place them in specific orientations, which leveraged holographic animation overlays. The goal of this procedure was to remove an object from one box and transfer it into a separate box. After the user completed each step within the procedure, the next step would be presented, both in the text and animation, without any required input from the user. This was accomplished by taking advantage of the IoT sensors built into the task and with a Wizard of Oz approach. During the procedure execution task, we asked participants to think out loud so that we could identify moments of confusion either with the task or the use of augmented reality overlays. We used this data to evaluate usability and identify features or interface elements that we could improve.

5.3. Prototype

The prototype we developed takes advantage of the Microsoft HoloLens to display information to the user. The HoloLens is a hands-free AR HMD that uses gesture and voice to make selections and commands. This AR form factor is considered ideal (as opposed to hand-held mobile AR) for future astronauts, as they often need both hands to complete complex procedures. Our prototype used fixed-place holograms that provide relevant information near specific hardware or locations, as well as procedure instructions which stay in view and follow the user’s movements (see Figure 8). While we were able to collect a large amount of information from the environment (such as the user’s progress in the procedure and their proximity to tools), only the most relevant information was presented to users so as to not overload them. We found that the HoloLens’s small field of view also constrained the amount of information we could communicate at one time.
We embedded IoT sensors in the tools we required participants to find and on a wristband that participants wore as they completed our task (see Figure 9b). The IoT sensors used in our prototype were based on an ESP8266 chipset and were powered by small lithium batteries. A chipset was attached to each tool, which allowed us to provide information on its location, orientation, and movement. Each of these chips was equipped with an accelerometer as well as a light-emitting diode (LED) and had the ability to communicate back and forth with a central server. In order to determine the locations of the chipsets embedded on our tools, we used the Wi-Fi transceiver’s received signal strength to provide a rough estimate of the distance between objects; these signal strength scans run continuously and provide real-time location information. This location information allowed us to instantiate holograms within the headset at specific times and locations during the study. The connected IoT network paired with the HoloLens enabled us to reveal and remove relevant information when it was needed. As an example, we used the HoloLens to indicate and guide the user to tools that they were tasked with finding, identified when the tools were picked up using the accelerometer data collected by each tool, and then updated the interface to move users on to the next step in the usability study. Showing and hiding salient information was a requirement because of the limited field of view within the HoloLens and the large amount of information we were trying to communicate, such as tool location, procedure steps, and holographic overlays (see Figure 8 and Figure 9a).
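The sketch below illustrates one plausible version of the server-side logic described above: detecting that a tool has been picked up from its accelerometer stream and switching which holograms the headset shows. The threshold, message format, and the hud interface are assumptions for illustration, not the prototype’s actual code.

```python
# Hypothetical pickup detection from tool accelerometer data; the threshold, window
# size, hologram names, and "hud" interface are illustrative assumptions.

from collections import deque

PICKUP_THRESHOLD_G = 0.15   # deviation from 1 g at rest; would be tuned per tool in practice


class ToolMonitor:
    def __init__(self, tool_id: str, window: int = 10):
        self.tool_id = tool_id
        self.samples = deque(maxlen=window)   # recent accelerometer magnitudes (in g)

    def add_sample(self, magnitude_g: float) -> bool:
        """Return True once recent motion suggests the tool has been picked up."""
        self.samples.append(magnitude_g)
        if len(self.samples) < self.samples.maxlen:
            return False   # wait for a full window before deciding
        mean = sum(self.samples) / len(self.samples)
        spread = max(self.samples) - min(self.samples)
        return abs(mean - 1.0) > PICKUP_THRESHOLD_G or spread > PICKUP_THRESHOLD_G


def on_accelerometer_message(monitor: ToolMonitor, magnitude_g: float, hud) -> None:
    """Advance the study flow once the tracked tool is in the user's hand."""
    if monitor.add_sample(magnitude_g):
        hud.hide_hologram(f"find_{monitor.tool_id}")   # stop guiding the user to the tool
        hud.show_hologram("next_step")                 # reveal the next procedure step
```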

5.4. Conclusions

User testing at NASA Ames led to several best practices and guidelines for AR- and IoT-assisted procedure execution. Based on our interviews and our success criteria, we found the use of a paper prototype warmup exercise to be useful, as most users have no experience or context for interacting with augmented reality. It can be challenging to describe the relative orientation of mechanical parts for maintenance or assembly tasks. According to participants, the use of holograms to provide 3D translation and rotation information was especially useful compared to traditional, text-based procedure steps. As the HoloLens can be operated hands-free, users can continue a procedure while in the middle of complex assembly. When the holograms were aligned and overlaid with real-world parts, the 3D animation removed ambiguity regarding the procedure. Users could watch and follow along with the animation in real time, without any need to interpret written procedures. The HoloLens’s capability to persist location-specific holograms provided feedback to the user which could not have been provided by tablet or phone-based AR solutions. Future versions of this prototype should explore holographic animation overlays more thoroughly to identify which tasks within ISS procedures are best suited to AR assistance. Our preliminary findings suggest that more complex maintenance and installation tasks are best suited for this interface; however, more exploration is needed. The use of a wristband sensor allowed the prototype to sense the user’s proximity to relevant objects and helped to confirm when objects had been picked up. Combined with the sensor data provided by the IoT devices, our prototype enabled novice users to complete our procedure correctly, without any instructor guidance.

6. Discussion and Concluding Remarks

Current space exploration missions rely on procedures to be executed by crew members with full-time assistance from MCC. Historically, and in spaceflight operations aboard the International Space Station, crew members have relied on extensive paper-based documentation to successfully complete procedures with real-time audio assistance from MCC. Real-time communications have ensured procedures are completed with high reliability and efficiency. As NASA ventures into missions beyond low Earth orbit, where communication delays might reach up to 22 min one way (44 min round trip), this assistance cannot be relied upon. NASA and other spaceflight agencies actively research technologies to mitigate risks to crew members associated with long-duration spaceflight. Ongoing research addresses how to empower crew members to complete procedures and achieve mission success with greater Earth-independence. As part of this effort, research in emerging technologies aims to ensure crew self-sufficiency when real-time communications with the ground will not be available. Our research has focused on how emerging technologies augment the efficiency and reliability of crew performance in selected examples of procedure execution.
NASA Ames Research Center’s HCI team has been evaluating emerging technologies as a possible solution to mitigate the risks brought by this new challenge. In LDEMs, crew members will be challenged to act independently to solve the mission-specific issues that might arise. Our team has addressed this research question through four different focuses: the use of emerging technology to improve training, prototyping proof-of-concepts using NASA analog missions, developing prototype tools for procedure execution, and testing the integration between habitat and procedure tools. These studies have shown that emerging technologies such as augmented reality and Internet of Things sensors have the potential to make procedure execution more intuitive for crew members, allowing for a more user-centric procedure experience than current paper-based documents.
Through our different research studies, emerging technologies proved beneficial for different aspects of future spaceflight crew operations. Due to the extended time between training and in-flight procedure execution, skills will have to be provided as just-in-time training. In our first study, we demonstrated that the use of animated holograms to provide overlaid information allowed for improved performance in maintenance operations. Our following study, utilizing analogous domains such as technician procedure execution at KSC, allowed our team to develop a proof-of-concept tool to improve procedures. Users were ultimately empowered by a mixed reality interface that allowed them to perform procedures hands-free with clear visibility of procedure steps. Within procedure execution, our next study explored the benefits of AR-enabled tools. Through the headset prototype, users were able to intuitively execute procedures using verbal commands and visual cues. Finally, we investigated the benefits of human-vehicle integration for efficiency in procedure execution. Here, IoT sensing allowed users to understand the real-time vehicle state and successfully complete several procedure steps using the interface guidance.
Our goal was to identify how best to map emerging technologies to human spaceflight operations tasks. Integrating AR and IoT was most helpful for tasks that require astronauts to work with their hands, such as manual or maintenance tasks. These technologies were also particularly beneficial for communicating complex procedural instructions. Visualizations of spatially driven instructions successfully assisted naïve, minimally trained users in completing tasks. Animations showing object orientations and tool or hardware locations facilitated task execution, and indicators of successful procedural step completion were essential in guiding users. Finally, sensors were readily incorporated into tools and worn by users; multiple sensors were necessary to robustly model procedural step completion. Future research will aim to further identify the key design elements that will enable us to design future systems with high user adoption and acceptable usability.
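The observation that multiple sensors were needed can be expressed as a simple redundancy rule: mark a step complete only when at least k of n independent cues agree, so that a single noisy or missed reading neither blocks nor falsely advances the procedure. The sensor names below are invented for illustration; this is a hedged sketch of the idea, not the logic used in our prototypes.

from typing import Mapping


def step_complete(cues: Mapping[str, bool], required_agreement: int = 2) -> bool:
    """Return True when at least `required_agreement` cues report completion."""
    return sum(cues.values()) >= required_agreement


if __name__ == "__main__":
    cues = {
        "wristband_proximity": True,     # user reached into the work area
        "drawer_contact_switch": False,  # missed event (sensor noise)
        "load_cell_delta": True,         # mass change shows the item was removed
    }
    print("Mark step complete:", step_complete(cues))  # True: 2 of 3 cues agree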
We explored the benefits and limitations of integrating technologies such as AR and IoT to assist with procedure execution. From training to in-flight operations, these technologies can bring intuitiveness and efficiency to future spaceflight tools. Based on our studies' results, we draw the early conclusion that emerging technologies provide opportunities for increased crew autonomy in future spaceflight missions.

Author Contributions

Conceptualization, J.J.M.; methodology, J.A.K., I.C.T.V., H.L.B., J.W.G., R.K. and M.Y.; software, J.A.K., I.C.T.V., H.L.B., J.W.G., R.K. and M.Y.; validation, J.A.K., I.C.T.V., H.L.B., J.W.G., R.K. and M.Y.; formal analysis, J.A.K., I.C.T.V., H.L.B., J.W.G., R.K. and M.Y.; investigation, J.A.K., I.C.T.V., H.L.B., J.W.G., R.K. and M.Y.; resources, J.J.M.; data curation, J.A.K., I.C.T.V., H.L.B., J.W.G., R.K. and M.Y.; writing—original draft preparation, J.A.K., I.C.T.V., H.L.B., J.W.G., R.K., M.Y. and J.J.M.; writing—review and editing, J.A.K., I.C.T.V., H.L.B., J.W.G., R.K., M.Y. and J.J.M.; visualization, I.C.T.V., H.L.B., J.W.G., R.K. and M.Y.; supervision, J.J.M.; project administration, J.J.M.; funding acquisition, J.A.K. and J.J.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was performed under a US Govt. Contract in the Human-Systems Integration Division at NASA Ames Research Center. This research was funded in part by the NASA Human Research Program’s Human Factors and Behavior Performance Element (NASA Program Announcement number 80JSC017N0001-BPBA) Human Capabilities Assessment for Autonomous Missions (HCAAM) Virtual NASA Specialized Center of Research (VNSCOR) effort (NASA grant number 80NSSC19K0657).

Institutional Review Board Statement

The studies were conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of NASA Ames Research Center.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the studies.

Data Availability Statement

Data sharing not applicable.

Acknowledgments

The authors would like to acknowledge the participants that took part in each of the usability studies and technology demonstrations, without whom this work would not be possible. The authors would like to acknowledge Ken St. Clair, Tait Wayland, Ansaria Mohammed, Frank Teng, Vivek Pai, Amalya Henderson, Hyunsoo Andrew Park, Richard Joyce, Colleen Carroll, and Steven Hillenius for their contributions to these projects. The authors would also like to acknowledge the many summer student interns that contributed to these projects.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AR	Augmented Reality
ARC	Ames Research Center
BLE	Bluetooth Low Energy
HCI	Human-Computer Interaction
HMD	Head Mounted Display
IoT	Internet of Things
ISS	International Space Station
IV	Intra-Vehicular
KSC	Kennedy Space Center
LDEMs	Long-Duration Exploration Missions
LED	Light-Emitting Diode
MCC	Mission Control Center
NASA	National Aeronautics and Space Administration
OR	Operating Room
PDF	Portable Document Format
SHARP	Smart Habitat Augmented Reality Procedure
TOSC	Test and Operations Support Contract
UI	User Interface
VR	Virtual Reality
WAD	Work Authorization Document

References

  1. Sonett, C. Report of the Ad Hoc Working Group on Apollo Experiments and Training on the Scientific Aspects of the Apollo Program; Technical Report; NASA: Washington, DC, USA, 1963. [Google Scholar]
  2. Drake, B.G. Human Exploration of Mars Design Reference Architecture 5.0; Technical Report NASA-SP-2009-566; NASA: Washington, DC, USA, 2009. [Google Scholar]
  3. Smith, M.; Craig, D.; Herrmann, N.; Mahoney, E.; Krezel, J.; McIntyre, N.; Goodliff, K. The Artemis Program: An Overview of NASA’s Activities to Return Humans to the Moon. In Proceedings of the 2020 IEEE Aerospace Conference, Big Sky, MT, USA, 7–14 March 2020; pp. 1–10. [Google Scholar] [CrossRef]
  4. Drake, B.G.; Hoffman, S.J.; Beaty, D.W. Human exploration of Mars, Design Reference Architecture 5.0. In Proceedings of the 2010 IEEE Aerospace Conference, Big Sky, MT, USA, 6–13 March 2010; pp. 1–24. [Google Scholar] [CrossRef] [Green Version]
  5. Satava, R.M. Virtual reality surgical simulator: The first steps. Surg. Endosc. 1993, 7, 203–205. [Google Scholar] [CrossRef] [PubMed]
  6. Kühnapfel, U.; Çakmak, H.; Maaß, H. Endoscopic surgery training using virtual reality and deformable tissue simulation. Comput. Graph. 2000, 24, 671–682. [Google Scholar] [CrossRef]
  7. Seymour, N.E.; Gallagher, A.G.; Roman, S.A.; O’Brien, M.K.; Bansal, V.K.; Andersen, D.K.; Satava, R.M. Virtual Reality Training Improves Operating Room Performance: Results of a Randomized, Double-Blinded Study. Ann. Surg. 2002, 236, 458–464. [Google Scholar] [CrossRef] [PubMed]
  8. Grantcharov, T.P.; Kristiansen, V.B.; Bendix, J.; Bardram, L.; Rosenberg, J.; Funch-Jensen, P. Randomized clinical trial of virtual reality simulation for laparoscopic skills training. Br. J. Surg. 2004, 91, 146–150. [Google Scholar] [CrossRef] [PubMed]
  9. Boud, A.; Haniff, D.; Baber, C.; Steiner, S. Virtual reality and augmented reality as a training tool for assembly tasks. In Proceedings of the 1999 IEEE International Conference on Information Visualization (Cat. No. PR00210), London, UK, 16–18 July 1999; pp. 32–36. [Google Scholar] [CrossRef]
  10. Webel, S.; Bockholt, U.; Engelke, T.; Gavish, N.; Olbrich, M.; Preusche, C. An augmented reality training platform for assembly and maintenance skills. Robot. Auton. Syst. 2013, 61, 398–403. [Google Scholar] [CrossRef]
  11. Norouzi, N.; Bruder, G.; Belna, B.; Mutter, S.; Turgut, D.; Welch, G. A Systematic Review of the Convergence of Augmented Reality, Intelligent Virtual Agents, and the Internet of Things. In Artificial Intelligence in IoT; Al-Turjman, F., Ed.; Springer International Publishing: Cham, Switzerland, 2019; pp. 1–24. [Google Scholar] [CrossRef]
  12. Kim, J.C. Multimodal Interaction with Internet of Things and Augmented Reality: Foundations, Systems and Challenges; Luleå University of Technology: Luleå, Sweden, 2020. [Google Scholar]
  13. Al-Fuqaha, A.; Guizani, M.; Mohammadi, M.; Aledhari, M.; Ayyash, M. Internet of Things: A Survey on Enabling Technologies, Protocols, and Applications. IEEE Commun. Surv. Tutor. 2015, 17, 2347–2376. [Google Scholar] [CrossRef]
  14. Giménez, R.; Pous, M.; Tecnològic, B.C. Augmented reality as an enabling factor for the Internet of Things. In Proceedings of the W3C Workshop: Augmented Reality on the Web, Barcelona, Spain, 15–16 June 2010. [Google Scholar]
  15. Zhang, L.; Chen, S.; Dong, H.; El Saddik, A. Visualizing Toronto City Data with HoloLens: Using Augmented Reality for a City Model. IEEE Consum. Electron. Mag. 2018, 7, 73–80. [Google Scholar] [CrossRef]
  16. Phupattanasilp, P.; Tong, S.R. Augmented Reality in the Integrative Internet of Things (AR-IoT): Application for Precision Farming. Sustainability 2019, 11, 2658. [Google Scholar] [CrossRef] [Green Version]
  17. Cao, Y.; Xu, Z.; Li, F.; Zhong, W.; Huo, K.; Ramani, K.V. Ra: An In-Situ Visual Authoring System for Robot-IoT Task Planning with Augmented Reality. In Proceedings of the 2019 on Designing Interactive Systems Conference; ACM: San Diego, CA, USA, 2019; pp. 1059–1070. [Google Scholar] [CrossRef]
  18. Alam, M.F.; Katsikas, S.; Beltramello, O.; Hadjiefthymiades, S. Augmented and virtual reality based monitoring and safety system: A prototype IoT platform. J. Netw. Comput. Appl. 2017, 89, 109–119. [Google Scholar] [CrossRef]
  19. Sauer, J.; Sonderegger, A. The influence of prototype fidelity and aesthetics of design in usability tests: Effects on user behaviour, subjective evaluation and emotion. Appl. Ergon. 2009, 40, 670–677. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Lim, Y.K.; Stolterman, E.; Tenenberg, J. The anatomy of prototypes: Prototypes as filters, prototypes as manifestations of design ideas. Acm Trans. Comput. Hum. Interact. 2008, 15, 1–27. [Google Scholar] [CrossRef]
  21. McCurdy, M.; Connors, C.; Pyrzak, G.; Kanefsky, B.; Vera, A. Breaking the fidelity barrier: An examination of our current characterization of prototypes and an example of a mixed-fidelity success. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems—CHI, Montréal, QC, Canada, 23–28 June 2006; ACM Press: Montréal, QC, Canada, 2006; p. 1233. [Google Scholar] [CrossRef]
  22. McCurdy, M. Planning tools for Mars surface operations: Human-Computer Interaction lessons learned. In Proceedings of the 2009 IEEE Aerospace Conference, Big Sky, MT, USA, 7–14 March 2009; pp. 1–12. [Google Scholar] [CrossRef]
  23. Marquez, J.J.; Pyrzak, G.; Hashemi, S.; McMillin, K.; Medwid, J. Supporting Real-Time Operations and Execution through Timeline and Scheduling Aids. In Proceedings of the 43rd International Conference on Environmental Systems; American Institute of Aeronautics and Astronautics: Vail, CO, USA, 2013. [Google Scholar] [CrossRef] [Green Version]
  24. Nielsen, J.; Phillips, V.L. Estimating the relative usability of two interfaces: Heuristic, formal, and empirical methods compared. In Proceedings of the INTERACT ’93 and CHI ’93 Conference on Human Factors in Computing Systems; Association for Computing Machinery: Amsterdam, The Netherlands, 1993; pp. 214–221. [Google Scholar] [CrossRef]
  25. Nielsen, J. Designing Web Usability; New Riders: Indianapolis, IN, USA, 2000. [Google Scholar]
  26. Clark, R.C.; Mayer, R.E. E-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning; Pfeiffer: San Francisco, CA, USA, 2003. [Google Scholar]
  27. Chandler, P.; Sweller, J. Cognitive Load While Learning to Use a Computer Program. Appl. Cogn. Psychol. 1996, 10, 151–170. [Google Scholar] [CrossRef]
  28. Koedinger, K.R.; Booth, J.L.; Klahr, D. Instructional Complexity and the Science to Constrain It. Science 2013, 342, 935–937. [Google Scholar] [CrossRef] [PubMed]
  29. Margulieux, L.E.; Guzdial, M.; Catrambone, R. Subgoal-labeled instructional material improves performance and transfer in learning to develop mobile applications. In Proceedings of the Ninth Annual International Conference on International Computing Education Research—ICER ’12; ACM Press: Auckland, New Zealand, 2012; p. 71. [Google Scholar] [CrossRef]
  30. Fogarty, J.; Hudson, S.E.; Atkeson, C.G.; Avrahami, D.; Forlizzi, J.; Kiesler, S.; Lee, J.C.; Yang, J. Predicting human interruptibility with sensors. Acm Trans. Comput. Hum. Interact. 2005, 12, 119–146. [Google Scholar] [CrossRef]
  31. Hooper, C.J.; Olivier, P.; Preston, A.; Balaam, M.; Seedhouse, P.; Jackson, D.; Pham, C.; Ladha, C.; Ladha, K.; Plötz, T. The french kitchen: Task-based learning in an instrumented kitchen. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing—UbiComp ’12; ACM Press: Pittsburgh, PA, USA, 2012; p. 193. [Google Scholar] [CrossRef]
  32. Haniff, D.; Baber, C. User evaluation of augmented reality systems. In Proceedings of the Seventh International Conference on Information Visualization (IV 2003), London, UK, 18 July 2003; pp. 505–511. [Google Scholar] [CrossRef]
  33. Henderson, S.J.; Feiner, S. Evaluating the benefits of augmented reality for task localization in maintenance of an armored personnel carrier turret. In Proceedings of the 2009 8th IEEE International Symposium on Mixed and Augmented Reality, Orlando, FL, USA, 19–22 October 2009; pp. 135–144. [Google Scholar] [CrossRef] [Green Version]
  34. Marner, M.R.; Irlitti, A.; Thomas, B.H. Improving procedural task performance with Augmented Reality annotations. In Proceedings of the 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Adelaide, SA, Australia, 1–4 October 2013; pp. 39–48. [Google Scholar] [CrossRef]
  35. Fink, P.W.; Kennedy, T.F.; Rodriguez, L.; Broyan, J.L.; Ngo, P.H.; Chu, A.; Yang, A.; Schmalholz, D.M.; Stonestreet, R.W.; Adams, R.C.; et al. Autonomous Logistics Management Systems for Exploration Missions. In AIAA SPACE and Astronautics Forum and Exposition; American Institute of Aeronautics and Astronautics: Orlando, FL, USA, 2017. [Google Scholar] [CrossRef]
Figure 1. Astronaut Lee Archambault, STS-119 commander, looks over checklists from the commander's station on the flight deck of Space Shuttle Discovery. Courtesy NASA.
Figure 2. (a) ISS Locker Prototype. (b) Initial locker demonstration.
Figure 3. User interface demonstrating detected volume and providing a procedure execution step.
Figure 4. (a) iPad Interface. (b) Proof-of-concept for heads-up display.
Figure 5. Detail of iPad Procedure Interface.
Figure 6. Networking diagram showing the connections over Wi-Fi of the two Raspberry Pis, Intel Edison, Windows Phone, and Web App.
Figure 7. (a) AR display reflected from iPad mini running web application onto teleprompter glass in user's field of view. (b) AR user interface displaying the current procedure step with distance and direction of the object's location that the user needs to retrieve. To aid the user, a list of valid verbal commands is also available at the bottom of the user interface, as well as a progress bar of the current procedure at the top.
Figure 8. Holographic animations of the rodent research habitat showed users how to complete the task.
Figure 9. (a) HoloLens holographic overlays. (b) Wristband with ESP8266 chipset.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
