Article

Hand-Controlled User Interfacing for Head-Mounted Augmented Reality Learning Environments

Department of Games and Visual Effects, School of Digital, Technologies and Arts, University Quarter, College Road, Stoke-on-Trent, Staffordshire ST4 2DE, UK
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2023, 7(6), 55; https://doi.org/10.3390/mti7060055
Submission received: 14 April 2023 / Revised: 18 May 2023 / Accepted: 19 May 2023 / Published: 26 May 2023

Abstract
With the rapid expansion of technology and hardware availability within the field of Augmented Reality, building and deploying Augmented Reality learning environments has become more logistically viable than ever before. In this paper, we focus on the development of a new mobile learning experience for a museum by combining multiple technologies to provide additional human–computer interaction possibilities, both to reduce barriers to entry for end-users and to provide natural interaction methods. Using our method, we implemented a new approach to gesture-based interactions for Augmented Reality by combining two devices, a Leap Motion and a Microsoft HoloLens (1st Generation), via an intermediary device over local-area networking. This was carried out with the intention of comparing this method against alternative forms of Augmented Reality to determine which implementation has the largest impact on adult learners’ ability to retain information. A control group was used to establish data on memory retention without the use of Augmented Reality technology, along with three focus groups to explore the different methods and locations. The results show that adult learners retain the most overall information when educated through a traditional lecture, with a statistically significant difference between the methods; however, the use of Augmented Reality resulted in a slower rate of knowledge decay between testing intervals. This contrasts with existing research, as adult learners did not respond to the technology in the same way that child and teenage audiences previously have, which suggests that prior research may not be generalisable to all audiences.

1. Introduction

Head-mounted devices offer bespoke opportunities for the wearer to engage with their local environment; however, this is also true for other forms of Augmented Reality (AR) [1]. The main attraction of head-mounted devices (HMDs) is the concept of physical immersion into the AR environment, the ability to survey the scene with little more than the turn of a head [2] and examine virtual objects in a more natural way [3].
In cases where the only requirement of the AR project is the visualisation of virtual objects [4,5,6], the technology functions precisely as intended with no additional devices required. When the experience is designed with the intention of user interaction with the AR environment, however, there are severe limitations depending upon the technology being used. For example, the Microsoft HoloLens (1st Generation) offers the possibility of interacting with virtual objects using an air-tap hand gesture or voice commands [7], but only within a limited capacity, as all interactions must be structured to adhere to a single tap or a few spoken syllables.
Other approaches to interaction have included manipulating physical objects in the real-world [8,9,10] or even using depth cameras to trigger interactions when a user’s hand enters a pre-configured event-handling zone [11]. However, these systems are limited and are either unsuitable for deployment to an HMD or lack the complexity for users to interact with a system in a way that feels natural to them. Even the HoloLens 2, with its Instinctual Interactions system [12], only expands to proximity-based interactions and actively recommends against creating additional custom gestures, which is understandable from the perspective of a developer with foreknowledge of the limitations of the hardware, but may be confusing to a user who would otherwise attempt to use their hands in a different way.
For both this reason and to facilitate further research into the impact of AR technology on long-term memory retention, we developed a method for bridging the gap between HMDs and Natural Interactions to allow for a truly mobile Learning Experience—an experience where the user can interact with AR objects entirely with their hands. This experience was designed specifically for the purpose of creating an educational AR learning environment [13] for the “Journey” exhibition at the National Holocaust Centre and Museum in Newark, an exhibition that is dedicated to the preservation of artefacts and stories from the Kindertransport.
The purpose of this research is twofold: the first is to create an Augmented Reality Learning Experience for the National Holocaust Centre and Museum, and the second is to answer the following research questions:
  • What impact does Augmented Reality have on long-term memory retention compared to a classroom learning experience?
  • Does the technology used to implement Augmented Reality impact how much information is retained (handheld versus head-mounted)?
  • Does the location of the learner have any impact on the amount of information they retain (classroom versus museum environment)?
  • Does the age of the participant impact the amount of information they retain from either learning method?
These questions, when answered, should provide a clearer understanding of the potential benefits or detriments to utilising Augmented Reality as a learning tool, along with which environments it may be most suited for and which methods users respond better to. Within the context of our research, we explore how these questions apply to adult audiences, who are typically not the focus of research into Augmented Reality for educational purposes and may not respond to the technology in the same way as younger audiences.

2. Review of Literature

Interaction methods and methods of implementing AR technologies have been broken down into specific categories in a report by Aliprantis et al. [14], which categorises them as follows:
  • Information Browsers (using mobile devices to align virtual content with the real world);
  • Three-dimensional interaction with spatial input devices (devices such as 3D mice, joysticks, wand pointing devices, etc.);
  • Tangible user interfaces (manipulating virtual objects by using physical objects);
  • Natural user interfaces (body motions, gestures, and other natural interactions);
  • Multimodal user interfaces (combining different input modalities).
Whilst these categories do cover many interaction methodologies, they are not all-encompassing and do not provide categories for certain input types. For example, the AR game Pokémon GO [15] utilises mobile AR functionality to align the player with the real world and integrate physical presence into the core gameplay loop [16], which may be considered an Information Browser; however, this is only applicable to the first half of the gameplay loop, when the player is searching for a Pokémon to catch. During the second half of the gameplay loop, the player must interact with a mobile user interface via touchscreen inputs to simulate the actions of throwing a ball or using items. Other mobile-based AR games and studies have utilised similar input methods [17,18], and so a sixth category may be required to encompass these: virtual user interfaces. As the field of AR grows, it is possible that additional categories may need to be added to this list.
Interaction devices such as the Leap Motion Sensor [19] have been considered for other AR applications; for example, a study by [3] utilised it in research into interaction methods for AR devices. This study used Google Cardboard with a Samsung Galaxy Note 3 Neo as the medium for AR rendering, with the participant sitting at a desk with a Leap Motion Sensor connected to a laptop. This study did permit participants to engage in natural interactions in an AR environment; however, it was very limited, as the participant was forced to remain at the desk to perform any gestures. Although functional, this solution would only be appropriate for desk-based activities. For an environment such as a museum or heritage site, where the entire available space has the potential to be augmented for learning activities, full mobility for the user would be required.
Gesture-based controls have been used in other AR studies, although with more limitations. Research on stress management by Na et al. [2] featured a virtual cat within an AR environment for participants to play with. Participants wore a Microsoft HoloLens and interacted with the cat via the device’s air-tap gesture. By interacting with a ball, food, or the cat itself, participants could simulate various interactions to play with it, along with using some voice commands to give the cat basic directions. Although the focus of the study was managing stress, it also demonstrated the value of gesture-based controls within AR environments in allowing users a full range of natural interactions. Being able to pet the cat or throw the ball with natural interactions may feel more engaging for users than performing the same tapping gesture for each interaction.
Virtual Reality (VR) has also seen innovations in interaction methods for users to engage with their virtual environments, which could potentially be co-opted for usage within AR environments. An example of this would be the Oculus Rift touch controllers [20], which can be used to simulate hand position, rotation, and finger movement using gyroscopic controls and buttons. These were used in a study by Cho et al. [21] in their work on creating an asymmetrical interaction system between AR and VR environments. This study did not utilise the controllers specifically for AR; instead, the authors created an application where one user controlled a scene with a mobile AR device, and another user could control objects within it using VR with the Oculus controllers. Although this did not allow for direct interactions within an AR environment using the Oculus controllers, the synchronisation of inputs between the two applications does suggest that they could serve an AR environment as a means of gesture-based controls or natural interactions.
Interfacing with VR systems using gesture-based controls has also been explored previously, including by the makers of the Leap Motion Sensor, who produced a mount specifically for attaching the device to VR headsets. A demonstration using a VR headset and two Leap Motion Sensors was created by Worallo and Hartley [22] in their research on hand interactions for VR, in which the authors tested participant reactions to three different input methods: HTC Vive controllers, Manus VR Gloves, and Leap Motion Sensors. It was concluded that although average task times were longer when using Leap Motion Sensors, participants expressed that it was the “most natural and realistic experience”. VR configurations have a distinct advantage for deploying interaction methods such as the Leap Motion Sensor: the user must remain within proximity to a computer due to the requirements of their headset, meaning that an additional cable tethered to them adds no additional risk factor. However, this would not work for an Augmented Reality solution, as the user should not be tethered to a physical location at all.
Research has also been undertaken on user perception of different AR hardware in a study by Baumeister et al. [23] on the cognitive cost of using AR displays. This study required a peripheral device to facilitate inputs, and a Bluetooth keyboard was used as it was compatible with the multiple devices in their research. Participants were tasked with using their AR device to find annotations in AR space and press a corresponding button, and were also asked to rate the mental effort required to complete the tasks. Although this study did not focus on interaction methods, it did identify that whilst participant reaction time was faster with an HMD, participants recorded a higher mental effort rating. This could hinder potential users if they are dissuaded by cognitive overload when using such hardware.
Based on existing research, it was estimated that the AR groups would perform better; for example, the inspiration for this study was research performed by Billinghurst and Duenser [24] on the usage of AR inside a classroom. In that study, participants used an AR application to enhance storybooks to test for motivation and performance and were tested both immediately after their learning experience and again four weeks later. Participants who used AR were found to retain 12% more information immediately after their learning experience and 10% more after the four-week period. That study was limited to a period of four weeks and was only conducted within a classroom environment; our experiments include additional locations and AR implementations and test over a longer period. We originally aspired to test over a much longer period for a deeper analysis of the impact of AR on long-term memory retention, but this was not logistically possible within the scope of this research; we do, however, intend to perform longer-term studies at a later date.
Another study suggests that the technology has the potential to assist with memory retention for history education, although to a lesser extent. Research performed by Lim and Lim [25] experimented with using AR for memory retention; however, this study had a limited sample of five participants, no control group, and only a single mobile AR application. Within the context of the study, participants responded well to the technology and expressed opinions on how it could be useful to their future learning; however, additional research is required to substantiate the results. Our research endeavours to assist with substantiating these claims and to provide additional context on how the environment and the means of deploying AR may contribute to better memory retention for learners.
Other studies have also examined the possibility of AR being used to further memory retention in participants. Such was the case in a study by Menon et al. [26] that examined how technology could be used to enhance nursing education. This study used QR code-based fiducial markers to align its AR content, which took the form of a visualised human anatomy overlaid against a training mannequin. Participants used a Magic Leap One AR headset, and the study mentions a controller but does not describe which controller was used or what interactions participants could perform with it. The study used both a control and a focus group to assess the impact of AR; both groups were tested on their ability to perform a physical examination and were tested again at an interval of “two to four weeks”, although the authors did not provide a precise timeframe for the secondary testing of each group, nor did they disclose how many participants were in each group. Participants from the focus group scored marginally better; however, the authors stated that all participants were already performing at a high level prior to the experiment, which suggests that whilst the AR may have been beneficial, it only served to provide additional context to information the participants already knew. As such, this study may not be a reliable indicator of the effect of the technology on memory retention, as prior knowledge could potentially have skewed the results.
This was also the case in research by Cook [27], which investigated the value of AR when deployed in a Music Technology classroom to identify whether students could use the technology to form visual relationships between various components and recognise their functions for practical usage. This study made use of the HP Reveal app for smart devices to augment a paper booklet containing a set of instructions. Upon scanning a page of the booklet, the app would begin to play an instructional video regarding the equipment displayed on that page, with the AR app keeping the video positioned over the correct location within the booklet so as not to disrupt the participants’ ability to read the accompanying information. Cook relied upon observation as the primary method of data collection, using questionnaires only to record student feedback regarding the perceived usefulness of the application. As this study neither utilised a control group nor collected any quantifiable data, the results may not be reliable, as there are unaccounted-for variables within the study. For example, one of the recorded observations was that students who had previously been unable to set up the music system in under an hour had since been able to complete it within 40–50 min, which was attributed to the AR application; however, this improvement could equally be attributed to participants’ prior knowledge of and experience with the system. Whilst it is possible that the use of AR contributed toward participant performance, additional data on the matter are required.

3. Methodology

3.1. Technical Specification

Implementing gesture-based controls for an HMD device requires a peripheral capable of interpreting hand gestures and converting them into inputs for the HoloLens to use. The Microsoft Kinect for Windows [28] was considered, but it was too cumbersome to be worn by a user. A dedicated input device such as the Essential Reality P5 Glove [29] would have been appropriate; however, the technology is sixteen years old, out of production, and extremely difficult to obtain. The Gest controller [30] would also have worked, but these have not been available for purchase since their initial Kickstarter campaign. The SenseGlove Nova [31] was also a possibility but was out of budget at €4499. A solution was identified that was both appropriate for HMDs and available for purchase: the Leap Motion sensor [19]. The Leap Motion is an infrared sensor that can detect human hands and allows for a full range of gesture-based controls whilst also being small and lightweight enough to combine with an HMD.
As with many HoloLens applications, our project was created using the Unity Engine (Version 2018.4.7f1) [32] and thus only required the Leap Motion SDK [33] to integrate the functionality into the existing project. Due to updates to the SDK, the range of pre-configured gestures had been removed; consequently, a third-party plugin, The Essential Leap-Motion Gesture Detection [34], was required to restore them, providing pre-set gestures along with a framework for creating and implementing custom gestures and event triggers. The HoloLens (1st Generation) and Leap Motion controller would, however, require an intermediary device to facilitate the transmission of inputs, as it was not possible to connect the two directly: the Leap Motion requires a USB connection to a device, and the HoloLens (1st Generation) only supports select Bluetooth peripherals.
Dependency on an intermediary device was an unfortunate consequence of attempting this setup, which also left some logistical concerns given the intended mobility of the project. Our intention was for the user to have complete mobility to explore and interact with a full AR learning environment, but the Leap Motion would need to be deployed either at a fixed location within proximity of a computer [3] or with an extended cable trailing behind the user, which posed a health and safety hazard.
To solve this problem, we explored the possibility of implementing a Raspberry Pi [35] to act as the intermediary device; however, this did not work, as the standard Raspberry Pi uses an ARM-type processor, and the drivers for the Leap Motion require a processor with x86 architecture [33]. A comparable single-board computer, the Rock Pi X [36], would later be released with an x86 processor and may be appropriate for similar projects, but it did not launch until 2020, when project implementation was already in progress.
Instead, we used an ACEPC AK2 Mini PC [37] running Windows 10, as it was small, lightweight, and able to run the drivers for the Leap Motion. To solve for mobility, we placed the Mini PC inside a backpack for the user to wear and powered it with a RAVPower 80-watt power bank [38], thereby allowing complete mobility. Attaching the Leap Motion to the user was a concern, as it needed to be in a central location with full vision of their hands; however, this problem had already been solved by HoloLab Inc. [39], who released an open-source 3D-printable mount for attaching the two devices [40]. With the hardware ready, we needed a method by which the devices could communicate, and the method devised was to utilise local-area networking by converting the Unity application into a multiplayer project; one device would act as a server, and the other would connect as a client. To achieve this, the Mirror Networking API [41] was implemented to provide multiplayer functionality. As both devices have wireless functionality, the only remaining requirement was a wireless router set up nearby, although we opted to provide our own to ensure a consistent connection regardless of location (Figure 1, Figure 2 and Figure 3).
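The networking code itself is not published in this paper, but the described client–server split can be sketched with Mirror’s command pattern. The sketch below assumes the Mini PC connects as a client with authority over a relay object and forwards recognised gestures to the HoloLens acting as the server; GestureKind, ReportGesture, and CmdSendGesture are illustrative names rather than identifiers from the project.

```csharp
using Mirror;
using UnityEngine;

// Illustrative gesture categories; the paper does not enumerate its own.
public enum GestureKind { Flick, Grab, Release, Wave }

public class GestureRelay : NetworkBehaviour
{
    // Called on the Mini PC whenever the local Leap Motion gesture detector fires.
    public void ReportGesture(GestureKind kind, Vector3 handPosition)
    {
        if (hasAuthority)            // only the owning client may issue commands
            CmdSendGesture(kind, handPosition);
    }

    [Command]                        // executes on the server (the HoloLens here)
    private void CmdSendGesture(GestureKind kind, Vector3 handPosition)
    {
        // The HoloLens application would route the gesture to whichever
        // virtual object the user is currently targeting.
        Debug.Log($"Received gesture {kind} at {handPosition}");
    }
}
```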
With the hardware logistics solved, the application needed to be constructed to effectively utilise the new range of control options in service of the original goal: a fully mobile AR learning environment. Hand gestures were designed and implemented to mimic real-life interactions in an attempt to make interactions feel as authentic as possible; for example, a virtual book can have its pages browsed by flicking with an index finger (Figure 4), or objects can be moved around by grabbing them with a fist motion.
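As an illustration of how such a gesture might be recognised, the following minimal sketch detects a fist grab using the GrabStrength value exposed by the Leap Motion Unity API; the project itself used the pre-set gestures of the plugin cited above [34], and the threshold values here are assumptions.

```csharp
using Leap;
using Leap.Unity;
using UnityEngine;

public class GrabGestureDetector : MonoBehaviour
{
    public LeapProvider provider;   // scene component supplying tracking frames
    private bool grabbing;

    void Update()
    {
        Frame frame = provider.CurrentFrame;
        foreach (Hand hand in frame.Hands)
        {
            // GrabStrength ranges from 0 (open hand) to 1 (closed fist).
            if (!grabbing && hand.GrabStrength > 0.9f)
            {
                grabbing = true;
                // e.g. attach the currently targeted object to the hand
            }
            else if (grabbing && hand.GrabStrength < 0.4f)
            {
                grabbing = false;
                // e.g. release the held object
            }
        }
    }
}
```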
Any gesture can be re-used as often as required; however, a problem presented itself in the form of object targeting: attempting to perform a gesture for one object would also trigger events for every object that used the same gesture. To avoid confusion and provide visible clarity for the user, a rendered outline effect was added to the scene; now, whenever the user looks around, a ray cast is fired from the camera, and if a virtual object is hit by it, that object will draw a glowing outline of itself. If any other object has an outline, it will be disabled upon a new object being hit with the ray cast. Gesture controls will only work on objects with a drawn outline, which provides a visual indicator to the user as to which object they will be controlling (Figure 5). Proximity-based controls were also considered; however, these proved to be incompatible, as issues arose when two objects were close together, and the function for detecting the distance between the user and each object had to be called too frequently to be effective, causing performance issues on the HoloLens. The implementation of a networked build was already impacting performance, so any subsequent performance-hindering functions had to be mitigated or avoided where possible.
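A minimal sketch of this targeting scheme follows, using a standard Unity ray cast from the camera; the Outline component here is a stand-in for whatever outline rendering effect the project actually used.

```csharp
using UnityEngine;

// Stand-in for the project's outline effect; a real implementation would
// toggle an outline shader or material on the object's renderer.
public class Outline : MonoBehaviour { }

// Attach to the camera: the first object hit by the gaze ray gains an outline
// and becomes the sole target for gesture controls; the previous target's
// outline is disabled. As described above, an outline persists until a new
// object is hit, so looking at empty space does not clear the current target.
public class GazeTargeting : MonoBehaviour
{
    public static GameObject CurrentTarget { get; private set; }
    private Outline lastOutline;

    void Update()
    {
        if (Physics.Raycast(transform.position, transform.forward, out RaycastHit hit))
        {
            var outline = hit.collider.GetComponent<Outline>();
            if (outline != null && outline != lastOutline)
            {
                if (lastOutline != null) lastOutline.enabled = false;
                outline.enabled = true;
                lastOutline = outline;
                CurrentTarget = hit.collider.gameObject;
            }
        }
    }
}
```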
This solved the initial technical challenge proposed. The user has complete freedom to move around the environment without being hindered by any technological limitations and retains complete gesture-based control over the scene. By looking around and using their hands, the user can interact with the virtual artefacts, along with the ability to give them to the virtual educator character in the scene, who can provide additional context regarding their history or tell stories about their original owners. Within the context of this project, this allows users to explore and learn about any of the objects at their own preferred pace; however, this technology has the potential to be applied to any learning situation and grants the user a newfound freedom to interact with digitised artefacts in a safe environment: it would not be prudent to permit museum visitors to handle real artefacts, but there is no potential for damage in allowing them to interact with digitised reconstructions within an AR space.
To assist with the learning process and make the experience more comfortable for the user, we added a virtual educator character to the scene, an elderly man by the name of Gerald, who communicates with the user and educates them on any of the artefacts in the scene. Gerald is a composite character based on the physical appearances of some of the survivors who are affiliated with the National Holocaust Centre and Museum. He addresses the user upon initialisation of the scene and will provide context on any of the artefacts if they are handed to him. Gerald himself is also capable of a simple interaction: if he is waved at, he will wave back (Figure 6).
Gerald is fully voice-acted by a paid voice actor to sound appropriate for his role. To facilitate a more naturalised environment for the user, Gerald’s facial animation is also fully motion-captured to match his voice lines. Motion capture was achieved in a COVID-safe environment by using a webcam and FaceRig Studio [42] to record the actor’s face as the lines were spoken. This solution did not require a complex recording setup or any specialised equipment, just a simple webcam and a room with appropriate light for the face to be detected. These lines of audio and animation were placed into the engine to ensure that when Gerald speaks to the user, the experience feels authentic. Originally, Gerald had a head-tracking script to allow his head to turn to face the camera in an attempt to make the experience feel more personal; however, this was perceived by alpha testers to be unnerving, so the feature was removed.

Evaluation of Technical Specification

Both applications were designed to facilitate learning amongst adult audiences; however, to establish the technical efficacy of the applications, they were evaluated against the criteria proposed by Krug et al. [43] in their evaluation grid for AR teaching/learning scenarios (AR-LLS), using the parameters of Immersion, Interactivity, Congruency with Reality, Contextual Proximity to Reality, Adaptivity, Gamification, and Complexity. These criteria allow the applications created for this project to be compared against AR learning environments created for other studies, along with providing a visualised frame of reference for the capabilities of each application. The evaluation criteria use some language specific to science education; however, they are still applicable to applications for other learning areas, such as history education. All parameters were represented in a heptagon, as per the example provided by Krug et al., for both the head-mounted application (Figure 7) and the mobile application (Figure 8).
With both applications being built to utilise the same content, the parameters for both were mostly scored the same; however, there were some variances caused by the implementation methods. For example, the mobile application scored slightly lower on the Interactivity scale due to the simplified control scheme available that does not permit participants to engage with the virtual objects to the same extent. Conversely, the mobile application scored higher for Adaptivity as it was designed to adapt to the environment of the user using local spatial mapping, whilst the head-mounted application is statically designed to always adhere to the same layout. The mobile application also scored higher in the Gamification category as the users’ inventory also functioned as a score system to determine how far through the experience the user had progressed.

3.2. Testing Methods

Testing this setup alone would not provide results capable of identifying trends in learning behaviours or the benefits of the control method versus the outright usage of AR technology. To design the experimentation and ensure the collected data would provide value to future research endeavours, four tests were planned to fully answer all of the research questions whilst maintaining consistency in the educational content provided. These are detailed as follows:
  • Test One: A traditional classroom learning experience with no AR technology used (control).
  • Test Two: An AR learning experience using a mobile device deployed in participants’ homes (focus group three).
  • Test Three: An AR learning experience using the HoloLens + Leap Motion setup deployed in a classroom environment (focus group two).
  • Test Four: An AR learning experience using the HoloLens + Leap Motion setup deployed at the National Holocaust Centre & Museum (focus group one).
Multiple focus groups are required to discover all appropriate themes [44] and examine the potential variables within the scope of the study. Each group contained fifteen participants (as negotiated with the National Holocaust Centre & Museum due to the logistics of arranging research during museum hours). These are further broken down in Table 1:
Testing consisted of two stages: an immediate post-test and a delayed post-test, to identify how much information had been retained by each participant at two different intervals [24]. Due to the scope of this study and the available timeframe, the delayed post-test was scheduled for three months after the initial testing date. Both tests were standardised and contained the same questions but in different orders, with questions exclusively related to content from the exhibit at the museum. Most of this information was only available at the museum, rendering participants unable to search for the information online if they sought to cheat on either of the post-tests. The questions were created after collaborating with the National Holocaust Centre & Museum to determine which information from their exhibits could feasibly be taught across the three different delivery mediums. These questions are as follows:
  • Whose likeness is depicted upon the five Reichsmark coin?
  • Which town was the hometown of Bernard Grunberg?
  • Who was the original owner of the Knaurs Konversations Lexikon shown in the learning experience?
  • Which belonging of the Frank family is depicted in the Learning Experience?
  • Which profession was Dorothy’s father part of?
  • What was Bernard studying at college prior to the November Pogrom?
  • Which item of Ruth’s was depicted in the learning experience?
  • What did Sigmar Berenzweig’s dolls represent?
  • What was inscribed upon the children’s coat hanger?
  • What did Hedi’s father have to earn in order to travel to the United Kingdom?
  • (Multiple Choice) The ex-libris in the Knaurs Konversations Lexikon was designed to include which elements of the artist’s life?
  • Why were Bernard’s tools branded?

3.3. Mobile AR Configuration

To facilitate the third focus group, a mobile version of the AR application was designed to be deployed entirely on Android mobile devices. This application uses the same 3D models, animations, and audio files as the HoloLens application but is instead designed to be deployed directly to a participant’s own mobile device. This version of the application required alternative design considerations due to the different method of AR deployment, the input methods available to the user, and the space in which the application would be deployed: the HoloLens build had been designed to function in a very specific environment with precise measurements and positioning of furniture. Participants using the mobile build would not be able to make use of this arrangement, and so an alternative was required to fully utilise the available space inside participants’ homes. Participants would also require a different means of interacting with the system, as the Leap Motion Sensor setup would not be appropriate for a mobile phone application (Figure 9).
For the purposes of using a completely new environment each time the application initialises, the mobile version of the application was instead built in Unity 2019.4.18f1 to utilise Unity’s AR Foundation framework [45]. This framework provided two core benefits: the first was the functionality to render only AR objects, with the device camera being used as the scene background, and the second was the Spatial Mapping functionality for analysing the user’s environment.
Upon initialising the application, the Spatial Mapping function begins immediately and will attempt to generate an invisible triangulated 3D mesh over the local area, with a ray cast being fired from the camera to record successful collision hit locations inside a list. If a ray cast is successful (the application successfully registered a flat surface facing the positive Y axis), a holographic circle will track over the surface. The user is then instructed to point their device at as many flat surfaces as possible within a 30 s window to scan them, after which the virtual educator character and the virtual artefacts will be instantiated at random positions from the collision list with their respective collision volumes used as positional boundaries to prevent overlap.
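A minimal sketch of this scanning phase is shown below, assuming AR Foundation’s ARRaycastManager; the recorded poses stand in for the collision list described above, and overlap prevention via collision volumes is omitted for brevity.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class SurfaceScanner : MonoBehaviour
{
    public ARRaycastManager raycastManager;
    public GameObject[] artefactPrefabs;   // educator character and artefacts
    public float scanDuration = 30f;

    private readonly List<Pose> surfacePoses = new List<Pose>();
    private static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();
    private float elapsed;
    private bool placed;

    void Update()
    {
        if (placed) return;
        elapsed += Time.deltaTime;

        if (elapsed < scanDuration)
        {
            // Cast from the screen centre against detected planes; a hit means
            // the framework has registered a flat, upward-facing surface.
            var centre = new Vector2(Screen.width * 0.5f, Screen.height * 0.5f);
            if (raycastManager.Raycast(centre, hits, TrackableType.PlaneWithinPolygon))
                surfacePoses.Add(hits[0].pose);
        }
        else if (surfacePoses.Count > 0)
        {
            // After the scan window, spawn each prefab at a random recorded pose.
            foreach (var prefab in artefactPrefabs)
            {
                var pose = surfacePoses[Random.Range(0, surfacePoses.Count)];
                Instantiate(prefab, pose.position, pose.rotation);
            }
            placed = true;
        }
    }
}
```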
Touching the screen will trigger a ray cast to be fired from the camera to the area touched by the user, and if it collides with the collision volume of a virtual object, the player will “pick up” that object, causing the 3D object to be removed from the scene and placed into the player’s inventory as a button on their user interface. Touching this button will replicate the action of giving the educator an object in the HoloLens application, which will trigger his voice lines and animations for him to begin educating on the subject. As a substitute for the virtual object no longer appearing in the scene as a 3D mesh, a 2D image of a photograph of the real artefact will appear alongside the educator for the duration of his voice lines.
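The tap-to-collect interaction can be sketched as follows; InventoryUI and the "Artefact" tag are illustrative stand-ins for the project’s own inventory system and object identification.

```csharp
using UnityEngine;

// Minimal stand-in for the project's inventory UI; a real implementation
// would create a button that triggers the educator's voice lines.
public static class InventoryUI
{
    public static void AddItem(GameObject item) =>
        Debug.Log($"Added {item.name} to inventory");
}

public class TouchPickup : MonoBehaviour
{
    public Camera arCamera;

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        // Fire a ray through the touched screen point.
        Ray ray = arCamera.ScreenPointToRay(touch.position);
        if (Physics.Raycast(ray, out RaycastHit hit) && hit.collider.CompareTag("Artefact"))
        {
            // Remove the 3D object from the scene and represent it instead
            // as a button on the user interface.
            InventoryUI.AddItem(hit.collider.gameObject);
            hit.collider.gameObject.SetActive(false);
        }
    }
}
```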
Due to these design decisions, the application can be deployed in any environment, provided there is enough light for the application to detect surfaces.

3.4. Classroom Lesson Design

The classroom lesson was written to be structured as a standard 30 min class and presented in PowerPoint format. The slides contained information about individual survivors of the Kindertransport with a focus on the artefacts brought with them during their journey to the United Kingdom, as these items are also the focus of exhibits at the museum. This included photographs along with recorded video testimony from the survivors to provide further context regarding the artefacts to educate future generations on their emotional and cultural significance.
As the classroom experiment was structured to replicate any other classroom lesson, participants were permitted to ask questions (although they were asked to wait until the end), and a supplementary prop was also used. One of the artefacts was a Knaurs Konversations Lexikon [46], an encyclopaedia that was printed in the 1930s. Another copy of this book was purchased early in production to serve as a reference and was also used in the classroom experiment by being passed around to participants, who were encouraged to peruse it in the same manner that other supplementary resources would be used in a classroom. Unfortunately, the other artefacts were far too personal and consequently did not allow for similar or duplicate objects to be used as props.

4. Data Collection

Each test collected the same data from each group by using a standardised multiple-choice questionnaire pertaining to the content of the learning experience. As each group covered the same content, they were all tested for the same data to determine which group had retained the most. This questionnaire was hosted through Microsoft Forms [47] and deployed to participants digitally on two occasions: once immediately post-experiment, and again after a three-month period. During the first round of data collection, the questionnaire was distributed via a QR code that participants could access through their phones or tablets. The second time, it was sent out via an emailed link directly to the questionnaire, using the email addresses participants had provided. On the second round of collection, a feature of Forms was used to randomly arrange the questions to ensure that participants were recalling the subject matter rather than attempting to remember how they answered the previous time.
During the first data collection, participants were given the option to leave any question blank if they did not recall the correct answer, and this was communicated to them prior to the questionnaire. Despite this, every single participant from each group still elected to select an option for each question, choosing to guess rather than confess they did not remember the correct answer. In the follow-up questionnaire, a fifth response was added to each question for “I do not remember”, and participants elected to select this response rather than leave a question blank, which would have had the same result for the purposes of data collection.
The results from each test were compared against each other to identify which groups retained the most information overall and what the rate of decay was for retention. These data would then be applied to the research questions to identify which group had retained the most information and how implementation/location/age group had impacted the overall results in an attempt to find correlations between method and outcomes.
Scoring was out of a total of twelve, with one point for each correct answer and no points for an incorrect or “I do not remember” answer. The only exception to this was Question 10, which, as a multiple-choice question, required five correct answers. Consequently, a score of 0.2 was given for each correct answer to that question.
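A small sketch of this marking scheme is given below; the function name and parameters are illustrative rather than taken from the project’s code.

```csharp
public static class Marking
{
    // Eleven single-answer questions worth one point each, plus the
    // multiple-choice question whose five correct options score 0.2 apiece,
    // giving a maximum score of 11 + (5 x 0.2) = 12.
    public static double MarkPaper(bool[] singleAnswersCorrect, int multiChoiceCorrect)
    {
        double score = 0.0;
        foreach (bool correct in singleAnswersCorrect)  // 11 entries expected
            if (correct) score += 1.0;
        return score + 0.2 * multiChoiceCorrect;        // 0..5 correct options
    }
}
```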

Sample Sizes

Each group recruited fifteen participants to be tested at their required locations. This decision was made because of the exhibition size and the logistical concerns of facilitating the research whilst other visitors would be present at the museum. Eleven of the fifteen volunteers attended the museum on the designated day of testing, and due to the logistics of travelling there along with transporting the equipment, it was not possible to arrange a second testing date to complete the full sample of fifteen.
Additionally, there was a falloff of participants between the two data collection points as several participants did not undertake the secondary questionnaire. A mean of 75.9% of participants were retained across all four groups. The details of this can be seen below in Table 2:

5. Results

To answer the research questions, varying types of statistical analysis were required; these are therefore separated into different sections.

5.1. What Impact Does Augmented Reality Have on Long-Term Memory Retention Compared to a Classroom Learning Experience?—One Way ANOVA

One-way ANOVA was used to analyse whether there was any statistical relationship between the testing group and the test scores. The purpose of this analysis was to assist with answering research questions one, two, and three. Participants were analysed according to their corresponding group: Control (n = 15, 11), Focus One (n = 11, 7), Focus Two (n = 15, 12), and Focus Three (n = 15, 13), with the value of n varying between the initial sample and the retained sample as listed for each. Outliers were present in the data; however, as the data refer to test scores, these were retained in the sample in an unmodified state. The test scores were normally distributed for all four groups, as assessed by Shapiro–Wilk’s test (p > 0.05). Data are presented as mean ± standard deviation. The ability to retain information was found to be highest in the control group in both the immediate post-test (n = 15, 9.50 ± 1.45) and the delayed post-test (n = 11, 6.09 ± 2.83), followed by Focus Group Two in both the immediate post-test (n = 15, 7.84 ± 2.54) and the delayed post-test (n = 11, 4.49 ± 2.25). Focus Group One followed, initially outperforming Focus Group Three in the immediate post-test (n = 11, 7.30 ± 1.95 compared to n = 15, 7.06 ± 1.86); however, the inverse was true for the delayed post-test (n = 7, 3.57 ± 1.08 compared to n = 13, 3.67 ± 1.49). Within the context of the immediate post-test data, the ability to retain information differed statistically significantly between methods of learning, F(3, 52) = 4.408, p = 0.008. Tukey post hoc analysis revealed that the difference in mean test scores between the control group and Focus One (2.19, 95% CI (0.094 to 4.30)) was statistically significant (p = 0.037), as was the difference in mean test scores between the control group and Focus Three (2.44, 95% CI (0.505 to 4.375), p = 0.008), but no other group differences were statistically significant. When tested for homogeneity of variances using Levene’s test for equality of variances, the immediate post-test results were found to have homogeneity of variances (p = 0.485); however, the delayed post-test results did not (p = 0.044) and were consequently subject to a Welch ANOVA. Under the Welch ANOVA, there were no statistically significant differences in the test score means for the delayed post-test: Welch’s F(3, 19.935) = 2.443, p = 0.084 (Table 3 and Figure 10).
The results from this test suggest that the traditional classroom learning experience from the control group was the superior method of learning new information as participants from this group retained the most overall information from their learning experience in both the immediate and delayed post-tests, with all three AR groups performing worse on both sets of tests.

5.2. Does the Technology Used to Implement Augmented Reality Impact How Much Information Is Retained?/Does the Location of the Learner Have Any Impact on the Amount of Information They Retain?—One Way ANOVA

The purpose of this analysis was to facilitate answering research questions one, two, and three. The decay rate of information retained between tests was calculated using ((A/12) × 100) − ((B/12) × 100) (A being the immediate post-test score and B being the delayed post-test score), with the output being considered as a percentage. Another one-way ANOVA was used to analyse whether there was a statistically significant relationship between the testing groups and the average amount of information retained between tests. Outliers were once again present in the data; however, these data also pertain to test scores, and so outliers were retained in an unmodified state. The decay rate was normally distributed for all four testing groups, as assessed by Shapiro–Wilk’s test (p > 0.05). Homogeneity of variances was identified using Levene’s test for equality of variances (p = 0.283). Data are presented as mean ± standard deviation. The decay rate was lowest for Focus One (n = 7, 26.190 ± 14.456), followed by Focus Three (n = 13, 28.333 ± 16.722), then the control group (n = 11, 29.090 ± 24.589), with Focus Group Two having the highest rate of decay (n = 11, 35.000 ± 14.925). There were no statistically significant differences in retention decay rates between the different groups (F(3, 38) = 0.414, p = 0.744) (Table 4 and Figure 11).
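Written out, the decay-rate expression simplifies to a single fraction; the worked figures below are illustrative rather than drawn from the study data:

```latex
\text{decay rate} = \frac{A}{12}\times 100 - \frac{B}{12}\times 100 = \frac{A-B}{12}\times 100
```

For example, a participant scoring A = 9 on the immediate post-test and B = 6 on the delayed post-test would have a decay rate of (9 − 6)/12 × 100 = 25, i.e., a drop of 25 percentage points of the maximum score.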
The data from this analysis suggest that participants from Focus Group One (HMD AR—Museum Environment) had the lowest overall rate of information decay, which contrasts with Focus Group Two (HMD AR—university environment), who had the highest overall rate of decay. This indicates that the environment itself may have contributed to the information decay rates, although the difference is not statistically significant. Regarding technological implementation, Focus Group Three (Mobile AR) also had a lower rate of decay than the control group.

5.3. Does the Age of the Participant Impact the Amount of Information They Retain from Either Learning Method?—Two-Way ANOVA

The purpose of this data analysis was to assist with answering research question four. The Information Retention Rate was calculated using ((B × 100)/A), with A being the immediate post-test and B being the delayed post-test. A two-way ANOVA was used to identify if there were any statistically significant differences in how much information was retained between tests for different age groups, using the participant groups as the first independent variable and grouped age ranges as the second independent variable. A single outlier was present in the data, but as it again regarded test scores, it remained in the data in an unmodified state. The data were tested for normal distribution as assessed by Shapiro–Wilk’s test (p > 0.05), and it was found that the data were normally distributed for all but one group; however, this was left in an untransformed state [48]. There was homogeneity of variances as assessed by Levene’s test for equality of variances (p = 0.185). The interaction effect between participant groups and age groups was not statistically significant: F(6, 27) = 0.566, p = 0.754, and partial η2 = 0.112. Therefore, an analysis of the main effect of the age group was performed, which also did not indicate that the main effect was statistically significant: F(4, 27) = 0.868, p = 0.114, and partial η2 = 0.114 (Table 5 and Figure 12).
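Note that this retention rate is normalised differently from the decay rate in Section 5.2: it expresses the delayed score as a percentage of each participant’s own immediate score rather than of the maximum score of twelve. Using the same illustrative figures as before:

```latex
\text{retention rate} = \frac{B \times 100}{A}
```

A participant with A = 9 and B = 6 would have a retention rate of (6 × 100)/9 ≈ 66.7%, even though their decay rate against the maximum score is 25 percentage points.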
The data from this analysis indicate that participants from multiple groups had the highest rate of retention if they were between the ages of 26 and 29, and the lowest rate of retention was in participants between the ages of 30 and 39; however, there was no statistically significant difference in outcomes for any group or age range.

6. Discussion

Analysis of the data was performed using varying ANOVA methodologies to identify any statistically significant implications for the research questions provided in the introduction. Each question will now be compared against the data to determine whether it was answered by the analysis.

6.1. What Impact Does Augmented Reality Have on Long-Term Memory Retention Compared to a Classroom Learning Experience?

Initial data samples demonstrated that the control group outperformed the Focus groups in both the immediate post-test and the delayed post-test, in contrast with the study by Billinghurst and Duenser [24], which suggested that the AR methods would outperform a traditional classroom learning experience. ANOVA analysis showed that there was a significant difference between the scores obtained during the immediate post-test with Focus Groups One and Three, but not with Focus Group Two; however, the delayed post-test did not show any significant difference between the outcomes of any group. When analysed for the amount retained, Focus Group One was shown to have the lowest amount of retention decay between testing intervals, with a mean of 26.2% ± 14.5% (rounded to one decimal place), followed by Focus Group Three at 28.3% ± 16.7%, then the control group at 29.1% ± 24.6%, and lastly Focus Group Two at 35.0% ± 14.9%; however, there was no statistical significance in information decay rates. The results suggest that although the control group retained the most information from the overall educational experience, two of the three AR groups were able to retain more of that information between the initial point of learning and the delayed post-test. This suggests that the use of AR was more beneficial for the overall rate of retention but may not have been as impactful for the initial learning of new information.

6.2. Does the Technology Used to Implement Augmented Reality Impact How Much Information Is Retained? (Handheld Versus Head-Mounted)

Analysis of the data showed that there was no statistically significant difference between the testing groups for methodology. Participants of Focus Group Two retained a higher amount of information overall, with Focus Groups One and Three having comparable outcomes. This does not provide any definitive answers but suggests that a head-mounted implementation may allow for a higher amount of initial learning compared to a handheld device. The decay rate between the immediate post-test and the delayed post-test was the lowest for Focus Group One but the highest for Focus Group Two, both of which utilised the same implementation of AR at different locations with varying results, which suggests that additional factors may have been more significant than the AR implementation method.

6.3. Does the Location of the Learner Have Any Impact on the Amount of Information They Retain? (Classroom Versus Museum Environment)

The data suggest that the location of the learner impacts the amount of information retained in a learning experience, as all three of the groups tested at the university attained higher immediate and delayed post-test scores than Focus Group One, which was tested at a museum. However, it is also important to analyse the information decay rates: although there was no significant difference identified between the groups, Focus Group One had the lowest rate of decay of all the testing groups. This may imply that the museum environment was beneficial to the volume of information retained between the initial learning experience and the delayed post-test; however, the margin was small, and so additional testing would be required to further analyse this impact.

6.4. Does the Age of the Participant Impact the Amount of Information They Retain from Either Learning Method?

Participant age did not show any statistical significance in the amount of information retained when analysed under two-way ANOVA. When further analysed to try to identify if age had any significant impact on retention regardless of the participant group, no statistically significant outcomes were identified. The mean averages of each group suggest that retention rates were highest in the 26–29 age band; however, the rates fluctuated several times, and there is no statistically significant difference between age groups.

7. Limitations

Whilst this study investigates the impact of AR on memory retention in education, there are limitations that must be mentioned. One of these is the topic of the research: the AR applications were designed to teach history only, and consequently, the content being delivered, as well as the interactions with virtual objects, may not translate to other subject areas or fully utilise the potential of the technology. For example, other studies have used AR to teach a variety of subjects, such as physics [24,49], mathematics [50,51], art [52], linguistics [53], and chemistry [9,54], among many others. There may be benefits or detriments to using natural interactions for these other subjects that are not covered by our research.
The design of the HMD application required the participants to wear several devices to facilitate the gesture-based inputs, which necessitated some method of attaching those devices to each user. The devices were quite lightweight and did not cause discomfort to the participants; however, our method of placing them into a backpack meant that each participant had to compensate for the extra bulk when positioning themselves while exploring the scene. Participants did not report any discomfort, nor did the use of the backpack appear to hinder their arm movements. Nevertheless, in the future, an alternative method of attaching the devices, such as a bespoke holster, may allow for fewer spatial constraints when designing such a system.
Our research also does not include a control for HMDs without natural interactions, as the system was designed specifically to accommodate them. It is possible that a different HMD without natural interactions could yield alternative results. Existing research has already identified that the Microsoft HoloLens used in our project can contribute to cognitive overload [23] when its basic input methods are used. This suggests that redesigning the existing application to accommodate those input methods may result in an experience that is frustrating for participants or detrimental to their learning if they are trying to navigate an unfamiliar system instead of focusing on the educational content. However, future research could explore this topic further.
A limitation of the participant sample related to the means by which participants were recruited. For the three groups tested at the university, participants were recruited on campus, which skewed the age distribution toward the younger age bands, mainly between the ages of 18 and 26. For the museum test, the museum staff arranged for participants from their local area, as the large distance between the university and the museum made transporting participants between locations logistically implausible. The museum primarily recruited participants from local community groups, which skewed toward older age bands between the ages of 30 and 59, with a single participant from the 60–69 age band. Consequently, there was not an equal distribution of participant ages across each group. Additional study may be required with balanced participant groups for a more detailed exploration of how participant age influences retention rates when educating using AR as a delivery medium.
Another potential limitation of our participant sample was that participants were not asked about their prior experience with AR systems before the experiment. The outcomes of the study may have been altered by prior experience using AR technology, and familiarity with similar systems could have provided participants with an advantage when using either version of the application. Future studies on the subject would benefit from recording participants’ prior experience to determine whether any use, or frequency of use, impacts the outcomes of a study.
With regard to the design of the system, a limitation became apparent during the classroom learning experiment: the responses of the virtual educator were much more limited compared to those of an actual educator. Whilst the virtual educator could explain the history and historical significance of each object, it could not provide any additional clarity for learners. As the classroom experiment was designed to simulate a typical classroom lesson, participants were permitted to ask questions. During the study, one participant asked why hat boxes were used, as they did not understand why they would be needed, and another asked if the Kindertransport had influenced the future refugee policy of the United Kingdom. These questions were very valid, and responses were provided; however, within the context of the AR experiences, the virtual educator was only able to provide the exact context it had been programmed with and nothing more. Until recently, that has been a consistent limitation of many technology-enhanced learning experiences; however, with recent advances in Artificial Intelligence, it is theoretically possible that the virtual educator could be expanded upon to provide live responses to participant questions. This is not covered in this study, but it is a potential scope for future research.
A further note regarding data collection concerns the data themselves: we only sought quantifiable test scores from the participants and not their opinions on the experiences overall. Adult audiences are not often the targets of research into AR learning experiences, as many studies instead seek to enhance education at the school level. As such, we only identified how adults performed, not their opinions regarding the system design or the content, which leaves future research uninformed about the types of content or interactions adult audiences would desire in an AR learning application.

8. Conclusions

Our investigation into the impact of AR technology on education for adult audiences found a statistically significant difference between a traditional classroom lesson and an AR lesson in the amount of information retained in the immediate aftermath of the learning experience; no other statistically significant differences were identified. Adult audiences responded better to a typical classroom lesson than to AR technology, and although no statistically significant difference was found between age ranges, the participant sample did not allow this question to be fully explored. Mean scores indicate that head-mounted device-based solutions may allow adult audiences to retain more information than handheld implementations; however, this was only true when deployed in a classroom. Participants who learned with a head-mounted device in a museum environment did not learn as much overall but retained more of that information over the three-month testing period.
This research stands at odds with prior work, which indicated that AR groups would outperform the control group [24,25]; however, those studies examined only child and teenage audiences or drew on limited samples, so their findings cannot be generalised to adult audiences. Although adults may not be seen as the primary audience for such applications, AR still has potential as an educational tool in museum and university environments, and the data from this study demonstrate that outcomes from research on children and teenagers may not apply to older learners. Further research is required to fully assess how adult audiences differ in their use of this technology and how it can be refined into optimised learning environments for adult learners.

Author Contributions

Conceptualisation, J.C. and D.W.; methodology, J.C. and D.W.; software, J.C.; validation, J.C. and D.W.; formal analysis, J.C.; investigation, J.C.; resources, J.C.; data curation, J.C.; writing—original draft preparation, J.C.; writing—review and editing, D.W. and D.M.; visualisation, J.C.; supervision, D.W. and D.M.; project administration, J.C., D.W. and D.M.; funding acquisition, D.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was approved by the Digital Technologies Ethics Panel of Staffordshire University (Approved 21 December 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent to publish this paper has been obtained from Robert Ellis (pictured in Figure 2).

Data Availability Statement

Research data are published in this paper where applicable. Additional datasets are available upon request.

Acknowledgments

Special thanks are given to Robert Ellis for agreeing to be photographed for Figure 2.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pacheco, D.; Wierenga, S.; Omedas, P.; Oliva, L.S.; Wilbricht, S.; Billib, S.; Knoch, H.; Verschure, P.F.M.J. A Location-Based Augmented Reality System for the Spatial Interaction with Historical Datasets. In Proceedings of the 2015 Digital Heritage, Granada, Spain, 28 September–2 October 2015; IEEE: Piscataway, NJ, USA, 2016; pp. 393–396. [Google Scholar]
  2. Na, H.; Park, S.; Dong, S.-Y. Mixed Reality-Based Interaction between Human and Virtual Cat for Mental Stress Management. Sensors 2022, 22, 1159. [Google Scholar] [CrossRef] [PubMed]
  3. Kyriakou, P.; Hermon, S. Can I Touch This? Using Natural Interaction in a Museum Augmented Reality System. Digit. Appl. Archaeol. Cult. Herit. 2019, 12, e00088. [Google Scholar] [CrossRef]
  4. Gruenefeld, U.; Prädel, L.; Illing, J.; Stratmann, T.; Drolshagen, S.; Pfingsthorn, M. Mind the ARm: Realtime Visualization of Robot Motion Intent in Head-Mounted Augmented Reality. In Proceedings of the ACM International Conference Proceeding Series; Association for Computing Machinery, Magdeburg, Germany, 6 September 2020; pp. 259–266. [Google Scholar]
  5. Gruenefeld, U.; Hsiao, D.; Heuten, W. EyeSeeX: Visualization of out-of-View Objects on Small Field-of-View Augmented and Virtual Reality Devices. In Proceedings of the PerDis 2018—Proceedings of the 7th ACM International Symposium on Pervasive Displays, Munich, Germany, 6 June 2018; Association for Computing Machinery, Inc.: New York, NY, USA, 2018; pp. 1–2. [Google Scholar]
  6. Zolotas, M.; Elsdon, J.; Demiris, Y. Head-Mounted Augmented Reality for Explainable Robotic Wheelchair Assistance. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Madrid, Spain, 27 December 2018; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2018; pp. 1823–1829. [Google Scholar]
  7. Microsoft HoloLens (1st Gen) Basics 101—Complete Project with Device—Mixed Reality|Microsoft Docs. Available online: https://docs.microsoft.com/en-us/windows/mixed-reality/develop/unity/tutorials/holograms-101 (accessed on 15 June 2022).
  8. Fleck, S.; Hachet, M.; Christian Bastien, J.M. Marker-Based Augmented Reality: Instructional-Design to Improve Children Interactions with Astronomical Concepts. In Proceedings of the IDC 2015: The 14th International Conference on Interaction Design and Children, Medford, MA, USA, 21 June 2015; Association for Computing Machinery, Inc.: New York, NY, USA, 2015; pp. 21–28. [Google Scholar]
  9. Chun Lam, M.; Kei Tee, H.; Soleha Muhammad Nizam, S.; Che Hashim, N.; Asylah Suwadi, N.; Yee Tan, S.; Aini Abd Majid, N.; Arshad, H.; Yee Liew, S. Interactive Augmented Reality with Natural Action for Chemistry Experiment Learning. TEM J. 2020, 9, 351. [Google Scholar] [CrossRef]
  10. Rehman, I.U.; Ullah, S.; Khan, D. Multi Layered Multi Task Marker Based Interaction in Information Rich Virtual Environments. IJIMAI 2020, 6, 57–67. [Google Scholar] [CrossRef]
  11. Seo, D.W.; Lee, J.Y. Direct Hand Touchable Interactions in Augmented Reality Environments for Natural and Intuitive User Experiences. Expert Syst. Appl. 2013, 40, 3784–3793. [Google Scholar] [CrossRef]
  12. Microsoft Instinctual Interactions—Mixed Reality|Microsoft Docs. Available online: https://docs.microsoft.com/en-us/windows/mixed-reality/design/interaction-fundamentals (accessed on 15 June 2022).
  13. Santos, M.E.C.; Chen, A.; Taketomi, T.; Yamamoto, G.; Miyazaki, J.; Kato, H. Augmented Reality Learning Experiences: Survey of Prototype Design and Evaluation. IEEE Trans. Learn. Technol. 2014, 7, 38–56. [Google Scholar] [CrossRef]
  14. Aliprantis, J.; Konstantakis, M.; Nikopoulou, R.; Mylonas, P.; Caridakis, G. Natural Interaction in Augmented Reality Context; VIPERC@IRCDL: Pisa, Italy, 2019. [Google Scholar]
  15. Niantic, Inc. Pokémon GO, 2016. Available online: https://apps.apple.com/us/app/pok%C3%A9mon-go/id1094591345 (accessed on 15 June 2022).
  16. Jang, S.; Liu, Y. Continuance Use Intention with Mobile Augmented Reality Games: Overall and Multigroup Analyses on Pokémon Go. Inf. Technol. People 2020, 33, 37–55. [Google Scholar] [CrossRef]
  17. Keil, J.; Zollner, M.; Becker, M.; Wientapper, F.; Engelke, T.; Wuest, H. The House of Olbrich—An Augmented Reality Tour through Architectural History. In Proceedings of the 2011 IEEE International Symposium on Mixed and Augmented Reality—Arts, Media, and Humanities, Basel, Switzerland, 26–29 October 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 15–18. [Google Scholar]
  18. Herbst, I.; Braun, A.-K.; McCall, R.; Broll, W. TimeWarp. In Proceedings of the 10th International Conference on Human Computer Interaction with Mobile Devices and Services—MobileHCI ’08, Amsterdam, The Netherlands, 2–5 September 2008; ACM Press: New York, NY, USA, 2008; p. 235. [Google Scholar]
  19. Ultraleap Tracking|Leap Motion Controller|Ultraleap. Available online: https://www.ultraleap.com/product/leap-motion-controller/ (accessed on 11 February 2020).
  20. Oculus Oculus Touch Controllers|Oculus Developers. Available online: https://developer.oculus.com/documentation/native/pc/dg-input-touch-overview/ (accessed on 29 July 2022).
  21. Cho, Y.; Kang, J.; Jeon, J.; Park, J.; Kim, M.; Kim, J. X-person Asymmetric Interaction in Virtual and Augmented Realities. Comput. Animat. Virtual Worlds 2021, 32, e1985. [Google Scholar] [CrossRef]
  22. Worrallo, A.G.; Hartley, T. Robust Optical Based Hand Interaction for Virtual Reality. IEEE Trans. Vis. Comput. Graph. 2021, 28, 4186–4197. [Google Scholar] [CrossRef]
  23. Baumeister, J.; Ssin, S.Y.; Elsayed, N.A.M.; Dorrian, J.; Webb, D.P.; Walsh, J.A.; Simon, T.M.; Irlitti, A.; Smith, R.T.; Kohler, M.; et al. Cognitive Cost of Using Augmented Reality Displays. IEEE Trans. Vis. Comput. Graph. 2017, 23, 2378–2388. [Google Scholar] [CrossRef]
  24. Billinghurst, M.; Duenser, A. Augmented Reality in the Classroom. Computer 2012, 45, 56–63. [Google Scholar] [CrossRef]
  25. Lim, K.Y.T.; Lim, R. Semiotics, Memory and Augmented Reality: History Education with Learner-Generated Augmentation. Br. J. Educ. Technol. 2020, 51, 673–691. [Google Scholar] [CrossRef]
  26. Menon, S.S.; Holland, C.; Farra, S.; Wischgoll, T.; Stuber, M. Augmented Reality in Nursing Education—A Pilot Study. Clin. Simul. Nurs. 2022, 65, 57–61. [Google Scholar] [CrossRef]
  27. Cook, M.J. Augmented Reality: Examining Its Value in a Music Technology Classroom. Practice and Potential. Waikato J. Educ. 2019, 24, 23–37. [Google Scholar] [CrossRef]
  28. Microsoft Kinect—Starting to Develop with Kinect|Microsoft Docs. Available online: https://docs.microsoft.com/en-us/archive/msdn-magazine/2012/june/kinect-starting-to-develop-with-kinect (accessed on 19 July 2022).
  29. Jasandre Pty. Ltd. MINDFLUX—Essential Reality—P5 Glove. Available online: http://www.mindflux.com.au/products/essentialreality/p5glove.html (accessed on 1 July 2022).
  30. Gest Gest. Available online: https://gest.co/ (accessed on 1 July 2022).
  31. SenseGlove Nova Haptic Glove|SenseGlove. Available online: https://www.senseglove.com/product/nova/ (accessed on 1 July 2022).
  32. Unity Technologies Unity Real-Time Development Platform|3D, 2D VR & AR Engine. Available online: https://unity.com/ (accessed on 1 July 2022).
  33. Ultraleap Leap Motion Orion 4.1.0—Ultraleap for Developers. Available online: https://developer.leapmotion.com/releases/leap-motion-orion-410-99fe5-crpgl (accessed on 1 July 2022).
  34. The Great Alpaca The Essential Leap-Motion Gesture Detection|Input Management|Unity Asset Store. Available online: https://assetstore.unity.com/packages/tools/input-management/the-essential-leap-motion-gesture-detection-111791 (accessed on 1 July 2022).
  35. Raspberry Pi Foundation Raspberry Pi. Available online: https://www.raspberrypi.com/ (accessed on 1 July 2022).
  36. Radxa RockpiX—Radxa Wiki. Available online: https://wiki.radxa.com/RockpiX (accessed on 1 July 2022).
  37. ACEPC ACEPC. Available online: https://www.iacepc.com/ (accessed on 1 July 2022).
  38. Ravpower PD Pioneer 20000mAh 80W AC Portable Laptop Charger—RAVPower. Available online: https://www.ravpower.com/products/rp-pb054-pd-20000mah-80w-ac-portable-charger (accessed on 1 July 2022).
  39. Holo Lab Inc. Available online: https://hololab.co.jp/ (accessed on 1 July 2022).
  40. Holo Lab Inc GitHub—HoloLabInc/LeapMotionInputForMRTK: LeapMotionInputForMRTK Simulates Hand Inputs for MRTK with Leap Motion. Available online: https://github.com/HoloLabInc/LeapMotionInputForMRTK (accessed on 1 July 2022).
  41. Mirror Networking Mirror Networking—Open Source Networking for Unity. Available online: https://mirror-networking.com/ (accessed on 1 July 2022).
  42. Animaze Animaze by Facerig. Available online: https://www.animaze.us/ (accessed on 1 July 2022).
  43. Krug, M.; Czok, V.; Müller, S.; Weitzel, H.; Huwer, J.; Kruse, S.; Müller, W. AR in Science Education—An AR Based Teaching-learning Scenario in the Field of Teacher Education. Chemkon 2022, 29, 312–318. [Google Scholar] [CrossRef]
  44. Guest, G.; Namey, E.; McKenna, K. How Many Focus Groups Are Enough? Building an Evidence Base for Nonprobability Sample Sizes. Field Methods 2017, 29, 3–22. [Google Scholar] [CrossRef]
  45. Unity’s AR Foundation Framework|Cross Platform Augmented Reality Development Software|Unity. Available online: https://unity.com/unity/features/arfoundation (accessed on 30 July 2022).
  46. Knaur, T. Knaurs Konversations Lexikon A-Z; Verlag Von Th. Knaur Nachf: Munich, Germany, 1932. [Google Scholar]
  47. Microsoft Microsoft Forms|Surveys, Polls, and Quizzes. Available online: https://www.microsoft.com/en-gb/microsoft-365/online-surveys-polls-quizzes (accessed on 29 April 2021).
  48. Laerd Statistics Two-Way ANOVA in SPSS Statistics|Laerd Statistics Premium. Available online: https://statistics.laerd.com/premium/spss/twa/two-way-anova-in-spss-8.php (accessed on 13 April 2023).
  49. Wojciechowski, R.; Cellary, W. Evaluation of Learners’ Attitude toward Learning in ARIES Augmented Reality Environments. Comput. Educ. 2013, 68, 570–585. [Google Scholar] [CrossRef]
  50. Coimbra, M.T.; Cardoso, T.; Mateus, A. Augmented Reality: An Enhancer for Higher Education Students in Math’s Learning? In Proceedings of the Procedia Computer Science, Amsterdam, The Netherlands, 1 January 2015; Elsevier B.V.: Amsterdam, The Netherlands, 2015; Volume 67, pp. 332–339. [Google Scholar]
  51. Bujak, K.R.; Radu, I.; Catrambone, R.; MacIntyre, B.; Zheng, R.; Golubski, G. A Psychological Perspective on Augmented Reality in the Mathematics Classroom. Comput. Educ. 2013, 68, 536–544. [Google Scholar] [CrossRef]
  52. Di Serio, Á.; Ibáñez, M.B.; Kloos, C.D. Impact of an Augmented Reality System on Students’ Motivation for a Visual Art Course. Comput. Educ. 2013, 68, 586–596. [Google Scholar] [CrossRef]
  53. Parmaxi, A.; Demetriou, A.A. Augmented Reality in Language Learning: A State-of-the-art Review of 2014–2019. J. Comput. Assist. Learn. 2020, 36, 861–875. [Google Scholar] [CrossRef]
  54. Habig, S. Who Can Benefit from Augmented Reality in Chemistry? Sex Differences in Solving Stereochemistry Problems Using Augmented Reality. Br. J. Educ. Technol. 2020, 51, 629–644. [Google Scholar] [CrossRef]
Figure 1. Mobile natural-interaction diagram.
Figure 2. Mobile natural-interaction setup (with permission from Robert Ellis).
Figure 3. Mobile natural-interaction setup (unworn).
Figure 4. Swipe gesture illustrated.
Figure 5. Selective asset targeting for gesture controls (debug view).
Figure 6. Waving interaction with the virtual educator.
Figure 7. Evaluation criteria—head-mounted application.
Figure 8. Evaluation criteria—mobile application.
Figure 9. (A) Mobile application diagram and (B) application screenshot.
Figure 10. Questionnaire mean scores by group.
Figure 11. Retention decay rate by group.
Figure 12. Retention rate by age group.
Table 1. Comparison of all testing methods and variables.

| Group | Testing Location | Technology Used | Testing Variables | Desired Outcomes |
|---|---|---|---|---|
| Control | Classroom | Projector/PowerPoint | Pedagogical methodology | Impact on memory retention from a traditional learning experience. |
| Focus One | Museum | Microsoft HoloLens + Leap Motion/AR program | Head-mounted device with natural interactions/museum environment | Impact on memory retention from a head-mounted AR learning experience in a museum environment. |
| Focus Two | Classroom | Microsoft HoloLens + Leap Motion/AR program | Head-mounted device with natural interactions/classroom environment | Impact on memory retention from a head-mounted AR learning experience in a classroom environment. |
| Focus Three | Classroom | Mobile device/AR application | AR deployment method/learner location | Impact on memory retention from a handheld AR learning experience. |
Table 2. Participant retention rates.

| Group | Initial Sample | Retained Sample | Sample Retention Rate |
|---|---|---|---|
| Control | 15 | 11 | 73.33% |
| Focus One | 11 | 7 | 63.63% |
| Focus Two | 15 | 12 | 80.00% |
| Focus Three | 15 | 13 | 86.67% |
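As a quick arithmetic check, each sample retention rate in Table 2 is simply the retained sample divided by the initial sample; note that Focus One's 7/11 rounds to 63.64% to two decimal places, which the table truncates to 63.63%. A minimal worked example:

```python
# Worked check of Table 2's sample retention rates (retained / initial).
samples = {"Control": (15, 11), "Focus One": (11, 7),
           "Focus Two": (15, 12), "Focus Three": (15, 13)}
for group, (initial, retained) in samples.items():
    print(f"{group}: {retained / initial:.2%}")
# Control: 73.33%; Focus One: 63.64% (reported as 63.63%);
# Focus Two: 80.00%; Focus Three: 86.67%
```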
Table 3. Post-test immediate ANOVA table.

| | Sum of Squares | df | Mean Square | F | Sig. |
|---|---|---|---|---|---|
| Between Groups | 52.717 | 3 | 17.572 | 4.408 | 0.008 |
| Within Groups | 207.288 | 52 | 3.986 | | |
| Total | 260.005 | 55 | | | |
Table 4. Decay rate ANOVA table.

| | Sum of Squares | df | Mean Square | F | Sig. |
|---|---|---|---|---|---|
| Between Groups | 421.525 | 3 | 140.508 | 0.414 | 0.744 |
| Within Groups | 12,883.766 | 38 | 339.046 | | |
| Total | 13,305.291 | 41 | | | |
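The one-way ANOVAs in Tables 3 and 4 appear to have been produced in SPSS (cf. [48]). As a hedged sketch only, an equivalent test can be reproduced in Python with SciPy; the score lists below are hypothetical placeholders (sized to match Table 2's initial samples), not the study data:

```python
# Minimal sketch of a one-way ANOVA equivalent to Tables 3 and 4.
# All scores below are hypothetical; the study's real data gave
# F = 4.408, p = 0.008 for the immediate post-test (Table 3).
from scipy.stats import f_oneway

control     = [8, 9, 7, 10, 9, 8, 9, 10, 8, 9, 7, 8, 9, 10, 8]  # n = 15
focus_one   = [6, 7, 5, 8, 6, 7, 6, 5, 7, 6, 8]                 # n = 11
focus_two   = [7, 8, 6, 9, 7, 8, 7, 6, 8, 7, 9, 6, 7, 8, 7]     # n = 15
focus_three = [6, 7, 7, 8, 6, 7, 6, 8, 7, 6, 7, 8, 7, 6, 7]     # n = 15

f_stat, p_value = f_oneway(control, focus_one, focus_two, focus_three)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```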
Table 5. Test of between-subject effects on retention rates against participant group and age group.

| Source | Type III Sum of Squares | df | Mean Square | F | Sig. | Partial Eta Squared |
|---|---|---|---|---|---|---|
| Corrected Model | 6983.852 | 13 | 537.219 | 0.798 | 0.657 | 0.278 |
| Intercept | 90,123.878 | 1 | 90,123.878 | 133.893 | <0.001 | 0.832 |
| Participant Group | 825.552 | 3 | 275.184 | 0.409 | 0.748 | 0.043 |
| Age Group | 2336.494 | 4 | 584.123 | 0.868 | 0.496 | 0.114 |
| Participant Group * Age Group | 2284.421 | 6 | 380.737 | 0.566 | 0.754 | 0.112 |
| Error | 18,173.839 | 27 | 673.105 | | | |
| Total | 153,815.315 | 41 | | | | |
| Corrected Total | 25,157.690 | 40 | | | | |
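Table 5's two-way analysis follows the SPSS between-subject-effects procedure [48]. For illustration only, an equivalent Type III analysis can be run with statsmodels; the dataframe below is a hypothetical stand-in for the per-participant records, with two observations per group-by-age cell so that the error term is estimable:

```python
# Illustrative sketch (not the study's SPSS analysis) of a two-way ANOVA
# with Type III sums of squares, mirroring the design behind Table 5.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "group": ["Control"] * 4 + ["Focus One"] * 4
             + ["Focus Two"] * 4 + ["Focus Three"] * 4,
    "age_band": ["18-26", "18-26", "30-59", "30-59"] * 4,
    "retention": [72.1, 68.4, 61.0, 65.5,   # hypothetical % retained
                  58.2, 60.7, 66.3, 63.9,
                  70.5, 74.2, 69.8, 71.1,
                  64.4, 67.0, 72.6, 70.3],
})

# Sum-coded contrasts so that Type III sums of squares are meaningful.
model = ols("retention ~ C(group, Sum) * C(age_band, Sum)", data=df).fit()
print(anova_lm(model, typ=3))  # rows/columns correspond to Table 5's layout
```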
