Advanced Technologies and Applications of Augmented and Virtual Reality

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 September 2024 | Viewed by 6110

Special Issue Editors


Guest Editor
Huawei Technologies, Herikerbergweg 36, 1101 CM Amsterdam, The Netherlands
Interests: augmented reality; emotion recognition; computer vision; artificial intelligence

Guest Editor
1. School of Information Technology and Mathematical Sciences, University of South Australia, Adelaide, SA 5000, Australia
2. Empathic Computing Laboratory, The University of Auckland, 1010 Auckland, New Zealand
Interests: augmented reality; virtual reality; HCI; empathic computing; bioengineering

Guest Editor
Department of Computer Science and Engineering, Delft University of Technology, Mekelweg 5, 2628 Delft, The Netherlands
Interests: multimodal communication; knowledge-based systems; dynamic routing; simulation

Special Issue Information

Dear Colleagues,

We are inviting submissions to the Special Issue “Advanced Technologies and Applications of Augmented and Virtual Reality”.

Augmented reality (AR) and virtual reality (VR) technologies make use of computer-generated content that is either overlaid on the user’s view of the real world (AR) or replaces it entirely (VR). The interface synthesizes input across one or more sensory modalities, creating environments in which people can learn, work, shop, communicate, create, entertain themselves or heal, among other activities. Such experiences are easy to simulate and elicit significant user engagement. The technology enables real-time interaction and immersion, and can enhance both face-to-face and remote collaboration.

AR and VR already play important roles in many areas and have the potential to become among the most prevalent technologies of the next decade, especially when combined with emerging technologies such as next-generation networking, physiological sensors and small form factor displays.

Ongoing research aims to push the boundaries of the technology by addressing technical challenges, limitations in the user experience, use cases and interaction design, and ethical issues. Important questions that should be addressed include:

  • What are the key features and applications of AR and VR that will enable the technology to become even more accessible?
  • How can the risk of injury and sensory disruption be reduced during exposure to AR and VR?
  • How can AR and VR be used to improve situational awareness in safety and security applications?
  • How can (work) productivity be supported while engaging in shared collaborative experiences?
  • How can emotion-sensing and generating technologies boost the user experience in AR and VR?

For this Special Issue, we invite original submissions highlighting state-of-the-art theoretical and applied research on AR and VR, including reviews and surveys addressing these questions and other relevant topics.

In particular, topics of interest include, but are not limited to:

  • Augmented reality and virtual reality;
  • Emotion recognition, empathic computing and affective computing in AR and VR;
  • Applications of AR and VR in science, technology, engineering, art and mathematics;
  • Using AR and VR for enhancing remote collaboration;
  • Using AR and VR to improve situational awareness;
  • AR and VR usability studies;
  • (Multimodal) human–computer interaction for augmented and virtual reality;
  • Using AR and VR for learning and serious games;
  • Augmented and virtual reality for safety and security;
  • Augmented and virtual reality simulations;
  • Digital twin, multiverse, metaverse and omniverse;
  • Virtual fitting rooms for trying on products in (online) shopping;
  • AR and VR healthcare applications;
  • Augmented and virtual reality for logistics;
  • Augmented and virtual reality for arts and entertainment;
  • Augmented and virtual reality for the manufacturing industry;
  • Augmented humans.

Dr. Dragos Datcu
Prof. Dr. Mark Billinghurst
Prof. Dr. Leon Rothkrantz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, authors can access the online submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • augmented reality
  • virtual reality
  • emotion recognition
  • remote collaboration
  • situational awareness
  • virtual fitting rooms

Published Papers (5 papers)


Research


18 pages, 2171 KiB  
Article
AR Presentation of Team Members’ Performance and Inner Status to Their Leader: A Comparative Study
by Thomas Rinnert, James Walsh, Cédric Fleury, Gilles Coppin, Thierry Duval and Bruce H. Thomas
Appl. Sci. 2024, 14(1), 123; https://doi.org/10.3390/app14010123 - 22 Dec 2023
Viewed by 597
Abstract
Real-time and high-intensity teamwork management is complex, as team leaders must ensure good results while also considering the well-being of team members. Given that stress and other factors directly impact team members’ output volume and error rate, these team leaders must be aware of and manage team stress levels in combination with allocating new work. This paper examines methods for visualizing each team member’s status in mixed reality, which, combined with a simulated stress model for virtual team members, allows the team leader to take into account team members’ individual statuses when choosing whom to allocate work. Using simulated Augmented Reality in Virtual Reality, a user study was conducted where participants acted as team leaders, putting simulated team members under stress by allocating several required work tasks while also being able to review the stress and status of each team member. The results showed that providing Augmented Reality feedback on team members’ internal status increases the team’s overall performance, as team leaders can better allocate new work to reduce team members’ stress-related errors while maximizing output. Participants preferred having a graph representation for stress levels despite performing better with a text representation.
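
The simulated stress model itself is not specified in the abstract. As a purely hypothetical illustration of the underlying idea, that allocated work raises a virtual team member’s stress and that stress raises their error rate, a minimal Python sketch might look like this (all parameters and formulas are assumptions, not the authors’ model):

    # Hypothetical illustration only: a minimal stress/error model for a simulated
    # team member. Workload raises stress; stress raises the chance of an error.
    from dataclasses import dataclass, field

    @dataclass
    class SimulatedTeamMember:
        stress: float = 0.0                       # 0.0 (calm) .. 1.0 (overloaded)
        queue: list = field(default_factory=list)

        def assign(self, task: str) -> None:
            """Adding work increases stress, saturating at 1.0 (assumed increment)."""
            self.queue.append(task)
            self.stress = min(1.0, self.stress + 0.15)

        def step(self) -> float:
            """Process one task; return the probability that it contains an error."""
            if self.queue:
                self.queue.pop(0)
            # Assumed relation: error probability grows with stress, then stress decays.
            error_probability = 0.05 + 0.4 * self.stress
            self.stress = max(0.0, self.stress - 0.05)
            return error_probability

    member = SimulatedTeamMember()
    member.assign("triage report")
    print(round(member.step(), 2))   # 0.11 with the assumed parameters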

14 pages, 11728 KiB  
Article
Cross-View Outdoor Localization in Augmented Reality by Fusing Map and Satellite Data
by René Emmaneel, Martin R. Oswald, Sjoerd de Haan and Dragos Datcu
Appl. Sci. 2023, 13(20), 11215; https://doi.org/10.3390/app132011215 - 12 Oct 2023
Cited by 1 | Viewed by 867
Abstract
Visual positioning is the task of finding the location of a given image and is necessary for augmented reality applications. Traditional algorithms solve this problem by matching against premade 3D point clouds or panoramic images. Recently, more attention has been given to models that match the ground-level image with overhead imagery. In this paper, we introduce AlignNet, which builds upon previous work to bridge the gap between ground-level and top-level images. By making multiple key insights, we push the model results to achieve up to 4 times higher recall rates on a visual positioning dataset. We use a fusion of both satellite and map data from OpenStreetMap for this matching by extending the previously available satellite database with corresponding map data. The model pushes the input images through a two-branch U-Net and is able to make matches using a geometric projection module to map the top-level image to the ground-level domain at a given position. By calculating the difference between the projection and ground-level image in a differentiable fashion, we can use a Levenberg–Marquardt (LM) module to iteratively align the estimated position towards the ground-truth position. This sample-wise optimization strategy allows the model to align the position better than if the model has to obtain the location in a single step. We provide key insights into the model’s behavior, which allow us to increase the model’s ability to obtain competitive results on the KITTI cross-view dataset. We compare our obtained results with the state of the art and obtain new best results on 3 of the 9 categories we consider, including a 57% likelihood of lateral localization within 1 m in a 40 m × 40 m area and 93% azimuth localization within 3° when using a 20° rotation noise prior.
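
The abstract describes iteratively refining an estimated position with a Levenberg–Marquardt (LM) module. As a rough, self-contained sketch of a single LM step on a toy pose-alignment problem (not AlignNet’s differentiable projection or its actual residuals), assuming a simple 3-parameter pose and a numerical Jacobian:

    # A minimal numerical sketch of one Levenberg–Marquardt step, the kind of
    # iterative refinement the abstract describes; the residual function here
    # is a simplified stand-in, not the AlignNet projection-based residual.
    import numpy as np

    def lm_step(pose, residual_fn, lam=1e-2, eps=1e-5):
        """Return an updated pose: delta = -(J^T J + lam*I)^{-1} J^T r."""
        r = residual_fn(pose)
        # Numerical Jacobian of the residuals w.r.t. the pose parameters.
        J = np.zeros((r.size, pose.size))
        for i in range(pose.size):
            p = pose.copy()
            p[i] += eps
            J[:, i] = (residual_fn(p) - r) / eps
        delta = np.linalg.solve(J.T @ J + lam * np.eye(pose.size), -J.T @ r)
        return pose + delta

    # Toy example: recover (x, y, yaw) that minimizes the gap to a target pose.
    target = np.array([2.0, -1.0, 0.3])
    residuals = lambda pose: pose - target
    pose = np.zeros(3)
    for _ in range(10):
        pose = lm_step(pose, residuals)
    print(np.round(pose, 3))   # converges towards [ 2.  -1.   0.3]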

18 pages, 2095 KiB  
Article
Predictors of Engagement in Virtual Reality Storytelling Environments about Migration
by Cecilia Avila-Garzon, Jorge Bacca-Acosta and Juan Chaves-Rodríguez
Appl. Sci. 2023, 13(19), 10915; https://doi.org/10.3390/app131910915 - 02 Oct 2023
Viewed by 912
Abstract
Virtual reality (VR) environments provide a high level of immersion that expands the possibilities for perspective-taking so that people can be in the shoes of others. In that regard, VR storytelling environments are good for situating people in a real migration story. Previous research has investigated how users engage in narrative VR experiences. However, there is a lack of research on the predictors of engagement in VR storytelling environments. To fill this gap in the literature, this study aims to identify the predictors of engagement when VR is used as a medium to tell a migration story. A structural model based on hypotheses was validated using partial least squares structural equation modeling (PLS-SEM) with data from the interaction of 212 university students with a tailor-made VR experience developed in Unity to engage people in two migration stories. The results show that our model explains 55.2% of the variance in engagement because of the positive influence of immersion, presence, agency, usability, and user experience (UX).
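
For readers unfamiliar with variance-explained figures such as the 55.2% reported above, the sketch below uses scikit-learn’s PLS regression on synthetic data as a rough stand-in for PLS-SEM path modeling (the two are related but not equivalent); the predictor names, coefficients, and data are hypothetical:

    # Rough illustration only: PLS regression (scikit-learn) as a simplified
    # stand-in for PLS-SEM, with synthetic data and hypothetical predictors
    # matching the constructs named in the abstract.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n = 212  # same sample size as the study; the data itself is synthetic
    X = rng.normal(size=(n, 5))          # immersion, presence, agency, usability, UX
    y = X @ np.array([0.4, 0.3, 0.2, 0.2, 0.3]) + rng.normal(scale=0.8, size=n)

    model = PLSRegression(n_components=2).fit(X, y)
    print(f"R^2 (variance in engagement explained): {model.score(X, y):.3f}")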

17 pages, 6086 KiB  
Article
Comparisons of Emotional Responses, Flow Experiences, and Operational Performances in Traditional Parametric Computer-Aided Design Modeling and Virtual-Reality Free-Form Modeling
by Yu-Min Fang and Tzu-Lin Kao
Appl. Sci. 2023, 13(11), 6568; https://doi.org/10.3390/app13116568 - 28 May 2023
Cited by 3 | Viewed by 1232
Abstract
Three-dimensional (3D) computer-aided design (CAD) is a vital tool for visualizing design ideas. While conventional parametric CAD modeling is commonly used, emerging virtual reality (VR) applications in 3D CAD modeling require further exploration. This study contrasts the emotional response, flow experience, and operational performance of design novices using VR free-form modeling (Gravity Sketch 3D) and conventional parametric CAD modeling (SolidWorks). We arranged two representative tasks for 30 participants: modeling an exact geometric shape (a cube) and a creative shape (a mug). We measured emotional response and flow experience with scales, and gathered operational performance data and further insights through semi-structured interviews. The findings reveal more positive and intense emotional responses to VR free-form modeling, but its overall flow experience did not exceed expectations. No significant differences were found in concentration, sense of time distortion, or control between the two techniques. Comparing modeling tasks, VR free-form modeling showed promising operational performance for early ideation, whereas conventional parametric CAD modeling proved to be more effective in the 3D digitization of known shapes.

Review


29 pages, 2464 KiB  
Review
The Use of Sense of Presence in Studies on Human Behavior in Virtual Environments: A Systematic Review
by Robi Barranco Merino, Juan Luis Higuera-Trujillo and Carmen Llinares Millán
Appl. Sci. 2023, 13(24), 13095; https://doi.org/10.3390/app132413095 - 08 Dec 2023
Viewed by 1169
Abstract
Sense of presence is a key element of the user experience in the study of virtual environments. Understanding it is essential for disciplines, such as architecture and environmental psychology, that study human responses using simulated environments. More evidence is needed on how to optimize spatial presence in simulations of built environments. A systematic review was conducted to define the use of sense of presence in research on human behavior in virtual spaces. Conceptualized dimensions, measurement methodologies, simulation technologies and associated factors were identified. The study identified a diversity of approaches and the predominance of subjective measures over sense of presence indicators. Several studies noted that environmental variables and spatial typologies had significant effects on presence. The results showed that different user profiles responded to stimuli in different ways. The results emphasized the importance of conceiving the construct in interrelation with the built context. A more comprehensive and multidisciplinary orientation is required to identify principles that optimize the spatial experience in virtual environments. This will be important for disciplines that research the human experience using virtual environments.
