Recent Advances in Extended Reality

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (29 February 2024) | Viewed by 9522

Special Issue Editors

Department of Electrical and Software Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
Interests: XR (VR/AR/MR); virtual avatars & agents; social interaction in XR; perception and cognition in XR; pervasive XR and IoT
School of Computer Engineering, Pusan National University, Busan 46241, Korea
Interests: VR/AR; virtual humans; perception in VR/AR; multimodal interaction
School of Information Technology and Mathematical Sciences, University of South Australia, Adelaide 5095, Australia
Interests: collaborative mixed reality; empathic computing; human computer interaction
Division of Electronics and Communications Engineering, Pukyong National University, Busan 48513, Korea
Interests: AR/MR; computer vision; human computer interaction; deep learning applications

Special Issue Information

Dear Colleagues,

Recent technical advances and research in extended reality (XR), which broadly includes virtual, augmented, and mixed reality (VR, AR, and MR, respectively), enable us to extend our experience and abilities in various contexts. For example, VR users naturally and intuitively interact with remote users in immersive social virtual environments using their embodied avatars; AR users extend their sensory perception and knowledge through additional sensing devices equipped with AR displays; and in situ intelligent virtual entities are augmented into the real world.

In this Special Issue, we aim to capture the current state of XR research and development while covering interdisciplinary convergence research in XR. We anticipate that this issue will present recent research findings and approaches, and possibly suggest future directions by identifying research gaps. We are pleased to invite diverse research spanning different disciplines and perspectives, including, but not limited to, the following topics:

  • New XR frameworks and platforms;
  • Sensing and tracking for XR;
  • Novel interfaces and interaction design in XR;
  • Security for XR;
  • Usability and user experience (UX) studies;
  • Human factors and ergonomics in XR;
  • Perception and cognition in XR;
  • Virtual avatars and agents;
  • Human–robot interaction with XR;
  • Remote collaboration and learning in XR;
  • Context-aware XR systems;
  • IoT and 5G for XR;
  • Industrial and occupational XR applications;
  • Other various XR applications.

Dr. Kangsoo Kim
Dr. Myungho Lee
Dr. Dongsik Jo
Dr. Gun Lee
Dr. Hanhoon Park
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • VR/AR/MR/XR
  • metaverse
  • multimodal interaction
  • virtual avatars and agents
  • perception and cognition
  • tracking and sensing techniques
  • context-aware XR

Published Papers (6 papers)


Research


18 pages, 1617 KiB  
Article
FusionNet: An End-to-End Hybrid Model for 6D Object Pose Estimation
by Yuning Ye and Hanhoon Park
Electronics 2023, 12(19), 4162; https://doi.org/10.3390/electronics12194162 - 07 Oct 2023
Cited by 1 | Viewed by 915
Abstract
In this study, we propose FusionNet, a hybrid model for Perspective-n-Point (PnP)-based 6D object pose estimation that takes advantage of convolutional neural networks (CNNs) and Transformers. CNNs are effective tools for feature extraction and remain the most popular architecture for this purpose. However, they have difficulty capturing long-range dependencies between features, and most CNN-based models for 6D object pose estimation are bulky and heavy. To address these problems, we propose a lighter-weight CNN building block with attention, design a Transformer-based global dependency encoder, and integrate them into a single model. Our model extracts dense 2D–3D point correspondences more accurately while significantly reducing the number of model parameters. Combined with a PnP header that replaces the PnP algorithm for general end-to-end pose estimation, our model showed better or highly competitive pose estimation performance compared with other state-of-the-art models in experiments on the LINEMOD dataset.
(This article belongs to the Special Issue Recent Advances in Extended Reality)
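For context, the classic PnP step that a learned PnP header replaces can be sketched with a plain Direct Linear Transform (DLT) solver. This is a generic, NumPy-only illustration of pose recovery from 2D–3D correspondences, not the FusionNet code; all function and variable names are our own:

```python
import numpy as np

def estimate_pose_dlt(pts3d, pts2d, K):
    """Recover [R|t] from >= 6 noise-free 2D-3D correspondences via the
    Direct Linear Transform -- the classic step that learned PnP headers
    aim to replace with an end-to-end trainable module."""
    n = len(pts3d)
    A = np.zeros((2 * n, 12))
    for i, (X, x) in enumerate(zip(pts3d, pts2d)):
        Xh = np.append(X, 1.0)          # homogeneous 3D point
        u, v = x
        A[2 * i, 0:4] = Xh              # row for the u-coordinate
        A[2 * i, 8:12] = -u * Xh
        A[2 * i + 1, 4:8] = Xh          # row for the v-coordinate
        A[2 * i + 1, 8:12] = -v * Xh
    # Null vector of A gives the projection matrix up to scale and sign.
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)
    Rt = np.linalg.inv(K) @ P
    # Project the 3x3 block onto the rotation group and fix scale/sign.
    U, S, Vt2 = np.linalg.svd(Rt[:, :3])
    R = U @ Vt2
    t = Rt[:, 3] / np.mean(S)
    if np.linalg.det(R) < 0:
        R, t = -R, -t
    return R, t
```

With exact correspondences the solver recovers the ground-truth rotation and translation up to numerical precision; real pipelines add robust estimation (e.g., RANSAC) to handle noisy correspondences.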

17 pages, 10747 KiB  
Article
Physically Plausible Realistic Grip-Lift Interaction Based on Hand Kinematics in VR
by Hyeongil Nam, Chanhee Kim, Kangsoo Kim and Jong-Il Park
Electronics 2023, 12(13), 2794; https://doi.org/10.3390/electronics12132794 - 24 Jun 2023
Viewed by 812
Abstract
Immersive technology, which refers to various novel ways of creating and interacting with applications and experiences, e.g., virtual reality (VR), has been used in various simulations and training where preparing real/physical settings is not ideal or possible, or where the use of virtual content is otherwise beneficial. Realizing realistic interactions with virtual content is crucial for the quality and effectiveness of such simulation and training. In this paper, we propose a kinematics-based realistic hand interaction method that enables a physically plausible grip-lift experience in VR. The method reflects three kinematic characteristics of the hand: the force at contact points, finger flexion, and the speed of hand/finger motion. We developed a grip-lift interaction prototype using the proposed method. To examine the sense of realism and hand poses during grip-lift interaction, we conducted a human-subjects experiment using the prototype, which showed positive effects on the perceived realism and usefulness of the interaction. Grip-lifting is a fundamental interaction technique involved in most embodied interaction scenarios. Our method can contribute to the design and development of realistic virtual experiences, whose implications and potential we discuss based on our findings.
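The three kinematic cues named in the abstract lend themselves to a simple gating rule for deciding when a grip is stable. The sketch below is a hypothetical illustration, not the authors' implementation; the signal names and threshold values are assumptions:

```python
from dataclasses import dataclass

@dataclass
class HandSample:
    contact_force: float   # normal force at the contact points, newtons
    finger_flexion: float  # mean joint flexion, 0 (open hand) .. 1 (fist)
    finger_speed: float    # fingertip speed, m/s

def is_stable_grip(s: HandSample,
                   min_force: float = 2.0,     # assumed threshold
                   min_flexion: float = 0.4,   # assumed threshold
                   max_speed: float = 0.15) -> bool:
    """A grip is plausible when the fingers press hard enough, are flexed
    around the object, and have (nearly) stopped moving."""
    return (s.contact_force >= min_force
            and s.finger_flexion >= min_flexion
            and s.finger_speed <= max_speed)
```

A lift would then be triggered only while `is_stable_grip` holds, which is what makes the resulting interaction physically plausible rather than purely collision-based.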

18 pages, 5736 KiB  
Article
User Experience of Multi-Mode and Multitasked Extended Reality on Different Mobile Interaction Platforms
by Hyeonah Choi, Heeyoon Jeong and Gerard Jounghyun Kim
Electronics 2023, 12(6), 1457; https://doi.org/10.3390/electronics12061457 - 19 Mar 2023
Viewed by 1613
Abstract
“Extended Reality (XR)” refers to a unified platform or content that supports all forms of “reality”, e.g., 2D, 3D virtual, augmented, and augmented virtual. We explore how mobile devices can support such a concept of XR. We evaluate the XR user experiences of multi-mode use and multitasking among three mobile platforms: (1) a bare smartphone (PhoneXR), (2) a standalone mobile headset unit (ClosedXR), and (3) a smartphone with clip-on lenses (LensXR). Two use cases were considered: (a) Experiment 1: using and switching among different modes within a single XR application while multitasking with a smartphone app, and (b) Experiment 2: general multitasking among different “reality” applications (e.g., a 2D app, AR, and VR). Results showed that users generally valued the immersive experience over usability: ClosedXR was clearly preferred over the others. Despite potentially offering a balanced level of immersion and usability with its touch-based interaction, LensXR was not received well. PhoneXR was not rated as particularly advantageous over ClosedXR, even though the latter required a controller. Usability suffered for ClosedXR only when long text had to be entered. Thus, improving the 1D/2D operations in ClosedXR for operating and multitasking would be one way to weave XR into our lives alongside smartphones.

15 pages, 18509 KiB  
Article
Dynamically Adjusted and Peripheral Visualization of Reverse Optical Flow for VR Sickness Reduction
by Songmin Kim and Gerard J. Kim
Electronics 2023, 12(4), 861; https://doi.org/10.3390/electronics12040861 - 08 Feb 2023
Cited by 1 | Viewed by 1319
Abstract
Sickness is a major obstacle to the wide adoption of virtual reality (VR). Providing low-resolution peripheral “countervection” visualization can mitigate VR sickness. Herein, we present an extension/improvement of this approach in which the reverse optical flow of the scene features is mixed in and the extent of the periphery is adjusted dynamically at the same time. We comparatively evaluated our extension against two notable sickness reduction techniques: (1) the original peripheral countervection flow using a simple stripe pattern (with a fixed field of view and peripheral extent) and (2) dynamic field-of-view adjustment (with no added visualization). The experimental results indicated that the proposed extension exhibits competitive or better sickness reduction and less user-perceived content intrusion, distraction, and breaks in immersion/presence. Furthermore, we tested the comparative effect of visualizing the reverse optical flow only in the lower visual periphery, which further reduced the content intrusion and lowered the sense of immersion and presence. The test indicated that using just the lower visual periphery can achieve a comparable level of sickness reduction with significantly less computational effort, making it suitable for mobile applications.
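Dynamic adjustment of the peripheral extent can be pictured as a mapping from perceived scene motion to the visible field of view. The sketch below is a generic illustration of a dynamic FOV restrictor, not the authors' technique; the linear mapping and every constant in it are assumptions:

```python
def peripheral_extent(flow_speed: float,
                      full_fov: float = 110.0,  # unrestricted FOV, degrees (assumed)
                      min_fov: float = 70.0,    # strongest restriction (assumed)
                      gain: float = 60.0) -> float:
    """Map the magnitude of the scene's optical flow (rad/s) to the
    visible field of view: the faster the flow, the larger the peripheral
    band handed over to the countervection visualization."""
    fov = full_fov - gain * flow_speed
    return max(min_fov, min(full_fov, fov))
```

At rest the full FOV is shown; as flow speed grows, the visible FOV shrinks toward the floor value, and the freed-up periphery is where the reverse-optical-flow visualization would be rendered.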

17 pages, 38742 KiB  
Article
DAVE: Deep Learning-Based Asymmetric Virtual Environment for Immersive Experiential Metaverse Content
by Yunsik Cho, Seunghyun Hong, Mingyu Kim and Jinmo Kim
Electronics 2022, 11(16), 2604; https://doi.org/10.3390/electronics11162604 - 19 Aug 2022
Cited by 9 | Viewed by 2685
Abstract
In this study, we design an interface optimized for each platform by adopting deep learning in an asymmetric virtual environment where virtual reality (VR) and augmented reality (AR) users participate together, and we propose a novel experience environment called the deep learning-based asymmetric virtual environment (DAVE) for immersive, experiential metaverse content. First, VR users use their real hands to intuitively interact with the virtual environment and objects: a gesture interface is designed based on deep learning to directly link gestures to actions. AR users interact with virtual scenes, objects, and VR users via a touch-based input method in a mobile platform environment: a text interface is designed using deep learning to directly link handwritten text to actions. This study proposes a novel asymmetric virtual environment through an intuitive, easy, and fast interactive interface design, and creates metaverse content as an experience environment for a survey experiment. The survey experiment was conducted with users to statistically analyze and investigate interface satisfaction, user experience, and presence in the experience environment.

Review


16 pages, 2782 KiB  
Review
Metaverse Solutions for Educational Evaluation
by Lingling Zi and Xin Cong
Electronics 2024, 13(6), 1017; https://doi.org/10.3390/electronics13061017 - 08 Mar 2024
Viewed by 526
Abstract
This study aims to give a comprehensive overview of the application of the metaverse in educational evaluation. First, we characterize the metaverse and illustrate how it can support educational evaluation from the perspectives of virtual reality, augmented reality, and blockchain. Then, we outline the metaverse exploration framework and summarize its technical advantages. Based on this, we propose a metaverse-based implementation scheme to address the issues of reliability, accuracy, and credibility in educational evaluation. Finally, we discuss its implementation difficulties, performance evaluation, and future work. The proposed scheme opens up new research directions for the reform of educational evaluation while expanding the potential and reach of metaverse applications in education. We believe this study can help researchers build an ecosystem for educational evaluation that is trustworthy, equitable, and legitimate.
