Emerging Immersive Learning Technologies: Augmented and Virtual Reality

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 July 2024

Special Issue Editors

Dr. Agnieszka Pregowska
Department of Information and Computational Science, Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B Street, 02-106 Warsaw, Poland
Interests: information theory; neural code; signal processing; neuroinformatics; data analysis; augmented reality; mixed reality

Dr. Klaudia Proniewska
Center for Digital Medicine and Robotics, Jagiellonian University Medical College, Kopernika 7E Str., 31-034 Krakow, Poland
Interests: telemedicine; augmented reality; mixed reality; virtual reality; holography; biomedical data; biostatistics; medicine; 3D modeling; immersive technologies

Special Issue Information

Dear Colleagues,

This Special Issue aims to present the most recent advances in immersive technologies, including virtual reality, augmented reality, and mixed reality, as applied to various types of learning processes to improve understanding of complex contexts. It will cover the latest technology and communication developments that enable users to mimic real-world experiences in distance education as well as in traditional laboratory courses. These teaching formats span a range of interesting topics, many of which are addressed in the contributions collected here.

The submitted papers should cover the following areas:

  • virtual reality;
  • augmented reality;
  • mixed reality;
  • brain–computer interfaces;
  • human–machine interaction;
  • remote learning;
  • e-learning;
  • 3D visualization;
  • education;
  • 3D user interfaces.

Dr. Agnieszka Pregowska
Dr. Klaudia Proniewska
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • virtual reality
  • augmented reality
  • mixed reality
  • brain–computer interfaces
  • human–machine interaction
  • remote learning
  • e-learning
  • 3D visualization
  • education
  • 3D user interfaces

Published Papers (8 papers)


Research


21 pages, 7205 KiB  
Article
Development of an Extended Reality-Based Collaborative Platform for Engineering Education: Operator 5.0
by Dimitris Mourtzis and John Angelopoulos
Electronics 2023, 12(17), 3663; https://doi.org/10.3390/electronics12173663 - 30 Aug 2023
Abstract
With the shift towards the human-centric, sustainable, and resilient Industry 5.0, the need for training operators in complex industrial systems has become increasingly crucial. This paper explores the significance of collaborative extended reality (XR)-based engineering education in the preparation of the next generation of operators, denoted as Operator 5.0. By leveraging immersive technologies, operators can gain hands-on training experience in virtual or augmented environments and undergo comprehensive, personalized training, resulting in improved performance, reduced downtime, enhanced safety, and increased operational efficiency. The framework is tested in a laboratory environment in three different case studies focusing on maintenance and repair operations in the context of modern manufacturing. In this research, the current developments have been debugged and examined in order to verify all functionalities of the digital platform, so that the revised and improved version of the platform can be tested with a wider industrial and educational audience. Full article

30 pages, 12141 KiB  
Article
An Extended Reality System for Situation Awareness in Flood Management and Media Production Planning
by Spyridon Symeonidis, Stamatios Samaras, Christos Stentoumis, Alexander Plaum, Maria Pacelli, Jens Grivolla, Yash Shekhawat, Michele Ferri, Sotiris Diplaris and Stefanos Vrochidis
Electronics 2023, 12(12), 2569; https://doi.org/10.3390/electronics12122569 - 06 Jun 2023
Abstract
Flood management and media production planning are both tasks that require timely and sound decision making, as well as effective collaboration between professionals in a team split between remote headquarter operators and in situ actors. This paper presents an extended reality (XR) platform that utilizes interactive and immersive technologies and integrates artificial intelligence (AI) algorithms to support the professionals and the public involved in such incidents and events. The developed XR tools address various specialized end-user needs of different target groups and are fueled by modules that intelligently collect, analyze, and link data from heterogeneous sources while considering user-generated content. This platform was tested in a flood-prone area and in a documentary planning scenario, where it was used to create immersive and interactive experiences. The findings demonstrate that it increases situation awareness and improves the overall performance of the professionals involved. The proposed XR system represents an innovative technological approach for tackling the challenges of flood management and media production, one that also has the potential to be applied in other fields. Full article

16 pages, 5560 KiB  
Article
Free-Viewpoint Navigation of Indoor Scene with 360° Field of View
by Hang Xu, Qiang Zhao, Yike Ma, Shuai Wang, Chenggang Yan and Feng Dai
Electronics 2023, 12(8), 1954; https://doi.org/10.3390/electronics12081954 - 21 Apr 2023
Abstract
By providing a 360° field of view, spherical panoramas can convey vivid visual impressions. Thus, they are widely used in virtual reality systems and street view services. However, due to bandwidth or storage limitations, existing systems only provide sparsely captured panoramas and have limited interaction modes. Although there are methods that can synthesize novel views based on captured panoramas, the generated novel views all lie on the lines connecting existing views. Therefore these methods do not support free-viewpoint navigation. In this paper, we propose a new panoramic image-based rendering method for novel view generation. Our method represents each input panorama with a set of spherical superpixels and warps each superpixel individually so the method can deal with the occlusion and disocclusion problem. The warping is dominated by a two-term constraint, which can preserve the shape of the superpixel and ensure it is warped to the correct position determined by the 3D reconstruction of the scene. Our method can generate novel views that are far from input camera positions. Thus, it supports freely exploring the scene with a 360° field of view. We compare our method with three previous methods on datasets captured by ourselves and by others. Experiments show that our method can obtain better results. Full article
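
To make the two-term warping constraint mentioned in the abstract above more concrete, a generic shape-preserving superpixel-warping energy could be written as follows. This is a purely illustrative sketch under assumed notation, not the authors' actual formulation; the vertex sets, anchor positions, and the weight λ are assumptions.

\[
E(\hat{V}) \;=\; \sum_{(i,j)\in\mathcal{E}} \bigl\| (\hat{v}_i - \hat{v}_j) - (v_i - v_j) \bigr\|^2 \;+\; \lambda \sum_{k\in\mathcal{A}} \bigl\| \hat{v}_k - p_k \bigr\|^2
\]

Here, \(v_i\) and \(\hat{v}_i\) denote the original and warped vertex positions of a superpixel, \(\mathcal{E}\) the vertex pairs whose relative arrangement encodes the superpixel's shape, \(\mathcal{A}\) the anchor vertices whose target positions \(p_k\) come from the 3D reconstruction of the scene, and \(\lambda\) balances shape preservation against positional accuracy.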

13 pages, 1915 KiB  
Article
Statistical Analysis of Professors’ Assessment Regarding the Didactic Use of Virtual Reality: Engineering vs. Health
by Pablo Fernández-Arias, Álvaro Antón-Sancho, María Sánchez-Jiménez and Diego Vergara
Electronics 2023, 12(6), 1366; https://doi.org/10.3390/electronics12061366 - 13 Mar 2023
Cited by 3
Abstract
Virtual reality (VR) has proven to be an efficient didactic resource in higher education after the pandemic caused by COVID-19, mainly in the Engineering and Health Sciences degrees. In this work, quantitative research is carried out on the assessments made by Latin American professors of Health Sciences and Engineering of the didactic use of VR. Specifically, the gaps by university tenure in the assessments given by the professors of each of the two areas of knowledge analyzed are identified. For this purpose, a validated questionnaire has been used, which has been applied to a sample of 606 professors. As a result, it is shown that the professors of Engineering and Health Sciences have similar self-concepts of their digital competence, but the Engineering professors give higher values to the technical and didactic aspects of VR. Moreover, in both areas, professors from private universities rate VR technologies more highly than those from public universities, this gap being wider in Health Sciences. Finally, some recommendations are offered regarding digital training and the use of VR, derived from the results of this study. Full article
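
As a purely hypothetical illustration of the kind of tenure-gap comparison described in the abstract above (the column names, toy ratings, and choice of test are assumptions, not the authors' actual methodology), such an analysis might look like this in Python:

# Hypothetical sketch: comparing VR ratings between private- and public-university
# professors within each knowledge area, as in the gap analysis described above.
# Column names, toy data, and the Mann-Whitney U test are illustrative assumptions.
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.DataFrame({
    "area":      ["Engineering"] * 4 + ["Health Sciences"] * 4,
    "tenure":    ["private", "public"] * 4,
    "vr_rating": [4.5, 3.8, 4.7, 4.0, 4.2, 3.1, 4.4, 3.3],  # 1-5 Likert-style scores
})

for area, group in df.groupby("area"):
    private = group.loc[group["tenure"] == "private", "vr_rating"]
    public = group.loc[group["tenure"] == "public", "vr_rating"]
    stat, p = mannwhitneyu(private, public, alternative="two-sided")
    gap = private.mean() - public.mean()
    print(f"{area}: mean gap (private - public) = {gap:.2f}, U = {stat:.1f}, p = {p:.3f}")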

23 pages, 91435 KiB  
Article
Viaduct and Bridge Structural Analysis and Inspection through an App for Immersive Remote Learning
by Antonino Fotia and Vincenzo Barrile
Electronics 2023, 12(5), 1220; https://doi.org/10.3390/electronics12051220 - 03 Mar 2023
Cited by 1
Abstract
Until now, in the design phase of infrastructure there has been a general tendency to “economize” the resources allocated to it. This modus operandi did not treat the installation of monitoring and control systems as an integral part of the infrastructure itself, disregarding the high post-intervention costs. This work aims to show how the integration of immersive technologies, including Virtual/Augmented/Mixed Reality, combined with geomatics, survey, and structural monitoring techniques can ensure better visualization and understanding of the different contexts in which the managing bodies are required to guarantee maintenance interventions. In particular, the potential of an application, developed by the authors in Unity 3D, to help the managing institution is described. The app permits the user to explore infrastructure under inspection in a virtual environment, making all information related to the infrastructure available and accessible through 3D analysis (manageable in the app after a mesh edge reduction phase) and exploiting the full potential of Mixed/Virtual Reality. The main strength of the application derives from the ability to easily use and integrate different techniques (3D models, information models for construction, VR/AR), allowing different 3D models to be chosen, tested, simplified, and dimensionally reduced. This makes the loading phase of the application faster and the user experience smoother. The proposed methodology was tested on a viaduct located in Reggio Calabria. Full article

Review


29 pages, 923 KiB  
Review
Light Field Visualization for Training and Education: A Review
by Mary Guindy and Peter A. Kara
Electronics 2024, 13(5), 876; https://doi.org/10.3390/electronics13050876 - 24 Feb 2024
Abstract
Three-dimensional visualization technologies such as stereoscopic 3D, virtual reality, and augmented reality have already emerged in training and education; however, light field displays are yet to be introduced in such contexts. In this paper, we characterize light field visualization as a potential candidate for the future of training and education, and compare it to other state-of-the-art 3D technologies. We separately address preschool and elementary school education, middle and high school education, higher education, and specialized training, and assess the suitability of light field displays for these utilization contexts via key performance indicators. This paper exhibits various examples for education, and highlights the differences in terms of display requirements and characteristics. Additionally, our contribution analyzes the scientific-literature-related trends of the past 20 years for 3D technologies, and the past 5 years for the level of education. While the acquired data indicates that light field is still lacking in the context of education, general research on the visualization technology is steadily rising. Finally, we specify a number of future research directions that shall contribute to the emergence of light field visualization for training and education. Full article

21 pages, 891 KiB  
Review
Virtual Reality in Education: A Review of Learning Theories, Approaches and Methodologies for the Last Decade
by Andreas Marougkas, Christos Troussas, Akrivi Krouska and Cleo Sgouropoulou
Electronics 2023, 12(13), 2832; https://doi.org/10.3390/electronics12132832 - 26 Jun 2023
Cited by 22
Abstract
In the field of education, virtual reality (VR) offers learners an immersive and interactive learning experience, allowing them to comprehend challenging concepts and ideas more efficiently and effectively. VR technology has enabled educators to develop a wide range of learning experiences, from virtual field trips to complex simulations, that may be utilized to engage students and help them learn. Learning theories and approaches are essential for understanding how students learn and how to design effective learning experiences. This study examines the most recent published findings in educational theories and approaches connected to the use of VR systems for educational and tutoring purposes. Seventeen research studies that meet the search criteria have been found in the database, and each of them focuses on at least one learning theory or learning approach related to educational systems using VR. These studies yielded five educational approaches, one methodology, five learning theories and one theoretical framework, which are presented in the context of virtual reality in education. These include constructivism learning, experiential learning, gamification of learning, John Dewey’s theory of learning by doing, flow theory, Cognitive Theory of Multimedia Learning, design thinking, learning through problem solving, scientific discovery learning, social constructivism, cognitive load theory and the Technology Pedagogical Content Knowledge Framework (TPACK). A major finding of this study is that constructivism learning is the most often utilized learning theory/method, Experiential Learning is most appropriate for VR and the gamification of learning has the greatest future potential. Full article

Other


36 pages, 3084 KiB  
Systematic Review
Artificial Intelligence-Based Algorithms in Medical Image Scan Segmentation and Intelligent Visual Content Generation—A Concise Overview
by Zofia Rudnicka, Janusz Szczepanski and Agnieszka Pregowska
Electronics 2024, 13(4), 746; https://doi.org/10.3390/electronics13040746 - 13 Feb 2024
Cited by 2
Abstract
Recently, artificial intelligence (AI)-based algorithms have revolutionized the medical image segmentation processes. Thus, the precise segmentation of organs and their lesions may contribute to an efficient diagnostics process and a more effective selection of targeted therapies, as well as increasing the effectiveness of the training process. In this context, AI may contribute to the automatization of the image scan segmentation process and increase the quality of the resulting 3D objects, which may lead to the generation of more realistic virtual objects. In this paper, we focus on the AI-based solutions applied in medical image scan segmentation and intelligent visual content generation, i.e., computer-generated three-dimensional (3D) images in the context of extended reality (XR). We consider different types of neural networks used with a special emphasis on the learning rules applied, taking into account algorithm accuracy and performance, as well as open data availability. This paper attempts to summarize the current development of AI-based segmentation methods in medical imaging and intelligent visual content generation that are applied in XR. It concludes with possible developments and open challenges in AI applications in extended reality-based solutions. Finally, future lines of research and development directions of artificial intelligence applications, both in medical image segmentation and extended reality-based medical solutions, are discussed. Full article
