Virtual Worlds doi: 10.3390/virtualworlds3010007
Authors: André Correia Gonçalves Rui Jesus Pedro Mendes Jorge
Motion capture is a fundamental technique in video game development and film production for animating a virtual character based on the movements of an actor, creating more realistic animations in a short amount of time. One way to obtain this movement is to capture the motion of the player through an optical sensor, allowing interaction with the virtual world. However, during movement, some parts of the human body can be occluded by others, and there can be noise caused by difficulties in sensor capture, reducing the user experience. This work presents a solution to correct motion capture errors from the Microsoft Kinect sensor or similar devices through a deep neural network (DNN) trained with a pre-processed dataset of poses provided by the Carnegie Mellon University (CMU) Graphics Lab. A temporal filter is implemented to smooth the movement given by the set of poses returned by the deep neural network. The system is implemented in Python with the TensorFlow application programming interface (API), which supports the machine learning techniques, and the Unity game engine is used to visualize and interact with the obtained skeletons. The results are evaluated using the mean absolute error (MAE) metric where ground truth is available, and with feedback from 12 participants through a questionnaire for the Kinect data.
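The two post-processing and evaluation steps this abstract mentions can be sketched in a few lines. Below is a minimal, illustrative Python sketch (not the authors' code): MAE over joint coordinates against ground truth, plus a simple exponential moving-average filter standing in for the unspecified temporal filter. The exponential form and the function names are assumptions for illustration only.

```python
# Illustrative sketch only: the paper states a temporal filter smooths the
# DNN's output poses but does not specify its form; exponential smoothing
# is an assumed stand-in.

def mean_absolute_error(predicted, ground_truth):
    """MAE over the flattened joint coordinates of two pose sequences."""
    diffs = [abs(p - g)
             for pose_p, pose_g in zip(predicted, ground_truth)
             for p, g in zip(pose_p, pose_g)]
    return sum(diffs) / len(diffs)

def smooth_poses(poses, alpha=0.5):
    """Exponential moving average over a sequence of poses.

    poses: list of poses, each a flat list of joint coordinates.
    alpha: smoothing factor in (0, 1]; lower values give smoother
           but laggier motion.
    """
    smoothed = [list(poses[0])]
    for pose in poses[1:]:
        prev = smoothed[-1]
        smoothed.append([alpha * c + (1 - alpha) * p
                         for c, p in zip(pose, prev)])
    return smoothed
```

A lower `alpha` suppresses more sensor jitter at the cost of responsiveness, which is the usual trade-off when smoothing Kinect-style skeleton streams.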
Virtual Worlds doi: 10.3390/virtualworlds3010006
Authors: Kaito Kobayashi Masanobu Takahashi
Diminished reality (DR) is a technology in which a background image is overwritten on a real object to make it appear as if the object has been removed from real space. This paper presents a real-time DR application that employs deep learning. A DR application can remove objects inside a 3D region defined by a user in images captured using a smartphone. By specifying the 3D region containing the target object to be removed, DR can be realized for targets with various shapes and sizes, and the specified target can be removed even if the viewpoint changes. To achieve fast and accurate DR, a suitable network was employed based on the experimental results. Additionally, the loss function during the training process was improved to enhance completion accuracy. Then, the operation of the DR application at 10 fps was verified using a smartphone and a laptop computer.
Virtual Worlds doi: 10.3390/virtualworlds3010005
Authors: Wei-An Hsieh Hsin-Yi Chien David Brickler Sabarish V. Babu Jung-Hong Chuang
In this contribution, we propose a hybrid interaction technique that integrates near-field and object-space interaction techniques for manipulating objects at a distance in virtual reality (VR). The objective of the hybrid technique is to seamlessly leverage the strengths of both near-field and object-space manipulation. We employed the bimanual near-field metaphor with scaled replica (BMSR) as our near-field interaction technique, which enabled us to perform multilevel degrees-of-freedom (DoF) separation transformations, such as 1~3DoF translation, 1~3DoF uniform and anchored scaling, 1DoF and 3DoF rotation, and 6DoF simultaneous translation and rotation, with the enhanced depth perception and fine motor control provided by near-field manipulation. The object-space interaction technique we utilized was the classic Scaled HOMER, which is known to be effective and appropriate for coarse transformations in distant object manipulation. In a repeated measures within-subjects evaluation, we empirically evaluated the three interaction techniques for their accuracy, efficiency, and economy of movement in pick-and-place, docking, and tunneling tasks in VR. Our findings revealed that the near-field BMSR technique outperformed the object-space Scaled HOMER technique in terms of accuracy and economy of movement, but participants performed more slowly overall with BMSR. Additionally, our results revealed that participants preferred the hybrid interaction technique, as it allowed them to switch and transition seamlessly between the constituent BMSR and Scaled HOMER techniques, depending on the level of accuracy, precision, and efficiency required.
Virtual Worlds doi: 10.3390/virtualworlds3010004
Authors: Panagiotis Kourtesis Agapi Papadopoulou Petros Roussos
Background: Given that VR is used in multiple domains, understanding the effects of cybersickness on human cognition and motor skills, and the factors contributing to cybersickness, is becoming increasingly important. This study aimed to explore the predictors of cybersickness and its interplay with cognitive and motor skills. Methods: Thirty participants, aged 20–45, completed the MSSQ and the CSQ-VR and were immersed in VR. During immersion, they were exposed to a roller coaster ride. Before and after the ride, participants responded to the CSQ-VR and performed VR-based cognitive and psychomotor tasks. After the VR session, participants completed the CSQ-VR again. Results: Motion sickness susceptibility during adulthood was the most prominent predictor of cybersickness. Pupil dilation emerged as a significant predictor of cybersickness. Experience with videogaming was a significant predictor of both cybersickness and cognitive/motor functions. Cybersickness negatively affected visuospatial working memory and psychomotor skills. Overall, the intensity of cybersickness's nausea and vestibular symptoms significantly decreased after removing the VR headset. Conclusions: In order of importance, motion sickness susceptibility and gaming experience are significant predictors of cybersickness. Pupil dilation appears to be a biomarker of cybersickness. Cybersickness affects visuospatial working memory and psychomotor skills. Concerning user experience, cybersickness and its effects on performance should be examined during, and not after, immersion.
Virtual Worlds doi: 10.3390/virtualworlds3010003
Authors: Constantin Popp Damian T. Murphy
3D audio spatializers for Virtual Reality (VR) can use the acoustic properties of the surfaces of a visualised game space to calculate a matching reverb. However, this approach can lead to reverbs that impair the tasks performed in such a space, such as listening to speech-based audio. Sound designers would then have to alter the room's acoustic properties independently of its visualisation to improve speech intelligibility, causing audio-visual incongruency. As user expectation of simulated room acoustics regarding speech intelligibility in VR has not been studied, this study asked participants to rate the congruency of reverbs and their visualisations in 6-DoF VR while listening to speech-based audio. The participants compared unaltered, matching reverbs with sound-designed, mismatching reverbs. The latter featured improved D50s and reduced RT60s at the cost of lower audio-visual congruency. Results suggest participants preferred the improved reverbs only when the unaltered reverbs had comparatively low D50s or excessive ringing. Otherwise, reverbs that were too dry or too reverberant were disliked. The range of expected RT60s depended on the surface visualisation. Differences in timbre between the reverbs may not affect preferences as strongly as shorter RT60s. Therefore, sound designers can intervene and prioritise speech intelligibility over audio-visual congruency in acoustically challenging game spaces.
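For readers unfamiliar with the two acoustic metrics named above, this small Python sketch shows their textbook definitions: Sabine's reverberation-time estimate (RT60 = 0.161·V/A) and D50, the ratio of energy in the first 50 ms of a room impulse response to its total energy. This is illustrative background only; the study worked with designed and simulated reverbs, not necessarily these formulas.

```python
# Textbook room-acoustics metrics, for context (not the study's code).

def rt60_sabine(volume_m3, absorption_m2_sabins):
    """Sabine reverberation time in seconds: RT60 = 0.161 * V / A,
    with room volume V in cubic metres and total absorption A in
    metric sabins (square metres)."""
    return 0.161 * volume_m3 / absorption_m2_sabins

def d50(impulse_response, sample_rate):
    """Definition (Deutlichkeit) D50: energy arriving in the first
    50 ms of an impulse response divided by total energy (0..1).
    Higher D50 generally correlates with better speech intelligibility."""
    cutoff = int(0.050 * sample_rate)
    energy = [s * s for s in impulse_response]
    return sum(energy[:cutoff]) / sum(energy)
```

In the study's terms, the "improved" reverbs raise D50 (more early energy relative to late reverberation) and lower RT60.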
Virtual Worlds doi: 10.3390/virtualworlds3010002
Authors: Yanbo Cheng Yingying Wang
Designing virtual characters that are capable of reflecting a sense of personality is a key goal in research and applications in virtual reality and computer graphics. More and more research efforts are dedicated to investigating approaches to construct a diverse, equitable, and inclusive metaverse by infusing expressive personalities and styles into virtual avatars. While most previous work focused on exploring variations in virtual characters’ dynamic behaviors, characters’ visual appearance plays a crucial role in affecting their perceived personalities. This paper presents a series of experiments evaluating the effect of virtual characters’ outfits on their perceived personality. Based on the related psychology research conducted in the real world, we determined a set of outfit factors likely to reflect personality in virtual characters: color, design, and type. As a framework for our study, we used the “Big Five” personality model for evaluating personality traits. To test our hypothesis, we conducted three perceptual experiments to evaluate the outfit parameters’ contributions to the characters’ personality. In our first experiment, we studied the color factor by varying color hue, saturation, and value; in the second experiment, we evaluated the impact of different neckline, waistline, and sleeve designs; and in our third experiment, we examined the personality perception of five outfit types: professional, casual, fashionable, outdoor, and indoor. Significant results offer guidance to avatar designers on how to create virtual characters with specific personality profiles. We further conducted a verification test to extend the application of our findings to animated virtual characters in augmented reality (AR) and virtual reality (VR) settings. Results confirmed that our findings can be broadly applied to both static and animated virtual characters in VR and AR environments that are commonly used in games, entertainment, and social networking scenarios.
Virtual Worlds doi: 10.3390/virtualworlds3010001
Authors: Hsuan-Ming Chang Ting-Wei Hsu Ming-Han Tsai Sabarish V. Babu Jung-Hong Chuang
Design discussion is crucial in the architectural design process. To enhance the spatial understanding of 3D space and the effectiveness of discussion, some systems have recently been proposed to support design discussion interactively in an immersive virtual environment. The entire design discussion can be archived and potentially become course material for future learners. In this paper, we propose an asynchronous VR exploration system that aims to help learners explore such content effectively and efficiently, anywhere and at any time. To improve effectiveness and efficiency, we also propose a summarization-to-detail approach over the application space, by which students can observe the visualization of the spatial summarization of actions and participants' dwell time, or the temporal distribution of dialogues, and then locate an important or interesting region or dialogue for further exploration. To explore the discussion content further, students can call up a preview to see a time-lapse animation of object operations and understand changes in the models, or play back the recording to view the discussion details. We conducted an exploratory user study with 10 participants to evaluate user experience, user impression, and the effectiveness of learning design discussion course content using our asynchronous VR design discussion content exploration system. The results indicate that the presented interactive VR exploration system can help learners study design discussion content effectively. Participants also provided positive feedback and confirmed the usefulness and value of the system. Our applications and lessons learned have implications for future asynchronous VR exploration systems, not only for architectural design discussion content but also for other applications, such as industrial visual inspections and educational visualizations of design discussions.
Virtual Worlds doi: 10.3390/virtualworlds2040024
Authors: Kelly Ervin Jonathan Boone Karl Smink Gaurav Savant Keith Martin Spicer Bak Shyla Clark
In this paper, watercraft and ship simulation is summarized, and the way it can be extended through realistic physics is explored. A hydrodynamic, data-driven, immersive watercraft simulation experience is also introduced, using the Unreal Engine to visualize a Landing Craft Utility (LCU) operation and its interaction with near-shore waves in virtual reality (VR). The VR application provides navigation scientists with a better understanding of how coastal waves impact landing operations and channel design. FUNWAVE data generated on the supercomputing resources at the U.S. Army Corps of Engineers (USACE) Engineering Research and Development Center (ERDC) are employed, and using these data, a graphical representation of the domain is created, including the vessel model and a customizable VR bridge to control the vessel within the virtual environment. Several dimension reduction methods are being devised to ensure that the FUNWAVE data can inform the model while keeping the application running in real time at an acceptable frame rate for the VR headset. Importing millions of data points output by the FUNWAVE version 3.4 software into Unreal Engine allows virtual vessels to be affected by physics-driven data.
Virtual Worlds doi: 10.3390/virtualworlds2040023
Authors: Maram A. Alammary Lesley Halliday Stathis Th. Konstantinidis
Immersive Virtual Reality (IVR) is a promising tool for improving the teaching and learning of nursing and midwifery students. However, the preexisting literature does not comprehensively examine scenario development, theoretical underpinnings, duration, and debriefing techniques. The aim of this review was to assess the available evidence on how 360-degree Virtual Reality (VR) utilising head-mounted devices has been used in undergraduate nursing and midwifery education programmes and to explore its potential pedagogical value based on Kirkpatrick's evaluation model. This review followed the Joanna Briggs Institute (JBI) methodology. A comprehensive electronic search was conducted across five databases. All studies published in English between 2007 and 2022 were included, regardless of design, if they focused on undergraduate nursing and midwifery programmes and utilised fully immersive 360-degree VR scenarios. Out of an initial pool of 1700 articles, 26 were selected for final inclusion. The findings indicated limited diversity in scenario design, with only one study employing a participatory approach. Within the Kirkpatrick model, the most measurable outcomes were found at level 2. The main drawback observed in interventional studies was the absence of a theoretical framework and debriefing. The review concludes that the increased use of fully immersive VR in nursing education has improved student learning outcomes; however, published literature on midwifery education is scarce.
Virtual Worlds doi: 10.3390/virtualworlds2040022
Authors: Simone Balin Cecilia M. Bolognesi Paolo Borin
This study aims to identify and analyze existing gaps in the integration of immersive approaches for collaborative processes with Building Information Modeling (BIM) in the Architecture, Engineering, and Construction (AEC) sector. Using a systematic approach that includes metadata analysis and review procedures, we have formulated specific research questions aimed at guiding future investigations into these gaps. Additionally, the analysis generates insights that could guide future research directions and improvements in the field. The methodology involves a comprehensive review of the literature, focusing on the interaction between immersiveness, BIM methodology, and collaborative processes. Data from 2010 to 2023 have been analyzed to ensure relevance and completeness. Our findings reveal current limitations in the field, such as the need for fully integrated prototypes and the execution of empirical studies to clarify operational processes. These limitations serve as the basis for our research questions. The study offers actionable insights that could guide future research and improvements in the AEC sector, particularly in the adoption of immersive technologies. The research underscores the urgency of addressing these challenges to facilitate ongoing development and greater adoption of immersive technologies in the AEC sector.
Virtual Worlds doi: 10.3390/virtualworlds2040021
Authors: Donovan Jones Roberto Galvez Darrell Evans Michael Hazelton Rachel Rossiter Pauletta Irwin Peter S. Micalos Patricia Logan Lorraine Rose Shanna Fealy
The COVID-19 pandemic instigated a paradigm shift in healthcare delivery with a rapid adoption of technology-enabled models of care, particularly within the general practice primary care setting. The emergence of the Metaverse and its associated technology mediums, specifically extended reality (XR) technology, presents a promising opportunity for further industry transformation. Therefore, the objective of this study was to explore the current application and utilisation of XR technologies within the general practice primary care setting to establish a baseline for tracking its evolution and integration. A systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) was conducted and registered with the international database of prospectively registered systematic reviews as PROSPERO-CRD42022339905. Eleven articles met the inclusion criteria and were quality appraised and included for review. All databases searched, inclusive of search terms, are supplied to enhance the transparency and reproducibility of the findings. All study interventions used virtual reality technology exclusively. The application of virtual reality within the primary care setting was grouped under three domains: (1) childhood vaccinations, (2) mental health, and (3) health promotion. There is immense potential for the future application of XR technologies within the general practice primary care setting. As technology evolves, healthcare practitioners, XR technology specialists, and researchers should collaborate to harness the full potential of implementing XR mediums.
Virtual Worlds doi: 10.3390/virtualworlds2040020
Authors: Luis Valladares Ríos Ricardo Acosta-Diaz Pedro C. Santana-Mancilla
This study investigates how virtual and augmented reality role games impact self-learning in higher education settings. A qualitative research–action approach that involved creating augmented reality micro-stories to encourage creativity and critical thinking was used. Through role-playing, students collaborated and gained a deeper understanding of the course, improving their self-learning abilities. The findings indicate that incorporating virtual and augmented reality into higher education positively affects self-learning, promoting active student engagement and meaningful learning experiences. Additionally, students perceive these immersive educational methods as bridging the gap between virtual and in-person learning environments, ultimately leading to enhanced educational results.
Virtual Worlds doi: 10.3390/virtualworlds2040019
Authors: Muhammad Iqbal Abraham Campbell
The Metaverse is an emerging transformative technology that will impact our future society with immersive experiences. The recent surge in the adoption of new technologies and innovations in connectivity, interaction technology, and artificial realities can fundamentally change the digital world. The Metaverse concept is the most recent trend to encapsulate and define this potential new digital landscape. With the introduction of high-speed, low-latency 5G, advancements in hardware and software with the graphics power to display millions of polygons in 3D, and blockchain technology, this concept is no longer fiction. This transition from today's Internet to a spatially embodied Internet is, at its core, a transition from 2D to 3D interactions taking place in multiple virtual universes. In recent years, augmented and virtual reality have created possibilities in the private and professional spheres. New Virtual Reality (VR) headsets and Augmented Reality (AR) glasses can provide immersion in the physical sense. Technology must offer realistic experiences for users to turn this concept into reality. This paper focuses on the potential use cases and benefits of the Metaverse as a technology for good. The paper outlines the potential areas where a positive impact could occur, highlights recent progress, and discusses the issues around trust, ethics, and cognitive load.
Virtual Worlds doi: 10.3390/virtualworlds2040018
Authors: Silvino Martins Mário Vairinhos
In the context of therapeutic exposure to phobias, virtual reality (VR) offers innovative ways to motivate patients to confront their fears, an opportunity not feasible in traditional non-digital settings. This systematic literature review explores the utilization of narratives and digital games in this context, focusing on identifying the most common ludic and narrative immersion features employed in studies dedicated to animal phobias. Via a search on the Scopus and Web of Science scientific databases, twenty-nine studies were selected for in-depth analysis. The primary objective was to evaluate the presence of ludic and narrative elements in each study to understand their immersive potential across both dimensions. Findings suggest that ludic elements are more commonly used than narrative elements, which are notably scarce, and the exploration of the emotional dimension of narrative immersion is limited. An essential takeaway is that features fostering narrative immersion are invariably linked to the ludic dimension, often functioning as secondary components. This study provides a guiding framework for developing therapeutic interventions in VR, emphasizing the incorporation of ludic and narrative aspects. Additionally, it identifies untapped research opportunities, particularly the integration of autonomous narratives that are less reliant on ludic elements.
Virtual Worlds doi: 10.3390/virtualworlds2030017
Authors: Amna Salman
Teaching through field trips has been very effective in the architecture, engineering and construction (AEC) disciplines as it allows students to bridge the gap between theory and practice. However, it is not always feasible to take a large class on field trips due to time, safety, and cost limitations. To adequately prepare future professionals in the AEC industry, it is imperative that institutions adopt innovative methods of providing the field trip experience. One such approach is using virtual reality (VR) technology. Creating 3D VR construction environments and immersing students in that virtual world could provide an engaging and meaningful experience. Although researchers in AEC schools have developed and deployed many virtual field trips (VFTs) in education, little is known about their potential to provide the same knowledge base. For that reason, a VR app was created to teach students about the design and construction of steel structures, called the Steel Sculpture App (SSA). The SSA served as a VFT, and the location of the steel frame structure served as the actual field trip (AFT). The research was conducted in structure-related courses in the spring, summer, and fall of 2021 and the spring and fall of 2022 semesters. Each semester, students were split into groups, one being the control group and the other being the experimental group. The control groups learned through AFTs, whereas the experimental groups learned through VFTs. A knowledge test was administered at the end of each treatment to collect quantitative data on the students’ performance, understanding, and knowledge retention. The results indicated that the students learning from VFTs scored higher than those learning from AFTs. The paper discusses student assessment results and student feedback about replacing AFTs with VFTs in times of need.
Virtual Worlds doi: 10.3390/virtualworlds2030016
Authors: Linda Peschke Anna Kiani Ute Massler Wolfgang Müller
Appropriate techniques for promoting reading fluency are difficult to implement in the classroom. There is little time to provide students with individualized feedback on reading aloud or to motivate them to do so. In this context, Virtual Reality (VR) can be beneficial for learning because it allows for individualized feedback and increased learner engagement. Studies that analyze established methods of language learning in VR at school are thus far lacking. Therefore, this pilot study is one of the first to analyze student acceptance of reading fluency training in desktop VR at a secondary school. The interview guide was developed in accordance with the Technology Acceptance Model. The desktop VR environment is web-based and provides individual and collaborative opportunities for training reading fluency, giving and receiving feedback, and deepening understanding of the texts read. To analyze acceptance of the desktop VR environment, five guided interviews were conducted. The results reveal that, despite various technical challenges within the VR environment, students not only accepted but also appreciated the reading fluency training in VR. The integration of established concepts of reading fluency training in foreign language classrooms has great potential to add value in addressing the challenges of face-to-face instruction.
Virtual Worlds doi: 10.3390/virtualworlds2030015
Authors: Tim Gorichanaz Alexandros A. Lavdas Michael W. Mehaffy Nikos A. Salingaros
It is well-recognized that online experience can carry profound impacts on health and well-being, particularly for young people. Research has already documented influences from cyberbullying, heightened feelings of inadequacy, and the relative decline of face-to-face interactions and active lifestyles. Less attention has been given to the health impacts of aesthetic experiences of online users, particularly gamers and other users of immersive virtual reality (VR) technologies. However, a significant body of research has begun to document the surprisingly strong yet previously unrecognized impacts of aesthetic experiences on health and well-being in other arenas of life. Other researchers have used both fixed laboratory and wearable sensors and, to a lesser extent, user surveys to measure indicators of activation level, mood, and stress level, which detect physiological markers for health. In this study, we assessed the evidence that online sensorial experience is no less important than in the physical world, with the capacity for both harmful effects and salutogenic benefits. We explore the implications for online design and propose an outline for further research.
Virtual Worlds doi: 10.3390/virtualworlds2030014
Authors: Laura Huisinga
The blended classroom is a unique space for face-to-face (F2F) interaction and online learning. It has three distinct interaction types: in-person synchronous, virtual synchronous, and virtual asynchronous; each of these modalities lends itself to different forms of extended reality. This case study looks at using a virtual reality (VR) classroom for online synchronous weekly meetings in three upper-division (junior- and senior-level) higher education design classes at a university. Social web VR for a classroom can offer a collaborative, real-time environment that bridges the gap between virtual video conferences and gaming platforms. This paper examines how to use social web VR in a virtual classroom. Mixed methods were used to collect usability data in an end-of-semester survey. The system usability scale (SUS) and several qualitative questions gathered student feedback. Overall, the students enjoyed using the VR classroom, but audio issues were the most significant pain point. While the overall response was positive, this study addresses several areas for improvement from both the student and instructor perspectives. Social, web-based VR offers promising potential. Designing a human-centered virtual environment and considering all participants' total user experience is critical to a successful learning tool.
Virtual Worlds doi: 10.3390/virtualworlds2030013
Authors: Shanna Fealy Pauletta Irwin Zeynep Tacgin Zi Siang See Donovan Jones
This concept paper explores the use of extended reality (XR) technology in nursing education, with a focus on three case studies developed at one regional university in Australia. Tertiary education institutions that deliver nursing curricula are facing challenges around the provision of simulated learning experiences that prepare students for the demands of real-world professional practice. To overcome these barriers, XR technology, which includes augmented, mixed, and virtual reality (AR, MR, VR), offers a diverse media platform for the creation of immersive, hands-on learning experiences, situated within virtual environments that can reflect some of the dynamic aspects of real-world healthcare environments. This document analysis explores the use of XR technology in nursing education, through the narrative and discussion of three applied-use cases. The collaboration and co-design between nursing educators and XR technology experts allows for the creation of synchronous and asynchronous learning experiences beyond traditional nursing simulation media, better preparing students for the demands of real-world professional practice.
Virtual Worlds doi: 10.3390/virtualworlds2030012
Authors: Bita Astaneh Asl Wendy Nora Rummerfield Carrie Sturts Dossick
Multidisciplinary design and construction teams are challenged to communicate and coordinate across complex building systems, including architectural, structural, mechanical, electrical, and piping (MEP). To support this coordination, disciplinary 3D models are combined and coordinated before installation. Studies show that besides the use of 3D models, industry professionals sketch building components to discuss coordination issues and find resolutions that require them to recall the building components in the model. In current practices, 3D models are explored with Building Information Modeling (BIM) tools presented on 2D screens, while Virtual Reality (VR) can provide users with an immersive environment to explore. This paper presents the results of an experiment that studied the effects of VR’s immersive environment on the participants’ complex MEP system recall compared to BIM via sketching. The comparison criteria were the 3D geometry properties of the piping system and the users’ self-awareness in the model categorized under color, shape, dimension, piping, and viewpoint. The results showed significant improvement in recall of shape, dimension, and piping when the model was explored in VR.
Virtual Worlds doi: 10.3390/virtualworlds2020011
Authors: Jacob Kritikos Alexandros Makrypidis Aristomenis Alevizopoulos Georgios Alevizopoulos Dimitris Koutsouris
Brain–Machine Interfaces (BMIs) have made significant progress in recent years; however, there are still several application areas in which improvement is needed, including the accurate prediction of body movement during Virtual Reality (VR) simulations. To achieve a high level of immersion in VR sessions, it is important to have bidirectional interaction, which is typically achieved through the use of movement-tracking devices, such as controllers and body sensors. However, it may be possible to eliminate the need for these external tracking devices by directly acquiring movement information from the motor cortex via electroencephalography (EEG) recordings, potentially leading to more seamless and immersive VR experiences. Numerous studies have investigated EEG recordings during movement. While the majority of these studies have focused on movement prediction based on brain signals, a smaller number have focused on how to utilize such predictions during VR simulations, suggesting that further research is needed to fully understand the potential for using EEG to predict movement in VR. In this research, we propose two neural network decoders designed to predict pre-arm-movement and during-arm-movement behavior based on brain activity recorded during the execution of VR simulation tasks. For both decoders, we employ a Long Short-Term Memory model. The study's findings are highly encouraging, lending credence to the premise that this technology could replace external tracking devices.
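To make the recurrent unit behind these decoders concrete, here is a pure-Python sketch of a single LSTM timestep with scalar states. All weights, sizes, and the single-step framing are illustrative assumptions; the paper's decoders are full LSTM networks trained on multichannel EEG sequences, not this toy cell.

```python
import math

# Toy LSTM cell step (scalar input and hidden state) showing the standard
# gate equations; purely illustrative of the model family the paper uses.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM timestep.

    x: input at time t; h_prev, c_prev: previous hidden and cell state.
    W, U, b: dicts of per-gate parameters keyed by 'i', 'f', 'o', 'g'.
    Returns the new (h, c).
    """
    i = sigmoid(W['i'] * x + U['i'] * h_prev + b['i'])    # input gate
    f = sigmoid(W['f'] * x + U['f'] * h_prev + b['f'])    # forget gate
    o = sigmoid(W['o'] * x + U['o'] * h_prev + b['o'])    # output gate
    g = math.tanh(W['g'] * x + U['g'] * h_prev + b['g'])  # candidate state
    c = f * c_prev + i * g        # cell state carries long-term memory
    h = o * math.tanh(c)          # hidden state is the step's output
    return h, c
```

The gating structure is what lets such decoders retain movement-relevant context across an EEG window rather than reacting to single samples.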
]]>Virtual Worlds doi: 10.3390/virtualworlds2020010
Authors: Anastasios Theodoropoulos Dimitra Stavropoulou Panagiotis Papadopoulos Nikos Platis George Lepouras
The popularity of VR technology has led to the development of public VR setups in entertainment venues, museums, and exhibitions. Interactive VR CAVEs can create compelling gaming experiences for both players and spectators, with a strong sense of presence and emotional engagement. This paper presents the design and development of a VR interactive environment called MobiCave (room-scale in size) that uses motion-tracking systems for an immersive experience. A user study was conducted in the MobiCave to gather feedback on participants’ experience with a demo game. The study examined factors such as immersion, presence, flow, perceived usability, and motivation among both players and bystanders. Results showed promising findings for both fun and learning purposes, and the experience was found to be highly immersive. This study suggests that interactive VR setups for public use could be a motivating opportunity for creating new forms of social interaction and collaboration in gaming.
]]>Virtual Worlds doi: 10.3390/virtualworlds2020009
Authors: Priya Kartick Alvaro Uribe-Quevedo David Rojas
Virtual reality (VR) is gaining popularity as an educational, training, and healthcare tool due to its decreasing cost. Because of high user variability in terms of ergonomics, 3D manipulation techniques (3DMTs) for 3D user interfaces (3DUIs) must be adjustable for comfort and usability, thereby avoiding interactions that only work for the typical user. Given the role of the upper limb (i.e., arm, forearm, and hand) in interacting with virtual objects, research has led to the development of 3DMTs facilitating isomorphic (i.e., an equal translation of controller movement) and non-isomorphic (i.e., adjusted controller visuals in VR) interactions. Although advances in 3DMTs have been proven to facilitate VR interactions, user variability has not been addressed in terms of ergonomics. This work introduces Piecewise, an upper-limb-customized non-isomorphic 3DMT for 3DUIs that accounts for user variability by incorporating upper-limb ergonomics and the comfortable range of motion. Our research investigates the effects of upper-limb ergonomics on completion time, skipped objects, percentage of reach, upper-body lean, engagement, and presence levels in comparison to common 3DMTs, such as normal (physical reach), object translation, and reach-bounded non-linear input amplification (RBNLIA). A 20-person within-subjects study revealed that upper-limb ergonomics influence the execution and perception of tasks in virtual reality. The proposed Piecewise approach ranked second behind the RBNLIA method, although all 3DMTs were evaluated as usable, engaging, and favorable in general. The implications of our research are significant because upper-limb ergonomics can affect VR performance for a broader range of users as the technology becomes widely available and adopted for accessibility and inclusive design, creating opportunities for additional customizations that can affect the VR user experience.
]]>Virtual Worlds doi: 10.3390/virtualworlds2020008
Authors: Yutaro Ogawa Sotaro Shimada
Mixed-reality (MR) environments, in which virtual objects are overlaid on the real environment and shared with peers through a transparent optical head-mounted display, are considered well suited for collaborative work. However, no studies have provided neuroscientific evidence of their effectiveness. In contrast, inter-brain synchronization has been repeatedly observed in cooperative tasks and can be used as an index of the quality of cooperation. In this study, we used electroencephalography (EEG) to simultaneously measure the brain activity of pairs of participants, a technique known as hyperscanning, during a cooperative motor task to investigate whether inter-brain synchronization would also be observed in a shared MR environment. The participants were presented with virtual building blocks to grasp and build up an object cooperatively with a partner or individually. We found that inter-brain synchronization in the cooperative condition was stronger than that in the individual condition (F(1, 15) = 4.70, p < 0.05). In addition, there was a significant correlation between task performance and inter-brain synchronization in the cooperative condition (rs = 0.523, p < 0.05). Therefore, the shared MR environment was sufficiently effective to evoke inter-brain synchronization, which reflects the quality of cooperation. This study offers a promising neuroscientific method to objectively measure the effectiveness of MR technology.
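The synchronization index is not detailed in this abstract; one widely used measure of inter-brain synchronization in EEG hyperscanning is the phase locking value (PLV). A minimal sketch, assuming instantaneous phases (in radians) have already been extracted from each participant's signal (e.g., via a Hilbert transform):

```python
import cmath

def phase_locking_value(phases_a, phases_b):
    """Phase locking value between two phase time series (radians).

    PLV = |mean over t of exp(i * (phi_a(t) - phi_b(t)))|; 1.0 means a
    perfectly constant phase relation, values near 0 mean no consistent
    relation between the two signals.
    """
    n = len(phases_a)
    total = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(total) / n

# A constant phase lag at every sample gives perfect locking.
locked = phase_locking_value([0.0, 1.0, 2.0], [0.5, 1.5, 2.5])
```

In practice the PLV is computed per frequency band and electrode pair, then compared between conditions as in the F-test above.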
]]>Virtual Worlds doi: 10.3390/virtualworlds2020007
Authors: Irene Suh Tess McKinney Ka-Chun Siu
As virtual and augmented reality simulation technologies advance, their use in medicine is becoming widespread. Advanced virtual and augmented systems coupled with a complex interactive, immersive environment create a metaverse. The metaverse enables us to connect with others in a virtual world free of spatial restrictions and time constraints. In education, it allows collaboration among peers and educators in an immersive 3D environment that can imitate an actual classroom setting with learning tools. Metaverse technology enables visualization of virtual 3D structures, facilitates collaboration and small group activities, improves mentor–mentee interactions, provides opportunities for self-directed learning experiences, and helps develop teamwork skills. The metaverse will be adopted rapidly in healthcare, boost digitalization, and grow in use in surgical procedures and medical education. The potential advantages of using the metaverse in diagnosing and treating patients are tremendous. This perspective paper describes the current state of the technology in the medical field and proposes potential research directions to harness the benefits of the metaverse in medical education, research, and patient care. It aims to spark interest and discussion in the application of metaverse technology in healthcare and inspire further research in this area.
]]>Virtual Worlds doi: 10.3390/virtualworlds2020006
Authors: Junshan Liu Salman Azhar Danielle Willkens Botao Li
Heritage Building Information Modeling (HBIM) is an essential technology for heritage documentation, conservation, and management. It enables people to understand, archive, advertise, and virtually reconstruct their built heritage. Creating highly accurate HBIM models requires the use of several reality capture tools, such as terrestrial laser scanning (TLS), photogrammetry, and unmanned aerial vehicles (UAVs). However, the existing literature has not explicitly reviewed the applications and impacts of TLS in implementing HBIM. This paper uses the PRISMA protocol to present a systematic review of TLS utilization in capturing reality data in order to assess the status of TLS applications for HBIM and identify knowledge gaps on the topic. A thorough examination of the 58 selected articles revealed the state-of-the-art practices for utilizing static TLS technology for surveying and for processing captured TLS data to develop HBIM models. Moreover, the absence of guidelines for using static TLS surveys for HBIM data acquisition, the lack of robust automated frameworks for producing and transferring 3D geometries and their attributes from TLS data to BIM entities, and the under-utilized application of TLS for long-term monitoring and change detection were identified as gaps in knowledge. The findings of this research provide stakeholders with a solid grasp of static TLS for HBIM and thereby lay the foundation for further research, strategies, and scientific solutions for improving the utilization of TLS when documenting heritage structures and developing HBIM.
]]>Virtual Worlds doi: 10.3390/virtualworlds2010005
Authors: Panagiotis E. Antoniou Matthew Pears Eirini C. Schiza Fotos Frangoudes Constantinos S. Pattichis Heather Wharrad Panagiotis D. Bamidis Stathis Th. Konstantinidis
Immersive experiential technologies find fertile ground to grow and support healthcare education. Virtual, Augmented, and Mixed Reality (VR/AR/MR) have proven to benefit both the educational and the affective state of healthcare students, increasing engagement. However, there is a lack of guidance for healthcare stakeholders on developing and integrating virtual reality resources into healthcare training. Thus, the authors applied Bardach’s Eightfold Policy Analysis Framework to critically evaluate existing protocols and determine whether they are inconsistent, ineffective, or result in uncertain outcomes, following systematic pathways from concepts to decision-making. Co-creative VR resource development emerged as the preferred method. Best practices for co-creating VR Reusable e-Resources identified co-creation as an effective pathway to the prolific use of immersive media in healthcare education. Co-creation should be considered in conjunction with a training framework to enhance educational quality. Iterative cycles engaging all stakeholders enhance educational quality, while co-creation is central to the quality assurance process, both for technical and topical fidelity and for tailoring resources to learners’ needs. Co-creation itself can be seen as a bespoke learning modality. Despite prior research supporting co-creation in immersive resource development, there were no established guidelines for best practices; this paper provides the first body of evidence for co-creative VR resource development as a valid and strengthening method for healthcare immersive content development.
]]>Virtual Worlds doi: 10.3390/virtualworlds2010004
Authors: Jianing Qi Hao Tang Zhigang Zhu
Online classes are typically conducted by using video conferencing software such as Zoom, Microsoft Teams, and Google Meet. Research has identified drawbacks of online learning, such as “Zoom fatigue”, characterized by distractions and lack of engagement. This study presents the CUNY Affective and Responsive Virtual Environment (CARVE) Hub, a novel virtual reality hub that uses a facial emotion classification model to generate emojis for affective and informal responsive interaction in a 3D virtual classroom setting. A web-based machine learning model is employed for facial emotion classification, enabling students to communicate four basic emotions live through automated web camera capture in a virtual classroom without activating their cameras. The experiment is conducted in undergraduate classes on both Zoom and CARVE, and the results of a survey indicate that students have a positive perception of interactions in the proposed virtual classroom compared with Zoom. Correlations between automated emojis and interactions are also observed. This study discusses potential explanations for the improved interactions, including a decrease in pressure on students when they are not showing faces. In addition, video panels in traditional remote classrooms may be useful for communication but not for interaction. Students favor features in virtual reality, such as spatial audio and the ability to move around, with collaboration being identified as the most helpful feature.
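The abstract does not name the four basic emotions or the exact emoji set, so both are assumptions in the sketch below; it only illustrates the final classifier step of mapping model outputs to an emoji for display in a virtual classroom:

```python
import math

# The emotion labels and emoji below are assumptions for illustration;
# the study reports "four basic emotions" without naming them here.
EMOJI = {"happy": "😀", "sad": "😢", "surprised": "😮", "angry": "😠"}

def softmax(logits):
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def emoji_for(logits, labels=("happy", "sad", "surprised", "angry")):
    """Map classifier outputs (one logit per emotion) to the emoji of
    the highest-probability class, as a web-based model's head might."""
    probs = softmax(logits)
    best = max(range(len(labels)), key=probs.__getitem__)
    return labels[best], EMOJI[labels[best]]

label, glyph = emoji_for([2.0, 0.1, 0.5, -1.0])
```

The point of such a design is that only the discrete emoji, never the camera frame, leaves the student's browser.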
]]>Virtual Worlds doi: 10.3390/virtualworlds2010003
Authors: Yee Sye Lee Ali Rashidi Amin Talei Huai Jian Beh Sina Rashidi
While VR-based training has been proven to improve learning effectiveness over conventional methods, there is little research on how the choice of training mode affects that effectiveness. This study investigates the learning effectiveness of engineering students under different training modes in VR-based construction design training. Three VR scenarios with varying degrees of immersiveness were developed based on Dale’s cone of learning experience: (1) audio-visual-based training, (2) interactive-based training, and (3) contrived hands-on experience training. Sixteen students with varying backgrounds participated in this study. The results indicate a positive correlation between learning effectiveness and the degree of immersiveness, with mean scores of 77.33%, 81.33%, and 82.67% in the three training scenarios, respectively. Participants with lower academic performance tended to perform significantly better in audio-visual and interactive-based training, while participants with gaming experience tended to outperform that group. Results also showed that participants with less gaming experience benefited the most from hands-on VR training. The findings suggest that the general audience retained the most information via hands-on VR training; however, training scenarios should be contextualized to the target group to maximize learning effectiveness.
]]>Virtual Worlds doi: 10.3390/virtualworlds2010002
Authors: Panagiotis Kourtesis Josie Linnell Rayaan Amir Ferran Argelaguet Sarah E. MacPherson
Cybersickness is a drawback of virtual reality (VR), which also affects the cognitive and motor skills of users. The Simulator Sickness Questionnaire (SSQ) and its variant, the Virtual Reality Sickness Questionnaire (VRSQ), are two tools that measure cybersickness. However, both tools suffer from important limitations which raise concerns about their suitability. Two versions of the Cybersickness in VR Questionnaire (CSQ-VR), a paper-and-pencil and a 3D–VR version, were developed. The validation of the CSQ-VR and a comparison against the SSQ and the VRSQ were performed. Thirty-nine participants were exposed to three rides with linear and angular accelerations in VR. Assessments of cognitive and psychomotor skills were performed at baseline and after each ride. The validity of both versions of the CSQ-VR was confirmed. Notably, CSQ-VR demonstrated substantially better internal consistency than both SSQ and VRSQ. Additionally, CSQ-VR scores had significantly better psychometric properties in detecting a temporary decline in performance due to cybersickness. Pupil size was a significant predictor of cybersickness intensity. In conclusion, the CSQ-VR is a valid assessment of cybersickness with superior psychometric properties to SSQ and VRSQ. The CSQ-VR enables the assessment of cybersickness during VR exposure, and it benefits from examining pupil size, a biomarker of cybersickness.
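Internal consistency of questionnaires such as the CSQ-VR is conventionally reported as Cronbach's alpha. A minimal computation (illustrative only, not the authors' analysis code):

```python
def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)  # population variance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: one list per questionnaire item, each holding one score per
    respondent (all lists the same length). Alpha near 1.0 means the
    items measure the same underlying construct consistently.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    item_var = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Two perfectly correlated items yield the maximum alpha of 1.0.
alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3]])
```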
]]>Virtual Worlds doi: 10.3390/virtualworlds2010001
Authors: Gilda A. de Assis Alexandre F. Brandão Ana G. D. Correa Gabriela Castellano
Augmented reality (AR) tools have been investigated with promising outcomes in rehabilitation. Recently, some studies have addressed the neuroplasticity effects induced by this type of therapy using functional connectivity obtained from resting-state functional magnetic resonance imaging (rs-fMRI). This work aims to perform an initial assessment of possible changes in brain functional connectivity associated with the use of NeuroR, an AR system for upper limb motor rehabilitation of poststroke participants. An experimental study with a case series is presented. Three chronic stroke participants with left hemiparesis were enrolled in the study. They received eight sessions with NeuroR providing shoulder rehabilitation exercises. Measurements of range of motion (ROM) were obtained at the beginning and end of each session, and rs-fMRI data were acquired at baseline (pretest) and after the last training session (post-test). Functional connectivity analyses of the rs-fMRI data were performed using a seed placed at the noninjured motor cortex. ROM increased in the two patients who presented spastic hemiparesis in the left upper limb, with a change in muscle tone, and remained unchanged (at zero degrees) in the patient with the highest degree of impairment, who showed flaccid hemiplegia. All participants had higher mean connectivity values in the ipsilesional brain regions associated with motor function at post-test than at pretest. Our findings show the potential of the NeuroR system to promote neuroplasticity related to AR-based therapy for motor rehabilitation in stroke participants.
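Seed-based functional connectivity of the kind described is commonly computed as the Pearson correlation between the seed's BOLD time series and every other region's series; the sketch below is a generic illustration with made-up region names and data, not the study's pipeline:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def seed_connectivity(seed_ts, region_ts):
    """Correlate a seed BOLD time series with each region's series; a
    higher r is read as stronger functional connectivity to the seed."""
    return {name: pearson_r(seed_ts, ts) for name, ts in region_ts.items()}

# Made-up time series: one region tracks the seed, one anti-correlates.
conn = seed_connectivity([1.0, 2.0, 3.0, 4.0],
                         {"ipsilesional_m1": [2.0, 4.0, 6.0, 8.0],
                          "contralesional_m1": [4.0, 3.0, 2.0, 1.0]})
```

Pretest/post-test comparisons like the one reported would then be made on these per-region r values (typically Fisher z-transformed first).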
]]>Virtual Worlds doi: 10.3390/virtualworlds1020009
Authors: Andrei Torres Bill Kapralos Celina Da Silva Eva Peisachovich Adam Dubrowski
Serious games, that is, games whose primary purpose is education and training, are gaining widespread popularity in higher education contexts and have been associated with increased learner memory retention, engagement, and motivation, even among learners with special needs. Despite these benefits, serious games have fixed scenarios that cannot be easily modified, leading to predictable and dull experiences that can reduce user engagement. Therefore, there is a demand for tools that allow educators to create new modifications and customize serious game scenarios, avoiding the fixed-scenario problem and a one-size-fits-all approach. Here, we present and detail our novel virtual serious games authoring platform, Moirai, which takes a no-code approach, allowing educators with limited (or no) prior programming experience to use a diagram-based interface to author and customize serious games focused on developing decision-making and communication skills. We describe two case studies, each involving serious games for nursing education (one for mental health education and the other for internationally educated nurses). The usability of both games was evaluated using the system usability scale (SUS) questionnaire, and both achieved above-average usability scores.
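The system usability scale (SUS) mentioned above has a standard scoring rule: each odd-numbered (positively worded) item contributes its response minus 1, each even-numbered (negatively worded) item contributes 5 minus its response, and the sum is multiplied by 2.5 to give a 0 to 100 score, with roughly 68 conventionally treated as average. A minimal implementation of that standard rule (not the study's analysis code):

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses,
    where responses[0] is item 1. Returns a value from 0 to 100."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = 0
    for item, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5

# Strong agreement with odd items and strong disagreement with even
# items is the most favorable response pattern; all-neutral gives 50.
best = sus_score([5, 1] * 5)
neutral = sus_score([3] * 10)
```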
]]>Virtual Worlds doi: 10.3390/virtualworlds1020008
Authors: Nathan O. Conner Hannah R. Freeman J. Adam Jones Tony Luczak Daniel Carruth Adam C. Knight Harish Chander
The utilization of commercially available virtual reality (VR) environments has increased over the last decade, yet motion sickness while using VR devices is still reported at a higher-than-acceptable rate. Virtual reality induced symptoms and effects (VRISE) are considered the largest barrier to widespread usage. Current measurement methods are used uniformly across studies but are subjective and were not designed for VR; because VRISE and other motion sickness symptom profiles are similar but not identical, objective physiological and biomechanical measures, along with subjective perception measures correlated with VRISE, should be used instead. Many physiological, biomechanical, and subjective changes evoked by VRISE have been identified, but it is difficult to claim that these changes are directly caused by VRISE because numerous other factors are known to alter these variables’ resting states. Several theories exist regarding the causation of VRISE, among them the sensory conflict theory, which attributes symptoms to differences between expected and actual sensory input; reducing these conflicts has been shown to decrease VRISE. Studies of user characteristics contributing to VRISE severity have shown inconsistent results. Guidelines on field of view (FOV), resolution, and frame rate have been developed to prevent VRISE, and motion-to-photon latency also contributes to these symptoms and effects. Intensity of content is positively correlated with VRISE, as are the speed of navigation and oscillatory displays. Longer immersion produces greater VRISE, though adaptation has been shown to occur across multiple immersions, and the duration of post-immersion VRISE is related to the user’s history of motion sickness and speed of onset. Cognitive changes from VRISE include impaired reaction time and eye-hand coordination. Methods to lower VRISE have shown some success.
Postural control presents a potential objective variable for predicting and monitoring VRISE intensity. Further research is needed to lower the rate of VRISE symptom occurrence, which remains a limitation of use.
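One common objective sway metric derived from postural control data is the center-of-pressure (COP) path length; the sketch below is a generic illustration of that metric, not a method taken from the reviewed studies:

```python
import math

def sway_path_length(cop_xy):
    """Total center-of-pressure excursion over a trial: the summed
    Euclidean distance between consecutive (x, y) samples, in the same
    units as the input. Larger values indicate more postural sway."""
    return sum(math.dist(p, q) for p, q in zip(cop_xy, cop_xy[1:]))

# A made-up COP trace that walks around a unit square.
path = sway_path_length([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0),
                         (0.0, 1.0), (0.0, 0.0)])
```

A monitoring system could compute this over a sliding window from a balance board or headset pose data and flag rising sway as a possible VRISE indicator.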
]]>Virtual Worlds doi: 10.3390/virtualworlds1020007
Authors: Mariapina Trunfio Simona Rossi
The metaverse has increasingly attracted the attention of academics and practitioners, who attempt to better understand its theoretical foundations and business application areas. This paper provides an overarching picture of what has already been studied in academic investigations of the metaverse, adopting a systematic literature review and a bibliometric analysis. The study designs a thematic map of metaverse research and proposes four streams for future investigation (metaverse technologies, metaverse areas of application, marketing and consumer behaviour, and sustainability) that academics and practitioners should explore. It also contributes towards a systematic advancement of knowledge in the field, provides preliminary theoretical contributions by shedding light on future research avenues, and offers insights for business.
]]>Virtual Worlds doi: 10.3390/virtualworlds1010006
Authors: Hubert Cecotti
Fully immersive virtual reality (VR) applications have changed the way people access cultural heritage, from visiting virtual museums containing large collections of paintings to exploring ancient buildings. In this paper, we review the software currently available that deals with cultural heritage in fully immersive virtual reality. The review goes beyond technologies that were available before virtual reality headsets, at a time when “virtual” was simply a synonym for the application of digital technologies to cultural heritage. We group these applications by their content, from generic art galleries and museums to applications that focus on a single artwork or a single artist. Furthermore, we review different ways to assess the performance of such applications with workload, usability, flow, and potential VR symptoms surveys. This paper highlights the progress in the implementation of applications that provide immersive learning experiences related to cultural heritage, from 360° images to photogrammetry and 3D models. The paper shows the discrepancy between the software available to the general audience on various VR headsets and scholarly work dealing with cultural heritage in VR.
]]>Virtual Worlds doi: 10.3390/virtualworlds1010005
Authors: Takara E. Truong Nathaniel G. Luttmer Ebsa R. Eshete Alia B. M. Zaki Derek D. Greer Tren J. Hirschi Benjamin R. Stewart Cherry A. Gregory Mark A. Minor
The purpose of this study was to understand how various aspects of virtual reality and extended reality, specifically environmental displays (e.g., wind, heat, smell, and moisture), audio, and graphics, can be exploited to cause a good startle or to prevent one. The TreadPort Active Wind Tunnel (TPAWT) was modified to include several haptic environmental displays (heat, wind, olfactory, and mist), resulting in the Multi-Sensory TreadPort Active Wind Tunnel (MS.TPAWT). In total, 120 participants played a VR game that contained three startling situations. Audio and environmental effects were varied in a two-way analysis of variance (ANOVA) study. Muscle activity levels of the orbicularis oculi, sternocleidomastoid, and trapezius were measured using electromyography (EMG). Participants then answered surveys on their perceived level of startle for each situation. We show that adjusting audio and environmental levels can alter participants’ physiological and psychological responses to the virtual world. Notably, audio is key for eliciting stronger responses and perceptions of the startling experiences, but environmental displays can be used either to amplify those responses or to diminish them. The results also highlight that traditional eye-muscle measurements of startle responses may not be valid for strong environmental displays, suggesting that alternative muscle groups should be used. In practice, these findings will allow designers to control participants’ responses by adjusting these settings.
]]>Virtual Worlds doi: 10.3390/virtualworlds1010004
Authors: Sarker Monojit Asish Arun K. Kulshreshth Christoph W. Borst
Emerging Virtual Reality (VR) displays with embedded eye trackers are becoming commodity hardware (e.g., the HTC Vive Pro Eye). Eye-tracking data can be utilized for several purposes, including gaze monitoring, privacy protection, and user authentication/identification. Identifying users is an integral part of many applications due to security and privacy concerns. In this paper, we explore methods and eye-tracking features that can be used to identify users. Prior VR researchers have explored machine learning on motion-based data (such as body motion, head tracking, eye tracking, and hand tracking data) to identify users. Such systems usually require an explicit VR task and many features to train the machine learning model for user identification. We propose a system that identifies users from minimal eye-gaze-based features without designing any identification-specific tasks. We collected gaze data from an educational VR application and tested our system with two machine learning (ML) models, random forest (RF) and k-nearest-neighbors (kNN), and two deep learning (DL) models, convolutional neural networks (CNN) and long short-term memory (LSTM). Our results show that the ML and DL models could identify users with over 98% accuracy using only six simple eye-gaze features. We discuss our results, their implications for security and privacy, and the limitations of our work.
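As a hedged illustration of the kNN half of such a pipeline (the gaze features and user names below are invented; the study's six features are not enumerated in this abstract), a minimal nearest-neighbor identifier:

```python
import math
from collections import Counter

def knn_identify(train, query, k=3):
    """Identify a user from gaze features via k-nearest neighbors:
    majority vote among the k training vectors closest (Euclidean
    distance) to the query vector.

    train: list of (feature_vector, user_id) pairs.
    """
    ranked = sorted(train, key=lambda pair: math.dist(pair[0], query))
    votes = Counter(user for _, user in ranked[:k])
    return votes.most_common(1)[0][0]

# Invented gaze features, e.g. (mean fixation duration s, saccade amp deg).
train = [((0.20, 2.1), "alice"), ((0.22, 2.0), "alice"),
         ((0.45, 5.3), "bob"), ((0.47, 5.0), "bob")]
who = knn_identify(train, (0.21, 2.05), k=3)
```

A real system would normalize each feature before computing distances so that no single feature's scale dominates the vote.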
]]>Virtual Worlds doi: 10.3390/virtualworlds1010003
Authors: Yaser Maddahi Siqi Chen
Industries are increasing their adoption of digital twins for their unprecedented ability to control physical entities and help manage complex systems by integrating multiple technologies. Recently, the dental industry has seen several technological advancements, but it is uncertain if dental institutions are making an effort to adopt digital twins in their education. In this work, we employ a mixed-method approach to investigate the added value of digital twins for remote learning in the dental industry. We examine the extent of digital twin adoption by dental institutions for remote education, shed light on the concepts and benefits it brings, and provide an application-based roadmap for more extended adoption. We report a review of digital twins in the healthcare industry, followed by identifying use cases and comparing them with use cases in other disciplines. We compare reported benefits, the extent of research, and the level of digital twin adoption by industries. We distill the digital twin characteristics that can add value to the dental industry from the examined digital twin applications in remote learning and other disciplines. Then, inspired by digital twin applications in different fields, we propose a roadmap for digital twins in remote education for dental institutes, consisting of examples of growing complexity. We conclude this paper by identifying the distinctive characteristics of dental digital twins for remote learning.
]]>Virtual Worlds doi: 10.3390/virtualworlds1010002
Authors: Anton Nijholt
Books, movies, and performances create virtual worlds [...]
]]>Virtual Worlds doi: 10.3390/virtualworlds1010001
Authors: Sarah A. Allman Joanna Cordy James P. Hall Victoria Kleanthous Elizabeth R. Lander
360° 3D virtual reality (VR) video is used in education to bring immersive environments into a teaching space for learners to experience in a safe and controlled way. Within 360° 3D VR video, informational elements such as additional text, labelling and directions can be easily incorporated to augment such content. Despite this, the usefulness of this information for learners has not yet been determined. This article presents a study which aims to explore the usefulness of labelling and text within 360° stereoscopic 3D VR video content and how this contributes to the user experience. Postgraduate students from a university in the UK (n = 30) were invited to take part in the study to evaluate VR video content augmented with labels and summary text or neither of these elements. Interconnected themes associated with the user experience were identified from semi-structured interviews. From this, it was established that the incorporation of informational elements resulted in the expansion of the field of view experienced by participants. This “augmented signposting” may facilitate a greater spatial awareness of the virtual environment. Four recommendations for educators developing 360° stereoscopic 3D VR video content are presented.
]]>