Multimodal Technol. Interact., Volume 7, Issue 2 (February 2023) – 16 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
27 pages, 1935 KiB  
Review
A Review of Design and Evaluation Practices in Mobile Text Entry for Visually Impaired and Blind Persons
by Andreas Komninos, Vassilios Stefanis and John Garofalakis
Multimodal Technol. Interact. 2023, 7(2), 22; https://doi.org/10.3390/mti7020022 - 17 Feb 2023
Cited by 2 | Viewed by 1653
Abstract
Millions of people with vision impairment or vision loss face considerable barriers in using mobile technology and services due to the difficulty of text entry. In this paper, we review related studies involving the design and evaluation of novel prototypes for mobile text entry for persons with vision loss or impairment. We identify the practices and standards of the research community and compare them against the practices in research for non-impaired persons. We find that there are significant shortcomings in the methodological and result-reporting practices in both population types. In highlighting these issues, we hope to inspire more and better quality research in the domain of mobile text entry for persons with and without vision impairment.

23 pages, 25389 KiB  
Article
Simulating Wearable Urban Augmented Reality Experiences in VR: Lessons Learnt from Designing Two Future Urban Interfaces
by Tram Thi Minh Tran, Callum Parker, Marius Hoggenmüller, Luke Hespanhol and Martin Tomitsch
Multimodal Technol. Interact. 2023, 7(2), 21; https://doi.org/10.3390/mti7020021 - 16 Feb 2023
Cited by 3 | Viewed by 3340
Abstract
Augmented reality (AR) has the potential to fundamentally change how people engage with increasingly interactive urban environments. However, many challenges exist in designing and evaluating these new urban AR experiences, such as technical constraints and safety concerns associated with outdoor AR. We contribute to this domain by assessing the use of virtual reality (VR) for simulating wearable urban AR experiences, allowing participants to interact with future AR interfaces in a realistic, safe and controlled setting. This paper describes two wearable urban AR applications (pedestrian navigation and autonomous mobility) simulated in VR. Based on a thematic analysis of interview data collected across the two studies, we find that the VR simulation successfully elicited feedback on the functional benefits of AR concepts and the potential impact of urban contextual factors, such as safety concerns, attentional capacity, and social considerations. At the same time, we highlight the limitations of this approach in terms of assessing the AR interface’s visual quality and providing exhaustive contextual information. The paper concludes with recommendations for simulating wearable urban AR experiences in VR.

14 pages, 1342 KiB  
Article
How Can One Share a User’s Activity during VR Synchronous Augmentative Cooperation?
by Thomas Rinnert, James Walsh, Cédric Fleury, Gilles Coppin, Thierry Duval and Bruce H. Thomas
Multimodal Technol. Interact. 2023, 7(2), 20; https://doi.org/10.3390/mti7020020 - 14 Feb 2023
Cited by 2 | Viewed by 1485
Abstract
Collaborative virtual environments allow people to work together while being distant. At the same time, empathic computing aims to create a deeper shared understanding between people. In this paper, we investigate how to improve the perception of distant collaborative activities in a virtual environment by sharing users’ activity. We first propose several visualization techniques for sharing the activity of multiple users. We selected one of these techniques for a pilot study and evaluated its benefits in a controlled experiment using a virtual reality adaptation of the NASA MATB-II (Multi-Attribute Task Battery). Results show (1) that instantaneous indicators of users’ activity are preferred to indicators that continuously display the progress of a task, and (2) that participants are more confident in their ability to detect users needing help when using activity indicators.
(This article belongs to the Special Issue 3D Human–Computer Interaction (Volume II))

16 pages, 1983 KiB  
Article
Assessing Heuristic Evaluation in Immersive Virtual Reality—A Case Study on Future Guidance Systems
by Sebastian Stadler, Henriette Cornet and Fritz Frenkler
Multimodal Technol. Interact. 2023, 7(2), 19; https://doi.org/10.3390/mti7020019 - 09 Feb 2023
Cited by 1 | Viewed by 2090
Abstract
A variety of evaluation methods for user interfaces (UIs) exist, such as usability testing, cognitive walkthrough, and heuristic evaluation. However, UIs such as guidance systems at transit hubs must be evaluated in their intended application field to allow the effective and valid identification of usability flaws. What if evaluations are not feasible in real environments, or laboratory conditions cannot be ensured? Based on adapted heuristics, in the present study, the method of heuristic evaluation is combined with immersive Virtual Reality (VR) for the identification of usability flaws of dynamic guidance systems (DGS) at transit hubs. The study involved usability evaluations of nine DGS concepts using the newly proposed method. The results show that, compared to computer-based heuristic evaluations, the use of immersive VR led to the identification of a greater number of “severe” usability flaws as well as more usability flaws overall. Within a qualitative assessment, immersive VR is validated as a suitable tool for conducting heuristic evaluations, offering significant advantages such as the creation of realistic experiences under laboratory conditions. Future work seeks to further prove the suitability of using immersive VR for heuristic evaluations and to compare the proposed method with other evaluative methods.
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)

25 pages, 2075 KiB  
Article
Designing to Leverage Presence in VR Rhythm Games
by Robert Dongas and Kazjon Grace
Multimodal Technol. Interact. 2023, 7(2), 18; https://doi.org/10.3390/mti7020018 - 09 Feb 2023
Cited by 2 | Viewed by 2145
Abstract
Rhythm games are known for their engaging gameplay and have gained renewed popularity with the adoption of virtual reality (VR) technology. While VR rhythm games have achieved commercial success, there is a lack of research on how and why they are engaging, and the connection between that engagement and immersion or presence. This study aims to understand how the design of two popular VR rhythm games, Beat Saber and Ragnarock, leverages presence to immerse players. Through a mixed-methods approach, utilising the Multimodal Presence Scale and a thematic analysis of open-ended questions, we discovered four mentalities which characterise user experiences: action, game, story and musical. We discuss how these mentalities can mediate presence and immersion, suggesting considerations for how designers can leverage this mapping for similar or related games.
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)

30 pages, 12525 KiB  
Article
Roadmap for the Development of EnLang4All: A Video Game for Learning English
by Isabel Machado Alexandre, Pedro Faria Lopes and Cynthia Borges
Multimodal Technol. Interact. 2023, 7(2), 17; https://doi.org/10.3390/mti7020017 - 03 Feb 2023
Cited by 1 | Viewed by 1696
Abstract
Nowadays, people are more predisposed to being self-taught due to the availability of online information. With digitalization, information appears not only in its conventional state, as blogs, articles, newspapers, or e-books, but also in more interactive and enticing ways. Video games have become a transmission vehicle for information and knowledge, but they require specific treatment in respect of their presentation and the way in which users interact with them. This treatment includes usability guidelines and heuristics that provide video game properties that are favorable to a better user experience, conducive to captivating the user and to assimilating the content. In this research, usability guidelines and heuristics, complemented with recommendations from educational video game studies, were gathered and analyzed for application to a video game for English language learning called EnLang4All, which was also developed in the scope of this project and evaluated in terms of its reception by users.

29 pages, 4538 KiB  
Article
Ranking Crossing Scenario Complexity for eHMIs Testing: A Virtual Reality Study
by Elena Fratini, Ruth Welsh and Pete Thomas
Multimodal Technol. Interact. 2023, 7(2), 16; https://doi.org/10.3390/mti7020016 - 02 Feb 2023
Viewed by 1593
Abstract
External human–machine interfaces (eHMIs) have the potential to benefit AV–pedestrian interactions. The majority of studies investigating eHMIs have used relatively simple traffic environments, i.e., a single pedestrian crossing in front of a single eHMI on a one-lane straight road. While this approach has proved to be efficient in providing an initial understanding of how pedestrians respond to eHMIs, it over-simplifies interactions, which will be substantially more complex in real-life circumstances. A process is illustrated in a small-scale study (N = 10) to rank different crossing scenarios by level of complexity. Traffic scenarios were first developed by varying traffic density, visual complexity of the road scene, road geometry, weather and visibility conditions, and presence of distractions. These factors have previously been shown to increase the difficulty and riskiness of the crossing task. The scenarios were then tested in a motion-based, virtual reality environment. Pedestrians’ perceived workload and objective crossing behaviour were measured as indirect indicators of the level of complexity of the crossing scenario. Sense of presence and simulator sickness were also recorded as a measure of the ecological validity of the virtual environment. The results indicated that some crossing scenarios were more taxing for pedestrians than others, such as those with road geometries where traffic approached from multiple directions. Further, the presence scores showed that the virtual environments were experienced as realistic. This paper concludes by proposing a “complex” environment for testing eHMIs under more challenging crossing circumstances.

29 pages, 3996 KiB  
Review
Research in Computational Expressive Music Performance and Popular Music Production: A Potential Field of Application?
by Pierluigi Bontempi, Sergio Canazza, Filippo Carnovalini and Antonio Rodà
Multimodal Technol. Interact. 2023, 7(2), 15; https://doi.org/10.3390/mti7020015 - 31 Jan 2023
Cited by 2 | Viewed by 2214
Abstract
In music, the interpreter manipulates the performance parameters in order to offer a sonic rendition of the piece that is capable of conveying specific expressive intentions. Since the 1980s, there has been growing interest in expressive music performance (EMP) and its computational modeling. This research field has two fundamental objectives: understanding the phenomenon of human musical interpretation and the automatic generation of expressive performances. Rule-based, statistical, machine learning, and deep learning approaches have been proposed, most of them devoted to the classical repertoire, in particular to piano pieces. In contrast, we introduce the role of expressive performance within popular music and the contemporary ecology of pop music production based on the use of digital audio workstations (DAWs) and virtual instruments. After an analysis of the tools related to expressiveness commonly available to modern producers, we present a detailed survey of research in the computational EMP field, highlighting the potential and limits of the existing literature with respect to the context of popular music, which by its nature does not completely overlap with the classical repertoire. In the concluding discussion, we suggest possible lines of future research in the field of computational expressiveness applied to pop music.

24 pages, 18483 KiB  
Article
Enhancing Operational Police Training in High Stress Situations with Virtual Reality: Experiences, Tools and Guidelines
by Olivia Zechner, Lisanne Kleygrewe, Emma Jaspaert, Helmut Schrom-Feiertag, R. I. Vana Hutter and Manfred Tscheligi
Multimodal Technol. Interact. 2023, 7(2), 14; https://doi.org/10.3390/mti7020014 - 31 Jan 2023
Cited by 7 | Viewed by 5136
Abstract
Virtual Reality (VR) provides great opportunities for police officers to train decision-making and acting (DMA) in cognitively demanding and stressful situations. This paper presents a summary of findings from a three-year project, including requirements collected from experienced police trainers and industry experts, and quantitative and qualitative results of human factor studies and field trials. Findings include advantages of VR training, such as the possibility to safely train high-risk situations in controllable and reproducible training environments, to include a variety of avatars that would be difficult to use in real-life training (e.g., vulnerable populations or animals), and to handle dangerous equipment (e.g., explosives), but also highlight challenges such as tracking, locomotion and intelligent virtual agents. The importance of strong alignment between training didactics and technical possibilities is highlighted and potential solutions are presented. Furthermore, training outcomes are transferable to real-world police duties and may apply to other domains that would benefit from simulation-based training.
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)

13 pages, 1497 KiB  
Perspective
Challenges and Trends in User Trust Discourse in AI Popularity
by Sonia Sousa, José Cravino and Paulo Martins
Multimodal Technol. Interact. 2023, 7(2), 13; https://doi.org/10.3390/mti7020013 - 31 Jan 2023
Cited by 2 | Viewed by 2562
Abstract
The Internet revolution in 1990, followed by the data-driven and information revolution, has transformed the world as we know it. Nowadays, what seemed 10 to 20 years ago to be a science fiction idea (i.e., machines dominating the world) is seen as possible. This revolution also brought a need for new regulatory practices in which user trust and artificial intelligence (AI) discourse have a central role. This work aims to clarify some misconceptions about user trust in AI discourse and to fight the tendency to design vulnerable interactions that lead to further breaches of trust, both real and perceived. Findings illustrate the lack of clarity in understanding user trust and its effects on computer science, especially in measuring user trust characteristics. It argues for clarifying those notions to avoid possible trust gaps and misinterpretations in AI adoption and appropriation.

16 pages, 3288 KiB  
Article
Velocity-Oriented Dynamic Control–Display Gain for Kinesthetic Interaction with a Grounded Force-Feedback Device
by Zhenxing Li, Jari Kangas and Roope Raisamo
Multimodal Technol. Interact. 2023, 7(2), 12; https://doi.org/10.3390/mti7020012 - 28 Jan 2023
Viewed by 1254
Abstract
Kinesthetic interaction is an important interaction method for virtual reality. Current kinesthetic interaction using a grounded force-feedback device, however, is still considered difficult and time-consuming because of the interaction difficulty in a three-dimensional space. Velocity-oriented dynamic control–display (CD) gain has been used to improve user task performance with pointing devices, such as the mouse. In this study, we extended the application of this technique to kinesthetic interaction and examined its effects on interaction speed, positioning accuracy and touch perception. The results showed that using this technique could improve interaction speed without affecting positioning accuracy in kinesthetic interaction. Velocity-oriented dynamic CD gain could negatively affect the touch perception of softness when large gains are used. However, it is promising and particularly suitable for kinesthetic tasks that do not require high accuracy in touch perception.
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality)
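The transfer function used in the study is specified in the article itself; as a rough, hypothetical sketch of the general idea behind velocity-oriented dynamic CD gain only (not the authors' implementation, and with purely illustrative thresholds and gain values), a linear velocity-to-gain mapping could look like this:

# Illustrative sketch of a velocity-oriented dynamic control-display (CD) gain.
# All constants and the linear mapping below are assumptions for illustration,
# not the transfer function evaluated in the paper.

def cd_gain(velocity_m_s: float,
            g_min: float = 1.0, g_max: float = 3.0,
            v_low: float = 0.05, v_high: float = 0.30) -> float:
    """Map hand velocity (m/s) in device space to a CD gain.

    Below v_low the gain stays at g_min (precise positioning); above v_high it
    saturates at g_max (fast travel); in between it is interpolated linearly.
    """
    if velocity_m_s <= v_low:
        return g_min
    if velocity_m_s >= v_high:
        return g_max
    t = (velocity_m_s - v_low) / (v_high - v_low)
    return g_min + t * (g_max - g_min)

def displayed_displacement(delta_device_m: float, velocity_m_s: float) -> float:
    """Scale a device-space displacement into display (virtual) space."""
    return cd_gain(velocity_m_s) * delta_device_m

# Example: a 1 cm hand movement at 0.20 m/s is amplified to 2.2 cm in the
# virtual scene, while the same movement at 0.02 m/s maps 1:1.
if __name__ == "__main__":
    print(displayed_displacement(0.01, 0.20))
    print(displayed_displacement(0.01, 0.02))

In this kind of scheme, slow movements keep a low gain so fine positioning and softness perception are preserved, while fast movements are amplified to shorten travel time across the workspace.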

13 pages, 860 KiB  
Article
Association of the Big Five Personality Traits with Training Effectiveness, Sense of Presence, and Cybersickness in Virtual Reality
by Sebastian Oltedal Thorp, Lars Morten Rimol and Simone Grassini
Multimodal Technol. Interact. 2023, 7(2), 11; https://doi.org/10.3390/mti7020011 - 23 Jan 2023
Cited by 3 | Viewed by 1946
Abstract
Virtual reality (VR) presents numerous opportunities for training skills and abilities through the technology’s capacity to simulate realistic training scenarios and environments. This can be seen in how recent research has emphasized the use of VR for creating adaptable training scenarios. Nevertheless, a limited number of studies have examined how personality traits can influence the training effectiveness of participants within VR. To assess individual preferences in a virtual environment, the current study examines the associations of Big Five personality traits with training effectiveness in VR, as well as with sense of presence and cybersickness. Our results show that high agreeableness and low conscientiousness are predictors of training transferability from the VR environment to the real world. The results also showed that trainees experiencing higher levels of cybersickness incurred worse training outcomes.
(This article belongs to the Special Issue 3D Human–Computer Interaction (Volume II))

20 pages, 4401 KiB  
Article
Smiles and Angry Faces vs. Nods and Head Shakes: Facial Expressions at the Service of Autonomous Vehicles
by Alexandros Rouchitsas and Håkan Alm
Multimodal Technol. Interact. 2023, 7(2), 10; https://doi.org/10.3390/mti7020010 - 20 Jan 2023
Cited by 3 | Viewed by 2771
Abstract
When deciding whether to cross the street or not, pedestrians take into consideration information provided by both vehicle kinematics and the driver of an approaching vehicle. It will not be long, however, before drivers of autonomous vehicles (AVs) will be unable to communicate their intention to pedestrians, as they will be engaged in activities unrelated to driving. External human–machine interfaces (eHMIs) have been developed to fill the resulting communication gap by offering pedestrians information about the situational awareness and intention of an AV. Several anthropomorphic eHMI concepts have employed facial expressions to communicate vehicle intention. The aim of the present study was to evaluate the efficiency of emotional (smile; angry expression) and conversational (nod; head shake) facial expressions in communicating vehicle intention (yielding; non-yielding). Participants completed a crossing intention task in which they had to decide whether or not it was appropriate to cross the street. Emotional expressions communicated vehicle intention more efficiently than conversational expressions, as evidenced by the lower latency in the emotional expression condition compared to the conversational expression condition. The implications of our findings for the development of anthropomorphic eHMIs that employ facial expressions to communicate vehicle intention are discussed.

17 pages, 1934 KiB  
Article
Does Augmented Reality Help to Understand Chemical Phenomena during Hands-On Experiments?–Implications for Cognitive Load and Learning
by Hendrik Peeters, Sebastian Habig and Sabine Fechner
Multimodal Technol. Interact. 2023, 7(2), 9; https://doi.org/10.3390/mti7020009 - 19 Jan 2023
Cited by 5 | Viewed by 2594
Abstract
Chemical phenomena are only observable on a macroscopic level, whereas they are explained by entities on a non-visible level. Students often demonstrate limited ability to link these different levels. Augmented reality (AR) offers the possibility to increase contiguity by embedding virtual models into hands-on experiments. Therefore, this paper presents a pre- and post-test study investigating how learning and cognitive load are influenced by AR during hands-on experiments. Three comparison groups (AR, animation and filmstrip), with a total of N = 104 German secondary school students, conducted and explained two hands-on experiments. Whereas the AR group was allowed to use an AR app showing virtual models of the processes at the submicroscopic level during the experiments, the two other groups were provided with the same dynamic or static models after experimenting. Results indicate no significant learning gain for the AR group in contrast to the two other groups. The perceived intrinsic cognitive load was higher for the AR group in both experiments, as was the extraneous load in the second experiment. It can be concluded that AR could not unleash its theoretically derived potential in the present study.
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)

5 pages, 216 KiB  
Editorial
Emerging Technologies and New Media for Children: Introduction
by Émeline Brulé and Kiley Sobel
Multimodal Technol. Interact. 2023, 7(2), 8; https://doi.org/10.3390/mti7020008 - 19 Jan 2023
Cited by 1 | Viewed by 1368
Abstract
In his 1749 essay, Letter on the Blind for the Use of Those Who Can See, Diderot explores the impact of vision, or lack thereof, on the development of knowledge [...]
3 pages, 155 KiB  
Editorial
Acknowledgment to the Reviewers of MTI in 2022
by MTI Editorial Office
Multimodal Technol. Interact. 2023, 7(2), 7; https://doi.org/10.3390/mti7020007 - 18 Jan 2023
Viewed by 820
Abstract
High-quality academic publishing is built on rigorous peer review. Multimodal Technologies and Interaction (MTI) was able to uphold its high standards for published papers due to the outstanding efforts of our reviewers [...]