Multimodal Technol. Interact., Volume 8, Issue 3 (March 2024) – 10 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
15 pages, 2890 KiB  
Article
Design and Evaluation of a Memory-Recalling Virtual Reality Application for Elderly Users
by Zoe Anastasiadou, Eleni Dimitriadou and Andreas Lanitis
Multimodal Technol. Interact. 2024, 8(3), 24; https://doi.org/10.3390/mti8030024 - 21 Mar 2024
Viewed by 826
Abstract
Virtual reality (VR) can be useful in efforts that aim to improve the well-being of older members of society. Within this context, the work presented in this paper aims to provide the elderly with a user-friendly and enjoyable virtual reality application incorporating memory recall and storytelling activities that could promote mental awareness. An important aspect of the proposed VR application is the presence of a virtual audience that listens to the stories presented by elderly users and interacts with them. In an effort to maximize the impact of the VR application, research was conducted to study whether the elderly are willing to use the VR application and whether they believe it can help to improve well-being and reduce the effects of loneliness and social isolation. Self-reported results related to the experience of the users show that elderly users are positive towards the use of such an application in everyday life as a means of improving their overall well-being.
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality)

14 pages, 881 KiB  
Article
Show-and-Tell: An Interface for Delivering Rich Feedback upon Creative Media Artefacts
by Colin Dodds and Ahmed Kharrufa
Multimodal Technol. Interact. 2024, 8(3), 23; https://doi.org/10.3390/mti8030023 - 14 Mar 2024
Viewed by 766
Abstract
In this paper, we explore an approach to feedback which could allow those learning creative digital media practices in remote and asynchronous environments to receive rich, multi-modal, and interactive feedback upon their creative artefacts. We propose the show-and-tell feedback interface, which couples graphical user interface changes (the show) to text-based explanations (the tell). We describe the rationale behind the design and offer a tentative set of design criteria. We report the implementation and deployment into a real-world educational setting using a prototype interface developed to allow either traditional text-only feedback or our proposed show-and-tell feedback across four sessions. The prototype was used to provide formative feedback upon music students’ coursework, resulting in a total of 103 pieces of feedback. Thematic analysis was used to analyse the data obtained through interviews and focus groups with both educators and students (i.e., feedback givers and receivers). Recipients considered show-and-tell feedback to possess greater clarity and detail in comparison with the single-modality text-only feedback they are used to receiving. We also report interesting emergent issues around control and artistic vision, and we discuss how these issues could be mitigated in future iterations of the interface.

15 pages, 2020 KiB  
Article
Do Not Freak Me Out! The Impact of Lip Movement and Appearance on Knowledge Gain and Confidence
by Amal Abdulrahman, Katherine Hopman and Deborah Richards
Multimodal Technol. Interact. 2024, 8(3), 22; https://doi.org/10.3390/mti8030022 - 05 Mar 2024
Viewed by 1006
Abstract
Virtual agents (VAs) have been used effectively for psychoeducation. However, getting the VA’s design right is critical to ensure the user experience does not become a barrier to receiving and responding to the intended message. The study reported in this paper seeks to help first-year psychology students to develop knowledge and confidence to recommend emotion regulation strategies. In previous work with stroke patients, we received negative feedback concerning the VA’s lip-syncing, including creepiness and visual overload. We seek to test the impact of the removal of lip-syncing on the perception of the VA and its ability to achieve its intended outcomes, also considering the influence of the visual features of the avatar. We conducted a 2 (lip-sync/no lip-sync) × 2 (human-like/cartoon-like) experimental design and measured participants’ perception of the VA in terms of eeriness, user experience, knowledge gain and participants’ confidence to practice their knowledge. Participants tended to prefer the cartoon look over the human look and the absence of lip-syncing over its presence. All groups reported no significant increase in knowledge but significant increases in confidence in their knowledge and in their ability to recommend the learnt strategies to others, suggesting that realism and lip-syncing did not influence the intended outcomes. Thus, in future designs, we will allow the user to switch off the lip-sync function if they prefer. Further, our findings suggest that lip-syncing should not be a standard animation included with VAs, as is currently the case.

18 pages, 1125 KiB  
Article
Accessible Metaverse: A Theoretical Framework for Accessibility and Inclusion in the Metaverse
by Achraf Othman, Khansa Chemnad, Aboul Ella Hassanien, Ahmed Tlili, Christina Yan Zhang, Dena Al-Thani, Fahriye Altınay, Hajer Chalghoumi, Hend S. Al-Khalifa, Maisa Obeid, Mohamed Jemni, Tawfik Al-Hadhrami and Zehra Altınay
Multimodal Technol. Interact. 2024, 8(3), 21; https://doi.org/10.3390/mti8030021 - 01 Mar 2024
Cited by 2 | Viewed by 1632
Abstract
The following article investigates the Metaverse and its potential to bolster digital accessibility for persons with disabilities. Through qualitative analysis, we examine responses from eleven experts in digital accessibility, Metaverse development, disability advocacy, and policy formulation. This exploration uncovers key insights into the Metaverse’s current state, its inherent principles, and the challenges and opportunities it presents in terms of accessibility. The findings reveal a mixed state of inclusivity within the Metaverse, highlighting significant advancements along with notable gaps, especially in integrating assistive technologies and ensuring interoperability across different virtual environments. This study emphasizes the Metaverse’s potential to revolutionize experiences for individuals with disabilities, provided that accessibility is embedded in its foundational design. Ethical and legal considerations, such as privacy, non-discrimination, and evolving legal frameworks, are identified as critical factors that shape an inclusive Metaverse. We propose a comprehensive framework that emphasizes technological adaptation and innovation, user-centric design, universal access, social and economic considerations, and global standards. This framework aims to guide future research and policy interventions to foster an inclusive digital environment in the Metaverse. This paper contributes to the emerging discourse on the Metaverse and digital accessibility, offering a nuanced understanding of its complexities and a roadmap for future exploration and development. This underscores the necessity of a multi-faceted approach that incorporates technological innovation, user-centered design, ethical considerations, legal compliance, and continuous research to create an inclusive and accessible Metaverse.
(This article belongs to the Special Issue Designing an Inclusive and Accessible Metaverse)

20 pages, 1121 KiB  
Article
Trust Development and Explainability: A Longitudinal Study with a Personalized Assistive System
by Setareh Zafari, Jesse de Pagter, Guglielmo Papagni, Alischa Rosenstein, Michael Filzmoser and Sabine T. Koeszegi
Multimodal Technol. Interact. 2024, 8(3), 20; https://doi.org/10.3390/mti8030020 - 01 Mar 2024
Viewed by 1122
Abstract
This article reports on a longitudinal experiment in which the influence of an assistive system’s malfunctioning and transparency on trust was examined over a period of seven days. To this end, we simulated the system’s personalized recommendation features to support participants with the task of learning new texts and taking quizzes. Using a 2 × 2 mixed design, the system’s malfunctioning (correct vs. faulty) and transparency (with vs. without explanation) were manipulated as between-subjects variables, whereas exposure time was used as a repeated-measure variable. A combined qualitative and quantitative methodological approach was used to analyze the data from 171 participants. Our results show that participants perceived the system making a faulty recommendation as a trust violation. Additionally, a trend emerged from both the quantitative and qualitative analyses regarding how the availability of explanations (even when not accessed) increased the perception of a trustworthy system.
(This article belongs to the Special Issue Cooperative Intelligence in Automated Driving - 2nd Edition)

18 pages, 2780 KiB  
Article
Enhancing Calculus Learning through Interactive VR and AR Technologies: A Study on Immersive Educational Tools
by Logan Pinter and Mohammad Faridul Haque Siddiqui
Multimodal Technol. Interact. 2024, 8(3), 19; https://doi.org/10.3390/mti8030019 - 01 Mar 2024
Viewed by 1056
Abstract
In the realm of collegiate education, calculus can be quite challenging for students. Many students struggle to visualize abstract concepts, as mathematics often moves into strict arithmetic rather than geometric understanding. Our study presents an innovative solution to this problem: an immersive, interactive VR graphing tool capable of standard 2D graphs, solids of revolution, and a series of visualizations deemed potentially useful to struggling students. This tool was developed within the Unity 3D engine, and while interaction and expression parsing rely on existing libraries, core functionalities were developed independently. As a pilot study, it includes qualitative information from a survey of students currently or previously enrolled in Calculus II/III courses, revealing its potential effectiveness. This survey primarily aims to determine the tool’s viability in future endeavors. The positive response suggests the tool’s immediate usefulness and its promising future in educational settings, prompting further exploration and consideration for adaptation into an Augmented Reality (AR) environment.
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality)

14 pages, 3032 KiB  
Perspective
Keep the Human in the Loop: Arguments for Human Assistance in the Synthesis of Simulation Data for Robot Training
by Carina Liebers, Pranav Megarajan, Jonas Auda, Tim C. Stratmann, Max Pfingsthorn, Uwe Gruenefeld and Stefan Schneegass
Multimodal Technol. Interact. 2024, 8(3), 18; https://doi.org/10.3390/mti8030018 - 01 Mar 2024
Viewed by 945
Abstract
Robot training often takes place in simulated environments, particularly with reinforcement learning. Therefore, multiple training environments are generated using domain randomization to ensure transferability to real-world applications and compensate for unknown real-world states. We propose improving domain randomization by involving human application experts in various stages of the training process. Experts can provide valuable judgments on simulation realism, identify missing properties, and verify robot execution. Our human-in-the-loop workflow describes how they can enhance the process in five stages: validating and improving real-world scans, correcting virtual representations, specifying application-specific object properties, verifying and influencing simulation environment generation, and verifying robot training. We outline examples and highlight research opportunities. Furthermore, we present a case study in which we implemented different prototypes, demonstrating the potential of human experts in the given stages. Our early insights indicate that human input can benefit robot training at different stages.
(This article belongs to the Special Issue Challenges in Human-Centered Robotics)

27 pages, 6397 KiB  
Article
The FlexiBoard: Tangible and Tactile Graphics for People with Vision Impairments
by Mathieu Raynal, Julie Ducasse, Marc J.-M. Macé, Bernard Oriola and Christophe Jouffrais
Multimodal Technol. Interact. 2024, 8(3), 17; https://doi.org/10.3390/mti8030017 - 27 Feb 2024
Viewed by 1105
Abstract
Over the last decade, several projects have demonstrated how interactive tactile graphics and tangible interfaces can improve and enrich access to information for people with vision impairments. While the former can be used to display a relatively large amount of information, they cannot be physically updated, which constrains the type of tasks that they can support. On the other hand, tangible interfaces are particularly suited for the (re)construction and manipulation of graphics, but the use of physical objects also restricts the type and amount of information that they can convey. We propose to bridge the gap between these two approaches by investigating the potential of tactile and tangible graphics for people with vision impairments. Working closely with special education teachers, we designed and developed the FlexiBoard, an affordable and portable system that enhances traditional tactile graphics with tangible interaction. In this paper, we report on the successive design steps that enabled us to identify and consider technical and design requirements. We thereafter explore two domains of application for the FlexiBoard: education and board games. Firstly, we report on one brainstorming session that we organized with four teachers in order to explore the application space of tangible and tactile graphics for educational activities. Secondly, we describe how the FlexiBoard enabled the successful adaptation of one visual board game into a multimodal accessible game that supports collaboration between sighted, low-vision and blind players.

30 pages, 541 KiB  
Article
How to Design Human-Vehicle Cooperation for Automated Driving: A Review of Use Cases, Concepts, and Interfaces
by Jakob Peintner, Bengt Escher, Henrik Detjen, Carina Manger and Andreas Riener
Multimodal Technol. Interact. 2024, 8(3), 16; https://doi.org/10.3390/mti8030016 - 26 Feb 2024
Viewed by 1145
Abstract
Currently, a significant gap exists between academic and industrial research in automated driving development. Despite this, there is broad consensus that cooperative control approaches in automated vehicles will surpass the previously favored takeover paradigm in most driving situations due to enhanced driving performance and user experience. Yet, the application of these concepts in real driving situations remains unclear, and a holistic approach to driving cooperation is missing. Existing research has primarily focused on testing specific interaction scenarios and implementations. To address this gap and offer a contemporary perspective on designing human–vehicle cooperation in automated driving, we have developed a three-part taxonomy with the help of an extensive literature review. The taxonomy broadens the notion of driving cooperation towards a holistic and application-oriented view by encompassing (1) the “Cooperation Use Case”, (2) the “Cooperation Frame”, and (3) the “Human–Machine Interface”. We validate the taxonomy by categorizing related literature and providing a detailed analysis of an exemplar paper. The proposed taxonomy offers designers and researchers a concise overview of the current state of driving cooperation and insights for future work. Further, the taxonomy can guide automotive HMI designers in ideation, communication, comparison, and reflection of cooperative driving interfaces.
(This article belongs to the Special Issue Cooperative Intelligence in Automated Driving - 2nd Edition)

21 pages, 25063 KiB  
Article
Substitute Buttons: Exploring Tactile Perception of Physical Buttons for Use as Haptic Proxies
by Bram van Deurzen, Gustavo Alberto Rovelo Ruiz, Daniël M. Bot, Davy Vanacken and Kris Luyten
Multimodal Technol. Interact. 2024, 8(3), 15; https://doi.org/10.3390/mti8030015 - 20 Feb 2024
Viewed by 1188
Abstract
Buttons are everywhere and are one of the most common interaction elements in both physical and digital interfaces. While virtual buttons offer versatility, enhancing them with realistic haptic feedback is challenging. Achieving this requires a comprehensive understanding of the tactile perception of physical buttons and their transferability to virtual counterparts. This research investigates tactile perception concerning button attributes such as shape, size, and roundness, and their potential generalization across diverse button types. In our study, participants interacted with each of the 36 buttons in our search space and indicated which one they thought they were touching. The findings were used to establish six substitute buttons capable of effectively emulating tactile experiences across various buttons. In a second study, these substitute buttons were validated against virtual buttons in VR, highlighting their potential use as haptic proxies for applications such as encountered-type haptics.
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality)