Multimodal User Interfaces and Experiences: Challenges, Applications, and Perspectives

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: 30 June 2024

Special Issue Editors

Guest Editor
Faculty of Psychology, Beijing Normal University, Beijing 100875, China
Interests: human–computer interaction; user experience; tangible interaction; engineering psychology

Guest Editor
TD School, University of Technology Sydney, Broadway, NSW 2007, Australia
Interests: design thinking; organizational culture; innovation management; prototyping

Guest Editor
Department of Transdisciplinary Science and Engineering, Tokyo Institute of Technology, Tokyo 152-8550, Japan
Interests: human-centered design; urban/rural sociology; qualitative research; engineering education

Guest Editor
1. School of Design and Architecture, Swinburne University of Technology, Melbourne 3122, Australia
2. Faculty of Psychology, Beijing Normal University, Beijing 100875, China
Interests: active aging; social health; user experience; human–computer interaction

Guest Editor
School of Art and Design, Fuzhou University of International Studies and Trade, Fuzhou 350202, China
Interests: kansei engineering; human factors; design management; industrial design

Special Issue Information

Dear Colleagues,

This Special Issue aims to explore the challenges and opportunities of understanding, designing, and evaluating user experience (UX) across and beyond disciplines. By soliciting contributions from theoretical and practical perspectives that explicitly address the exploration and evaluation of user interfaces and experiences, we will envision future directions for UX/HCI research on multimodal technologies and the application of user-friendly interfaces. Contexts of interest include, but are not limited to, education, healthcare, transportation, finance, and environmental protection. In particular, we welcome contributions that adopt a research-through-design approach and that sit at the intersection of applied psychology, human–computer interaction (HCI), cognitive neuroscience, anthropology, and design, such as transdisciplinary teaching for specific student groups, digitalized child/elderly/patient care services, car/train/aircraft human–machine interfaces (HMIs), mobile banking applications, and carbon peaking and carbon neutrality strategies. We encourage authors to submit original research articles, works in progress, surveys, reviews, and viewpoint articles presenting transdisciplinary frameworks, methods, and practices that may significantly impact the field for years to come. Topics of interest include, but are not limited to, the following:

  • Transdisciplinary teaching and learning;
  • Design thinking, doing, and tinkering;
  • Human factors and applied psychology;
  • Kansei, emotional, and affective engineering;
  • Psychological and digital wellbeing;
  • Clinical and counseling psychology;
  • Psychological and behavioral big data;
  • Brand, advertising, and consumer psychology;
  • Measurement and human resources;
  • Interaction design qualities and guidelines;
  • Usability evaluation methods;
  • Emerging and multimodal technologies;
  • Augmented, mixed, and extended realities;
  • Inclusion, resilience, and new normal;
  • Creativity, innovation, and entrepreneurship;
  • Human-centered design (HCD);
  • User experience design (UXD);
  • Emotion-driven design (EDD);
  • Collaborative design (Co-Design);
  • Industrial design (ID);
  • NeuroDesign (ND).

Dr. Wei Liu
Dr. Jan Auernhammer
Dr. Takumi Ohashi
Dr. Di Zhu
Dr. Kuo-Hsiang Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (12 papers)


Research


22 pages, 5648 KiB  
Article
Mobile User Interface Adaptation Based on Usability Reward Model and Multi-Agent Reinforcement Learning
by Dmitry Vidmanov and Alexander Alfimtsev
Multimodal Technol. Interact. 2024, 8(4), 26; https://doi.org/10.3390/mti8040026 - 24 Mar 2024
Abstract
Today, reinforcement learning is one of the most effective machine learning approaches for automatically adapting computer systems to user needs. However, integrating this technology into a digital product requires addressing a key challenge: determining the reward model in the digital environment. This paper proposes a usability reward model for multi-agent reinforcement learning. Well-known mathematical formulas used for measuring usability metrics were analyzed in detail and incorporated into the usability reward model, in which any neural-network-based multi-agent reinforcement learning algorithm can serve as the underlying learning algorithm. This paper presents a study using independent and actor-critic reinforcement learning algorithms to investigate their impact on the usability metrics of a mobile user interface. Computational experiments and usability tests were conducted in a specially designed multi-agent environment for mobile user interfaces, enabling the implementation of various usage scenarios and real-time adaptations.
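As a rough illustration of what a usability-based reward signal can look like, the sketch below combines standard usability metrics (task success, time efficiency, error count) into a single scalar. The specific metrics and weights are illustrative assumptions, not the authors' published model:

```python
# Hypothetical usability reward for UI-adaptation reinforcement learning.
# Metric choice and weights are made-up assumptions for illustration only.

def usability_reward(success: bool, task_time_s: float,
                     baseline_time_s: float, errors: int,
                     w_success: float = 1.0, w_time: float = 0.5,
                     w_error: float = 0.25) -> float:
    """Higher is better: reward task success, penalize slow, error-prone episodes."""
    # Efficiency is capped at 1.0 (finishing faster than baseline earns no bonus).
    time_efficiency = min(baseline_time_s / task_time_s, 1.0)
    return (w_success * (1.0 if success else 0.0)
            + w_time * time_efficiency
            - w_error * errors)

# Example episode: successful task, slightly slower than baseline, one slip.
r = usability_reward(success=True, task_time_s=12.0,
                     baseline_time_s=10.0, errors=1)
```

Any multi-agent RL algorithm could then maximize such a signal per interface agent; the shaping above is only one plausible instance.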

26 pages, 5428 KiB  
Article
Into the Rhythm: Evaluating Breathing Instruction Sound Experiences on the Run with Novice Female Runners
by Vincent van Rheden, Eric Harbour, Thomas Finkenzeller and Alexander Meschtscherjakov
Multimodal Technol. Interact. 2024, 8(4), 25; https://doi.org/10.3390/mti8040025 - 22 Mar 2024
Abstract
Running is a popular sport throughout the world. Breathing strategies like stable breathing and slow breathing can positively influence the runner’s physiological and psychological experiences. Sonic breathing instructions are an established, unobtrusive method used in contexts such as exercise and meditation. We argue sound to be a viable approach for administering breathing strategies whilst running. This paper describes two laboratory studies using within-subject designs that investigated the usage of sonic breathing instructions with novice female runners. The first study (N = 11) examined the effect of information richness of five different breathing instruction sounds on adherence and user experience. The second study (N = 11) explored adherence and user experience of sonically more enriched sounds, and aimed to increase the sonic experience. Results showed that all sounds were effective in stabilizing the breathing rate (study 1 and 2, respectively: mean absolute percentage error = 1.16 ± 1.05% and 1.9 ± 0.11%, percent time attached = 86.81 ± 9.71% and 86.18 ± 11.96%). Information-rich sounds were subjectively more effective compared to information-poor sounds (mean ratings: 7.55 ± 1.86 and 5.36 ± 2.42, respectively). All sounds scored low (mean < 5/10) on intention to use.
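The adherence metric reported above, mean absolute percentage error (MAPE) between instructed and measured breathing rate, can be sketched as follows. The sample values are made up for illustration and are not the study's data:

```python
# Minimal sketch of the MAPE adherence metric used to compare an
# instructed breathing rate with the rate actually measured per window.
# The example values below are invented, not the study's measurements.

def mape(instructed: list[float], measured: list[float]) -> float:
    """Mean absolute percentage error, in percent."""
    assert instructed and len(instructed) == len(measured)
    return 100.0 * sum(abs(m - i) / i
                       for i, m in zip(instructed, measured)) / len(instructed)

instructed = [30.0, 30.0, 30.0, 30.0]   # breaths per minute, per analysis window
measured   = [29.5, 30.3, 30.0, 29.8]
print(f"MAPE = {mape(instructed, measured):.2f}%")
```

A small MAPE (as in the ~1–2% values reported) indicates the runner's breathing closely tracked the sonic instruction.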

23 pages, 11766 KiB  
Article
Technology and Meditation: Exploring the Challenges and Benefits of a Physical Device to Support Meditation Routine
by Tjaša Kermavnar and Pieter M. A. Desmet
Multimodal Technol. Interact. 2024, 8(2), 9; https://doi.org/10.3390/mti8020009 - 29 Jan 2024
Abstract
Existing studies of technology supporting meditation habit formation mainly focus on mobile applications that support users via reminders. A potentially more effective source of motivation could be contextual cues provided by meaningful objects in meaningful locations. This longitudinal mixed-methods 8-week study explored the effectiveness of such an object, Prana, in supporting the formation of meditation habits among seven novice meditators. First, the Meditation Intentions Questionnaire-24 and the Determinants of Meditation Practice Inventory-Revised were administered. The Self-Report Habit Index (SrHI) was administered before and after the study. Prana recorded meditation session times, while daily diaries captured subjective experiences. At the end of the study, the System Usability Scale, the Ten-Item Personality Inventory, and the Brief Self-Control Scale were completed, followed by individual semi-structured interviews. We expected to find an increase in meditation frequency and temporal consistency, but the results failed to confirm this. Participants meditated on between 16% and 84% of the study days. Meditation frequency decreased over time for four participants, decreased and then increased for two, and remained stable for one. Daily meditation experiences were positive, and the perceived difficulty of starting to meditate was low. No relevant correlation was found between the perceived difficulty of starting to meditate and overall meditation experience; the latter was only weakly associated with the likelihood of meditating the next day. While meditation became more habitual for six participants, positive scores on the SrHI were rare. Despite the inconclusive results, this study provides valuable insights into the challenges and benefits of using a meditation device, as well as potential methodological difficulties in studying habit formation with physical devices.

14 pages, 1022 KiB  
Article
Exploring the Appeal of Car-Borne Central Control Platforms Based on Driving Experience
by Chih-Kuan Lin, Chien-Hsiung Chen and Kai-Shuan Shen
Multimodal Technol. Interact. 2023, 7(11), 101; https://doi.org/10.3390/mti7110101 - 29 Oct 2023
Abstract
This study explored drivers’ emotion-based impressions of car-borne central control platforms (CBCCPs) for personal-use vehicles. This preference-based study examined experts’ and drivers’ opinions regarding the appeal of CBCCPs from the perspective of Miryoku engineering. To this end, data were analyzed via the evaluation grid method (EGM) and quantification theory type I. Drivers’ preferences for specific CBCCP design characteristics were categorized into the factors “legible”, “convenient”, and “tasteful”, which comprised the core of the EGM semantic hierarchical diagram. In addition, the importance of CBCCPs’ appeal factors and characteristics was assessed through quantification theory type I. The findings of this study provide valuable insights for designers, manufacturers, and researchers interested in the design of CBCCPs. Additionally, the results can contribute to research on applied psychology, human–computer interaction, and car interface design.

20 pages, 44045 KiB  
Article
On the Effectiveness of Using Virtual Reality to View BIM Metadata in Architectural Design Reviews for Healthcare
by Emma Buchanan, Giuseppe Loporcaro and Stephan Lukosch
Multimodal Technol. Interact. 2023, 7(6), 60; https://doi.org/10.3390/mti7060060 - 07 Jun 2023
Cited by 1
Abstract
This article reports on a study that assessed whether Virtual Reality (VR) can be used to display Building Information Modelling (BIM) metadata alongside spatial data in a virtual environment, and by doing so determine if it increases the effectiveness of the design review by improving participants’ understanding of the design. Previous research has illustrated the potential for VR to enhance design reviews, especially the ability to convey spatial information, but there has been limited research into how VR can convey additional BIM metadata. A user study with 17 healthcare professionals assessed participants’ performances and preferences for completing design reviews in VR or using a traditional design review system of PDF drawings and a 3D model. The VR condition had a higher task completion rate, a higher SUS score, and generally faster completion times. VR increases the effectiveness of a design review conducted by healthcare professionals.

15 pages, 4076 KiB  
Article
Augmented Reality for Supporting Workers in Human–Robot Collaboration
by Ana Moya, Leire Bastida, Pablo Aguirrezabal, Matteo Pantano and Patricia Abril-Jiménez
Multimodal Technol. Interact. 2023, 7(4), 40; https://doi.org/10.3390/mti7040040 - 10 Apr 2023
Cited by 3
Abstract
This paper discusses the potential benefits of using augmented reality (AR) technology to enhance human–robot collaborative industrial processes. The authors describe a real-world use case at Siemens premises in which an AR-based authoring tool is used to reduce cognitive load, assist human workers in training robots, and support calibration and inspection during assembly tasks. The study highlights the potential of AR as a solution for optimizing human–robot collaboration and improving productivity. The article describes the methodology used to deploy and evaluate the ARContent tool, which demonstrated improved usability, reduced task load, and increased efficiency in the assembly process. However, the study is limited by the restricted availability of workers and their knowledge of assembly tasks with robots. The authors suggest that future work should focus on testing the ARContent tool with a larger user pool and improving the authoring tool based on the shortcomings identified during the study. Overall, this work shows the potential for AR technology to revolutionize industrial processes and improve collaboration between humans and robots.

17 pages, 1200 KiB  
Article
Location- and Physical-Activity-Based Application for Japanese Vocabulary Acquisition for Non-Japanese Speakers
by Nguyen Tran, Shogo Kajimura and Yu Shibuya
Multimodal Technol. Interact. 2023, 7(3), 29; https://doi.org/10.3390/mti7030029 - 13 Mar 2023
Cited by 1
Abstract
There are various mobile applications to support foreign-language learning. While providing interactive designs and playful games to keep learners interested, these applications do not focus on motivating learners to continue learning over the long term. Our goal for this study was to develop an application that guides learners to achieve small goals by creating small lessons related to their real-life situations, with a main focus on vocabulary acquisition. We present MiniHongo, a smartphone application that recognizes learners’ current locations and activities to compose short lessons of words strongly related to the learners’ real-time situations, thereby improving user motivation. MiniHongo uses a cloud service for its database and public application programming interfaces for location tracking. A between-subjects experiment was conducted to evaluate MiniHongo against two reduced versions of itself: one composed lessons without location recognition, and the other composed lessons without location or activity recognition. The experimental results indicate that users have a strong interest in learning Japanese with MiniHongo, and some differences were found in how well users could memorize what they learned via the application. The results also suggest that the application requires further improvement.

18 pages, 1160 KiB  
Article
Developing Dynamic Audio Navigation UIs to Pinpoint Elements in Tactile Graphics
by Gaspar Ramôa, Vincent Schmidt and Peter König
Multimodal Technol. Interact. 2022, 6(12), 113; https://doi.org/10.3390/mti6120113 - 18 Dec 2022
Cited by 4
Abstract
Access to complex graphical information is essential when connecting blind and visually impaired (BVI) people with the world. Tactile graphics readers enable access to graphical data through audio-tactile user interfaces (UIs), but these have yet to mature. A challenging task for blind people is locating specific elements or areas in detailed tactile graphics. To this end, we developed three audio navigation UIs that dynamically guide the user’s hand to a specific position using audio feedback. One is based on submarine sonar sounds, another relies on the target’s position along the x- and y-axes of a coordinate plane, and the last uses direct voice instructions. The UIs were implemented in the Tactonom Reader, a new tactile graphics reader that enhances swell-paper graphics with pinpointed audio explanations. To evaluate the effectiveness of the three dynamic navigation UIs, we conducted a within-subject usability test with 13 BVI participants. Beyond comparing the effectiveness of the different UIs, we observed and recorded the participants’ interaction with each navigation UI to further investigate their behavioral patterns. We observed that user interfaces requiring the user to move their hand in a straight line were more likely to provoke frustration and were often perceived as challenging by blind and visually impaired people. The analysis revealed that the voice-based navigation UI guides the participant to the target the fastest and requires no prior training. This suggests that a voice-based navigation strategy is a promising approach for designing an accessible user interface for the blind.

16 pages, 3699 KiB  
Article
Usability Tests for Texture Comparison in an Electroadhesion-Based Haptic Device
by Afonso Castiço and Paulo Cardoso
Multimodal Technol. Interact. 2022, 6(12), 108; https://doi.org/10.3390/mti6120108 - 08 Dec 2022
Cited by 2
Abstract
Haptic displays have been gaining relevance over recent years, in part because of the multiple advantages they present compared with standard displays, especially improved user experience and their many fields of application. Among the various haptic technologies, electroadhesion is seen as capable of better interaction with a user through a display. TanvasTouch is an economically competitive haptic device using electroadhesion, providing an API and a respective haptic engine, which makes the development of applications much easier and more systematic than in the past, when the creation of such haptic solutions required far more work and resulted in ad hoc solutions. Despite these advantages, it is important to assess its ability to describe textures in a way understandable by the user’s touch. The current paper presents a set of experiments using TanvasTouch electroadhesion-based haptic technology to assess how well a texture created on a TanvasTouch device can be perceived as a representation of a real-world object.

29 pages, 48775 KiB  
Article
Understanding and Creating Spatial Interactions with Distant Displays Enabled by Unmodified Off-The-Shelf Smartphones
by Teo Babic, Harald Reiterer and Michael Haller
Multimodal Technol. Interact. 2022, 6(10), 94; https://doi.org/10.3390/mti6100094 - 19 Oct 2022
Cited by 1
Abstract
Over decades, many researchers developed complex in-lab systems with the overall goal of tracking multiple body parts of the user for richer and more powerful 2D/3D interaction with a distant display. In this work, we introduce a novel smartphone-based tracking approach that eliminates the need for complex tracking systems. Relying on simultaneous usage of the front and rear smartphone cameras, our solution enables rich spatial interactions with distant displays by combining touch input with hand-gesture input, body and head motion, as well as eye-gaze input. In this paper, we first present a taxonomy for classifying distant-display interactions, providing an overview of enabling technologies, input modalities, and interaction techniques, spanning from 2D to 3D interactions. Further, we provide details of our implementation using off-the-shelf smartphones. Finally, we validate our system in a user study covering a variety of 2D and 3D multimodal interaction techniques, including input refinement.

14 pages, 251 KiB  
Article
Building a Three-Level User Experience (UX) Measurement Framework for Mobile Banking Applications in a Chinese Context: An Analytic Hierarchy Process (AHP) Analysis
by Di Zhu, Yuanhong Xu, Hongjie Ma, Jingxiao Liao, Wen Sun, Yuting Chen and Wei Liu
Multimodal Technol. Interact. 2022, 6(9), 83; https://doi.org/10.3390/mti6090083 - 16 Sep 2022
Cited by 8
Abstract
User experience (UX) has drawn the attention of the banking industry in the past few decades. Although banking systems have a complete service process to ensure financial safety for customers, the mobile banking UX has much potential for improvement. Most research in this field relies on existing criteria to describe a user’s experience. However, these criteria focus more on usability measurement, which neglects the requirements of end-users: users are asked to give feedback on the provided application, limiting the scope of the user study. Therefore, this study uses mixed-methods research and in-depth semi-structured interviews to collect end-user UX requirements and build a UX measurement framework for five main services: transfers, financial management, loans, account opening, and credit cards. An online survey was used to validate and revise the framework, applying analytic hierarchy process (AHP) analysis to quantify the criteria. We interviewed 17 customers, collected 857 online validation surveys, and 400 customers participated in the AHP analysis. As a result, this study proposes a three-level measurement framework for mobile banking applications in a Chinese context. The first-level criteria are scenario requirements (24.03%), data requirements (20.98%), and function requirements (54.99%). We hope that the framework will guide designers and researchers to design more user-friendly interfaces and improve customer satisfaction in the future.
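The AHP step used to derive criterion weights like those percentages can be sketched as follows. This uses the common geometric-mean approximation of the principal eigenvector; the pairwise-comparison matrix below is an invented example, not the study's data:

```python
# Sketch of AHP priority-weight derivation from a pairwise-comparison
# matrix (Saaty scale), using the geometric-mean approximation of the
# principal eigenvector. The example matrix is made up for illustration.
import math

def ahp_weights(pairwise: list[list[float]]) -> list[float]:
    """Approximate AHP priority weights; the result sums to 1."""
    n = len(pairwise)
    # Geometric mean of each row, then normalize.
    geo_means = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical judgments: function requirements rated 2x as important as
# scenario requirements and 3x as important as data requirements.
matrix = [
    [1.0,     2.0,     3.0],  # function  vs (function, scenario, data)
    [1 / 2,   1.0,     1.5],  # scenario
    [1 / 3,   1 / 1.5, 1.0],  # data
]
weights = ahp_weights(matrix)  # ordered: function, scenario, data
```

For a perfectly consistent matrix like this one, the weights come out in the same 3:1.5:1 ratio as the judgments; with real survey data one would also check the consistency ratio before accepting the weights.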

Other


13 pages, 1497 KiB  
Perspective
Challenges and Trends in User Trust Discourse in AI Popularity
by Sonia Sousa, José Cravino and Paulo Martins
Multimodal Technol. Interact. 2023, 7(2), 13; https://doi.org/10.3390/mti7020013 - 31 Jan 2023
Cited by 2
Abstract
The Internet revolution in 1990, followed by the data-driven and information revolution, has transformed the world as we know it. Nowadays, what seemed a science-fiction idea 10 to 20 years ago (i.e., machines dominating the world) is seen as possible. This revolution also brought a need for new regulatory practices in which user trust and artificial intelligence (AI) discourse have a central role. This work aims to clarify some misconceptions about user trust in AI discourse and to counter the tendency to design vulnerable interactions that lead to further breaches of trust, both real and perceived. The findings illustrate the lack of clarity in understanding user trust and its effects in computer science, especially in measuring user trust characteristics. The work argues for clarifying these notions to avoid possible trust gaps and misinterpretations in AI adoption and appropriation.
