Feature Papers in Multimodal Technologies and Interaction—Edition 2023

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (30 June 2023) | Viewed by 70,348

Special Issue Editor

Special Issue Information

Dear Colleagues,

We are pleased to announce this new Special Issue on “Feature Papers in Multimodal Technologies and Interaction—Edition 2023”. This Special Issue is devoted to publishing high-quality articles that describe the most significant and cutting-edge research in all areas that fit the scope of the journal. There are no limitations on paper type or length. Papers describing the challenges and added value of interdisciplinary work are also welcome. Topics of interest include, but are not limited to, the following:

  • Displays/sensors: visual, tactile/haptic, sonic, taste, smell;
  • Multimodal interaction, interfaces, and communication;
  • Human–computer, human–human, and human–robot interaction;
  • Human factors, cognition;
  • Multimodal perception;
  • Smart wearable technology;
  • Psychology and neuroscience;
  • Digital and sensory marketing;
  • Enabling, disruptive technologies;
  • Multimodal science, technology, and interfaces;
  • Theoretical, social, and cultural issues;
  • Virtual reality, augmented reality, extended reality;
  • Ubiquitous computing;
  • Design and evaluation;
  • Content creation: environments, processes, and methods;
  • Data visualization;
  • Application domains.

Prof. Dr. Cristina Portalés Ricart
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (33 papers)


Research


18 pages, 373 KiB  
Article
Group Leader vs. Remaining Group—Whose Data Should Be Used for Prediction of Team Performance?
by Ronald Böck
Multimodal Technol. Interact. 2023, 7(9), 90; https://doi.org/10.3390/mti7090090 - 14 Sep 2023
Viewed by 965
Abstract
Humans are considered to be communicative, usually interacting in dyads or groups. In this paper, we investigate group interactions regarding performance in a rather formal gathering. In particular, a collection of ten performance indicators used in social group sciences is used to assess the outcomes of the meetings in this manuscript, in an automatic, machine learning-based way. For this, the Parking Lot Corpus, comprising 70 meetings in total, is analysed. At first, we obtain baseline results for the automatic prediction of performance results on the corpus. This is the first time the Parking Lot Corpus is tapped in this sense. Additionally, we compare baseline values to those obtained utilising bidirectional long short-term memories. For multiple performance indicators, improvements over the baseline results are achieved. Furthermore, the experiments showed a trend that the acoustic material of the remaining group should be used for the prediction of team performance. Full article

16 pages, 3526 KiB  
Article
Evaluation of the Road to Birth Software to Support Obstetric Problem-Based Learning Education with a Cohort of Pre-Clinical Medical Students
by Megan L. Hutchcraft, Robert C. Wallon, Shanna M. Fealy, Donovan Jones and Roberto Galvez
Multimodal Technol. Interact. 2023, 7(8), 84; https://doi.org/10.3390/mti7080084 - 21 Aug 2023
Viewed by 1353
Abstract
Integration of technology within problem-based learning curricula is expanding; however, information regarding student experiences and attitudes about the integration of such technologies is limited. This study aimed to evaluate pre-clinical medical student perceptions and use patterns of the “Road to Birth” (RtB) software, a novel program designed to support human maternal anatomy and physiology education. Second-year medical students at a large midwestern American university participated in a prospective, mixed-methods study. The RtB software is available as a mobile smartphone/tablet application and in immersive virtual reality. The program was integrated into problem-based learning activities across a three-week obstetrics teaching period. Student visuospatial ability, weekly program usage, weekly user satisfaction, and end-of-course focus group interview data were obtained. Survey data were analyzed and summarized using descriptive statistics. Focus group interview data were analyzed using inductive thematic analysis. Of the eligible students, 66% (19/29) consented to participate in the study with 4 students contributing to the focus group interview. Students reported incremental knowledge increases on weekly surveys (69.2% week one, 71.4% week two, and 78.6% week three). Qualitative results indicated the RtB software was perceived as a useful educational resource; however, its interactive nature could have been further optimized. Students reported increased use of portable devices over time and preferred convenient options when using technology incorporated into the curriculum. This study identifies opportunities to better integrate technology into problem-based learning practices in medical education. Further empirical research is warranted with larger and more diverse student samples. Full article

13 pages, 1112 KiB  
Article
Exploring a Novel Mexican Sign Language Lexicon Video Dataset
by Víctor Martínez-Sánchez, Iván Villalón-Turrubiates, Francisco Cervantes-Álvarez and Carlos Hernández-Mejía
Multimodal Technol. Interact. 2023, 7(8), 83; https://doi.org/10.3390/mti7080083 - 19 Aug 2023
Cited by 1 | Viewed by 1664
Abstract
This research explores a novel Mexican Sign Language (MSL) lexicon video dataset containing the dynamic gestures most frequently used in MSL. Each gesture consists of a set of different versions of videos under uncontrolled conditions. The MX-ITESO-100 dataset is composed of a lexicon of 100 gestures and 5000 videos from three participants with different grammatical elements. Additionally, the dataset is evaluated with a two-step neural network model, achieving an accuracy greater than 99%, and thus serves as a benchmark for future training of machine learning models in computer vision systems. Finally, this research fosters an inclusive environment within society and organizations, in particular for people with hearing impairments. Full article

19 pages, 16645 KiB  
Article
Multimodal Interaction for Cobot Using MQTT
by José Rouillard and Jean-Marc Vannobel
Multimodal Technol. Interact. 2023, 7(8), 78; https://doi.org/10.3390/mti7080078 - 03 Aug 2023
Cited by 1 | Viewed by 1739
Abstract
For greater efficiency, human–machine and human–robot interactions must be designed with the idea of multimodality in mind. To allow the use of several interaction modalities, such as voice, touch, and gaze tracking, on several different devices (computer, smartphone, tablets, etc.) and to integrate possible connected objects, it is necessary to have an effective and secure means of communication between the different parts of the system. This is even more important with the use of a collaborative robot (cobot) sharing the same space and working very close to the human during their tasks. This study presents research work in the field of multimodal interaction for a cobot using the MQTT protocol, in virtual (Webots) and real worlds (ESP microcontrollers, Arduino, IOT2040). We show how MQTT can be used efficiently, with a common publish/subscribe mechanism for several entities of the system, in order to interact with connected objects (like LEDs and conveyor belts), robotic arms (like the Ned Niryo), or mobile robots. We compare the use of MQTT with that of the Firebase Realtime Database used in several of our previous research works. We show how a “pick–wait–choose–and place” task can be carried out jointly by a cobot and a human, and what this implies in terms of communication and ergonomic rules, with regard to health and industrial concerns (people with disabilities, and teleoperation). Full article
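The publish/subscribe mechanism described in the abstract above can be made concrete with a minimal in-process sketch. This toy dispatcher only mimics how an MQTT broker matches topic filters (including the `+` single-level and `#` multi-level wildcards); an actual deployment like the one in the paper would use a real broker and client library, and the topic names here are purely illustrative.

```python
from collections import defaultdict

class MiniBroker:
    """Toy in-process stand-in for an MQTT broker's publish/subscribe dispatch.
    Illustrative only: real systems run an external broker and clients."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic filter -> list of callbacks

    def subscribe(self, topic_filter, callback):
        self.subscribers[topic_filter].append(callback)

    def publish(self, topic, payload):
        # Deliver the message to every callback whose filter matches the topic.
        delivered = 0
        for topic_filter, callbacks in self.subscribers.items():
            if self._matches(topic_filter, topic):
                for cb in callbacks:
                    cb(topic, payload)
                    delivered += 1
        return delivered

    @staticmethod
    def _matches(topic_filter, topic):
        # MQTT-style matching: '+' matches one level, '#' matches the rest.
        f_parts, t_parts = topic_filter.split("/"), topic.split("/")
        for i, f in enumerate(f_parts):
            if f == "#":
                return True
            if i >= len(t_parts):
                return False
            if f != "+" and f != t_parts[i]:
                return False
        return len(f_parts) == len(t_parts)

broker = MiniBroker()
log = []
# Hypothetical topics for a robotic cell's LEDs and arm (names made up):
broker.subscribe("cell/led/+", lambda t, p: log.append(("led", t, p)))
broker.subscribe("cell/arm/#", lambda t, p: log.append(("arm", t, p)))
broker.publish("cell/led/red", "on")
broker.publish("cell/arm/joint1/angle", "42")
```

Several entities (LED controllers, conveyor logic, a robot-arm driver) can each subscribe to their own filter on the same broker, which is the decoupling the abstract attributes to MQTT.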

24 pages, 14309 KiB  
Article
Exploring the Educational Value and Impact of Vision-Impairment Simulations on Sympathy and Empathy with XREye
by Katharina Krösl, Marina Lima Medeiros, Marlene Huber, Steven Feiner and Carmine Elvezio
Multimodal Technol. Interact. 2023, 7(7), 70; https://doi.org/10.3390/mti7070070 - 06 Jul 2023
Viewed by 1486
Abstract
To create a truly accessible and inclusive society, we need to take the more than 2.2 billion people with vision impairments worldwide into account when we design our cities, buildings, and everyday objects. This requires sympathy and empathy, as well as a certain level of understanding of the impact of vision impairments on perception. In this study, we explore the potential of an extended version of our vision-impairment simulation system XREye to increase sympathy and empathy and evaluate its educational value in an expert study with 56 educators and education students. We include data from a previous study in related work on sympathy and empathy as a baseline for comparison with our data. Our results show increased sympathy and empathy after experiencing XREye and positive feedback regarding its educational value. Hence, we believe that vision-impairment simulations, such as XREye, have merit to be used for educational purposes in order to increase awareness for the challenges people with vision impairments face in their everyday lives. Full article

11 pages, 2573 KiB  
Article
Exploring Learning Curves in Acupuncture Education Using Vision-Based Needle Tracking
by Duy Duc Pham, Trong Hieu Luu, Le Trung Chanh Tran, Hoai Trang Nguyen Thi and Hoang-Long Cao
Multimodal Technol. Interact. 2023, 7(7), 69; https://doi.org/10.3390/mti7070069 - 06 Jul 2023
Viewed by 1489
Abstract
Measuring learning curves allows for the inspection of the rate of learning and competency threshold for each individual, training lesson, or training method. In this work, we investigated learning curves in acupuncture needle manipulation training with continuous performance measurement using a vision-based needle training system. We tracked the needle insertion depth of 10 students to investigate their learning curves. The results show that the group-level learning curve was fitted with the Thurstone curve, indicating that students were able to improve their needle insertion skills after repeated practice. Additionally, the analysis of individual learning curves revealed valuable insights into the learning experiences of each participant, highlighting the importance of considering individual differences in learning styles and abilities when designing training programs. Full article

19 pages, 1245 KiB  
Article
Mid-Air Gestural Interaction with a Large Fogscreen
by Vera Remizova, Antti Sand, I. Scott MacKenzie, Oleg Špakov, Katariina Nyyssönen, Ismo Rakkolainen, Anneli Kylliäinen, Veikko Surakka and Yulia Gizatdinova
Multimodal Technol. Interact. 2023, 7(7), 63; https://doi.org/10.3390/mti7070063 - 24 Jun 2023
Cited by 1 | Viewed by 1416
Abstract
Projected walk-through fogscreens have been created, but there is little research on the evaluation of the interaction performance with fogscreens. The present study investigated mid-air hand gestures for interaction with a large fogscreen. Participants (N = 20) selected objects from a fogscreen using tapping and dwell-based gestural techniques, with and without vibrotactile/haptic feedback. In terms of Fitts’ law, the throughput was about 1.4 bps to 2.6 bps, suggesting that gestural interaction with a large fogscreen is a suitable and effective input method. Our results also suggest that tapping without haptic feedback has good performance and potential for interaction with a fogscreen, and that tactile feedback is not necessary for effective mid-air interaction. These findings have implications for the design of gestural interfaces suitable for interaction with fogscreens. Full article
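The throughput figures quoted in the abstract above (about 1.4 bps to 2.6 bps) follow the Fitts' law convention of dividing an index of difficulty by movement time. A minimal sketch using the Shannon formulation is given below; note that published studies, including MacKenzie's, typically use an *effective* target width derived from endpoint scatter, whereas this sketch uses nominal width and made-up trial values for brevity.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1.0)

def throughput(trials):
    """Mean throughput in bits per second over a list of
    (distance, width, movement_time_in_seconds) trials."""
    return sum(index_of_difficulty(d, w) / mt for d, w, mt in trials) / len(trials)

# Illustrative trial values (pixels and seconds), not data from the study:
trials = [(512, 64, 1.2), (256, 32, 1.1), (768, 96, 1.4)]
tp = throughput(trials)  # roughly 2.6 bps for these made-up trials
```

With each trial here having D/W = 8, the per-trial difficulty is log2(9) ≈ 3.17 bits, so slower movement times directly lower the computed throughput.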

27 pages, 7108 KiB  
Article
Location-Based Game for Thought-Provoking Evacuation Training
by Hiroyuki Mitsuhara, Chie Tanimura, Junko Nemoto and Masami Shishibori
Multimodal Technol. Interact. 2023, 7(6), 59; https://doi.org/10.3390/mti7060059 - 07 Jun 2023
Viewed by 1064
Abstract
Participation in evacuation training can aid survival in the event of an unpredictable disaster, such as an earthquake. However, conventional evacuation training is not well designed for provoking critical thinking in participants regarding the processes involved in a successful evacuation. To realize thought-provoking evacuation training, we developed a location-based game that presents digital materials expressing disaster situations corresponding to locations or times preset in a scenario and provides scenario-based multiple endings as a game element. The developed game motivates participants to make decisions by providing high situational and audiovisual realism. In addition, the game encourages the participants to think objectively about the evacuation process by working together with a reflection-support system. We practiced thought-provoking evacuation training with fifth-grade students, focusing on tsunami evacuation and lifesaving-related moral dilemmas. In this practice, we observed that the participants made decisions as if they were dealing with actual disaster situations and thought objectively about the evacuation process by reflecting on their decisions. Meanwhile, we found that lifesaving-related moral dilemmas are difficult to address in evacuation training. Full article

24 pages, 13031 KiB  
Article
Development and Evaluation of a Mobile Application with Augmented Reality for Guiding Visitors on Hiking Trails
by Rute Silva, Rui Jesus and Pedro Jorge
Multimodal Technol. Interact. 2023, 7(6), 58; https://doi.org/10.3390/mti7060058 - 31 May 2023
Cited by 3 | Viewed by 2427
Abstract
Tourism on the island of Santa Maria, Azores, has been increasing due to its characteristics in terms of biodiversity and geodiversity. This island has several hiking trails; the available information can be consulted in pamphlets and on physical placards, whose maintenance and updating are difficult and expensive. Thus, the need to improve the visitors’ experience arises, in this case, by using a technological means currently available to everyone: a smartphone. This paper describes the development and evaluation of the user experience of a mobile application for guiding visitors on these hiking trails, as well as the design principles and main issues observed during this process. The application is based on an augmented reality interaction model, providing visitors with an interactive and recreational experience in outdoor environments (without additional markers in the physical space and using georeferenced information), helping in navigation during the route and providing updated information with easy maintenance. For the design and evaluation of the application, two studies were carried out with users on-site (Santa Maria, Azores). The first had 77 participants, to analyze users and define the application’s characteristics, and the second had 10 participants, to evaluate the user experience. The feedback from participants was obtained through questionnaires. In these questionnaires, an average SUS (System Usability Scale) score of 83 (excellent) and positive results in the UEQ (User Experience Questionnaire) were obtained. Full article
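The SUS score of 83 reported in the abstract above comes from a fixed scoring rule over ten 1-5 Likert items: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 onto a 0-100 range. A short sketch of the standard computation (the response vector is illustrative, not data from the study):

```python
def sus_score(responses):
    """Standard System Usability Scale scoring for ten Likert items (1-5).
    Odd items are positively worded, even items negatively worded."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale 0-40 raw sum to 0-100

# Hypothetical response vector from one participant:
example = [5, 1, 5, 2, 4, 1, 5, 1, 4, 2]
score = sus_score(example)
```

A study-level SUS score is then simply the mean of the per-participant scores, which is how averages like the 83 above are obtained.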

18 pages, 1306 KiB  
Article
A Digital Coach to Promote Emotion Regulation Skills
by Katherine Hopman, Deborah Richards and Melissa M. Norberg
Multimodal Technol. Interact. 2023, 7(6), 57; https://doi.org/10.3390/mti7060057 - 29 May 2023
Cited by 2 | Viewed by 1782
Abstract
There is growing awareness that effective emotion regulation is critical for health, adjustment and wellbeing. Emerging evidence suggests that interventions that promote flexible emotion regulation may have the potential to reduce the incidence and prevalence of mental health problems in specific at-risk populations. The challenge is how best to engage with at-risk populations, who may not be actively seeking assistance, to deliver this early intervention approach. One possible solution is digital technology, whose development has rapidly accelerated in this space. Such rapid growth has, however, occurred at the expense of developing a deep understanding of key elements of successful program design and specific mechanisms that influence health behavior change. This paper presents a detailed description of the design, development and evaluation of an emotion regulation intervention conversational agent (ERICA) that acts as a digital coach. ERICA uses interactive conversation to encourage self-reflection and to support and empower users to learn a range of cognitive emotion regulation strategies, including Refocusing, Reappraisal, Planning and Putting into Perspective. A pilot evaluation of ERICA was conducted with 138 university students and confirmed that ERICA provided a feasible and highly usable method for delivering an emotion regulation intervention. The results also indicated that ERICA was able to develop a therapeutic relationship with participants and increase their intent to use a range of cognitive emotion regulation strategies. These findings suggest that ERICA holds potential to be an effective approach for delivering an early intervention to support mental health and wellbeing. ERICA’s dialogue, embedded with interactivity, therapeutic alliance and empathy cues, provides the basis for the development of other psychoeducation interventions. Full article

22 pages, 4763 KiB  
Article
Hand-Controlled User Interfacing for Head-Mounted Augmented Reality Learning Environments
by Jennifer Challenor, David White and David Murphy
Multimodal Technol. Interact. 2023, 7(6), 55; https://doi.org/10.3390/mti7060055 - 26 May 2023
Cited by 2 | Viewed by 1458
Abstract
With the rapid expansion of technology and hardware availability within the field of Augmented Reality, building and deploying Augmented Reality learning environments has become more logistically viable than ever before. In this paper, we focus on the development of a new mobile learning experience for a museum by combining multiple technologies to provide additional human–computer interaction possibilities. This both reduces barriers to entry for end-users and provides natural interaction methods. Using our method, we implemented a new approach to gesture-based interactions for Augmented Reality by combining two devices, a Leap Motion and a Microsoft HoloLens (1st Generation), via an intermediary device with the use of local-area networking. This was carried out with the intention of comparing this method against alternative forms of Augmented Reality to determine which implementation has the largest impact on adult learners’ ability to retain information. A control group has been used to establish data on memory retention without the use of Augmented Reality technology, along with three focus groups to explore the different methods and locations. Results found that adult learners retain the most overall information when being educated through a traditional lecture, with a statistically significant difference between the methods; however, the use of Augmented Reality resulted in a slower rate of knowledge decay between testing intervals. This contrasts with existing research, as adult learners did not respond to the technology in the same way that child and teenage audiences previously have, which suggests that prior research may not be generalisable to all audiences. Full article

21 pages, 775 KiB  
Article
Linking Personality and Trust in Intelligent Virtual Assistants
by Lisa Schadelbauer, Stephan Schlögl and Aleksander Groth
Multimodal Technol. Interact. 2023, 7(6), 54; https://doi.org/10.3390/mti7060054 - 25 May 2023
Cited by 1 | Viewed by 1655
Abstract
Throughout the last years, Intelligent Virtual Assistants (IVAs), such as Alexa and Siri, have increasingly gained in popularity. Yet, privacy advocates raise great concerns regarding the amount and type of data these systems collect and consequently process. Among many other things, it is technology trust which seems to be of high significance here, particularly when it comes to the adoption of IVAs, for they usually provide little transparency as to how they function and use personal and potentially sensitive data. While technology trust is influenced by many different socio-technical parameters, this article focuses on human personality and its connection to respective trust perceptions, which in turn may further impact the actual adoption of IVA products. To this end, we report on the results of an online survey (n=367). Findings show that on a scale from 0 to 100%, people trust IVAs 51.59% on average. Furthermore, the data point to a significant positive correlation between people’s propensity to trust in general technology and their trust in IVAs. Yet, they also show that those who exhibit a higher propensity to trust in technology tend to also have a higher affinity for technology interaction and are consequently more likely to adopt IVAs. Full article

20 pages, 5702 KiB  
Article
Revealing Unknown Aspects: Sparking Curiosity and Engagement with a Tourist Destination through a 360-Degree Virtual Tour
by Dimitra Petousi, Akrivi Katifori, Maria Boile, Lori Kougioumtzian, Christos Lougiakis, Maria Roussou and Yannis Ioannidis
Multimodal Technol. Interact. 2023, 7(5), 51; https://doi.org/10.3390/mti7050051 - 16 May 2023
Viewed by 2177
Abstract
Over the past decades, 360-degree virtual tours have been used to provide the public access to accurate representations of cultural heritage sites and museums. The COVID-19 pandemic has contributed to a rise in the popularity of virtual tours as a means of engaging with locations remotely and has raised an interesting question: How could we use such experiences to bring the public closer to locations that are otherwise unreachable in real life or not considered to be tourist destinations? In this study, we examine the effectiveness of promoting engagement with a city through the virtual presentation of unknown and possibly also inaccessible points of interest through a 360-degree panoramic virtual tour. The evaluation of the experience with 31 users through an online questionnaire confirms its potential to spark curiosity, promote engagement, foster reflection, and motivate users to explore the location and its attractions at their leisure, thus enabling them to experience it from their personal point of view. The outcomes highlight the need for further research to explore this potential and identify best practices for virtual experience design. Full article

16 pages, 1089 KiB  
Article
Team Success: A Mixed Methods Approach to Evaluating Virtual Team Leadership Behaviors
by Diana R. Sanchez, Amanda Rueda, Hana R. Zimman, Reese Haydon, Daniel Diaz and Kentaro Kawasaki
Multimodal Technol. Interact. 2023, 7(5), 48; https://doi.org/10.3390/mti7050048 - 05 May 2023
Viewed by 2359
Abstract
Virtual organizational teams have gained interest and popularity in recent years and have become more prevalent amidst the COVID-19 pandemic. Without an understanding of the dynamics of short-term, project-based virtual teams, organizational productivity and team relationship-building may suffer certain pitfalls in virtual communication and support. This manuscript aimed to expand what is currently known about short-term virtual team dynamics related to types of effective leadership behaviors. The present study employed a mixed-methods approach to understanding the dynamics of these teams at both the individual and team level. Small teams were formed and instructed to collaborate on a virtual survival task. Team-related outcomes were measured at the individual level, such as team coordination, team support, and team success. Additionally, distinct latent profiles of leadership behaviors were developed and analyzed at the team level. Team support, more so than team coordination, significantly predicted team success at the individual level, with instrumental support having the strongest effect. Distinct leadership behaviors emerged in teams and were classified through a latent profile analysis, but none of the profiles were significantly related to team performance scores. Demonstrating instrumental support in short-term virtual teams may improve team success. It is important to understand that distinct leadership behaviors exist, and future research should explore the impact of these leadership behaviors on other team-related outcomes. Full article

13 pages, 910 KiB  
Article
Differences of Training Structures on Stimulus Class Formation in Computational Agents
by Alexis Carrillo and Moisés Betancort
Multimodal Technol. Interact. 2023, 7(4), 39; https://doi.org/10.3390/mti7040039 - 04 Apr 2023
Viewed by 1608
Abstract
Stimulus Equivalence (SE) is a behavioural phenomenon in which organisms respond functionally to stimuli without explicit training. SE provides a framework in the experimental analysis of behaviour to study language, symbolic behaviour, and cognition. It is also a frequently discussed matter in interdisciplinary research, linking behaviour analysis with linguistics and neuroscience. Previous research has attempted to replicate SE with computational agents, mostly based on Artificial Neural Network (ANN) models. The aim of this paper was to analyse the effect of three Training Structures (TSs) on stimulus class formation in a simulation with ANNs as computational agents performing a classification task, in a matching-to-sample procedure. Twelve simulations were carried out as a product of the implementation of four ANN architectures on the three TSs. SE was not achieved, but two agents showed an emergent response on half of the transitivity test pairs on linear sequence TSs and reflexivity on one member of the class. The results suggested that an ANN with a large enough number of units in a hidden layer can perform a limited number of emergent relations within specific experimental conditions: reflexivity on B and transitivity on AC, when pairs AB and BC are trained on a three-member stimulus class and tested in a classification task. Reinforcement learning is proposed as the framework for further simulations. Full article

21 pages, 3502 KiB  
Article
EEG Correlates of Distractions and Hesitations in Human–Robot Interaction: A LabLinking Pilot Study
by Birte Richter, Felix Putze, Gabriel Ivucic, Mara Brandt, Christian Schütze, Rafael Reisenhofer, Britta Wrede and Tanja Schultz
Multimodal Technol. Interact. 2023, 7(4), 37; https://doi.org/10.3390/mti7040037 - 29 Mar 2023
Cited by 1 | Viewed by 1992
Abstract
In this paper, we investigate the effect of distractions and hesitations as a scaffolding strategy. Recent research points to the potential beneficial effects of a speaker’s hesitations on the listeners’ comprehension of utterances, although results from studies on this issue indicate that humans do not make strategic use of them. The role of hesitations and their communicative function in human-human interaction is a much-discussed topic in current research. To better understand the underlying cognitive processes, we developed a human–robot interaction (HRI) setup that allows the measurement of the electroencephalogram (EEG) signals of a human participant while interacting with a robot. We thereby address the research question of whether we find effects on single-trial EEG based on the distraction and the corresponding robot’s hesitation scaffolding strategy. To carry out the experiments, we leverage our LabLinking method, which enables interdisciplinary joint research between remote labs. This study could not have been conducted without LabLinking, as the two involved labs needed to combine their individual expertise and equipment to achieve the goal together. The results of our study indicate that the EEG correlates in the distracted condition are different from the baseline condition without distractions. Furthermore, we could differentiate the EEG correlates of distraction with and without a hesitation scaffolding strategy. This proof-of-concept study shows that LabLinking makes it possible to conduct collaborative HRI studies in remote laboratories and lays the first foundation for more in-depth research into robotic scaffolding strategies. Full article

16 pages, 54274 KiB  
Article
The Impact of Different Overlay Materials on the Tactile Detection of Virtual Straight Lines
by Patrick Coe, Grigori Evreinov and Roope Raisamo
Multimodal Technol. Interact. 2023, 7(4), 35; https://doi.org/10.3390/mti7040035 - 28 Mar 2023
Viewed by 1293
Abstract
To improve the perception of haptic feedback, materials and sense-modifier effects should be examined. Teflon, Nylon mesh, and Silicone overlays were tested in combination with lateral vibrations to study their impact on the tactile sense. A feelable point moving along a line was implemented through the use of a dynamically moving interference maximum generated via the offset actuation of four haptic exciters affixed to the corners of a Gorilla Glass surface. This feedback was presented to eight participants in a series of randomized experiments. Both the Nylon mesh and Teflon coverings showed a statistically significant (p < 0.05) improvement in user performance in the task of localizing dynamic haptic virtual straight lines. The Silicone covering, despite having three times greater friction than Gorilla Glass, had little or no impact on decision time, the number of task repetitions, or error rate (p > 0.05). The lateral vibration modifier (60 Hz) can also be used successfully, roughly doubling performance, at least for the Nylon mesh and Teflon coverings. Full article

10 pages, 1163 KiB  
Article
Online Platforms for Remote Immersive Virtual Reality Testing: An Emerging Tool for Experimental Behavioral Research
by Tobias Loetscher, Nadia Siena Jurkovic, Stefan Carlo Michalski, Mark Billinghurst and Gun Lee
Multimodal Technol. Interact. 2023, 7(3), 32; https://doi.org/10.3390/mti7030032 - 21 Mar 2023
Cited by 3 | Viewed by 1797
Abstract
Virtual Reality (VR) technology is gaining in popularity as a research tool for studying human behavior. However, the use of VR technology for remote testing is still an emerging field. This study aimed to evaluate the feasibility of conducting remote VR behavioral experiments that require millisecond timing. Participants were recruited via an online crowdsourcing platform and accessed a task on the classic cognitive phenomenon “Inhibition of Return” through a web browser using their own VR headset or desktop computer (68 participants in each group). The results confirm previous research showing that remote participants using desktop computers can effectively take part in time-critical cognitive experiments. However, inhibition of return was only partially replicated for the VR headset group. Exploratory analyses revealed that technical factors, such as headset type, were likely to significantly affect variability and must be mitigated to obtain accurate results. This study demonstrates the potential for remote VR testing to broaden the research scope and reach a larger participant population. Crowdsourcing services appear to be an efficient and effective way to recruit participants for remote behavioral testing using high-end VR headsets. Full article

23 pages, 2246 KiB  
Article
Need for UAI–Anatomy of the Paradigm of Usable Artificial Intelligence for Domain-Specific AI Applicability
by Hajo Wiemer, Dorothea Schneider, Valentin Lang, Felix Conrad, Mauritz Mälzer, Eugen Boos, Kim Feldhoff, Lucas Drowatzky and Steffen Ihlenfeldt
Multimodal Technol. Interact. 2023, 7(3), 27; https://doi.org/10.3390/mti7030027 - 28 Feb 2023
Cited by 3 | Viewed by 2398
Abstract
Data-driven methods based on artificial intelligence (AI) are powerful yet flexible tools for gathering knowledge and automating complex tasks in many areas of science and practice. Despite the rapid development of the field, the existing potential of AI methods to solve recent industrial, corporate and social challenges has not yet been fully exploited. Research shows the insufficient practicality of AI in domain-specific contexts as one of the main application hurdles. Focusing on industrial demands, this publication introduces a new paradigm in terms of applicability of AI methods, called Usable AI (UAI). Aspects of easily accessible, domain-specific AI methods are derived, which address essential user-oriented AI services within the UAI paradigm: usability, suitability, integrability and interoperability. The relevance of UAI is clarified by describing challenges, hurdles and peculiarities of AI applications in the production area, whereby the following user roles have been abstracted: developers of cyber–physical production systems (CPPS), developers of processes and operators of processes. The analysis shows that target artifacts, motivation, knowledge horizon and challenges differ for the user roles. Therefore, UAI shall enable domain- and user-role-specific adaptation of affordances accompanied by adaptive support of vertical and horizontal integration across the domains and user roles. Full article

30 pages, 2900 KiB  
Article
Developing Usability Guidelines for mHealth Applications (UGmHA)
by Eman Nasr, Wafaa Alsaggaf and Doaa Sinnari
Multimodal Technol. Interact. 2023, 7(3), 26; https://doi.org/10.3390/mti7030026 - 28 Feb 2023
Cited by 2 | Viewed by 2333
Abstract
Mobile health (mHealth) is a branch of electronic health (eHealth) technology that provides healthcare services using smartphones and wearable devices. However, most mHealth applications were developed without applying mHealth specialized usability guidelines. Although many researchers have used various guidelines to design and evaluate mHealth applications, these guidelines have certain limitations. First, some of them are general guidelines. Second, others are specified for mHealth applications; however, they only cover a few features of mHealth applications. Third, some of them did not consider accessibility needs for the elderly and people with special needs. Therefore, this paper proposes a new set of usability guidelines for mHealth applications (UGmHA) based on Quinones et al.’s formal methodology, which consists of seven stages starting from the Exploratory stage and ending with the Refining stage. What distinguishes these proposed guidelines is that they are easy to follow, consider the feature of accessibility for the elderly and people with special needs and cover different features of mHealth applications. In order to validate UGmHA, an experiment was conducted on two applications in Saudi Arabia using UGmHA versus other well-known usability guidelines to discover usability issues. The experimental results show that the UGmHA discovered more usability issues than did the other guidelines. Full article

13 pages, 711 KiB  
Article
Comparison of Using an Augmented Reality Learning Tool at Home and in a Classroom Regarding Motivation and Learning Outcomes
by Aldo Uriarte-Portillo, María Blanca Ibáñez, Ramón Zatarain-Cabada and María Lucía Barrón-Estrada
Multimodal Technol. Interact. 2023, 7(3), 23; https://doi.org/10.3390/mti7030023 - 23 Feb 2023
Cited by 5 | Viewed by 2434
Abstract
The recent pandemic brought on considerable changes in terms of learning activities, which were moved from in-person classroom-based lessons to virtual work performed at home in most world regions. One of the most considerable challenges faced by educators was keeping students motivated toward learning activities. Interactive learning environments in general, and augmented reality (AR)-based learning environments in particular, are thought to foster emotional and cognitive engagement when used in the classroom. This study aims to compare the motivation and learning outcomes of middle school students in two educational settings: in the classroom and at home. The study involved 55 middle school students using the AR application to practice basic chemistry concepts. The results suggested that students’ general motivation towards the activity was similar in both settings. However, students who worked at home reported better satisfaction and attention levels compared with those who worked in the classroom. Additionally, students who worked at home made fewer mistakes and achieved better grades compared with those who worked in the classroom. Overall, the study suggests that AR can be exploited as an effective learning environment for learning the basic principles of chemistry in home settings. Full article

30 pages, 12525 KiB  
Article
Roadmap for the Development of EnLang4All: A Video Game for Learning English
by Isabel Machado Alexandre, Pedro Faria Lopes and Cynthia Borges
Multimodal Technol. Interact. 2023, 7(2), 17; https://doi.org/10.3390/mti7020017 - 03 Feb 2023
Cited by 1 | Viewed by 1723
Abstract
Nowadays, people are more predisposed to being self-taught due to the availability of online information. With digitalization, information appears not only in its conventional state, as blogs, articles, newspapers, or e-books, but also in more interactive and enticing ways. Video games have become a transmission vehicle for information and knowledge, but they require specific treatment with respect to their presentation and the way in which users interact with them. This treatment includes usability guidelines and heuristics that identify video game properties favorable to a better user experience, helping to captivate users and to support assimilation of the content. In this research, usability guidelines and heuristics, complemented with recommendations from educational video game studies, were gathered and analyzed for application to a video game for English language learning called EnLang4All, which was also developed in the scope of this project and evaluated in terms of its reception by users. Full article

17 pages, 3082 KiB  
Article
Multimodal Communication and Peer Interaction during Equation-Solving Sessions with and without Tangible Technologies
by Daranee Lehtonen, Jorma Joutsenlahti and Päivi Perkkilä
Multimodal Technol. Interact. 2023, 7(1), 6; https://doi.org/10.3390/mti7010006 - 11 Jan 2023
Viewed by 2265
Abstract
Despite the increasing use of technologies in the classroom, there are concerns that technology-enhanced learning environments may hinder students’ communication and interaction. In this study, we investigated how tangible technologies can enhance students’ multimodal communication and interaction during equation-solving pair work compared to working without such technologies. A tangible app for learning equation solving was developed and tested in fourth- and fifth-grade classrooms with two class teachers and 24 students. Video data of the interventions were analysed using deductive and inductive content analysis. Coded data were also quantified for quantitative analysis. Additionally, teacher interview data were used to compare and contrast the findings. The findings showed that the tangible app better promoted students’ multimodal communication and peer interaction than working only with paper and pencil. When working in pairs, tangible-app students interacted with one another much more often and in more ways than their paper-and-pencil peers. The implications of this study are discussed in terms of its contributions to research on tangible technologies for learning, educational technology development, and the use of tangibles in classrooms to support students’ multimodal communication and peer interaction. Full article

18 pages, 4283 KiB  
Article
A Decision-Support System to Analyse Customer Satisfaction Applied to a Tourism Transport Service
by Célia M. Q. Ramos, Pedro J. S. Cardoso, Hortênsio C. L. Fernandes and João M. F. Rodrigues
Multimodal Technol. Interact. 2023, 7(1), 5; https://doi.org/10.3390/mti7010005 - 31 Dec 2022
Cited by 6 | Viewed by 2720
Abstract
Due to the perishable nature of tourist products, which impacts supply and demand, the possibility of analysing the relationship between customers’ satisfaction and service quality can contribute to increased revenues. Machine learning techniques allow the analysis of how these services can be improved or developed, how new markets can be reached, and how ideas can emerge to innovate and improve interaction with the customer. This paper presents a decision-support system for analysing consumer satisfaction, based on consumer feedback from the customer’s experience when transported by a transfer company, in the present case working in the Algarve region, Portugal. The results show how tourists perceive the service and which factors influence their level of satisfaction and sentiment. One of the results revealed that the first impression associated with good news is what creates the most value in the experience, i.e., “first impressions matter”. Full article

16 pages, 1354 KiB  
Article
A Machine Learning Approach to Identify the Preferred Representational System of a Person
by Mohammad Hossein Amirhosseini and Julie Wall
Multimodal Technol. Interact. 2022, 6(12), 112; https://doi.org/10.3390/mti6120112 - 17 Dec 2022
Viewed by 1623
Abstract
Whenever people think about something or engage in activities, internal mental processes will be engaged. These processes consist of sensory representations, such as visual, auditory, and kinesthetic, which are constantly being used, and they can have an impact on a person’s performance. Each person has a preferred representational system they use most when speaking, learning, or communicating, and identifying it can explain a large part of their exhibited behaviours and characteristics. This paper proposes a machine learning-based automated approach to identify the preferred representational system of a person that is used unconsciously. A novel methodology has been used to create a specific labelled conversational dataset, four different machine learning models (support vector machine, logistic regression, random forest, and k-nearest neighbour) have been implemented, and the performance of these models has been evaluated and compared. The results show that the support vector machine model has the best performance for identifying a person’s preferred representational system, as it has a better mean accuracy score compared to the other approaches after the performance of 10-fold cross-validation. The automated model proposed here can assist Neuro Linguistic Programming practitioners and psychologists to have a better understanding of their clients’ behavioural patterns and the relevant cognitive processes. It can also be used by people and organisations in order to achieve their goals in personal development and management. The two main knowledge contributions in this paper are the creation of the first labelled dataset for representational systems, which is now publicly available, and the use of machine learning techniques for the first time to identify a person’s preferred representational system in an automated way. Full article
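The evaluation protocol described here (comparing classifiers by mean accuracy over 10-fold cross-validation) can be sketched in a minimal, self-contained way. This is not the authors' pipeline or dataset: the 1-nearest-neighbour classifier and the synthetic two-class data below are stand-ins used only to illustrate the fold-and-average procedure.

```python
import numpy as np

def kfold_mean_accuracy(X, y, classify, k=10, seed=0):
    """Mean accuracy of `classify(train_X, train_y, test_X)` over k folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))          # shuffle once, then partition
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        pred = classify(X[train], y[train], X[test])
        accs.append(np.mean(pred == y[test]))
    return float(np.mean(accs))

def nearest_neighbour(train_X, train_y, test_X):
    """1-NN: each test point takes the label of its closest training point."""
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    return train_y[d.argmin(axis=1)]

# Synthetic, clearly separable two-class data (stand-in for the labelled
# conversational dataset described in the abstract).
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(kfold_mean_accuracy(X, y, nearest_neighbour))  # close to 1.0 here
```

Running each of the four candidate models through the same `kfold_mean_accuracy` loop and comparing the returned means is the essence of the model selection the abstract reports.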

21 pages, 7701 KiB  
Article
patchIT: A Multipurpose Patch Creation Tool for Image Processing Applications
by Anastasios L. Kesidis, Vassilios Krassanakis, Loukas-Moysis Misthos and Nikolaos Merlemis
Multimodal Technol. Interact. 2022, 6(12), 111; https://doi.org/10.3390/mti6120111 - 14 Dec 2022
Viewed by 4728
Abstract
Patch-based approaches in image processing are often preferable to working with the entire image. They provide an alternative representation of the image as a set of partial local sub-images (patches), which is a vital preprocessing step in many image processing applications. In this paper, a new software tool called patchIT is presented, providing an integrated framework suitable for the systematic and automated extraction of patches from images based on user-defined geometrical and spatial criteria. Patches can be extracted in both a sliding and random manner and can be exported either as images, MATLAB .mat files, or raw text files. The proposed tool offers further functionality, including masking operations that act as spatial filters, identifying candidate patch areas, as well as geometric transformations by applying patch value indexing. It also efficiently handles issues that arise in large-scale patch processing scenarios in terms of memory and time requirements. In addition, a use case in cartographic research is presented that utilizes patchIT for map evaluation purposes based on a visual heterogeneity indicator. The tool supports all common image file formats and efficiently processes bitonal, grayscale, color, and multispectral images. PatchIT is freely available to the scientific community under the third version of the GNU General Public License (GPL v3) on the GitHub platform. Full article
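The two extraction modes the abstract mentions, sliding and random, can be illustrated with a short language-neutral sketch. patchIT itself is MATLAB-based; the NumPy version below is only a hedged illustration of the underlying idea, with function names of our own choosing:

```python
import numpy as np

def sliding_patches(img, size, stride):
    """Extract square patches at every (stride-spaced) position: sliding mode."""
    h, w = img.shape[:2]
    return [img[y:y + size, x:x + size]
            for y in range(0, h - size + 1, stride)
            for x in range(0, w - size + 1, stride)]

def random_patches(img, size, n, rng=None):
    """Extract n patches at uniformly random valid positions: random mode."""
    rng = rng or np.random.default_rng(0)
    h, w = img.shape[:2]
    ys = rng.integers(0, h - size + 1, n)
    xs = rng.integers(0, w - size + 1, n)
    return [img[y:y + size, x:x + size] for y, x in zip(ys, xs)]

img = np.arange(64).reshape(8, 8)                 # toy 8x8 "image"
print(len(sliding_patches(img, 4, 2)))            # 3 positions per axis -> 9
print(random_patches(img, 4, 5)[0].shape)         # (4, 4)
```

Masking and per-patch geometric criteria, as offered by the tool, would amount to filtering this candidate-position list before slicing.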

20 pages, 2572 KiB  
Article
Virtual Reality in Health Science Education: Professors’ Perceptions
by Álvaro Antón-Sancho, Pablo Fernández-Arias and Diego Vergara
Multimodal Technol. Interact. 2022, 6(12), 110; https://doi.org/10.3390/mti6120110 - 14 Dec 2022
Cited by 10 | Viewed by 3495
Abstract
Virtual reality (VR) is a simulated experience in a three-dimensional (3D) computer-simulated world. Recent advances in technology position VR as a multipurpose technology in the healthcare sector and as a critical component in achieving Health 4.0. In this article, descriptive and correlational quantitative research is carried out on the assessments made by Latin American health sciences university professors of the didactic use of virtual reality technologies. The main objective was to analyze differences in the perceptions expressed according to the public or private ownership of the universities where the professors teach. In addition, gender and age gaps were identified in the assessments obtained from each of the types of universities. The results reveal that Latin American health science professors at private universities have a higher self-concept of their digital skills for the use of virtual reality in lectures. This greater self-concept also leads to a reduction in the gender and age gaps at the participating private universities with respect to their public counterparts. It is advisable to increase both faculty training in the didactic use of virtual reality and funding for its use, mainly in public universities. Full article

19 pages, 8339 KiB  
Article
Design and Implementation of a Personalizable Alternative Mouse and Keyboard Interface for Individuals with Limited Upper Limb Mobility
by Daniel Andreas, Hannah Six, Adna Bliek and Philipp Beckerle
Multimodal Technol. Interact. 2022, 6(12), 104; https://doi.org/10.3390/mti6120104 - 24 Nov 2022
Viewed by 1798
Abstract
People with neuromuscular diseases often experience limited upper limb mobility, which makes the handling of standard computer mice and keyboards difficult. Due to the importance of computers in private and professional life, this work aims at implementing an alternative mouse and keyboard interface that will allow for their efficient use by people with a neuromuscular disease. Due to the strongly differing symptoms of these diseases, personalization on the hardware and software levels is the focus of our work. The presented mouse alternative is based on a spectacle frame with an integrated motion sensor for head tracking, which enables the control of the mouse cursor position; the keyboard alternative consists of ten keys, which are used to generate word suggestions for the user input. The interface was tested in a user study involving three participants without disabilities, which showed the general functionality of the system and potential room for improvement. With an average throughput of 1.56 bits per second achieved by the alternative mouse and typing speeds of 8.44 words per minute obtained using the alternative keyboard, the proposed interface could be a promising input device for people with limited upper limb mobility. Full article
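Throughput figures such as the 1.56 bits per second reported here are conventionally computed in HCI pointing studies from Fitts' index of difficulty divided by movement time (the ISO 9241-9 style of measure); the abstract does not state the exact formulation used, so the sketch below, with made-up example numbers, is only an illustration of the standard calculation:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation: ID = log2(distance / width + 1), in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits/s: index of difficulty over movement time (s)."""
    return index_of_difficulty(distance, width) / movement_time

# Hypothetical trial: a 400 px movement to a 40 px target completed in 2.0 s.
print(round(throughput(400, 40, 2.0), 2))  # 1.73
```

Averaging this quantity over many trials and targets yields the kind of bits-per-second summary reported for the head-tracking mouse alternative.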

19 pages, 5214 KiB  
Article
Data as a Resource for Designing Digitally Enhanced Consumer Packaged Goods
by Gustavo Berumen, Joel Fischer and Martin Baumers
Multimodal Technol. Interact. 2022, 6(11), 101; https://doi.org/10.3390/mti6110101 - 17 Nov 2022
Viewed by 1972
Abstract
The incorporation of digital functionalities into consumer packaged goods (CPG) has the potential to improve our lives by supporting us in our daily practices. However, despite the increasing availability of data about their use, research is needed to explore how these data can be harnessed to create such digital enhancements. This paper explores how consumers can utilise data about interactions with CPGs to conceptualise their enhanced versions. We devised a data-inspired ideation approach, using data visualisations and design cards to facilitate the conceptualisation of enhanced CPGs. Analysing the role of data as expressed through participants’ comments and designs, we found that data served as a basis for the creation of unique concepts imbued with greater consideration for the experiences of others and attention to their own interests. Our study shows the value of empowering consumers through data to broaden and inform their contributions towards the creation of smart products. Full article

15 pages, 1643 KiB  
Article
Effects of Cognitive Behavioral Stress Management Delivered by a Virtual Human, Teletherapy, and an E-Manual on Psychological and Physiological Outcomes in Adult Women: An Experimental Test
by Kate Loveys, Michael Antoni, Liesje Donkin, Mark Sagar, William Xu and Elizabeth Broadbent
Multimodal Technol. Interact. 2022, 6(11), 99; https://doi.org/10.3390/mti6110099 - 14 Nov 2022
Cited by 4 | Viewed by 2175
Abstract
Technology may expand the reach of stress management to broader populations. However, issues with engagement can reduce intervention effectiveness. Technologies with highly social interfaces, such as virtual humans (VH), may offer advantages in this space. However, it is unclear how VH compare to telehealth and e-manuals at delivering psychological interventions. This experiment compared the effects of a single laboratory session of Cognitive Behavioral Stress Management (CBSM) delivered by a VH (VH-CBSM), human telehealth (T-CBSM), and an e-manual (E-CBSM) on psychological and physiological outcomes in a community sample of stressed adult women. A pilot randomized controlled trial (RCT) with a parallel, mixed design was conducted. Adult women (M age = 43.21, SD = 10.70) who self-identified as stressed were randomly allocated to VH-CBSM, T-CBSM, or E-CBSM, involving one 90 min session and homework. Perceived stress, stress management skills, negative affect, optimism, relaxation, and physiological stress were measured. Mixed factorial ANOVAs and pairwise comparisons with Bonferroni correction investigated main and interaction effects of time and condition. Participants’ data (N = 38) were analysed (12 VH-CBSM; 12 T-CBSM; 14 E-CBSM). Each condition significantly improved stress, negative affect, optimism, relaxation, and physiological stress over time with large effect sizes. No significant differences were found between conditions on outcomes. Overall, all three technologies showed promise for remotely delivering CBSM in a controlled setting. The findings suggest feasibility of the VH-CBSM delivery approach and support conducting a fully powered RCT to examine its effectiveness when delivering a full 10-week CBSM intervention. Full article

Review

Jump to: Research, Other

13 pages, 263 KiB  
Review
A Review of Virtual Reality for Individuals with Hearing Impairments
by Stefania Serafin, Ali Adjorlu and Lone Marianne Percy-Smith
Multimodal Technol. Interact. 2023, 7(4), 36; https://doi.org/10.3390/mti7040036 - 28 Mar 2023
Cited by 4 | Viewed by 4362
Abstract
Virtual Reality (VR) technologies have the potential to be applied in a clinical context to improve training and rehabilitation for individuals with hearing impairment. The introduction of such technologies in clinical audiology is in its infancy and requires devices that can be taken out of laboratory settings as well as a solid collaboration between researchers and clinicians. In this paper, we discuss the state of the art of VR in audiology with applications to measurement and monitoring of hearing loss, rehabilitation, and training, as well as the development of assistive technologies. We review papers that utilize VR delivered through a head-mounted display (HMD) and used individuals with hearing impairment as test subjects, or presented solutions targeted at individuals with hearing impairments, discussing their goals and results, and analyzing how VR can be a useful tool in hearing research. The review shows the potential of VR in testing and training individuals with hearing impairment, as well as the need for more research and applications in this domain. Full article
27 pages, 1935 KiB  
Review
A Review of Design and Evaluation Practices in Mobile Text Entry for Visually Impaired and Blind Persons
by Andreas Komninos, Vassilios Stefanis and John Garofalakis
Multimodal Technol. Interact. 2023, 7(2), 22; https://doi.org/10.3390/mti7020022 - 17 Feb 2023
Cited by 2 | Viewed by 1676
Abstract
Millions of people with vision impairment or vision loss face considerable barriers to using mobile technology and services due to the difficulty of text entry. In this paper, we review studies involving the design and evaluation of novel prototypes for mobile text entry by persons with vision loss or impairment. We identify the practices and standards of the research community and compare them against the practices in research with non-impaired persons. We find significant shortcomings in the methodological and result-reporting practices for both population types. By highlighting these issues, we hope to inspire more, higher-quality research in the domain of mobile text entry for persons with and without vision impairment. Full article

Other

26 pages, 1419 KiB  
Systematic Review
How Is Privacy Behavior Formulated? A Review of Current Research and Synthesis of Information Privacy Behavioral Factors
by Ioannis Paspatis, Aggeliki Tsohou and Spyros Kokolakis
Multimodal Technol. Interact. 2023, 7(8), 76; https://doi.org/10.3390/mti7080076 - 29 Jul 2023
Cited by 2 | Viewed by 1967
Abstract
What influences Information and Communications Technology (ICT) users’ privacy behavior? Several studies have shown that users state that they care about their personal data. Despite this, they perform unsafe privacy actions, such as neglecting to configure privacy settings. In this research, we present the results of an in-depth literature review of the factors affecting privacy behavior. We seek to investigate the underlying factors that influence individuals’ privacy-conscious behavior in the digital domain, as well as effective interventions to promote such behavior. Privacy decisions regarding the disclosure of personal information may have negative consequences for individuals’ lives, such as becoming a victim of identity theft or impersonation. Moreover, third parties may exploit this information for their own benefit, for example through targeted advertising practices. By identifying the factors that may affect social networking site (SNS) users’ privacy awareness, we can assist in creating methods for effective privacy protection and/or user-centered design. Examining the results of several research studies, we found evidence that privacy behavior is affected by a variety of factors, including individual ones (e.g., demographics) and contextual ones (e.g., financial exchanges). We synthesize a framework that aggregates the scattered factors found in the literature to affect privacy behavior. Our framework can be beneficial to academics and practitioners in the private and public sectors; for example, academics can use our findings to create specialized information privacy courses and theoretical or laboratory modules. Full article