Multimodal Technol. Interact., Volume 7, Issue 3 (March 2023) – 11 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
32 pages, 3665 KiB  
Article
Evaluating Social Impact of Smart City Technologies and Services: Methods, Challenges, Future Directions
by Elise Hodson, Teija Vainio, Michel Nader Sayún, Martin Tomitsch, Ana Jones, Meri Jalonen, Ahmet Börütecene, Md Tanvir Hasan, Irina Paraschivoiu, Annika Wolff, Sharon Yavo-Ayalon, Sari Yli-Kauhaluoma and Gareth W. Young
Multimodal Technol. Interact. 2023, 7(3), 33; https://doi.org/10.3390/mti7030033 - 22 Mar 2023
Cited by 5 | Viewed by 3970
Abstract
This study examines motivations, definitions, methods and challenges of evaluating the social impacts of smart city technologies and services. It outlines concepts of social impact assessment and discusses how social impact has been included in smart city evaluation frameworks. Thematic analysis is used to investigate how social impact is addressed in eight smart city projects that prioritise human-centred design across a variety of contexts and development phases, from design research and prototyping to completed and speculative projects. These projects are notable for their emphasis on human, organisational and natural stakeholders; inclusion, participation and empowerment; new methods of citizen engagement; and relationships between sustainability and social impact. At the same time, there are gaps in the evaluation of social impact in both the smart city indexes and the eight projects. Based on our analysis, we contend that more coherent, consistent and analytical approaches are needed to build narratives of change and to comprehend impacts before, during and after smart city projects. We propose criteria for social impact evaluation in smart cities and identify new directions for research. This is of interest to smart city developers, researchers, funders and policymakers establishing protocols and frameworks for evaluation, particularly as smart city concepts and complex technologies evolve in the context of equitable and sustainable development. Full article

10 pages, 1163 KiB  
Article
Online Platforms for Remote Immersive Virtual Reality Testing: An Emerging Tool for Experimental Behavioral Research
by Tobias Loetscher, Nadia Siena Jurkovic, Stefan Carlo Michalski, Mark Billinghurst and Gun Lee
Multimodal Technol. Interact. 2023, 7(3), 32; https://doi.org/10.3390/mti7030032 - 21 Mar 2023
Cited by 3 | Viewed by 1800
Abstract
Virtual Reality (VR) technology is gaining in popularity as a research tool for studying human behavior. However, the use of VR technology for remote testing is still an emerging field. This study aimed to evaluate the feasibility of conducting remote VR behavioral experiments that require millisecond timing. Participants were recruited via an online crowdsourcing platform and accessed a task on the classic cognitive phenomenon "Inhibition of Return" through a web browser using their own VR headset or desktop computer (68 participants in each group). The results confirm previous findings that remote participants using desktop computers can be tested effectively in time-critical cognitive experiments. However, inhibition of return was only partially replicated for the VR headset group. Exploratory analyses revealed that technical factors, such as headset type, were likely to significantly impact variability and must be mitigated to obtain accurate results. This study demonstrates the potential for remote VR testing to broaden the research scope and reach a larger participant population. Crowdsourcing services appear to be an efficient and effective way to recruit participants for remote behavioral testing using high-end VR headsets. Full article

13 pages, 2483 KiB  
Article
Toward Creating Software Architects Using Mobile Project-Based Learning Model (Mobile-PBL) for Teaching Software Architecture
by Lamis F. Al-Qora’n, Ali Jawarneh and Julius T. Nganji
Multimodal Technol. Interact. 2023, 7(3), 31; https://doi.org/10.3390/mti7030031 - 15 Mar 2023
Cited by 3 | Viewed by 1769
Abstract
Project-based learning (PBL) promotes increased levels of learning, deepens student understanding of acquired knowledge, and improves learning motivation. Students develop their ability to think and learn independently by searching for knowledge on their own, planning, exploring, and looking for solutions to practical problems. Information availability, student engagement, and motivation to learn all increase with mobile learning. The teaching process may therefore be enhanced by combining the two styles. This paper proposes and evaluates a teaching model, Mobile Project-Based Learning (Mobile-PBL), that combines the two learning styles. The paper investigates to what extent Mobile-PBL can benefit students. The traditional lecture method used to teach the software architecture module in the classroom is not sufficient to provide students with the practical experience necessary to pursue careers as software architects in the future. Therefore, the first author tested the use of the model for teaching the software architecture module at Philadelphia University’s Software Engineering Department on 62 students who registered for a software architecture course over three semesters. She compared the results of teaching with the model against the results obtained when using the project-based learning (PBL) approach alone. The students’ opinions regarding the approach, any problems they encountered, and any recommendations for improvement were collected through a focus group session at the end of each semester and through a survey that evaluated the effectiveness of the model. According to the findings, comments from the students were positive. The projects were well received by the students, who agreed that they gave them a good understanding of several course ideas and concepts, as well as the required practical experience. The students also mentioned a few difficulties encountered while working on the projects, including distraction from social media and the skills that educators and learners in higher education institutions are expected to have. Full article
(This article belongs to the Special Issue Designing EdTech and Virtual Learning Environments)

14 pages, 2023 KiB  
Article
Higher Education in the Pacific Alliance: Descriptive and Exploratory Analysis of the Didactic Potential of Virtual Reality
by Álvaro Antón-Sancho, Pablo Fernández-Arias and Diego Vergara
Multimodal Technol. Interact. 2023, 7(3), 30; https://doi.org/10.3390/mti7030030 - 15 Mar 2023
Viewed by 1271
Abstract
In this paper, we conducted descriptive quantitative research on the assessment of virtual reality (VR) technologies in higher education in the countries of the Pacific Alliance (PA). Specifically, differences among PA countries in these assessments were identified, and the gender and knowledge-area gaps in each country were analyzed. A validated quantitative questionnaire was used for this purpose. As a result, we found that PA professors express high ratings of VR but point out strong disadvantages regarding its use in lectures; in addition, they have a low self-concept of their digital competence. In this regard, notable differences among the PA countries were identified. Mexico is the country with the most marked gender gaps, while Chile has strong gaps by areas of knowledge. We give some recommendations to favor a homogeneous process of VR integration in higher education across the PA countries. Full article
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)

17 pages, 1200 KiB  
Article
Location- and Physical-Activity-Based Application for Japanese Vocabulary Acquisition for Non-Japanese Speakers
by Nguyen Tran, Shogo Kajimura and Yu Shibuya
Multimodal Technol. Interact. 2023, 7(3), 29; https://doi.org/10.3390/mti7030029 - 13 Mar 2023
Cited by 1 | Viewed by 1346
Abstract
There are various mobile applications to support foreign-language learning. While providing interactive designs and playful games to keep learners interested, these applications do not focus on motivating learners to continue learning over the long term. Our goal for this study was to develop an application that guides learners to achieve small goals by creating small lessons that are related to their real-life situations, with a main focus on vocabulary acquisition. Therefore, we present MiniHongo, a smartphone application that recognizes learners’ current locations and activities to compose lessons that comprise words strongly related to the learners’ real-time situations and can be studied in a short time period, thereby improving user motivation. MiniHongo uses a cloud service for its database and public application programming interfaces for location tracking. A between-subject experiment was conducted to evaluate MiniHongo, which involved comparing it with two other versions of itself. One composed lessons without location recognition, and the other composed lessons without location and activity recognition. The experimental results indicate that users have a strong interest in learning Japanese with MiniHongo, and some difference was found in how well users could memorize what they learned via the application. The results also suggest that the application requires further improvement. Full article

22 pages, 2829 KiB  
Article
Learning about Victims of Holocaust in Virtual Reality: The Main, Mediating and Moderating Effects of Technology, Instructional Method, Flow, Presence, and Prior Knowledge
by Miriam Mulders
Multimodal Technol. Interact. 2023, 7(3), 28; https://doi.org/10.3390/mti7030028 - 6 Mar 2023
Cited by 3 | Viewed by 1821
Abstract
The goal of the current study was to investigate the effects of a virtual reality (VR) simulation of Anne Frank’s hiding place on learning. In a 2 × 2 experiment, 132 middle school students learned about the living conditions of Anne Frank, a girl of Jewish heritage during the Second World War, through desktop VR (DVR) and head-mounted display VR (HMD-VR) (media conditions). Approximately half of each group engaged in an explorative vs. an expository learning approach (method condition). The exposition group received instructions on how to explore the hiding place stepwise, whereas the exploration group experienced it autonomously. In addition to the main effects of media and methods, the mediating effects of the learning-process variables presence and flow and the moderating effects of contextual variables (e.g., prior technical knowledge) were analyzed. The results revealed that HMD-VR led to significantly improved evaluation and, although not statistically significantly, improved perspective-taking with Anne, but less knowledge gain compared to DVR. Further results showed that adding instructions and segmentation within the exposition group led to significantly increased knowledge gain compared to the exploration group. For perspective-taking and evaluation, no differences were detected. A significant interaction between media and methods was not found. No moderating effects of contextual variables were observed, but mediating effects were: for example, the feeling of presence within VR can fully explain the relationship between media and learning. These results support the view that learning processes are crucial for learning in VR and that studies neglecting these learning processes may be confounded. Hence, the results point out that media-comparison studies are limited because they do not consider the complex interaction structures of media, instructional methods, learning processes, and contextual variables. Full article
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)

23 pages, 2246 KiB  
Article
Need for UAI–Anatomy of the Paradigm of Usable Artificial Intelligence for Domain-Specific AI Applicability
by Hajo Wiemer, Dorothea Schneider, Valentin Lang, Felix Conrad, Mauritz Mälzer, Eugen Boos, Kim Feldhoff, Lucas Drowatzky and Steffen Ihlenfeldt
Multimodal Technol. Interact. 2023, 7(3), 27; https://doi.org/10.3390/mti7030027 - 28 Feb 2023
Cited by 3 | Viewed by 2409
Abstract
Data-driven methods based on artificial intelligence (AI) are powerful yet flexible tools for gathering knowledge and automating complex tasks in many areas of science and practice. Despite the rapid development of the field, the existing potential of AI methods to solve recent industrial, corporate and social challenges has not yet been fully exploited. Research shows the insufficient practicality of AI in domain-specific contexts as one of the main application hurdles. Focusing on industrial demands, this publication introduces a new paradigm in terms of applicability of AI methods, called Usable AI (UAI). Aspects of easily accessible, domain-specific AI methods are derived, which address essential user-oriented AI services within the UAI paradigm: usability, suitability, integrability and interoperability. The relevance of UAI is clarified by describing challenges, hurdles and peculiarities of AI applications in the production area, whereby the following user roles have been abstracted: developers of cyber–physical production systems (CPPS), developers of processes and operators of processes. The analysis shows that target artifacts, motivation, knowledge horizon and challenges differ for the user roles. Therefore, UAI shall enable domain- and user-role-specific adaptation of affordances accompanied by adaptive support of vertical and horizontal integration across the domains and user roles. Full article

30 pages, 2900 KiB  
Article
Developing Usability Guidelines for mHealth Applications (UGmHA)
by Eman Nasr, Wafaa Alsaggaf and Doaa Sinnari
Multimodal Technol. Interact. 2023, 7(3), 26; https://doi.org/10.3390/mti7030026 - 28 Feb 2023
Cited by 2 | Viewed by 2345
Abstract
Mobile health (mHealth) is a branch of electronic health (eHealth) technology that provides healthcare services using smartphones and wearable devices. However, most mHealth applications were developed without applying specialized mHealth usability guidelines. Although many researchers have used various guidelines to design and evaluate mHealth applications, these guidelines have certain limitations. First, some of them are general guidelines. Second, others are specific to mHealth applications but cover only a few of their features. Third, some of them did not consider accessibility needs for the elderly and people with special needs. Therefore, this paper proposes a new set of usability guidelines for mHealth applications (UGmHA) based on Quinones et al.’s formal methodology, which consists of seven stages starting from the Exploratory stage and ending with the Refining stage. What distinguishes the proposed guidelines is that they are easy to follow, consider accessibility needs for the elderly and people with special needs, and cover different features of mHealth applications. In order to validate UGmHA, an experiment was conducted on two applications in Saudi Arabia using UGmHA versus other well-known usability guidelines to discover usability issues. The experimental results show that UGmHA discovered more usability issues than the other guidelines did. Full article

20 pages, 2207 KiB  
Article
A Literature Survey of How to Convey Transparency in Co-Located Human–Robot Interaction
by Svenja Y. Schött, Rifat Mehreen Amin and Andreas Butz
Multimodal Technol. Interact. 2023, 7(3), 25; https://doi.org/10.3390/mti7030025 - 25 Feb 2023
Cited by 5 | Viewed by 2751
Abstract
In human–robot interaction, transparency is essential to ensure that humans understand and trust robots. Understanding is vital from an ethical perspective and benefits interaction, e.g., through appropriate trust. While there is research on explanations and their content, the methods used to convey the explanations are underexplored. It remains unclear which approaches are used to foster understanding. To this end, we contribute a systematic literature review exploring how robot transparency is fostered in papers published in the ACM Digital Library and IEEE Xplore. We found that researchers predominantly rely on monomodal visual or verbal explanations to foster understanding. Commonly, these explanations are external, as opposed to being integrated in the robot design. This paper provides an overview of how transparency is communicated in human–robot interaction research and derives a classification with concrete recommendations for communicating transparency. Our results establish a solid base for consistent, transparent human–robot interaction designs. Full article
(This article belongs to the Special Issue Challenges in Human-Centered Robotics)

16 pages, 1472 KiB  
Article
Can AI-Oriented Requirements Enhance Human-Centered Design of Intelligent Interactive Systems? Results from a Workshop with Young HCI Designers
by Pietro Battistoni, Marianna Di Gregorio, Marco Romano, Monica Sebillo and Giuliana Vitiello
Multimodal Technol. Interact. 2023, 7(3), 24; https://doi.org/10.3390/mti7030024 - 25 Feb 2023
Cited by 2 | Viewed by 2339
Abstract
In this paper, we show that the evolution of artificial intelligence (AI) and its increased presence within an interactive system pushes designers to rethink the way in which AI and its users interact and to highlight users’ feelings towards AI. For novice designers, it is crucial to acknowledge that both the user and artificial intelligence possess decision-making capabilities. Such a process may involve mediation between humans and artificial intelligence. This process should also consider the mutual learning that can occur between the two entities over time. Therefore, we explain how to adapt the Human-Centered Design (HCD) process to give centrality to AI as the user, further empowering the interactive system, and to adapt the interaction design to the actual capabilities, limitations, and potentialities of AI. This is to encourage designers to explore the interactions between AI and humans and focus on the potential user experience. We achieve such centrality by extracting and formalizing a new category of AI requirements. We have provocatively named this extension: “Intelligence-Centered”. A design workshop with MSc HCI students was carried out as a case study supporting this change of perspective in design. Full article

13 pages, 711 KiB  
Article
Comparison of Using an Augmented Reality Learning Tool at Home and in a Classroom Regarding Motivation and Learning Outcomes
by Aldo Uriarte-Portillo, María Blanca Ibáñez, Ramón Zatarain-Cabada and María Lucía Barrón-Estrada
Multimodal Technol. Interact. 2023, 7(3), 23; https://doi.org/10.3390/mti7030023 - 23 Feb 2023
Cited by 5 | Viewed by 2447
Abstract
The recent pandemic brought considerable changes to learning activities, which in most world regions moved from in-person classroom-based lessons to virtual work performed at home. One of the most considerable challenges faced by educators was keeping students motivated toward learning activities. Interactive learning environments in general, and augmented reality (AR)-based learning environments in particular, are thought to foster emotional and cognitive engagement when used in the classroom. This study aims to compare the motivation and learning outcomes of middle school students in two educational settings: in the classroom and at home. The study involved 55 middle school students using the AR application to practice basic chemistry concepts. The results suggested that students’ general motivation towards the activity was similar in both settings. However, students who worked at home reported better satisfaction and attention levels compared with those who worked in the classroom. Additionally, students who worked at home made fewer mistakes and achieved better grades compared with those who worked in the classroom. Overall, the study suggests that AR can be exploited as an effective learning environment for learning the basic principles of chemistry in home settings. Full article
