Multimodal Technol. Interact., Volume 7, Issue 8 (August 2023) – 10 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
16 pages, 3526 KiB  
Article
Evaluation of the Road to Birth Software to Support Obstetric Problem-Based Learning Education with a Cohort of Pre-Clinical Medical Students
by Megan L. Hutchcraft, Robert C. Wallon, Shanna M. Fealy, Donovan Jones and Roberto Galvez
Multimodal Technol. Interact. 2023, 7(8), 84; https://doi.org/10.3390/mti7080084 - 21 Aug 2023
Viewed by 1403
Abstract
Integration of technology within problem-based learning curricula is expanding; however, information regarding student experiences and attitudes about the integration of such technologies is limited. This study aimed to evaluate pre-clinical medical student perceptions and use patterns of the “Road to Birth” (RtB) software, a novel program designed to support human maternal anatomy and physiology education. Second-year medical students at a large midwestern American university participated in a prospective, mixed-methods study. The RtB software is available as a mobile smartphone/tablet application and in immersive virtual reality. The program was integrated into problem-based learning activities across a three-week obstetrics teaching period. Student visuospatial ability, weekly program usage, weekly user satisfaction, and end-of-course focus group interview data were obtained. Survey data were analyzed and summarized using descriptive statistics. Focus group interview data were analyzed using inductive thematic analysis. Of the eligible students, 66% (19/29) consented to participate in the study, with 4 students contributing to the focus group interview. Students reported incremental knowledge increases on weekly surveys (69.2% week one, 71.4% week two, and 78.6% week three). Qualitative results indicated the RtB software was perceived as a useful educational resource; however, its interactive nature could have been further optimized. Students reported increased use of portable devices over time and preferred convenient options when using technology incorporated into the curriculum. This study identifies opportunities to better integrate technology into problem-based learning practices in medical education. Further empirical research is warranted with larger and more diverse student samples.

13 pages, 1112 KiB  
Article
Exploring a Novel Mexican Sign Language Lexicon Video Dataset
by Víctor Martínez-Sánchez, Iván Villalón-Turrubiates, Francisco Cervantes-Álvarez and Carlos Hernández-Mejía
Multimodal Technol. Interact. 2023, 7(8), 83; https://doi.org/10.3390/mti7080083 - 19 Aug 2023
Cited by 1 | Viewed by 1707
Abstract
This research explores a novel Mexican Sign Language (MSL) lexicon video dataset containing the dynamic gestures most frequently used in MSL. Each gesture consists of a set of different versions of videos recorded under uncontrolled conditions. The MX-ITESO-100 dataset is composed of a lexicon of 100 gestures and 5000 videos from three participants with different grammatical elements. Additionally, the dataset is evaluated with a two-step neural network model, achieving an accuracy greater than 99%, and thus serves as a benchmark for future training of machine learning models in computer vision systems. Finally, this research promotes a more inclusive environment within society and organizations, in particular for people with hearing impairments.

27 pages, 7639 KiB  
Article
Virtual Urban Field Studies: Evaluating Urban Interaction Design Using Context-Based Interface Prototypes
by Robert Dongas, Kazjon Grace, Samuel Gillespie, Marius Hoggenmueller, Martin Tomitsch and Stewart Worrall
Multimodal Technol. Interact. 2023, 7(8), 82; https://doi.org/10.3390/mti7080082 - 18 Aug 2023
Viewed by 1564
Abstract
In this study, we propose the use of virtual urban field studies (VUFS) through context-based interface prototypes for evaluating the interaction design of auditory interfaces. Virtual field tests use mixed-reality technologies to combine the fidelity of real-world testing with the affordability and speed of testing in the lab. In this paper, we apply this concept to rapidly test sound designs for autonomous vehicle (AV)–pedestrian interaction with a high degree of realism and fidelity. We also propose the use of psychometrically validated measures of presence in validating the verisimilitude of VUFS. Using mixed qualitative and quantitative methods, we analysed users’ perceptions of presence in our VUFS prototype and the relationship to our prototype’s effectiveness. We also examined the use of higher-order ambisonic spatialised audio and its impact on presence. Our results provide insights into how VUFS can be designed to facilitate presence, as well as design guidelines for how this can be leveraged.

22 pages, 6112 KiB  
Article
Creative Use of OpenAI in Education: Case Studies from Game Development
by Fiona French, David Levi, Csaba Maczo, Aiste Simonaityte, Stefanos Triantafyllidis and Gergo Varda
Multimodal Technol. Interact. 2023, 7(8), 81; https://doi.org/10.3390/mti7080081 - 18 Aug 2023
Cited by 1 | Viewed by 4714
Abstract
Educators and students have shown significant interest in the potential for generative artificial intelligence (AI) technologies to support student learning outcomes, for example, by offering personalized experiences, 24-hour conversational assistance, text editing and help with problem-solving. We review contemporary perspectives on the value of AI as a tool in an educational context and describe our recent research with undergraduate students, discussing why and how we integrated the OpenAI tools ChatGPT and Dall-E into the curriculum during the 2022–2023 academic year. A small cohort of games programming students in the School of Computing and Digital Media at London Metropolitan University was given a research and development assignment that explicitly required them to engage with OpenAI. They were tasked with evaluating OpenAI tools in the context of game development, demonstrating a working solution and reporting on their findings. We present five case studies that showcase some of the outputs from the students, and we discuss their work. This mode of assessment was both productive and popular, mapping to students’ interests and helping to refine their skills in programming, problem-solving, critical reflection and exploratory design.

32 pages, 1631 KiB  
Systematic Review
“From Gamers into Environmental Citizens”: A Systematic Literature Review of Empirical Research on Behavior Change Games for Environmental Citizenship
by Yiannis Georgiou, Andreas Ch. Hadjichambis, Demetra Paraskeva-Hadjichambi and Anastasia Adamou
Multimodal Technol. Interact. 2023, 7(8), 80; https://doi.org/10.3390/mti7080080 - 14 Aug 2023
Viewed by 1463
Abstract
As the global environmental crisis intensifies, there has been significant interest in behavior change games (BCGs) as a viable avenue for empowering players’ pro-environmentalism. This pro-environmental empowerment is well aligned with the notion of environmental citizenship (EC), which aims at transforming citizens into “environmental agents of change” who seek to achieve more sustainable lifestyles. Despite these arguments, studies in this area are thinly spread and fragmented across various research domains. This article is grounded in a systematic review of empirical articles on BCGs for EC covering a time span of fifteen years and published in peer-reviewed journals and conference proceedings, in order to provide an understanding of the scope of empirical research in the field. In total, 44 articles were reviewed to shed light on their methodological underpinnings, the gaming elements and persuasive strategies of the deployed BCGs, the EC actions facilitated by the BCGs, and the impact of BCGs on players’ EC competences. Our findings indicate that while BCGs seem to promote pro-environmental knowledge and attitudes, such an assertion is not fully warranted for pro-environmental behaviors. We reflect on our findings and provide future research directions to push forward the field of BCGs for EC.

21 pages, 8550 KiB  
Communication
Design and Research of a Sound-to-RGB Smart Acoustic Device
by Zlatin Zlatev, Julieta Ilieva, Daniela Orozova, Galya Shivacheva and Nadezhda Angelova
Multimodal Technol. Interact. 2023, 7(8), 79; https://doi.org/10.3390/mti7080079 - 13 Aug 2023
Viewed by 1566
Abstract
This paper presents a device that converts sound wave frequencies into colors to assist people with hearing impairments, addressing accessibility and communication challenges in the hearing-impaired community. The device uses a precise mathematical apparatus and carefully selected hardware to achieve accurate conversion of sound to color, supported by specialized automatic processing software suitable for standardization. Experimental evaluation shows excellent performance for frequencies below 1000 Hz, although limitations are encountered at higher frequencies, requiring further investigation into advanced noise filtering and hardware optimization. The device shows promise for various applications, including education, art, and therapy. The study acknowledges its limitations and suggests future research to generalize the models for converting sound frequencies to color and to improve usability for a broader range of hearing impairments. Feedback from the hearing-impaired community will play a critical role in further developing the device for practical use. Overall, this innovative device for converting sound to color represents a significant step toward improving accessibility and communication for people with hearing challenges. Continued research offers the potential to overcome challenges and extend the benefits of the device to a variety of areas, ultimately improving the quality of life for people with hearing impairments.
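The abstract does not specify the exact mapping the device applies, but the general idea of translating a dominant sound frequency into a color can be illustrated with a minimal sketch. The Python snippet below assumes a simple linear mapping of the 20–1000 Hz band (where the device reportedly performs best) onto the HSV hue circle; the band limits and the HSV choice are illustrative assumptions, not the paper's model.

```python
# Illustrative sketch only: assumes a linear frequency-to-hue mapping,
# which is NOT necessarily the mathematical apparatus used by the device.
import colorsys

def frequency_to_rgb(freq_hz, f_min=20.0, f_max=1000.0):
    """Map a sound frequency to an 8-bit RGB triple via the HSV hue circle."""
    # Clamp the frequency to the assumed supported band, then normalise to [0, 1].
    clamped = max(f_min, min(freq_hz, f_max))
    hue = (clamped - f_min) / (f_max - f_min)
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)  # full saturation and brightness
    return int(r * 255), int(g * 255), int(b * 255)

print(frequency_to_rgb(440.0))  # e.g. an A4 tone rendered as an RGB value
```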

19 pages, 16645 KiB  
Article
Multimodal Interaction for Cobot Using MQTT
by José Rouillard and Jean-Marc Vannobel
Multimodal Technol. Interact. 2023, 7(8), 78; https://doi.org/10.3390/mti7080078 - 03 Aug 2023
Cited by 1 | Viewed by 1805
Abstract
For greater efficiency, human–machine and human–robot interactions must be designed with multimodality in mind. To allow the use of several interaction modalities, such as voice, touch and gaze tracking, on several different devices (computers, smartphones, tablets, etc.) and to integrate possible connected objects, it is necessary to have an effective and secure means of communication between the different parts of the system. This is even more important when a collaborative robot (cobot) shares the same space and works very close to the human during their tasks. This study presents research work in the field of multimodal interaction for a cobot using the MQTT protocol, in virtual (Webots) and real worlds (ESP microcontrollers, Arduino, IOT2040). We show how MQTT can be used efficiently, with a common publish/subscribe mechanism for several entities of the system, in order to interact with connected objects (such as LEDs and conveyor belts), robotic arms (such as the Ned Niryo), or mobile robots. We compare the use of MQTT with that of the Firebase Realtime Database used in several of our previous research works. We show how a “pick–wait–choose–and place” task can be carried out jointly by a cobot and a human, and what this implies in terms of communication and ergonomic rules with regard to health and industrial concerns (people with disabilities, teleoperation).
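For readers unfamiliar with the publish/subscribe mechanism the abstract refers to, the sketch below illustrates it with the paho-mqtt Python client: one client subscribes to a command topic on behalf of the cobot controller, while any interaction modality (voice, touch, gaze) publishes to the same topic. The broker address, topic name and the "pick" payload are hypothetical placeholders, not taken from the paper.

```python
# Minimal MQTT publish/subscribe sketch (paho-mqtt, v1.x client API).
# Broker address, topic and payloads are illustrative placeholders.
import time
import paho.mqtt.client as mqtt

BROKER = "localhost"              # assumption: a local Mosquitto broker
COMMAND_TOPIC = "cobot/commands"  # hypothetical topic shared by all entities

def on_message(client, userdata, message):
    # The cobot controller (or any other subscriber) reacts to incoming commands.
    print(f"[{message.topic}] received: {message.payload.decode('utf-8')}")

# Subscriber side: e.g. the cobot controller.
cobot = mqtt.Client()
cobot.on_message = on_message
cobot.connect(BROKER, 1883)
cobot.subscribe(COMMAND_TOPIC)
cobot.loop_start()

# Publisher side: any modality maps its input to a command and publishes it.
ui = mqtt.Client()
ui.connect(BROKER, 1883)
ui.publish(COMMAND_TOPIC, "pick")  # e.g. a recognised voice command

time.sleep(1)      # allow the broker to deliver the message
cobot.loop_stop()
```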

14 pages, 1216 KiB  
Article
Enhancing Object Detection for VIPs Using YOLOv4_Resnet101 and Text-to-Speech Conversion Model
by Tahani Jaser Alahmadi, Atta Ur Rahman, Hend Khalid Alkahtani and Hisham Kholidy
Multimodal Technol. Interact. 2023, 7(8), 77; https://doi.org/10.3390/mti7080077 - 02 Aug 2023
Cited by 6 | Viewed by 1620
Abstract
Vision impairment affects an individual’s quality of life, posing challenges for visually impaired people (VIPs) in various aspects such as object recognition and daily tasks. Previous research has focused on developing visual navigation systems to assist VIPs, but there is a need for further improvements in accuracy, speed, and coverage of a wider range of object categories that may obstruct VIPs’ daily lives. This study presents a modified version of YOLOv4 with a ResNet-101 backbone network, trained on multiple object classes to assist VIPs in navigating their surroundings. In comparison to the Darknet backbone used in YOLOv4, the ResNet-101 backbone in YOLOv4_Resnet101 offers a deeper and more powerful feature extraction network. ResNet-101’s greater capacity enables better representation of complex visual patterns, which increases the accuracy of object detection. The proposed model is validated using the Microsoft Common Objects in Context (MS COCO) dataset. Image pre-processing techniques are employed to enhance the training process, and manual annotation ensures accurate labeling of all images. The module incorporates text-to-speech conversion, providing VIPs with auditory information to assist in obstacle recognition. The model achieves an accuracy of 96.34% on the test images obtained from the dataset after 4000 iterations of training, with a loss error rate of 0.073%.
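The detection network itself is beyond the scope of a short example, but the text-to-speech step described above can be sketched as follows. This uses the off-the-shelf pyttsx3 library and a hypothetical list of detected labels; it is an illustration of the general idea, not the authors' implementation.

```python
# Illustrative sketch: announce detected object labels to a visually impaired user.
# pyttsx3 is a generic offline TTS library; the detections below are placeholders.
import pyttsx3

def announce_detections(detections):
    """Speak each detected object class together with a rough position hint."""
    engine = pyttsx3.init()
    for label, position in detections:
        engine.say(f"{label} ahead, {position}")
    engine.runAndWait()  # blocks until all queued phrases have been spoken

# Hypothetical detector output for a single camera frame.
announce_detections([("chair", "slightly left"), ("person", "straight ahead")])
```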

26 pages, 1419 KiB  
Systematic Review
How Is Privacy Behavior Formulated? A Review of Current Research and Synthesis of Information Privacy Behavioral Factors
by Ioannis Paspatis, Aggeliki Tsohou and Spyros Kokolakis
Multimodal Technol. Interact. 2023, 7(8), 76; https://doi.org/10.3390/mti7080076 - 29 Jul 2023
Cited by 2 | Viewed by 2077
Abstract
What influences Information and Communications Technology (ICT) users’ privacy behavior? Several studies have shown that users state that they care about their personal data. Nevertheless, they perform unsafe privacy actions, such as neglecting to configure privacy settings. In this research, we present the results of an in-depth literature review on the factors affecting privacy behavior. We seek to investigate the underlying factors that influence individuals’ privacy-conscious behavior in the digital domain, as well as effective interventions to promote such behavior. Privacy decisions regarding the disclosure of personal information may have negative consequences for individuals’ lives, such as becoming a victim of identity theft, impersonation, etc. Moreover, third parties may exploit this information for their own benefit, such as through targeted advertising practices. By identifying the factors that may affect social networking site (SNS) users’ privacy awareness, we can assist in creating methods for effective privacy protection and/or user-centered design. Examining the results of several research studies, we found evidence that privacy behavior is affected by a variety of factors, including individual ones (e.g., demographics) and contextual ones (e.g., financial exchanges). We synthesize a framework that aggregates the scattered factors that have been found in the literature to affect privacy behavior. Our framework can be beneficial to academics and practitioners in the private and public sectors. For example, academics can utilize our findings to create specialized information privacy courses and theoretical or laboratory modules.

17 pages, 998 KiB  
Article
An Enhanced Diagnosis of Monkeypox Disease Using Deep Learning and a Novel Attention Model Senet on Diversified Dataset
by Shivangi Surati, Himani Trivedi, Bela Shrimali, Chintan Bhatt and Carlos M. Travieso-González
Multimodal Technol. Interact. 2023, 7(8), 75; https://doi.org/10.3390/mti7080075 - 27 Jul 2023
Cited by 2 | Viewed by 1313
Abstract
With the wide spread of Monkeypox and the increase in the weekly reported number of cases, it is observed that this outbreak continues to put human beings at risk. The early detection and reporting of this disease will help in monitoring and controlling its spread and hence support international coordination. For this purpose, the aim of this paper is to classify three diseases, viz. Monkeypox, Chickenpox and Measles, based on the provided image dataset using trained standalone DL models (InceptionV3, EfficientNet, VGG16) and a Squeeze-and-Excitation Network (SENet) attention model. The first step in implementing this approach is to search, collect and aggregate (if required) verified existing dataset(s). To the best of our knowledge, this is the first paper to propose the use of SENet-based attention models in the Monkeypox classification task, and it also aggregates two different datasets from distinct sources in order to improve the performance parameters. The unexplored SENet attention architecture is incorporated with the trunk branch of InceptionV3 (SENet+InceptionV3), EfficientNet (SENet+EfficientNet) and VGG16 (SENet+VGG16), and these architectures improve the accuracy of the Monkeypox classification task significantly. Comprehensive experiments on three datasets show that the proposed work achieves considerably high results with regard to accuracy, precision, recall and F1-score, improving the overall performance of classification. Thus, the proposed research work is advantageous for the enhanced diagnosis and classification of Monkeypox and can be utilized further by healthcare experts and researchers to confront its spread.
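For readers unfamiliar with squeeze-and-excitation, the sketch below shows a generic SE block in Keras: global average pooling (the "squeeze") followed by a two-layer bottleneck that produces per-channel gating weights (the "excitation"), which then rescale the feature maps. The reduction ratio of 16 and the example input shape are common defaults assumed for illustration, not necessarily the configuration used by the authors.

```python
# Generic squeeze-and-excitation (SE) block; defaults follow the original SENet
# design, not necessarily the configuration used in this article.
import tensorflow as tf
from tensorflow.keras import layers

def se_block(inputs, reduction=16):
    channels = inputs.shape[-1]
    # Squeeze: collapse each feature map to a single per-channel descriptor.
    x = layers.GlobalAveragePooling2D()(inputs)
    # Excitation: bottleneck MLP producing per-channel weights in (0, 1).
    x = layers.Dense(channels // reduction, activation="relu")(x)
    x = layers.Dense(channels, activation="sigmoid")(x)
    x = layers.Reshape((1, 1, channels))(x)
    # Scale: recalibrate the original feature maps channel-wise.
    return layers.Multiply()([inputs, x])

# Example: attach the block to the output of a convolutional backbone.
feature_maps = tf.keras.Input(shape=(7, 7, 512))
recalibrated = se_block(feature_maps)
model = tf.keras.Model(feature_maps, recalibrated)
```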
