Advances in Human–Machine Systems, Human–Machine Interfaces and Human Wearable Device Performance

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 September 2024

Special Issue Editors


Prof. Dr. Kai-Way Li
Guest Editor
Department of Industrial Management, Chung Hua University, Hsin-Chu, Taiwan
Interests: drone ergonomics; human–virtual object interactions; physical ergonomics; human movement science

Dr. Lu Peng
Guest Editor
Department of Engineering and Management, Nanjing Agricultural University, Nanjing 210095, China
Interests: human–machine interactions; safety and health at work; physical work assessments

Special Issue Information

Dear Colleagues,

The human–machine system (HMS), wherein the functions of humans and machines are integrated, is one of the core issues in human factors engineering and ergonomics. Its function and performance depend on human capability, the function of the machine, and how the system is integrated. HMSs exist wherever people are using or operating something, from using a screwdriver to navigating a commercial jet or cargo vessel. The human–machine interface (HMI), on the other hand, emphasizes the interactions of humans and machines. Communication between humans and machines via displays and control devices is a common HMI issue that includes the design and layout of control devices, the ways in which humans interact with input devices, and human responses to the outputs of machines or devices.

For this Special Issue, we welcome submissions related to HMSs and HMIs, functional allocations of humans and machines, and methods of system integration. We especially welcome submissions focusing on the design and layout of machine or system control panels (such as in a nuclear power plant control room); the human usage of wearable devices (such as augmented or virtual reality devices, exoskeletons, and special-purpose sensors); the operation of manned and unmanned vehicles (automobiles, vessels, aircraft, robots, etc.); human interactions with material-handling aids (such as carts, trolleys, and forklifts); and the environmental implications of human–machine systems.

Prof. Dr. Kai-Way Li
Dr. Lu Peng
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human–machine system
  • human–machine interface
  • human–computer interactions
  • wearable devices
  • manned and unmanned system operation
  • augmented reality
  • virtual reality
  • control room and control panel design and assessment
  • transportation safety
  • human–robot interactions
  • exoskeletons
  • mental workload
  • vigilance

Published Papers (4 papers)

Research

23 pages, 2709 KiB  
Article
Motion Sickness in Mixed-Reality Situational Awareness System
by Rain Eric Haamer, Nika Mikhailava, Veronika Podliesnova, Raido Saremat, Tõnis Lusmägi, Ana Petrinec and Gholamreza Anbarjafari
Appl. Sci. 2024, 14(6), 2231; https://doi.org/10.3390/app14062231 - 07 Mar 2024
Abstract
This research focuses on enhancing the user experience within a Mixed-Reality Situational Awareness System (MRSAS). The study employed the Simulator Sickness Questionnaire (SSQ) to gauge and quantify the user experience and to compare the effects of changes to the system. As SSQ results are highly dependent on inherent motion sickness susceptibility, the Motion Sickness Susceptibility Questionnaire (MSQ) was used to normalize the results. The experimental conditions were tested on a simulated setup, which was also compared to its real-life counterpart; the simulated setup was adjusted with post-processing effects to best match the conditions found in the real system. The test subjects primarily consisted of university students aged 17–28, both male and female, as well as a secondary set with a larger age range but predominantly male. In total, there were 41 unique test subjects in this study. The parameters analyzed were the field of view (FoV) of the headset, the effects of peripheral and general blurring, camera distortions, camera white balance, and users' adaptability to VR over time. All results are presented as the average of multiple user results, scaled by user MSQ. The findings suggest that SSQ scores increase rapidly in the first 10–20 min of testing and level off at around 40–50 min, that repeated exposure to VR reduces motion sickness buildup, and that a FoV of 49–54° is ideal for an MRSAS setup. Additionally, camera-based effects such as lens distortion and automatic white balance had negligible effects on motion sickness. A new MSQ-based SSQ normalization technique was also developed and used for comparison. While the experiments were conducted primarily to improve the physical Vegvisir system, the results may be applicable to a broader array of VR/MR awareness systems and can help improve the UX of future applications.
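
The paper's normalization formula is not reproduced on this page; the sketch below illustrates one plausible form of MSQ-based scaling, in which each participant's SSQ scores are weighted by their susceptibility relative to the group mean. The function name and the scaling rule itself are assumptions for illustration, not the authors' published method.

```python
import numpy as np

def msq_normalized_ssq(ssq_scores, msq_score, msq_group_mean):
    """Hypothetical MSQ-based normalization of SSQ scores.

    ssq_scores:     sequence of SSQ total scores for one participant
    msq_score:      that participant's MSQ susceptibility score
    msq_group_mean: mean MSQ score over all participants

    Dividing by relative susceptibility keeps highly susceptible
    participants from dominating per-condition averages.
    """
    weight = msq_score / msq_group_mean
    return np.asarray(ssq_scores, dtype=float) / weight

# Example with made-up numbers: a participant twice as susceptible
# as average has their SSQ scores halved before averaging.
print(msq_normalized_ssq([18.7, 29.9], msq_score=40.0, msq_group_mean=20.0))
```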

34 pages, 3694 KiB  
Article
Impact of Navigation Aid and Spatial Ability Skills on Wayfinding Performance and Workload in Indoor-Outdoor Campus Navigation: Challenges and Design
by Rabail Tahir and John Krogstie
Appl. Sci. 2023, 13(17), 9508; https://doi.org/10.3390/app13179508 - 22 Aug 2023
Cited by 1
Abstract
Wayfinding is important for everyone on a university campus to understand where they are and get to where they want to go, whether to attend a meeting or a class. This study explores how mobile navigation apps and individuals' spatial ability skills affect wayfinding performance and perceived workload in university campus wayfinding, including indoor–outdoor navigation, by focusing on three research objectives: (1) compare the effectiveness of Google Maps (an outdoor navigation app) and MazeMap (an indoor–outdoor navigation app) on wayfinding performance and perceived workload in university campus wayfinding; (2) investigate the impact of participants' spatial ability skills on their wayfinding performance and perceived workload, regardless of the navigation app used; and (3) highlight the challenges of indoor–outdoor university campus wayfinding using mobile navigation apps. To achieve this, a controlled experiment was conducted with 22 participants divided into a control group (using Google Maps) and an experimental group (using MazeMap). Participants were required to complete a time-bound wayfinding task of navigating to meeting rooms in different buildings within the Gløshaugen campus of the Norwegian University of Science and Technology in Trondheim, Norway. Participants were assessed on spatial ability tests, mental workload, and wayfinding performance using a questionnaire, observation notes, and a short follow-up interview about the challenges they faced in the task. The findings reveal a negative correlation between overall spatial ability score (spatial reasoning, spatial orientation, and sense of direction) and perceived workload (NASA TLX score and Subjective Workload Rating), and a negative correlation between sense-of-direction score and total hesitation during the wayfinding task. However, no significant difference was found between the Google Maps and MazeMap groups in wayfinding performance or perceived workload. The qualitative analysis resulted in five key categories of challenges in university campus wayfinding, with implications for designing navigation systems that better facilitate indoor–outdoor campus navigation.
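
As a rough illustration of the correlation analysis reported above, the sketch below uses SciPy's Pearson correlation with placeholder data; the variable names and values are assumptions, not the study's scores.

```python
from scipy.stats import pearsonr

# Placeholder data, one value per participant (not the study's data).
spatial_ability = [42, 55, 61, 38, 70, 49]  # combined spatial ability score
nasa_tlx = [63, 51, 40, 72, 35, 58]         # perceived workload (NASA TLX)

r, p = pearsonr(spatial_ability, nasa_tlx)
print(f"r = {r:.2f}, p = {p:.3f}")  # a negative r mirrors the reported finding
```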

17 pages, 21306 KiB  
Article
Presenting Job Instructions Using an Augmented Reality Device, a Printed Manual, and a Video Display for Assembly and Disassembly Tasks: What Are the Differences?
by Halimoh Dorloh, Kai-Way Li and Samsiya Khaday
Appl. Sci. 2023, 13(4), 2186; https://doi.org/10.3390/app13042186 - 08 Feb 2023
Cited by 4
Abstract
Component assembly and disassembly are fundamental tasks in manufacturing and the product service industry. Job instructions are required for novice and inexperienced workers to perform such tasks. Conventionally, job instructions are presented via a printed manual or a video display; augmented reality (AR) devices have recently emerged as an alternative for conveying such information. This research compared the presentation of job instructions via an AR display, a video display, and a printed manual for computer component assembly and disassembly tasks in terms of efficiency, quality, and usability. A Microsoft® HoloLens 2 device and a laptop computer were adopted to present the job instructions for the AR and video conditions, respectively. A total of 21 healthy adults, 11 male and 10 female, participated in the study. We found that the AR display led to the lowest efficiency but the best quality of the performed task. The differences in overall usability scores among the three job instruction types were not significant. Participants felt they needed significantly more support from a technical person for the AR device than for the printed manual. Male participants rated the AR display as easier to use than female participants did.

15 pages, 3412 KiB  
Article
Movement Time for Pointing Tasks in Real and Augmented Reality Environments
by Caijun Zhao, Kai Way Li and Lu Peng
Appl. Sci. 2023, 13(2), 788; https://doi.org/10.3390/app13020788 - 05 Jan 2023
Cited by 4
Abstract
Human–virtual target interactions are becoming increasingly common with the emergence and adoption of augmented reality (AR) devices, and they differ from interactions with real objects. Quantifying movement time (MT) for human–virtual target interactions is essential for AR-based interface and environment design. This study investigates MT when people interact with virtual targets and compares MT between real and AR environments. An experiment was conducted to measure the MT of pointing tasks on both a physical and a virtual calculator panel. A total of 30 healthy adults, 15 male and 15 female, participated. Each participant performed pointing tasks on both panels under varying conditions of panel inclination angle, hand movement direction, target key, and handedness. Participants wore an AR headset (Microsoft HoloLens 2) when pointing on the virtual panel; when pointing on the physical panel, they pointed at a panel drawn on a board. The results showed that panel type, inclination angle, gender, and handedness had significant (p < 0.0001) effects on MT. A new finding of this study was that the MT of pointing on the virtual panel was significantly (p < 0.0001) higher than that on the physical one: users of the HoloLens 2 AR device performed pointing tasks worse than on a physical panel. A novel revised Fitts's model was proposed to incorporate both a physical–virtual component and the panel's inclination angle in estimating MT. The index of difficulty and throughput of the pointing tasks using the physical and virtual panels were compared and discussed. This information can help AR designers promote the usability of their designs and improve the user experience of their products.
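
For context, Fitts's law models movement time as a linear function of an index of difficulty (ID) determined by target distance D and width W. The revised model in the paper adds terms for the panel type and inclination; the extended form below is a hedged illustration of that idea, not the authors' fitted equation.

```latex
% Classic Fitts's law
MT = a + b \cdot ID, \qquad ID = \log_2\!\left(\frac{2D}{W}\right)

% Illustrative extension (not the paper's fitted model):
% V = 1 for the virtual panel, 0 for the physical one;
% \theta = inclination angle of the panel
MT = a + b \cdot ID + c \cdot V + d \cdot \theta
```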
