Advances in Augmenting Human-Machine Interface

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 February 2023) | Viewed by 8255

Special Issue Editors


Dr. Avinash Singh (Guest Editor)
School of Computer Science, University of Technology Sydney, Sydney, Australia
Interests: brain–computer interface; machine learning; mixed reality; physiological sensing

Dr. Xian Tao (Guest Editor)
Institute of Automation, Chinese Academy of Sciences, Beijing, China
Interests: deep learning; electrode implantation robot for brain–computer interface; industrial vision detection

Dr. Carlos Tirado Cortes (Guest Editor)
iCinema Centre for Interactive Cinema Research, University of New South Wales, Sydney, NSW 2052, Australia
Interests: virtual reality; VR sickness; augmented reality; human–computer interaction; immersive data visualization

Special Issue Information

Dear Colleagues,

Augmenting the human–machine interface (HMI) is an emerging field that aims to seamlessly enhance interaction between humans and their physical environment through techniques such as brain–computer interfaces, mixed reality (virtual and augmented reality), and machine learning adapted to improve interaction. The main aim of this Special Issue is to gather high-quality submissions that highlight emerging applications and address recent breakthroughs in augmenting the human–machine interface, such as novel interaction approaches, machine learning methods utilizing physiological sensors, and real-time HMI systems. The topics of interest include, but are not limited to:

  • Brain–computer interface for HMI
  • Behavior adaptation and learning techniques in HMI
  • Machine learning techniques to enhance and adapt HMI
  • HMI in real-world settings
  • Applications and case studies of augmentation in HMI
  • Novel sensors and their applications for HMI

Dr. Avinash Singh
Dr. Xian Tao
Dr. Carlos Tirado Cortes
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Brain–computer interface
  • Machine learning
  • Sensors
  • Physiological sensing
  • Human–machine interface
  • Human–computer interface

Published Papers (4 papers)


Research

15 pages, 6186 KiB  
Article
Deep Comparisons of Neural Networks from the EEGNet Family
by Csaba Márton Köllőd, András Adolf, Kristóf Iván, Gergely Márton and István Ulbert
Electronics 2023, 12(12), 2743; https://doi.org/10.3390/electronics12122743 - 20 Jun 2023
Viewed by 1681
Abstract
A preponderance of brain–computer interface (BCI) publications proposing artificial neural networks for motor imagery (MI) electroencephalography (EEG) signal classification utilize one of the BCI Competition datasets. However, these databases encompass MI EEG data from a limited number of subjects, typically less than or equal to 10. Furthermore, the algorithms usually include only bandpass filtering as a means of reducing noise and increasing signal quality. In this study, we conducted a comparative analysis of five renowned neural networks (Shallow ConvNet, Deep ConvNet, EEGNet, EEGNet Fusion, and MI-EEGNet) utilizing open-access databases with a larger subject pool in conjunction with the BCI Competition IV 2a dataset to obtain statistically significant results. We employed the FASTER algorithm to eliminate artifacts from the EEG as a signal processing step and explored the potential for transfer learning to enhance classification results on artifact-filtered data. Our objective was to rank the neural networks; hence, in addition to classification accuracy, we introduced two supplementary metrics: accuracy improvement from chance level and the effect of transfer learning. The former is applicable to databases with varying numbers of classes, while the latter can underscore neural networks with robust generalization capabilities. Our metrics indicated that researchers should not disregard Shallow ConvNet and Deep ConvNet as they can outperform later published members of the EEGNet family.
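
A brief illustration (not taken from the article itself): assuming chance level is defined simply as 1/number-of-classes, the "accuracy improvement from chance level" metric can be sketched as follows; the authors' exact formulation may differ.

```python
# Minimal sketch of an "accuracy improvement from chance level" metric.
# Assumption: chance level is 1 / n_classes; the paper's exact definition may differ.

def improvement_from_chance(accuracy: float, n_classes: int) -> float:
    """Return how far a classification accuracy lies above chance for n_classes."""
    chance = 1.0 / n_classes
    return accuracy - chance

# Example: 0.55 accuracy on a 4-class MI task (chance 0.25) versus
# 0.70 accuracy on a 2-class task (chance 0.50).
print(improvement_from_chance(0.55, 4))  # ~0.30
print(improvement_from_chance(0.70, 2))  # ~0.20
```

Expressed this way, accuracies from datasets with different numbers of classes become directly comparable.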

17 pages, 3054 KiB  
Article
Effects of Exercise Type and Gameplay Mode on Physical Activity in Exergame
by Daeun Kim, Woohyun Kim and Kyoung Shin Park
Electronics 2022, 11(19), 3086; https://doi.org/10.3390/electronics11193086 - 27 Sep 2022
Cited by 4 | Viewed by 1388
Abstract
Exercise games (exergames), which combine exercise and video gaming, train people in a fun and competitive manner to lead a healthy lifestyle. Exergames promote more physical exertion and help users exercise more easily and independently in any place. Many studies have been conducted to evaluate the positive effects of exergames; however, most of them used heart rate as the main measure of exercise effect. In this study, we evaluate the effects of exercise according to the exercise type (rest, walking, tennis, and running) and gameplay mode (single, competition, and cooperation) of exergaming via quantitative measurements using electrocardiogram (ECG) and Kinect. The multiple-comparison results reveal that differences in physical activity measured with Kinect were statistically significant even for exergame conditions that showed no statistically significant differences in ECG. Running differed significantly from the other exercise types, and the competition mode differed significantly from the other gameplay modes.
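
For readers unfamiliar with the multiple-comparison analysis named in the abstract, a hypothetical sketch is shown below; the data, group sizes, and activity scores are invented for illustration and are not the study's measurements or pipeline.

```python
# Illustrative Tukey HSD multiple comparison across exercise types (hypothetical data).
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical per-session activity scores (e.g., Kinect-derived) for four exercise types.
scores = np.concatenate([
    rng.normal(1.0, 0.2, 20),  # rest
    rng.normal(2.0, 0.3, 20),  # walking
    rng.normal(2.5, 0.3, 20),  # tennis
    rng.normal(3.2, 0.4, 20),  # running
])
groups = ["rest"] * 20 + ["walking"] * 20 + ["tennis"] * 20 + ["running"] * 20

# Pairwise Tukey HSD comparisons, analogous in spirit to the study's analysis.
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```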

27 pages, 3451 KiB  
Article
Design and Field Test of a Mobile Augmented Reality Human–Machine Interface for Virtual Stops in Shared Automated Mobility On-Demand
by Fabian Hub and Michael Oehl
Electronics 2022, 11(17), 2687; https://doi.org/10.3390/electronics11172687 - 27 Aug 2022
Cited by 4 | Viewed by 2108
Abstract
Shared automated mobility on-demand (SAMOD) is considered a promising future mobility solution. Users book trips on demand via a smartphone, and service algorithms set up virtual stops (vStops) to which users then walk to board the automated shuttle. Navigation to, and identification of, the virtual pickup location, which has no reference in the real world, can be challenging. Providing users with an intuitive information system in that situation is essential to achieving high user acceptance of new automated mobility services. Our novel vStop human–machine interface (HMI) prototype for mobile augmented reality (AR) supports users with information anchored to the street environment. This work first presents the results of an online interview study (N = 21) conducted to conceptualize the HMI. The HMI was then prototyped by means of AR and evaluated (N = 45) regarding user experience (UX), workload, and acceptance. The results show that the AR prototype achieved high UX ratings, especially in terms of pragmatic quality. Furthermore, cognitive workload when using the HMI was low, and acceptance ratings were high. The results show the positive perception of AR for navigation tasks in general and the highly assistive character of the vStop prototype in particular. In the future, SAMOD services can provide customers with vStop HMIs to foster user acceptance and the smooth operation of their service.

14 pages, 2950 KiB  
Article
Modeling and Calibration of Active Thermal-Infrared Visual System for Industrial HMI
by Mengjuan Chen, Simeng Tian, Fan He, Qingqin Fu, Qingyi Gu and Baolin Wu
Electronics 2022, 11(8), 1230; https://doi.org/10.3390/electronics11081230 - 13 Apr 2022
Cited by 1 | Viewed by 1775
Abstract
In industrial applications of the human–machine interface (HMI), thermal-infrared cameras can detect objects that visible-spectrum cameras cannot. A thermal-infrared camera receives the energy radiated by a target through an infrared detector and produces a thermal image corresponding to the heat distribution field on the target surface. Because of this imaging principle, a thermal-infrared camera is not affected by the light source. Compared to visible-spectrum cameras, thermal imaging cameras can better detect defects with poor visual contrast but distinct temperature differences, as well as internal defects in products, and can therefore be used in many specific industrial inspection applications. However, thermal-infrared imaging suffers from thermal diffusion, which leads to noisy thermal-infrared images and limits its use in high-precision industrial environments. In this paper, we propose a high-precision measurement system for industrial HMI based on thermal-infrared vision. An accurate measurement model of the system was established to deal with the problems caused by imaging noise. The experiments conducted suggest that the proposed model and calibration method are valid for the active thermal-infrared visual system and achieve high-precision measurements.
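
The paper's specific measurement model is not reproduced here; as a generic illustration of calibrating a thermal-infrared camera, a standard OpenCV pinhole calibration from an actively heated circle-grid target might look like the sketch below. The target layout, spacing, and image folder are assumptions, not details from the article.

```python
# Generic pinhole-camera calibration sketch for a thermal-infrared camera using OpenCV.
# Assumptions: an actively heated circle-grid target is visible in the thermal images;
# this is NOT the measurement model proposed in the paper.
import glob
import cv2
import numpy as np

pattern_size = (7, 5)   # circles per row, rows - assumed symmetric grid target
spacing_mm = 20.0       # assumed physical spacing between circle centres

# 3D reference points of the grid in the target's own coordinate frame (Z = 0 plane).
obj_pts_template = np.array(
    [[c * spacing_mm, r * spacing_mm, 0.0]
     for r in range(pattern_size[1]) for c in range(pattern_size[0])],
    dtype=np.float32,
)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("thermal_calib/*.png"):   # hypothetical image folder
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, centers = cv2.findCirclesGrid(
        img, pattern_size, flags=cv2.CALIB_CB_SYMMETRIC_GRID)
    if found:
        obj_points.append(obj_pts_template)
        img_points.append(centers)
        image_size = img.shape[::-1]  # (width, height)

if img_points:
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print("RMS reprojection error:", rms)
    print("Intrinsic matrix K:\n", K)
```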
