Computer Vision and Smart Sensors for Human-Computer Interaction

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 10 September 2024 | Viewed by 10577

Special Issue Editors


Prof. Dr. Chern-Sheng Lin
Guest Editor
Department of Automatic Control Engineering, Feng Chia University, Taichung 40724, Taiwan
Interests: pattern recognition; image processing; human–machine interface design

Dr. Chih-Cheng Chen
Guest Editor
Department of Automatic Control Engineering, Feng Chia University, Taichung 40724, Taiwan
Interests: Internet of Things; deep learning; big data; RFID; data mining; hidden intelligent data

Prof. Dr. Tang-Chieh Liu
Guest Editor
Department of Electronic Engineering, Feng Chia University, Taichung 40724, Taiwan
Interests: integrated circuit design; flat-panel display drivers; high-speed electronic devices; numerical calculation

Special Issue Information

Dear Colleagues,

The scope of this Special Issue covers fundamental technologies used in electronic, mechanical, and electrical engineering. We are particularly interested in sensor techniques, including applications of human–computer interaction based on computer vision.

From a methodological point of view, the focus is on combining classical pattern recognition and deep learning techniques to create new computational paradigms for typical tasks in visual human–machine interaction, such as human pose detection, eye-tracking, and brainwave-controlled systems. On the practical side, we are looking for hardware and software components used in electronic, mechanical, and electrical engineering.

We invite papers focusing on the synthesis and integration of human–machine interfaces, the design of electronic devices, sensing technologies, the evaluation of various performance characteristics, and the exploration of their broad applications in industry, modeling and simulation, stimulation analyses, and so forth. Topics of interest include, but are not limited to, the following:

  • Electronic devices;
  • Computer vision;
  • Smart sensors;
  • Methods of detection;
  • Novel sensors;
  • Psychophysiology and performance;
  • Interface design;
  • Eye-tracking;
  • Brainwave-controlled systems.

Prof. Dr. Chern-Sheng Lin
Dr. Chih-Cheng Chen
Prof. Dr. Tang-Chieh Liu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Keywords

  • electronic devices
  • computer vision
  • smart sensors
  • methods of detection
  • novel sensors
  • psychophysiology and performance
  • interface design
  • eye-tracking
  • brainwave-controlled systems

Published Papers (4 papers)


Research

18 pages, 3045 KiB  
Article
Real-Time Monocular Skeleton-Based Hand Gesture Recognition Using 3D-Jointsformer
by Enmin Zhong, Carlos R. del-Blanco, Daniel Berjón, Fernando Jaureguizar and Narciso García
Sensors 2023, 23(16), 7066; https://doi.org/10.3390/s23167066 - 10 Aug 2023
Cited by 2 | Viewed by 1542
Abstract
Automatic hand gesture recognition in video sequences has widespread applications, ranging from home automation to sign language interpretation and clinical operations. The primary challenge lies in achieving real-time recognition while managing temporal dependencies that can impact performance. Existing methods employ 3D convolutional or Transformer-based architectures with hand skeleton estimation, but both have limitations. To address these challenges, a hybrid approach that combines 3D Convolutional Neural Networks (3D-CNNs) and Transformers is proposed. The method uses a 3D-CNN to compute high-level semantic skeleton embeddings, capturing the local spatial and temporal characteristics of hand gestures. A Transformer network with a self-attention mechanism is then employed to efficiently capture long-range temporal dependencies in the skeleton sequence. Evaluation on the Briareo and Multimodal Hand Gesture datasets yielded accuracies of 95.49% and 97.25%, respectively. Notably, this approach achieves real-time performance on a standard CPU, distinguishing it from methods that require specialized GPUs. In summary, the hybrid 3D-CNN and Transformer approach effectively addresses real-time recognition and the efficient handling of temporal dependencies, outperforming existing state-of-the-art methods in both accuracy and speed.
(This article belongs to the Special Issue Computer Vision and Smart Sensors for Human-Computer Interaction)
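
The hybrid pipeline the abstract describes (a 3D-CNN for local spatio-temporal skeleton embeddings, followed by a Transformer encoder for long-range temporal dependencies) can be sketched compactly. The following PyTorch snippet is a minimal illustration of that idea, not the authors' 3D-Jointsformer; the joint count, layer sizes, and class names are assumptions.

```python
# Minimal sketch of a hybrid 3D-CNN + Transformer over hand-skeleton
# sequences, in the spirit of the pipeline above. Shapes and layer
# sizes are illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn

class SkeletonGestureNet(nn.Module):
    def __init__(self, num_joints=21, num_classes=12, embed_dim=64):
        super().__init__()
        # 3D-CNN over (3 coordinate channels, time, a 1 x J joint grid):
        # captures local spatio-temporal structure per frame.
        self.cnn3d = nn.Sequential(
            nn.Conv3d(3, embed_dim, kernel_size=(3, 1, 3), padding=(1, 0, 1)),
            nn.BatchNorm3d(embed_dim),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # pool joints, keep time
        )
        # Transformer encoder: self-attention over frame embeddings
        # models long-range temporal dependencies.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        # x: (batch, 3, frames, 1, joints) - xyz coords per joint per frame
        feats = self.cnn3d(x)                     # (batch, dim, frames, 1, 1)
        feats = feats.flatten(2).transpose(1, 2)  # (batch, frames, dim)
        feats = self.encoder(feats)               # temporal self-attention
        return self.head(feats.mean(dim=1))       # pool over time, classify

model = SkeletonGestureNet()
logits = model(torch.randn(2, 3, 32, 1, 21))  # 2 clips, 32 frames, 21 joints
print(logits.shape)  # torch.Size([2, 12])
```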

18 pages, 5198 KiB  
Article
A Novel Steganography Method for Infrared Image Based on Smooth Wavelet Transform and Convolutional Neural Network
by Yu Bai, Li Li, Jianfeng Lu, Shanqing Zhang and Ning Chu
Sensors 2023, 23(12), 5360; https://doi.org/10.3390/s23125360 - 06 Jun 2023
Cited by 1 | Viewed by 1126
Abstract
Infrared images have been widely used in many research areas, such as target detection and scene monitoring. Therefore, the copyright protection of infrared images is very important. To accomplish the goal of image-copyright protection, a large number of image-steganography algorithms have been studied in the last two decades. Most existing image-steganography algorithms hide information based on the prediction error of pixels; consequently, reducing this prediction error is crucial for steganography algorithms. In this paper, we propose a novel framework, SSCNNP: a Convolutional Neural-Network Predictor (CNNP) based on the Smooth Wavelet Transform (SWT) and Squeeze-and-Excitation (SE) attention for infrared-image prediction, which combines a Convolutional Neural Network (CNN) with the SWT. First, the Super-Resolution Convolutional Neural Network (SRCNN) and the SWT are used to preprocess half of the input infrared image. Then, CNNP is applied to predict the other half. To improve the prediction accuracy of CNNP, an attention mechanism is added to the proposed model. The experimental results demonstrate that the proposed algorithm reduces the prediction error of the pixels by fully utilizing the features around each pixel in both the spatial and frequency domains. Moreover, the proposed model requires neither expensive equipment nor a large amount of storage space during training. Experimental results also show that the proposed algorithm performs well in terms of imperceptibility and watermarking capacity compared with advanced steganography algorithms, improving the PSNR by 0.17 on average at the same watermark capacity.
(This article belongs to the Special Issue Computer Vision and Smart Sensors for Human-Computer Interaction)
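
The squeeze-and-excitation attention that the abstract credits with improving the predictor's accuracy is a standard, compact building block. Below is a minimal PyTorch sketch of a generic SE block, offered as an illustration rather than the paper's SSCNNP code; the channel count and reduction ratio are assumptions.

```python
# Minimal squeeze-and-excitation (SE) channel-attention block, as a
# generic illustration of the attention added to the predictor; this
# is not the paper's SSCNNP code, and the reduction ratio is assumed.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # "squeeze": global context
        self.fc = nn.Sequential(              # "excitation": channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight feature channels before prediction

feats = torch.randn(4, 64, 32, 32)  # feature maps from a CNN predictor
print(SEBlock(64)(feats).shape)     # torch.Size([4, 64, 32, 32])
```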

24 pages, 5952 KiB  
Article
Design of Digital-Twin Human-Machine Interface Sensor with Intelligent Finger Gesture Recognition
by Dong-Han Mo, Chuen-Lin Tien, Yu-Ling Yeh, Yi-Ru Guo, Chern-Sheng Lin, Chih-Chin Chen and Che-Ming Chang
Sensors 2023, 23(7), 3509; https://doi.org/10.3390/s23073509 - 27 Mar 2023
Cited by 3 | Viewed by 5086
Abstract
In this study, the design of a digital-twin human-machine interface sensor (DT-HMIS) is proposed. This is a digital-twin sensor (DT-Sensor) that can meet the demands of human-machine automation collaboration in Industry 5.0. The DT-HMIS allows users/patients to add, modify, delete, query, and restore their previously memorized DT finger-gesture mapping model and programmable logic controller (PLC) logic program, enabling the operation or access of the programmable controller input-output (I/O) interface and extending the limb-collaboration capability of users/patients. The system has two main functions. The first is gesture-encoded virtual manipulation, which indirectly accesses the PLC through the DT mapping model and executes logic-control program instructions to control electronic peripherals, extending the user's physical capabilities. The second is gesture-based virtual manipulation, which helps non-verbal individuals compose spoken sentences through gesture commands, improving their ability to express themselves. The design method uses primitive image processing and eight-way dual-bit signal-processing algorithms to capture the movement of human finger gestures and convert it into digital signals. The system service maps control instructions by observing the digital signals of the DT-HMIS and drives motion control through mechatronic integration or speech-synthesis feedback, supporting operations that are inconvenient to perform manually or that require complex handheld physical tools. Based on the DT computer-vision human-machine interface sensor, the system can reflect the user's command status without additional wearable devices and promotes interaction with the virtual world. When used by patients, the system ensures that the user's virtual control is mapped to physical-device control, providing the convenience of independent operation while reducing caregiver fatigue. This study shows that the recognition accuracy can reach 99%, demonstrating practicality and application prospects. In future applications, users/patients will be able to interact virtually with other peripheral devices through the DT-HMIS to meet their own interaction needs and promote industry progress.
(This article belongs to the Special Issue Computer Vision and Smart Sensors for Human-Computer Interaction)
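
The core idea of gesture-encoded control, converting detected finger states into a digital code that indexes a command table, can be illustrated with a toy example. The sketch below is hypothetical and is not the DT-HMIS design; the one-bit-per-finger encoding and the command names are assumptions.

```python
# Hypothetical sketch of gesture-encoded control: each finger's
# extended/folded state becomes one bit, and the resulting code
# indexes a command table that a PLC program could consume.
# The encoding and command names are illustrative assumptions.

def encode_gesture(finger_states):
    """finger_states: 5 booleans (thumb..pinky), True = extended."""
    code = 0
    for i, extended in enumerate(finger_states):
        if extended:
            code |= 1 << i  # one bit per finger
    return code

# DT mapping model: gesture code -> logical command for the PLC program.
COMMAND_TABLE = {
    0b00001: "START_MOTOR",
    0b00011: "STOP_MOTOR",
    0b11111: "EMERGENCY_STOP",
}

def dispatch(finger_states):
    code = encode_gesture(finger_states)
    command = COMMAND_TABLE.get(code, "NO_OP")
    print(f"gesture code {code:05b} -> {command}")

dispatch([True, False, False, False, False])  # 00001 -> START_MOTOR
dispatch([True, True, True, True, True])      # 11111 -> EMERGENCY_STOP
```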

14 pages, 6082 KiB  
Article
Manipulating XXY Planar Platform Positioning Accuracy by Computer Vision Based on Reinforcement Learning
by Yi-Cheng Huang and Yung-Chun Chan
Sensors 2023, 23(6), 3027; https://doi.org/10.3390/s23063027 - 10 Mar 2023
Viewed by 1261
Abstract
With the rise of Industry 4.0 and artificial intelligence, the demand for industrial automation and precise control has increased. Machine learning can reduce the cost of machine-parameter tuning and improve high-precision positioning motion. In this study, a visual image recognition system was used to observe the displacement of an XXY planar platform. Ball-screw clearance, backlash, nonlinear frictional force, and other factors affect the accuracy and reproducibility of positioning. Therefore, the actual positioning error was determined by inputting images captured by a charge-coupled device camera into a reinforcement Q-learning algorithm. Temporal-difference learning and accumulated rewards were used to perform Q-value iteration to achieve optimal platform positioning. A deep Q-network model was constructed and trained through reinforcement learning to effectively estimate the XXY platform's positioning error and predict the command compensation from the error history. The constructed model was validated through simulations. The adopted methodology can be extended to other control applications based on the interaction between feedback measurement and artificial intelligence.
(This article belongs to the Special Issue Computer Vision and Smart Sensors for Human-Computer Interaction)
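
The temporal-difference Q-value iteration underlying the compensation scheme reduces to a one-line update. The tabular sketch below illustrates that update in a generic setting; the discretized error states, correction actions, and reward are assumptions, and the paper itself trains a deep Q-network rather than a table.

```python
# Tabular Q-learning sketch of the temporal-difference update behind
# positioning compensation. States (discretized positioning errors),
# actions (command corrections), and the reward are illustrative
# assumptions; the paper trains a deep Q-network instead of a table.
import numpy as np

n_states, n_actions = 21, 5   # e.g. error bins, correction steps (assumed)
alpha, gamma = 0.1, 0.9       # learning rate, discount factor
Q = np.zeros((n_states, n_actions))

def td_update(s, a, r, s_next):
    """One temporal-difference Q-value iteration step."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# Toy interaction loop: reward penalizes the residual positioning error.
rng = np.random.default_rng(0)
s = rng.integers(n_states)
for _ in range(1000):
    a = rng.integers(n_actions)         # explore a correction action
    s_next = rng.integers(n_states)     # stand-in for the platform response
    r = -abs(s_next - n_states // 2)    # closer to the center bin is better
    td_update(s, a, r, s_next)
    s = s_next
```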
