
Sensor Systems for Gesture Recognition II

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (28 February 2023) | Viewed by 32137

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor


Prof. Dr. Giovanni Saggio
Guest Editor
Department of Electronic Engineering, University of Rome Tor Vergata, 00133 Rome, Italy
Interests: wearable sensors; brain–computer interface; motion tracking; gait analysis; sensory glove; biotechnologies

Special Issue Information

Dear Colleagues,

Gesture recognition (GR) aims to interpret human gestures by means of mathematical algorithms. Its achievement will have widespread applications across many different fields, with impacts that can meaningfully improve our quality of life.

In the real world, GR can interpret communicative meaning at a distance or can “translate” sign language into written sentences or synthesized speech. In virtual reality (VR) and augmented reality (AR) worlds, GR can allow navigation and interaction, as occurs, for instance, with the user interface (UI) of a smart TV controlled by hand gestures.

The possible applications are countless, and we can mention just a few. In the health field, GR makes it possible to augment the motion capabilities of disabled people or to support surgeons in surgical settings. In gaming, GR frees gamers from input devices such as keyboards, mice, and joysticks. In the automotive industry, GR allows drivers to control car appliances (see the BMW 7 Series). In cinematography, GR is used to create computer-generated effects and creatures. In everyday life, GR is a means to interact with smartphone apps (e.g., uSens, Inc.; Gestigon GmbH). In human–robot interaction, GR keeps the operator in safe conditions while their gestures become the remote commands for tele-operating a robot. GR also enables music creation, converting human movements into sounds.

GR is achieved through (1) data acquisition, (2) pattern identification, and (3) interpretation (each of these phases can consist of different stages).

Data can be acquired by means of sensor systems based on different measurement principles, such as mechanical, magnetic, optical, acoustic, or inertial principles, or hybrid combinations of these. Within this frame, optical technologies are historically the most explored (since 1870, when animal movements were analyzed via image sequences) and represent the current state of the art. However, optical technologies are expensive and require a dedicated room and skilled personnel. Therefore, non-optical technologies, in particular those based on wearable sensors, are increasingly gaining importance.

To achieve GR, different methods can be adopted for data segmentation, feature extraction, and classification. These methods depend strongly on the type of data (determined by the adopted sensor system) and on the type of gestures to be recognized.

The (supervised or unsupervised) recognition of patterns in data (i.e., regularities, arrangements, characteristics) can be approached through machine learning or heuristics, and can be linked to artificial intelligence (AI).
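As a minimal, self-contained illustration of these phases on synthetic accelerometer data (all function names, window sizes, and the classifier choice are placeholders of ours, not prescriptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def segment(signal, window=128, stride=64):
    """(1) Acquisition/segmentation: slice a (N, 3) accelerometer
    stream into overlapping windows of shape (window, 3)."""
    return np.stack([signal[i:i + window]
                     for i in range(0, len(signal) - window + 1, stride)])

def extract_features(windows):
    """(2) Pattern identification: per-axis mean, std, and RMS features."""
    mean = windows.mean(axis=1)
    std = windows.std(axis=1)
    rms = np.sqrt((windows ** 2).mean(axis=1))
    return np.hstack([mean, std, rms])            # shape (n_windows, 9)

# (3) Interpretation: map feature vectors to gesture labels.
rng = np.random.default_rng(0)
stream = rng.standard_normal((1024, 3))           # synthetic 3-axis stream
labels = rng.integers(0, 2, size=15)              # synthetic window labels

X = extract_features(segment(stream))
clf = RandomForestClassifier(n_estimators=50).fit(X, labels)
print(clf.predict(X[:3]))                         # predicted gesture ids
```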

In sum, sensor systems for gesture recognition encompass an ensemble of topics that can be addressed individually or jointly, and that represent a great opportunity for further development, with widespread potential applications.

This call for papers invites technical contributions to a Sensors Special Issue providing an up-to-date overview of “Sensor Systems for Gesture Recognition”. This Special Issue will deal with theory, solutions, and innovative applications. Potential topics include, but are not limited to:

Sensor systems

Gesture recognition

Gesture recognition technologies

Gesture extraction methods

Gesture detection sensors

Wearable sensors

Human tracking

Human postures and movements

Motion detection and tracking

Hand gesture recognition

Sign language recognition

Gait analysis

Remote controlling

Pattern recognition for gesture recognition

Machine learning for gesture recognition

Applications of gesture recognition

Algorithms for gesture recognition

Prof. Dr. Giovanni Saggio
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Sensor systems
  • Wearable sensors
  • Video-based gesture recognition
  • Motion tracking
  • Motion detection
  • Pattern recognition
  • Hand gestures
  • Gait analysis

Published Papers (14 papers)


Research

26 pages, 3835 KiB  
Article
Benchmarking Dataset of Signals from a Commercial MEMS Magnetic–Angular Rate–Gravity (MARG) Sensor Manipulated in Regions with and without Geomagnetic Distortion
by Pontakorn Sonchan, Neeranut Ratchatanantakit, Nonnarit O-larnnithipong, Malek Adjouadi and Armando Barreto
Sensors 2023, 23(8), 3786; https://doi.org/10.3390/s23083786 - 07 Apr 2023
Cited by 1 | Viewed by 1397
Abstract
In this paper, we present the FIU MARG Dataset (FIUMARGDB) of signals from the tri-axial accelerometer, gyroscope, and magnetometer contained in a low-cost miniature magnetic–angular rate–gravity (MARG) sensor module (also known as magnetic inertial measurement unit, MIMU) for the evaluation of MARG orientation estimation algorithms. The dataset contains 30 files resulting from different volunteer subjects executing manipulations of the MARG in areas with and without magnetic distortion. Each file also contains reference (“ground truth”) MARG orientations (as quaternions) determined by an optical motion capture system during the recording of the MARG signals. The creation of FIUMARGDB responds to the increasing need for the objective comparison of the performance of MARG orientation estimation algorithms, using the same inputs (accelerometer, gyroscope, and magnetometer signals) recorded under varied circumstances, as MARG modules hold great promise for human motion tracking applications. This dataset specifically addresses the need to study and manage the degradation of orientation estimates that occurs when MARGs operate in regions with known magnetic field distortions. To our knowledge, no other dataset with these characteristics is currently available. FIUMARGDB can be accessed through the URL indicated in the conclusions section. It is our hope that the availability of this dataset will lead to the development of orientation estimation algorithms that are more resilient to magnetic distortions, for the benefit of fields as diverse as human–computer interaction, kinesiology, motor rehabilitation, etc.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition II)
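Since the dataset pairs MARG signals with quaternion ground truth, a natural figure of merit for an orientation estimation algorithm is the residual rotation angle between estimated and reference quaternions. A minimal sketch of that metric (the function name and example values are ours, not part of FIUMARGDB's tooling):

```python
import numpy as np

def quaternion_angle_error(q_est, q_ref):
    """Smallest rotation angle (rad) between two unit quaternions.
    The absolute value handles the q / -q double cover."""
    dot = np.abs(np.sum(q_est * q_ref, axis=-1))
    return 2.0 * np.arccos(np.clip(dot, -1.0, 1.0))

# Example: estimate vs. ground truth differing by a 10-degree rotation about z.
theta = np.deg2rad(10.0)
q_ref = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
q_est = np.array([1.0, 0.0, 0.0, 0.0])            # identity orientation
print(np.rad2deg(quaternion_angle_error(q_est, q_ref)))  # ~10.0
```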

12 pages, 1615 KiB  
Article
A System of Emotion Recognition and Judgment and Its Application in Adaptive Interactive Game
by Wenqian Lin, Chao Li and Yunjian Zhang
Sensors 2023, 23(6), 3250; https://doi.org/10.3390/s23063250 - 19 Mar 2023
Viewed by 1360
Abstract
A system of emotion recognition and judgment (SERJ) based on a set of optimal signal features is established, and an emotion adaptive interactive game (EAIG) is designed. The change in a player’s emotion can be detected with the SERJ during the process of playing the game. A total of 10 subjects were selected to test the EAIG and SERJ. The results show that the SERJ and designed EAIG are effective. The game adapted itself by judging the corresponding special events triggered by a player’s emotion and, as a result, enhanced the player’s game experience. It was found that, in the process of playing the game, a player’s perception of the change in emotion was different, and the test experience of a player had an effect on the test results. A SERJ based on a set of optimal signal features is better than one based on a conventional machine learning method.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition II)

23 pages, 20417 KiB  
Article
Sign2Pose: A Pose-Based Approach for Gloss Prediction Using a Transformer Model
by Jennifer Eunice, Andrew J, Yuichi Sei and D. Jude Hemanth
Sensors 2023, 23(5), 2853; https://doi.org/10.3390/s23052853 - 06 Mar 2023
Cited by 6 | Viewed by 2415
Abstract
Word-level sign language recognition (WSLR) is the backbone for continuous sign language recognition (CSLR) that infers glosses from sign videos. Finding the relevant gloss from the sign sequence and detecting explicit boundaries of the glosses from sign videos is a persistent challenge. In this paper, we propose a systematic approach for gloss prediction in WSLR using the Sign2Pose gloss prediction transformer model. The primary goal of this work is to enhance WSLR’s gloss prediction accuracy with reduced time and computational overhead. The proposed approach uses hand-crafted features rather than automated feature extraction, which is computationally expensive and less accurate. A modified key frame extraction technique is proposed that uses histogram difference and Euclidean distance metrics to select and drop redundant frames. To enhance the model’s generalization ability, pose vector augmentation using perspective transformation along with joint angle rotation is performed. Further, for normalization, we employed YOLOv3 (You Only Look Once) to detect the signing space and track the hand gestures of the signers in the frames. Experiments on the WLASL datasets achieved top-1 recognition accuracies of 80.9% on WLASL100 and 64.21% on WLASL300, surpassing state-of-the-art approaches. The integration of key frame extraction, augmentation, and pose estimation improved the performance of the proposed gloss prediction model by increasing its precision in locating minor variations in body posture. We observed that introducing YOLOv3 improved gloss prediction accuracy and helped prevent model overfitting. Overall, the proposed model showed a 17% performance improvement on the WLASL100 dataset.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition II)
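As a rough sketch of the key-frame idea, dropping frames whose grey-level histogram barely differs (in Euclidean distance) from the last kept frame's, under assumed bin counts and thresholds rather than the authors' exact settings:

```python
import cv2
import numpy as np

def keyframes_by_histogram(frames, threshold=0.25):
    """Keep a frame only if its grey-level histogram differs enough
    (Euclidean distance) from the last kept frame's histogram."""
    kept, last_hist = [], None
    for idx, frame in enumerate(frames):
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([grey], [0], None, [64], [0, 256]).ravel()
        hist /= hist.sum() + 1e-9                 # normalise to a distribution
        if last_hist is None or np.linalg.norm(hist - last_hist) > threshold:
            kept.append(idx)
            last_hist = hist
    return kept

# Synthetic clip: 10 identical dark frames, then a brighter scene change.
clip = [np.zeros((64, 64, 3), np.uint8)] * 10 + \
       [np.full((64, 64, 3), 200, np.uint8)] * 5
print(keyframes_by_histogram(clip))               # -> [0, 10]
```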

14 pages, 3817 KiB  
Article
Sensor-Based Activity Recognition Using Frequency Band Enhancement Filters and Model Ensembles
by Hyuga Tsutsumi, Kei Kondo, Koki Takenaka and Tatsuhito Hasegawa
Sensors 2023, 23(3), 1465; https://doi.org/10.3390/s23031465 - 28 Jan 2023
Cited by 1 | Viewed by 1324
Abstract
Deep learning methods are widely used in sensor-based activity recognition, contributing to improved recognition accuracy. Accelerometer and gyroscope data are mainly used as input to the models. Accelerometer data are sometimes converted to a frequency spectrum. However, data augmentation based on frequency characteristics has not been thoroughly investigated. This study proposes an activity recognition method that uses ensemble learning and filters that emphasize the frequency that is important for recognizing a certain activity. To realize the proposed method, we experimentally identified the important frequency of various activities by masking some frequency bands in the accelerometer data and comparing the accuracy using the masked data. To demonstrate the effectiveness of the proposed method, we compared its accuracy with and without enhancement filters during training and testing and with and without ensemble learning. The results showed that applying a frequency band enhancement filter during training and testing and ensemble learning achieved the highest recognition accuracy. In order to demonstrate the robustness of the proposed method, we used four different datasets and compared the recognition accuracy between a single model and a model using ensemble learning. As a result, in three of the four datasets, the proposed method showed the highest recognition accuracy, indicating the robustness of the proposed method.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition II)
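One plausible reading of a band enhancement filter, boosting an activity-relevant frequency band of the accelerometer signal before recognition, can be sketched as follows; the Butterworth design, band edges, and gain are our assumptions, not the paper's tuned values:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def enhance_band(x, low_hz, high_hz, fs, gain=1.0):
    """Add a band-pass-filtered copy of the signal back onto itself,
    emphasising the [low_hz, high_hz] band by a factor of (1 + gain)."""
    b, a = butter(4, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    return x + gain * filtfilt(b, a, x)

fs = 50.0                                         # typical wearable rate (Hz)
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 10 * t)
y = enhance_band(x, 8.0, 12.0, fs, gain=2.0)      # emphasise ~10 Hz content
```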

20 pages, 2080 KiB  
Article
k-Tournament Grasshopper Extreme Learner for FMG-Based Gesture Recognition
by Rim Barioul and Olfa Kanoun
Sensors 2023, 23(3), 1096; https://doi.org/10.3390/s23031096 - 18 Jan 2023
Cited by 2 | Viewed by 1785
Abstract
The recognition of hand signs is essential for several applications. Due to the variation of possible signals and the complexity of sensor-based systems for hand gesture recognition, a new artificial neural network algorithm providing high accuracy with a reduced architecture and automatic feature selection is needed. In this paper, a novel classification method based on an extreme learning machine (ELM), supported by an improved grasshopper optimization algorithm (GOA) as a core for a weight-pruning process, is proposed. The k-tournament grasshopper optimization algorithm was implemented to select and prune the ELM weights, resulting in the proposed k-tournament grasshopper extreme learner (KTGEL) classifier. Myographic methods, such as force myography (FMG), deliver interesting signals that can build the basis for hand sign recognition. FMG was investigated to limit the number of sensors at suitable positions and provide adequate signal processing algorithms for prospective implementation in wearable embedded systems. Based on the proposed KTGEL, the number of sensors and the effect of the number of subjects were investigated in the first stage. It was shown that, as the number of subjects participating in the data collection increased, eight was the minimal number of sensors needed for acceptable sign recognition performance. Moreover, implemented with 3000 hidden nodes, after the feature selection wrapper, the ELM had both a microaverage precision and a microaverage sensitivity of 97% for the recognition of a set of gestures, including a middle ambiguity level. The KTGEL reduced the hidden nodes to only 1000, reaching the same total sensitivity with a reduction in total precision of only 1%, without needing an additional feature selection method.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition II)
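The ELM core itself is compact: a fixed random hidden layer followed by a closed-form least-squares fit of the output weights. A minimal numpy sketch, without the k-tournament grasshopper pruning step:

```python
import numpy as np

class TinyELM:
    """Extreme learning machine: fixed random hidden layer, output
    weights fitted in closed form via the pseudo-inverse."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        Y = np.eye(y.max() + 1)[y]                # one-hot targets
        self.beta = np.linalg.pinv(self._hidden(X)) @ Y
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))                 # e.g., 8 FMG channels
y = (X[:, 0] > 0).astype(int)                     # toy two-class labels
print((TinyELM(64).fit(X, y).predict(X) == y).mean())  # training accuracy
```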

18 pages, 41460 KiB  
Article
Hand Gesture Recognition Using EMG-IMU Signals and Deep Q-Networks
by Juan Pablo Vásconez, Lorena Isabel Barona López, Ángel Leonardo Valdivieso Caraguay and Marco E. Benalcázar
Sensors 2022, 22(24), 9613; https://doi.org/10.3390/s22249613 - 08 Dec 2022
Cited by 9 | Viewed by 4705
Abstract
Hand gesture recognition (HGR) systems based on electromyography (EMG) and inertial measurement unit (IMU) signals have been studied for different applications in recent years. Most commonly, cutting-edge HGR methods are based on supervised machine learning. However, reinforcement learning (RL) techniques have shown promise as a viable option for classifying EMGs, with advantages such as strong classification performance and online learning from experience. In this work, we developed an HGR system made up of the following stages: pre-processing, feature extraction, classification, and post-processing. For the classification stage, we built an RL-based agent capable of learning to classify and recognize eleven hand gestures—five static and six dynamic—using a deep Q-network (DQN) algorithm based on EMG and IMU information. The proposed system uses a feed-forward artificial neural network (ANN) to represent the agent’s policy. We carried out the same experiments with two different types of sensors, the Myo armband sensor and the G-force sensor, to compare their performance. We performed experiments using training, validation, and test set distributions, and the results were evaluated for user-specific HGR models. The final accuracy results demonstrated that the best model reached up to 97.50%±1.13% and 88.15%±2.84% for the classification and recognition, respectively, of static gestures, and 98.95%±0.62% and 90.47%±4.57% for the classification and recognition, respectively, of dynamic gestures with the Myo armband sensor. The results obtained in this work demonstrate that RL methods such as the DQN are capable of learning a policy from online experience to classify and recognize static and dynamic gestures using EMG and IMU signals.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition II)
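As a sketch of the classification-as-RL idea, treating each gesture label as an action and rewarding correct labels, with the architecture, reward, and hyperparameters as placeholders rather than the authors' configuration:

```python
import torch
import torch.nn as nn

n_features, n_gestures = 40, 11                   # e.g., EMG+IMU feature size

q_net = nn.Sequential(                            # feed-forward Q-network
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, n_gestures))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_step(state, label, epsilon=0.1):
    """One Q-learning step: choose a gesture epsilon-greedily, receive
    reward 1 for the correct label, 0 otherwise. Each window is treated
    as a terminal state, so the TD target reduces to the reward."""
    q = q_net(state)
    action = (torch.randint(n_gestures, ()) if torch.rand(()) < epsilon
              else q.argmax())
    reward = float(action.item() == label)
    loss = (q[action] - reward) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    return action.item(), reward

state = torch.randn(n_features)                   # one feature window
print(td_step(state, label=3))
```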

19 pages, 9085 KiB  
Article
sEMG-Based Hand Posture Recognition and Visual Feedback Training for the Forearm Amputee
by Jongman Kim, Sumin Yang, Bummo Koo, Seunghee Lee, Sehoon Park, Seunggi Kim, Kang Hee Cho and Youngho Kim
Sensors 2022, 22(20), 7984; https://doi.org/10.3390/s22207984 - 19 Oct 2022
Cited by 5 | Viewed by 1585
Abstract
sEMG-based gesture recognition is useful for human–computer interactions, especially for technology supporting rehabilitation training and the control of electric prostheses. However, high variability in the sEMG signals of untrained users degrades the performance of gesture recognition algorithms. In this study, the hand posture recognition algorithm and radar plot-based visual feedback training were developed using multichannel sEMG sensors. Ten healthy adults and one bilateral forearm amputee participated by repeating twelve hand postures ten times. The visual feedback training was performed for two days and five days in healthy adults and a forearm amputee, respectively. Artificial neural network classifiers were trained with two types of feature vectors: a single feature vector and a combination of feature vectors. The classification accuracy of the forearm amputee increased significantly after three days of hand posture training. These results indicate that the visual feedback training efficiently improved the performance of sEMG-based hand posture recognition by reducing variability in the sEMG signal. Furthermore, a bilateral forearm amputee was able to participate in the rehabilitation training by using a radar plot, and the radar plot-based visual feedback training would help the amputees to control various electric prostheses.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition II)
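For context, classifiers of this kind typically consume time-domain sEMG features such as the following; this is a generic sketch, not the paper's exact feature set:

```python
import numpy as np

def semg_features(window):
    """Classic time-domain sEMG features for one channel."""
    mav = np.mean(np.abs(window))                  # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))            # root mean square
    wl = np.sum(np.abs(np.diff(window)))           # waveform length
    zc = np.sum(window[:-1] * window[1:] < 0)      # zero crossings
    return np.array([mav, rms, wl, zc])

rng = np.random.default_rng(0)
window = rng.standard_normal(200)                  # one 200-sample sEMG window
print(semg_features(window))
```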

19 pages, 3200 KiB  
Article
Characterization of Infants’ General Movements Using a Commercial RGB-Depth Sensor and a Deep Neural Network Tracking Processing Tool: An Exploratory Study
by Diletta Balta, HsinHung Kuo, Jing Wang, Ilaria Giuseppina Porco, Olga Morozova, Manon Maitland Schladen, Andrea Cereatti, Peter Stanley Lum and Ugo Della Croce
Sensors 2022, 22(19), 7426; https://doi.org/10.3390/s22197426 - 29 Sep 2022
Cited by 6 | Viewed by 1809
Abstract
Cerebral palsy, the most common childhood neuromotor disorder, is often diagnosed through visual assessment of general movements (GM) in infancy. This skill requires extensive training and is thus difficult to implement on a large scale. Automated analysis of GM performed using low-cost instrumentation in the home may be used to estimate quantitative metrics predictive of movement disorders. This study explored whether infants’ GM may be successfully evaluated in a familiar environment by processing the 3D trajectories of points of interest (PoI) obtained from the recordings of a single commercial RGB-D sensor. The RGB videos were processed using an open-source markerless motion tracking method, which allowed the estimation of the 2D trajectories of the selected PoI, and a purposely developed method, which allowed the reconstruction of their 3D trajectories using the data recorded with the depth sensor. Eight infants’ GM were recorded in the home at 3, 4, and 5 months of age. Eight GM metrics proposed in the literature, in addition to a novel metric, were estimated from the PoI trajectories at each timepoint. A pediatric neurologist and physiatrist provided an overall clinical evaluation from the infants’ videos. Subsequently, a comparison between the metrics and the clinical evaluation was performed. The results demonstrated that GM metrics may be meaningfully estimated and potentially used for the early identification of movement disorders.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition II)
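Reconstructing a 3D point of interest from a tracked 2D pixel and an aligned depth frame is a standard pinhole back-projection; the intrinsics below are illustrative, not those of the sensor used in the study:

```python
import numpy as np

def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) with depth Z (metres)
    to camera-frame coordinates (X, Y, Z)."""
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Illustrative intrinsics for a 640x480 RGB-D stream.
fx = fy = 525.0
cx, cy = 319.5, 239.5
print(backproject(400, 300, depth_m=1.2, fx=fx, fy=fy, cx=cx, cy=cy))
```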

16 pages, 5271 KiB  
Article
Multi-Modal Deep Learning for Assessing Surgeon Technical Skill
by Kevin Kasa, David Burns, Mitchell G. Goldenberg, Omar Selim, Cari Whyne and Michael Hardisty
Sensors 2022, 22(19), 7328; https://doi.org/10.3390/s22197328 - 27 Sep 2022
Cited by 5 | Viewed by 1835
Abstract
This paper introduces a new dataset of a surgical knot-tying task, and a multi-modal deep learning model that achieves comparable performance to expert human raters on this skill assessment task. Seventy-two surgical trainees and faculty were recruited for the knot-tying task, and were recorded using video, kinematic, and image data. Three expert human raters conducted the skills assessment using the Objective Structured Assessment of Technical Skill (OSATS) Global Rating Scale (GRS). We also designed and developed three deep learning models: a ResNet-based image model, a ResNet-LSTM kinematic model, and a multi-modal model leveraging the image and time-series kinematic data. All three models demonstrate performance comparable to the expert human raters on most GRS domains. The multi-modal model demonstrates the best overall performance, as measured using the mean squared error (MSE) and intraclass correlation coefficient (ICC). This work is significant since it demonstrates that multi-modal deep learning has the potential to replicate human raters on a challenging human-performed knot-tying task. The study demonstrates an algorithm with state-of-the-art performance in surgical skill assessment. As objective assessment of technical skill continues to be a growing, but resource-heavy, element of surgical education, this study is an important step towards automated surgical skill assessment, ultimately leading to reduced burden on training faculty and institutes.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition II)
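A toy version of the fusion idea, an image branch and a kinematic LSTM branch concatenated into a rating head; the dimensions and layers are ours, standing in for the paper's ResNet-based branches:

```python
import torch
import torch.nn as nn

class MultiModalSkillModel(nn.Module):
    """Toy fusion model: an image-feature branch plus an LSTM over
    kinematic time series, concatenated to regress a GRS-style score."""
    def __init__(self, img_feat=128, kin_ch=6, hidden=64):
        super().__init__()
        self.img_branch = nn.Sequential(nn.Linear(img_feat, hidden), nn.ReLU())
        self.kin_branch = nn.LSTM(kin_ch, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)      # predicted rating

    def forward(self, img_feats, kinematics):
        h_img = self.img_branch(img_feats)
        _, (h_kin, _) = self.kin_branch(kinematics)
        return self.head(torch.cat([h_img, h_kin[-1]], dim=1))

model = MultiModalSkillModel()
img = torch.randn(4, 128)                         # e.g., pooled image features
kin = torch.randn(4, 250, 6)                      # 250 timesteps, 6 channels
print(model(img, kin).shape)                      # torch.Size([4, 1])
```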

16 pages, 4970 KiB  
Article
Improved Fully Convolutional Siamese Networks for Visual Object Tracking Based on Response Behaviour Analysis
by Xianyun Huang, Songxiao Cao, Chenguang Dong, Tao Song and Zhipeng Xu
Sensors 2022, 22(17), 6550; https://doi.org/10.3390/s22176550 - 30 Aug 2022
Cited by 1 | Viewed by 1266
Abstract
Siamese networks have recently attracted significant attention in the visual tracking community due to their balanced accuracy and speed. However, as a result of the non-updating appearance model and the changing appearance of the target, tracking drift is a regular occurrence, particularly in background clutter scenarios. As a means of addressing this problem, this paper proposes an improved fully convolutional Siamese tracker based on response behaviour analysis (SiamFC-RBA). Firstly, the response map of the SiamFC is normalised to an 8-bit grey image, and the isohypse contours that represent the candidate target region are generated through thresholding. Secondly, the dynamic behaviour of the contours is analysed to check whether distractors are approaching the tracked target. Finally, a peak switching strategy is used to determine the real tracking position among all candidates. Extensive experiments conducted on visual tracking benchmarks, including OTB100, GOT-10k and LaSOT, demonstrated that the proposed tracker outperformed trackers such as DaSiamRPN, SiamRPN, SiamFC, CSK, CFNet and Staple, achieving state-of-the-art performance. In addition, the response behaviour analysis module was embedded into DiMP, with the experimental results showing that the proposed architecture improved the tracker’s performance.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition II)
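The first two steps, normalising the response map to an 8-bit grey image and extracting the isohypse contours of candidate regions, can be sketched with OpenCV as follows (the threshold level is a placeholder):

```python
import cv2
import numpy as np

def candidate_contours(response, level=150):
    """Normalise a correlation response map to [0, 255] and return the
    contours of the regions above `level`."""
    grey = cv2.normalize(response, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(grey, level, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours

# Synthetic response map with two peaks (target + nearby distractor).
resp = np.zeros((17, 17), np.float32)
resp[4, 4], resp[12, 12] = 1.0, 0.8
resp = cv2.GaussianBlur(resp, (5, 5), 1.0)
print(len(candidate_contours(resp)))              # -> 2 candidate regions
```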

13 pages, 5105 KiB  
Article
Interactive Application of Data Glove Based on Emotion Recognition and Judgment System
by Wenqian Lin, Chao Li and Yunjian Zhang
Sensors 2022, 22(17), 6327; https://doi.org/10.3390/s22176327 - 23 Aug 2022
Cited by 6 | Viewed by 1602
Abstract
In this paper, the interactive application of data gloves based on an emotion recognition and judgment system is investigated. A system of emotion recognition and judgment is established based on a set of optimal features of physiological signals, and a data glove with multi-channel data transmission based on the recognition of hand posture and emotion is then constructed. Finally, a system of virtual hand control and a manipulator driven by emotion is built. Five subjects were selected to test the above systems. The test results show that the virtual hand and manipulator can be simultaneously controlled by the data glove. Even when a subject makes no hand gesture change, the system can directly control the gesture of the virtual hand by reading the subject’s physiological signals, at which point gesture control and emotion control can be carried out at the same time. In the test of the manipulator driven by emotion, only the results driven by two emotional trends achieved the desired purpose.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition II)

13 pages, 2394 KiB  
Article
Motion Analysis of Football Kick Based on an IMU Sensor
by Chun Yu, Ting-Yuan Huang and Hsi-Pin Ma
Sensors 2022, 22(16), 6244; https://doi.org/10.3390/s22166244 - 19 Aug 2022
Cited by 8 | Viewed by 4166
Abstract
A greater variety of technologies are being applied in sports and health with the advancement of technology, but most optoelectronic systems have strict environmental restrictions and are usually costly. To visualize and quantitatively analyze the football kick, we introduce a 3D motion analysis system based on a six-axis inertial measurement unit (IMU) that reconstructs the motion trajectory while analyzing the velocity and the highest point of the foot during the backswing. We built a signal processing system in MATLAB and standardized the experimental process, allowing users to reconstruct the foot trajectory and obtain information about the motion within a short time. This paper presents a system that directly analyzes the instep kicking motion rather than recognizing different motions or obtaining biomechanical parameters. For an instep kicking motion with a path length of around 3.63 m, the root mean square error (RMSE) is about 0.07 m. The RMSE of the foot velocity is 0.034 m/s, which is around 0.45% of the maximum velocity. For the maximum velocity of the foot and the highest point of the backswing, the errors are approximately 4% and 2.8%, respectively. With less complex hardware, our experimental results achieve excellent velocity accuracy.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition II)
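Trajectory reconstruction of this kind ultimately rests on double integration of orientation-corrected, gravity-compensated acceleration. A deliberately naive sketch of that step (real systems add drift correction, which this omits):

```python
import numpy as np

def integrate_imu(acc, dt):
    """Naive strapdown step: integrate world-frame acceleration (gravity
    already removed) once for velocity and again for position."""
    vel = np.cumsum(acc * dt, axis=0)
    pos = np.cumsum(vel * dt, axis=0)
    return vel, pos

dt = 1 / 100.0                                    # 100 Hz IMU
t = np.arange(0, 1, dt)[:, None]
acc = np.hstack([np.ones_like(t), np.zeros_like(t), np.zeros_like(t)])
vel, pos = integrate_imu(acc, dt)                 # 1 m/s^2 along x for 1 s
print(vel[-1, 0], pos[-1, 0])                     # ~1.0 m/s, ~0.5 m
```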

13 pages, 1258 KiB  
Article
Detection of Horse Locomotion Modifications Due to Training with Inertial Measurement Units: A Proof-of-Concept
by Benoît Pasquiet, Sophie Biau, Quentin Trébot, Jean-François Debril, François Durand and Laetitia Fradet
Sensors 2022, 22(13), 4981; https://doi.org/10.3390/s22134981 - 01 Jul 2022
Cited by 4 | Viewed by 2111
Abstract
Detecting fatigue during training sessions would help riders and trainers to optimize their training. It has been shown that fatigue can affect movement patterns. Inertial measurement units (IMUs) are wearable sensors that measure linear accelerations and angular velocities and can also provide orientation estimates. These sensors offer the possibility of non-invasive and continuous monitoring of locomotion during training sessions. However, the indicators that can be extracted from IMUs and their ability to reveal such locomotion changes are not known. The present study aims at defining which kinematic variables and indicators could highlight locomotion changes during a training session expected to be particularly demanding for the horses. Heart rate and lactatemia were measured to attest to the horses’ fatigue following the training session. Indicators derived from accelerations, angular velocities, and orientation estimates obtained from nine IMUs placed on 10 high-level dressage horses were compared before and after a training session using a non-parametric Wilcoxon paired test. These indicators were correlation coefficients (CC) and root mean square deviations (RMSD) comparing gait cycle kinematics measured before and after the training session, as well as movement smoothness estimates (SPARC, LDLJ). Heart rate and lactatemia measures did not indicate significant physiological fatigue. However, the statistics show an effect of the training session (p < 0.05) on many CC and RMSD values computed on the kinematic variables, indicating a change in locomotion with the training session, as well as on the SPARC indicators (p < 0.05), revealing a change in movement smoothness in both canter and trot. IMUs therefore seem able to track locomotion pattern modifications due to training. Future research should be conducted to fully attribute the modifications of these indicators to fatigue.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition II)
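For two time-normalised gait-cycle curves, the CC and RMSD indicators reduce to the following computation (a generic sketch; the paper's exact variable set and normalisation are not reproduced here):

```python
import numpy as np

def cycle_indicators(pre, post):
    """Correlation coefficient and root mean square deviation between
    two time-normalised gait-cycle curves of equal length."""
    cc = np.corrcoef(pre, post)[0, 1]
    rmsd = np.sqrt(np.mean((pre - post) ** 2))
    return cc, rmsd

phase = np.linspace(0, 2 * np.pi, 101)            # 0-100% of the gait cycle
pre = np.sin(phase)                               # e.g., a joint angular rate
post = 0.9 * np.sin(phase + 0.1) + 0.05           # slightly altered pattern
print(cycle_indicators(pre, post))                # cc close to 1, small rmsd
```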

13 pages, 2339 KiB  
Article
A Lightweight Pose Sensing Scheme for Contactless Abnormal Gait Behavior Measurement
by Yuliang Zhao, Jian Li, Xiaoai Wang, Fan Liu, Peng Shan, Lianjiang Li and Qiang Fu
Sensors 2022, 22(11), 4070; https://doi.org/10.3390/s22114070 - 27 May 2022
Cited by 5 | Viewed by 1656
Abstract
The recognition of abnormal gait behavior is important in the field of motion assessment and disease diagnosis. Currently, abnormal gait behavior is primarily recognized from pressure and inertial data obtained from wearable sensors. However, data drift and the wearing difficulties these sensors pose for patients have impeded their application. Here, we propose a contactless abnormal gait behavior recognition method that captures human pose data using a monocular camera. A lightweight OpenPose (OP) model is generated with depthwise separable convolution to recognize joint points and extract their coordinates during walking in real time. To address errors in the walking data extracted in the 2D plane, a 3D reconstruction is performed on the walking data, and a total of 11 types of abnormal gait features are extracted by the OP model. Finally, the XGBoost algorithm is used for feature screening. The final experimental results show that the Random Forest (RF) algorithm in combination with 3D features delivers the highest precision (92.13%) for abnormal gait behavior recognition. The proposed scheme overcomes the data drift of inertial sensors and the sensor-wearing challenges faced by the elderly, while reducing the hardware requirements for model deployment. With excellent real-time and contactless capabilities, the scheme is expected to enjoy a wide range of applications in the field of abnormal gait measurement.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition II)
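The “lightweight” ingredient, replacing standard convolutions with depthwise separable ones, looks like this in PyTorch; the layer sizes are illustrative, not the paper's OP configuration:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise (per-channel) conv followed by a 1x1 pointwise conv;
    far fewer parameters than a standard KxK convolution."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2,
                                   groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 64, 64)                    # a feature map
layer = DepthwiseSeparableConv(32, 64)
print(layer(x).shape)                             # torch.Size([1, 64, 64, 64])
```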