Sensors for Human Activity Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (30 September 2022) | Viewed by 29327

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors

Dr. Hui Liu
Guest Editor
Cognitive Systems Laboratory, Faculty of Mathematics and Computer Science, University of Bremen, 28359 Bremen, Germany
Interests: biosignal processing; feature selection and feature space reduction; human activity recognition; real-time recognition systems; knee bandage; machine learning

Prof. Dr. Hugo Gamboa
Guest Editor
Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, Lisboa, Portugal
Interests: instrumentation; signal processing; machine learning; human activity recognition (HAR)

Prof. Dr. Tanja Schultz
Guest Editor
Computer Science, Faculty of Mathematics and Computer Science, University of Bremen, 28359 Bremen, Germany
Interests: biosignal processing; human-centered man–machine interfaces; user states and traits modeling; machine learning; interfaces based on muscle and brain activities; automatic speech recognition; silent speech interfaces; brain interfaces

Special Issue Information

Dear Colleagues,

The Special Issue "Sensors for Human Activity Recognition II" is now online and welcomes your further contributions!

Human activity recognition (HAR) has been playing an increasingly important role in the digital age.

High-quality sensory observations suitable for recognizing users' activities, whether obtained through external or internal (wearable) sensing technology, depend on the sophisticated design and appropriate application of sensors.

Traditional sensors for human activity recognition—external sensors for smart homes, optical sensors such as cameras for capturing video signals, and bioelectrical and biomechanical sensors for wearable applications—have been studied and validated extensively, yet they continue to be investigated in depth for more effective and efficient use, and the areas of life facilitated by sensor-based HAR keep expanding.

Meanwhile, innovative sensor research for HAR remains very active in the academic community, including entirely new sensor types suitable for HAR, new designs and applications of the traditional sensors mentioned above, and the introduction of sensor types not traditionally associated with HAR into HAR tasks.

This Special Issue aims to provide researchers in related fields with a platform to present their unique insights and late-breaking achievements, and encourages authors to submit state-of-the-art research and contributions on sensors for HAR.

The main topics for this Issue include, but are not limited to:

  • Sensor design and development.
  • Embedded signal processing.
  • Biosignal instrumentation.
  • Mobile sensing and mobile-phone-based signal processing.
  • Wearable sensors and body sensor networks.
  • Printable sensors.
  • Implants.
  • Behavior recognition.
  • Applications to health care, sports, edutainment, and others.
  • Sensor-based machine learning.

Dr. Hui Liu
Prof. Dr. Hugo Gamboa
Prof. Dr. Tanja Schultz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (11 papers)


Editorial


4 pages, 185 KiB  
Editorial
Sensor-Based Human Activity and Behavior Research: Where Advanced Sensing and Recognition Technologies Meet
by Hui Liu, Hugo Gamboa and Tanja Schultz
Sensors 2023, 23(1), 125; https://doi.org/10.3390/s23010125 - 23 Dec 2022
Cited by 19 | Viewed by 2077
Abstract
Human activity recognition (HAR) and human behavior recognition (HBR) have been playing increasingly important roles in the digital age [...] Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition)

Research


19 pages, 3914 KiB  
Article
Gated Recurrent Unit Network for Psychological Stress Classification Using Electrocardiograms from Wearable Devices
by Jun Zhong, Yongfeng Liu, Xiankai Cheng, Liming Cai, Weidong Cui and Dong Hai
Sensors 2022, 22(22), 8664; https://doi.org/10.3390/s22228664 - 10 Nov 2022
Cited by 3 | Viewed by 1546
Abstract
In recent years, research on human psychological stress using wearable devices has gradually attracted attention. However, the physical and psychological differences among individuals and the high cost of data collection are the main challenges for further research on this problem. In this work, our aim is to build a model to detect subjects’ psychological stress in different states through electrocardiogram (ECG) signals. Therefore, we design a VR high-altitude experiment to induce psychological stress in the subjects and obtain an ECG signal dataset. In the experiment, participants wear smart ECG T-shirts with embedded sensors to complete different tasks so as to record their ECG signals synchronously. Considering the temporal continuity of individual psychological stress, a deep gated recurrent unit (GRU) neural network is developed to capture the mapping relationship between subjects’ ECG signals and stress in different states through heart rate variability features at different moments, thereby building a neural network model that maps ECG signals to psychological stress detection. The experimental results show that, compared with all comparison methods, our method has the best classification performance on the four stress states of resting, VR scene adaptation, VR task and recovery, and it can serve as a remote stress-monitoring solution for certain specialized industries. Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition)
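
The paper's implementation details are not reproduced here; as a rough illustration of the kind of model the abstract describes, the following minimal PyTorch sketch maps sequences of heart-rate-variability feature vectors to the four stress states named above. The feature dimension, sequence length, and all hyperparameters are placeholders, not values from the paper.

    import torch
    import torch.nn as nn

    class StressGRU(nn.Module):
        """Sketch: a sequence of HRV feature vectors -> one of four stress states."""
        def __init__(self, n_features=8, hidden=64, n_layers=2, n_classes=4):
            super().__init__()
            self.gru = nn.GRU(n_features, hidden, num_layers=n_layers, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):                 # x: (batch, time, n_features)
            out, _ = self.gru(x)              # out: (batch, time, hidden)
            return self.head(out[:, -1, :])   # classify from the last time step

    # Illustrative usage with random tensors standing in for windowed HRV features.
    model = StressGRU()
    x = torch.randn(16, 30, 8)                # 16 windows, 30 time steps, 8 HRV features
    logits = model(x)                         # (16, 4): resting, adaptation, VR task, recovery
    loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 4, (16,)))
    loss.backward()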

20 pages, 1563 KiB  
Article
Comparing Handcrafted Features and Deep Neural Representations for Domain Generalization in Human Activity Recognition
by Nuno Bento, Joana Rebelo, Marília Barandas, André V. Carreiro, Andrea Campagner, Federico Cabitza and Hugo Gamboa
Sensors 2022, 22(19), 7324; https://doi.org/10.3390/s22197324 - 27 Sep 2022
Cited by 9 | Viewed by 2416
Abstract
Human Activity Recognition (HAR) has been studied extensively, yet current approaches are not capable of generalizing across different domains (i.e., subjects, devices, or datasets) with acceptable performance. This lack of generalization hinders the applicability of these models in real-world environments. As deep neural networks are becoming increasingly popular in recent work, there is a need for an explicit comparison between handcrafted and deep representations in Out-of-Distribution (OOD) settings. This paper compares both approaches in multiple domains using homogenized public datasets. First, we compare several metrics to validate three different OOD settings. In our main experiments, we then verify that even though deep learning initially outperforms models with handcrafted features, the situation is reversed as the distance from the training distribution increases. These findings support the hypothesis that handcrafted features may generalize better across specific domains. Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition)
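
To make the handcrafted-versus-deep comparison concrete, here is a minimal sketch (not the authors' pipeline) of the handcrafted side: simple statistical features per accelerometer window and a leave-one-domain-out evaluation, where a domain may be a subject, device, or dataset. The feature choices and the classifier are assumptions for illustration only.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def handcrafted_features(window):
        """Typical statistical features for one accelerometer window of shape
        (time, 3); the paper's actual feature set may differ."""
        feats = []
        for axis in range(window.shape[1]):
            x = window[:, axis]
            feats += [x.mean(), x.std(), x.min(), x.max(),
                      np.abs(np.diff(x)).mean()]        # mean absolute difference
        return np.array(feats)

    def leave_one_domain_out(windows, labels, domains, held_out):
        """Train on every domain except `held_out` and score out-of-distribution
        on the held-out one (a subject, device, or dataset identifier)."""
        X = np.array([handcrafted_features(w) for w in windows])
        y = np.asarray(labels)
        train = np.asarray(domains) != held_out
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X[train], y[train])
        return clf.score(X[~train], y[~train])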

21 pages, 4423 KiB  
Article
Practical and Accurate Indoor Localization System Using Deep Learning
by Jeonghyeon Yoon and Seungku Kim
Sensors 2022, 22(18), 6764; https://doi.org/10.3390/s22186764 - 7 Sep 2022
Cited by 5 | Viewed by 2273
Abstract
Indoor localization is an important technology for providing various location-based services to smartphones. Among the various indoor localization technologies, pedestrian dead reckoning using inertial measurement units is a simple and highly practical solution for indoor localization. In this study, we propose a smartphone-based indoor localization system using pedestrian dead reckoning. To create a deep learning model for estimating the moving speed, accelerometer data and GPS values were used as input data and data labels, respectively. This is a practical solution compared with conventional indoor localization mechanisms using deep learning. We improved the positioning accuracy via data preprocessing, data augmentation, deep learning modeling, and correction of heading direction. In a horseshoe-shaped indoor building of 240 m in length, the experimental results show a distance error of approximately 3 to 5 m. Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition)
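
For readers unfamiliar with pedestrian dead reckoning, the core position update that the abstract relies on is simple to state; the sketch below integrates an estimated moving speed along a heading direction. In the paper the speed would come from the learned model and the heading from the corrected smartphone orientation; here both are plain inputs to a hypothetical helper function.

    import numpy as np

    def dead_reckon(speeds, headings, dt=1.0, start=(0.0, 0.0)):
        """Integrate estimated speed (m/s) along heading (radians) every dt seconds."""
        pos = np.array(start, dtype=float)
        track = [pos.copy()]
        for v, theta in zip(speeds, headings):
            pos += v * dt * np.array([np.cos(theta), np.sin(theta)])
            track.append(pos.copy())
        return np.array(track)

    # Illustrative: walk east at ~1.4 m/s for 10 s, then north for 10 s.
    track = dead_reckon(speeds=[1.4] * 20, headings=[0.0] * 10 + [np.pi / 2] * 10)
    print(track[-1])   # roughly (14.0, 14.0)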

17 pages, 18068 KiB  
Article
A Customer Behavior Recognition Method for Flexibly Adapting to Target Changes in Retail Stores
by Jiahao Wen, Toru Abe and Takuo Suganuma
Sensors 2022, 22(18), 6740; https://doi.org/10.3390/s22186740 - 6 Sep 2022
Cited by 5 | Viewed by 1443
Abstract
To provide analytic material for the business management of smart retail solutions, it is essential to recognize various customer behaviors (CB) from video footage acquired by in-store cameras. Along with frequent changes in needs and environments, such as promotion plans, product categories, in-store layouts, etc., the targets of customer behavior recognition (CBR) also change frequently. Therefore, one of the requirements of a CBR method is the flexibility to adapt to changes in recognition targets. However, existing approaches, mostly based on machine learning, usually take a great deal of time to re-collect training data and train new models when faced with changing target CBs, reflecting their lack of flexibility. In this paper, we propose a CBR method that achieves flexibility by considering CBs as combinations of primitives. A primitive is a unit that describes an object’s motion or multiple objects’ relationships. The combination of different primitives can characterize a particular CB. Since primitives can be reused to define a wide range of different CBs, our proposed method is capable of flexibly adapting to target CB changes in retail stores. In our experiments, we utilized both our collected laboratory dataset and the public MERL dataset. We changed the combination of primitives to cope with the changes in target CBs between the two datasets. As a result, our proposed method achieved good flexibility with acceptable recognition accuracy. Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition)
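
The primitive-based idea can be illustrated in a few lines of code: behaviors are declared as combinations of reusable predicates, so adapting to a new target behavior means adding a new combination rather than retraining a model. The primitive names, frame keys, and example rules below are invented for illustration and are not the paper's actual definitions.

    # Primitives: predicates over per-frame tracking results (hypothetical keys).
    def near_shelf(f):      return f["dist_to_shelf"] < 0.5          # metres
    def hand_in_shelf(f):   return f["hand_in_shelf_region"]
    def holding_item(f):    return f["holding_item"]

    # Customer behaviors (CBs) as combinations of primitives.
    BEHAVIORS = {
        "pick_up_product": [near_shelf, hand_in_shelf, holding_item],
        "browse":          [near_shelf, lambda f: not holding_item(f)],
    }

    def recognize(frame):
        """Return every behavior whose primitives all hold for this frame."""
        return [name for name, prims in BEHAVIORS.items()
                if all(p(frame) for p in prims)]

    print(recognize({"dist_to_shelf": 0.3, "hand_in_shelf_region": True,
                     "holding_item": True}))             # -> ['pick_up_product']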

19 pages, 3525 KiB  
Article
Device-Free Multi-Location Human Activity Recognition Using Deep Complex Network
by Xue Ding, Chunlei Hu, Weiliang Xie, Yi Zhong, Jianfei Yang and Ting Jiang
Sensors 2022, 22(16), 6178; https://doi.org/10.3390/s22166178 - 18 Aug 2022
Cited by 3 | Viewed by 1404
Abstract
Wi-Fi-based human activity recognition has attracted broad attention for its advantages, which include being device-free, privacy-protecting, unaffected by light, etc. Owing to the development of artificial intelligence techniques, existing methods have made great improvements in sensing accuracy. However, the performance of multi-location recognition is still a challenging issue. According to the principle of wireless sensing, wireless signals that characterize activity are also seriously affected by location variations. Existing solutions depend on adequate data samples at different locations, which are labor-intensive to collect. To address the above concerns, we present an amplitude- and phase-enhanced deep complex network (AP-DCN)-based multi-location human activity recognition method, which can fully utilize the amplitude and phase information simultaneously so as to mine more abundant information from limited data samples. Furthermore, considering the unbalanced sample numbers at different locations, we propose a perception method based on the deep complex network-transfer learning (DCN-TL) structure, which effectively realizes knowledge sharing among various locations. To fully evaluate the performance of the proposed method, comprehensive experiments have been carried out with a dataset collected in an office environment with 24 locations and five activities. The experimental results illustrate that the two approaches achieve 96.85% and 94.02% recognition accuracy, respectively. Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition)
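
As a small illustration of why a complex-valued network can use amplitude and phase jointly, the sketch below implements a complex linear layer with two real weight matrices; this is a generic building block under assumed dimensions, not the AP-DCN or DCN-TL architecture from the paper.

    import math
    import torch
    import torch.nn as nn

    class ComplexLinear(nn.Module):
        """(a + ib)(W_r + iW_i) = (aW_r - bW_i) + i(aW_i + bW_r); bias omitted."""
        def __init__(self, in_f, out_f):
            super().__init__()
            self.wr = nn.Linear(in_f, out_f, bias=False)
            self.wi = nn.Linear(in_f, out_f, bias=False)

        def forward(self, real, imag):
            return self.wr(real) - self.wi(imag), self.wr(imag) + self.wi(real)

    # Illustrative CSI input given as amplitude and phase per subcarrier (placeholder sizes).
    amp   = torch.rand(32, 114)                         # 32 samples, 114 subcarriers
    phase = torch.rand(32, 114) * 2 * math.pi
    real, imag = amp * torch.cos(phase), amp * torch.sin(phase)
    out_r, out_i = ComplexLinear(114, 64)(real, imag)   # complex-valued features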

16 pages, 31414 KiB  
Article
Semi-Supervised Adversarial Learning Using LSTM for Human Activity Recognition
by Sung-Hyun Yang, Dong-Gwon Baek and Keshav Thapa
Sensors 2022, 22(13), 4755; https://doi.org/10.3390/s22134755 - 23 Jun 2022
Cited by 8 | Viewed by 1939
Abstract
The training of Human Activity Recognition (HAR) models requires a substantial amount of labeled data. Unfortunately, despite being trained on enormous datasets, most current models have poor performance rates when evaluated against anonymous data from new users. Furthermore, due to the limits and problems of working with human users, capturing adequate data for each new user is not feasible. This paper presents semi-supervised adversarial learning using the long short-term memory (LSTM) approach for human activity recognition. The proposed method trains on annotated and unannotated data (anonymous data) by adapting the semi-supervised learning paradigms on which adversarial learning capitalizes to improve the learning capabilities in dealing with errors that appear in the process. Moreover, it adapts to changes in human activity routines and to new activities, i.e., it does not require prior understanding and historical information. Simultaneously, this method is designed as a temporal interactive model instantiation and shows the capacity to estimate heteroscedastic uncertainty owing to inherent data ambiguity. Our methodology also benefits from multiple parallel sequential input data predicting an output by exploiting the synchronized LSTM. The proposed method proved to be the best state-of-the-art method, with more than 98% accuracy when implemented on publicly available datasets collected from a smart home environment equipped with heterogeneous sensors. This technique is a novel approach to high-level human activity recognition and is likely to have broad application prospects for HAR. Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition)

29 pages, 2133 KiB  
Article
The State-of-the-Art Sensing Techniques in Human Activity Recognition: A Survey
by Sizhen Bian, Mengxi Liu, Bo Zhou and Paul Lukowicz
Sensors 2022, 22(12), 4596; https://doi.org/10.3390/s22124596 - 17 Jun 2022
Cited by 15 | Viewed by 4218
Abstract
Human activity recognition (HAR) has become an intensive research topic in the past decade because of the pervasive user scenarios and the overwhelming development of advanced algorithms and novel sensing approaches. Previous HAR-related sensing surveys were primarily focused either on a specific branch, such as wearable sensing or video-based sensing, or on a full-stack presentation of both sensing and data processing techniques, resulting in a weak focus on HAR-related sensing techniques. This work tries to present a thorough, in-depth survey of the state-of-the-art sensing modalities in HAR tasks to supply a solid understanding of the variant sensing principles for younger researchers in the community. First, we categorized the HAR-related sensing modalities into five classes: mechanical kinematic sensing, field-based sensing, wave-based sensing, physiological sensing, and hybrid/others. Specific sensing modalities are then presented in each category, and a thorough description of the sensing principles and the latest related works is given. We also discuss the strengths and weaknesses of each modality across the categorization so that newcomers can gain a better overview of the characteristics of each sensing modality for HAR tasks and choose the proper approaches for their specific applications. Finally, we summarize the presented sensing techniques with a comparison concerning selected performance metrics and propose a few outlooks on the future sensing techniques used for HAR tasks. Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition)

18 pages, 3909 KiB  
Article
A Novel Central Camera Calibration Method Recording Point-to-Point Distortion for Vision-Based Human Activity Recognition
by Ziyi Jin, Zhixue Li, Tianyuan Gan, Zuoming Fu, Chongan Zhang, Zhongyu He, Hong Zhang, Peng Wang, Jiquan Liu and Xuesong Ye
Sensors 2022, 22(9), 3524; https://doi.org/10.3390/s22093524 - 5 May 2022
Cited by 7 | Viewed by 2055
Abstract
The camera is the main sensor of vision-based human activity recognition, and high-precision calibration of its distortion is an important prerequisite for the task. Current studies have shown that multi-parameter model methods achieve higher accuracy than traditional methods in the process of camera calibration. However, these methods need hundreds or even thousands of images to optimize the camera model, which limits their practical use. Here, we propose a novel point-to-point camera distortion calibration method that requires only dozens of images to obtain a dense distortion rectification map. We have designed an objective function based on the deformation between the original images and the projection of reference images, which can eliminate the effect of distortion when optimizing camera parameters. Dense features between the original images and the projection of the reference images are calculated by digital image correlation (DIC). Experiments indicate that our method obtains a result comparable to the multi-parameter model method using a large number of pictures, and contributes a 28.5% improvement in the reprojection error over the polynomial distortion model. Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition)
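
The end product of such a calibration is a dense, point-to-point rectification map; once it exists, applying it is a single remapping step. The sketch below shows that step with OpenCV, using an identity map as a stand-in for the DIC-derived map, which is not reproduced here.

    import cv2
    import numpy as np

    def undistort_with_dense_map(image, map_x, map_y):
        """map_x/map_y give, for every output pixel, the source coordinates in
        the distorted image; here they are simply inputs to this function."""
        return cv2.remap(image, map_x.astype(np.float32), map_y.astype(np.float32),
                         interpolation=cv2.INTER_LINEAR)

    # Identity map (no correction) for a 480x640 image, purely for illustration.
    h, w = 480, 640
    map_x, map_y = np.meshgrid(np.arange(w, dtype=np.float32),
                               np.arange(h, dtype=np.float32))
    image = np.zeros((h, w, 3), dtype=np.uint8)
    rectified = undistort_with_dense_map(image, map_x, map_y)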

22 pages, 2010 KiB  
Article
How Validation Methodology Influences Human Activity Recognition Mobile Systems
by Hendrio Bragança, Juan G. Colonna, Horácio A. B. F. Oliveira and Eduardo Souto
Sensors 2022, 22(6), 2360; https://doi.org/10.3390/s22062360 - 18 Mar 2022
Cited by 17 | Viewed by 3042
Abstract
In this article, we introduce explainable methods to understand how Human Activity Recognition (HAR) mobile systems perform based on the chosen validation strategies. Our results introduce a new way to discover potential bias problems that overestimate the prediction accuracy of an algorithm because of an inappropriate choice of validation methodology. We show how the SHAP (Shapley additive explanations) framework, used in the literature to explain the predictions of any machine learning model, presents itself as a tool that can provide graphical insights into how human activity recognition models achieve their results. It is now possible to analyze, in a simplified way, which features are important to a HAR system under each validation methodology. We not only demonstrate that the k-fold cross-validation (k-CV) procedure, used in most works to evaluate the expected error in a HAR system, can overestimate the prediction accuracy by about 13% in three public datasets, but also that it chooses a different feature set when compared with the universal model. Combining explainable methods with machine learning algorithms has the potential to help new researchers look inside the decisions of machine learning algorithms, usually avoiding the overestimation of prediction accuracy, understanding relations between features, and finding bias before deploying the system in real-world scenarios. Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition)
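
The methodological point is easy to reproduce in outline: evaluate the same classifier with record-wise k-fold cross-validation and with a subject-wise (grouped) split. The sketch below uses scikit-learn with random placeholder data; on real HAR data the k-fold score is typically the more optimistic one, which is the kind of overestimation the paper quantifies. This is not the authors' code or datasets.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import KFold, GroupKFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 20))            # 600 windows, 20 features (placeholder)
    y = rng.integers(0, 6, size=600)          # 6 activity classes (placeholder)
    subjects = np.repeat(np.arange(10), 60)   # 10 subjects, 60 windows each

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    kfold_acc = cross_val_score(clf, X, y,
                                cv=KFold(n_splits=5, shuffle=True, random_state=0)).mean()
    subject_acc = cross_val_score(clf, X, y, groups=subjects,
                                  cv=GroupKFold(n_splits=5)).mean()
    print(f"record-wise k-fold: {kfold_acc:.2f}  subject-wise: {subject_acc:.2f}")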

Other


15 pages, 1282 KiB  
Systematic Review
Research and Development of Ankle–Foot Orthoses: A Review
by Congcong Zhou, Zhao Yang, Kaitai Li and Xuesong Ye
Sensors 2022, 22(17), 6596; https://doi.org/10.3390/s22176596 - 1 Sep 2022
Cited by 13 | Viewed by 4751
Abstract
The ankle joint is one of the important joints that the human body relies on to maintain the ability to walk. Diseases such as stroke and ankle osteoarthritis can weaken the body’s ability to control joints, causing people’s gait to become unbalanced. Ankle–foot orthoses can assist users with neuro/muscular or ankle injuries to restore their natural gait. Currently, passive ankle–foot orthoses are mostly designed to fix the ankle joint and provide support for walking. With the development of materials, sensing, and control science, semi-active orthoses that release mechanical energy to assist walking when needed and can store the energy generated by body movement in elastic units, as well as active ankle–foot orthoses that use external energy to transmit enhanced torque to the ankle, have received increasing attention. This article reviews the development of ankle–foot orthoses and proposes that integrating new ankle–foot orthoses with rehabilitation technologies such as monitoring or myoelectric stimulation will play an important role in reducing patients’ walking energy consumption in the study of human-in-the-loop models and in promoting neuro/muscular rehabilitation. Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition)
