
Smart Sensing Technology for Human Activity Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Wearables".

Deadline for manuscript submissions: 31 October 2024 | Viewed by 9369

Special Issue Editors


Guest Editor
Department of Information Engineering, Università Politecnica delle Marche, 60131 Ancona (AN), Italy
Interests: analog, digital and mixed signal circuit design and simulation; embedded systems design; wireless sensors and networks; signal processing

Guest Editor
Department of Information Engineering, Università Politecnica delle Marche, 60131 Ancona (AN), Italy
Interests: statistical integrated circuit design and device modeling; mixed-signal and RF circuit design; nanoelectronics and nanodevices; biomedical circuits and systems; bio-signal analysis and classification; signal processing; neural networks; system identification

Special Issue Information

Dear Colleagues,

Human activity recognition (HAR) aims to recognize specific user activities or discover long-term patterns from observations in real-life contexts, captured through a variety of sensors spanning many domains (inertial, bioelectrical, optical, etc.). Its potential to operate non-invasively in everyday life makes it one of the most promising enabling technologies for ambient assisted living.

The aim of this Special Issue is to collect relevant papers that deal with innovative aspects of HAR systems, including sensor hardware design and all stages of data processing, from acquisition to the presentation of results, with a focus on intelligent algorithms that automate the classification or recognition of activities or improve the quality of the acquired information. We thus seek papers that describe innovative developments in the acquisition of signals related to a person’s activity and in the interpretation of the data through automated techniques such as machine learning and artificial intelligence.

Review articles that provide readers with scholarly educational material on current research trends in the field are also welcome.

Submissions are encouraged that address topics including, but not limited to, the following:

  • Wearable devices and systems;
  • Smart sensors;
  • Signal acquisition and conditioning;
  • Activity tracking;
  • Activity diarization;
  • Automatic classification;
  • Fitness tracking;
  • Automated sports coaching;
  • Ambient assisted living.

Prof. Dr. Giorgio Biagetti
Prof. Dr. Paolo Crippa
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (10 papers)


Research


11 pages, 1270 KiB  
Article
Human Activity Recognition in a Free-Living Environment Using an Ear-Worn Motion Sensor
by Lukas Boborzi, Julian Decker, Razieh Rezaei, Roman Schniepp and Max Wuehr
Sensors 2024, 24(9), 2665; https://doi.org/10.3390/s24092665 - 23 Apr 2024
Viewed by 274
Abstract
Human activity recognition (HAR) technology enables continuous behavior monitoring, which is particularly valuable in healthcare. This study investigates the viability of using an ear-worn motion sensor to classify daily activities, including lying, sitting/standing, walking, ascending stairs, descending stairs, and running. Fifty healthy participants (between 20 and 47 years old) performed these activities while being monitored. Various machine learning algorithms, ranging from interpretable shallow models to state-of-the-art deep learning approaches designed for HAR (i.e., DeepConvLSTM and ConvTransformer), were employed for classification. The results demonstrate the ear sensor’s efficacy, with deep learning models achieving a classification accuracy of 98%. The obtained classification models are agnostic to which ear the sensor is worn in and robust against moderate variations in sensor orientation (e.g., due to differences in auricle anatomy), so no initial calibration of the sensor orientation is required. The study underscores the ear’s suitability as a site for monitoring human daily activity and suggests its potential for combining HAR with in-ear vital sign monitoring. This approach offers a practical method for comprehensive health monitoring by integrating sensors in a single anatomical location. This integration facilitates individualized health assessments, with potential applications in tele-monitoring, personalized health insights, and optimizing athletic training regimes.
(This article belongs to the Special Issue Smart Sensing Technology for Human Activity Recognition)
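The standard inertial HAR pipeline this study builds on — sliding a fixed-length window over the motion signal, extracting per-window features, and classifying each window — can be sketched in miniature. The nearest-centroid classifier below is a toy stand-in for the paper's DeepConvLSTM/ConvTransformer models; all function names and parameter values are illustrative, not taken from the study:

```python
import math

def window_features(samples, win=50, step=25):
    """Slide a fixed-length window over tri-axial IMU samples (x, y, z)
    and emit per-axis mean and standard deviation as a feature vector."""
    feats = []
    for start in range(0, len(samples) - win + 1, step):
        chunk = samples[start:start + win]
        f = []
        for axis in range(3):
            vals = [s[axis] for s in chunk]
            mean = sum(vals) / win
            var = sum((v - mean) ** 2 for v in vals) / win
            f.extend([mean, math.sqrt(var)])
        feats.append(f)
    return feats

def nearest_centroid(train, query):
    """Classify a feature vector by the closest per-class centroid."""
    best_label, best_dist = None, float("inf")
    for label, vectors in train.items():
        dim = len(vectors[0])
        centroid = [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]
        d = math.dist(centroid, query)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label
```

A real system would replace the hand-crafted mean/std features with learned representations, but the window-then-classify structure is the same.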

19 pages, 4377 KiB  
Article
Wi-CHAR: A WiFi Sensing Approach with Focus on Both Scenes and Restricted Data
by Zhanjun Hao, Kaikai Han, Zinan Zhang and Xiaochao Dang
Sensors 2024, 24(7), 2364; https://doi.org/10.3390/s24072364 - 8 Apr 2024
Viewed by 426
Abstract
Significant strides have been made in WiFi-based human activity recognition, yet recent wireless sensing methodologies still grapple with a reliance on copious amounts of data, and most models lose accuracy when assessed in unfamiliar domains. To address this challenge, this study introduces Wi-CHAR, a novel few-shot learning-based cross-domain activity recognition system designed to handle both the intricacies of specific sensing environments and the associated data constraints. Wi-CHAR first employs a dynamic selection methodology for sensing devices, tailored to mitigate the diminished sensing capability observed in specific regions of a multi-WiFi sensor device ecosystem, thereby improving the fidelity of the sensing data. It then applies the MF-DBSCAN clustering algorithm iteratively to rectify anomalies and enhance the quality of subsequent behavior recognition. Finally, the Re-PN module dynamically adjusts feature prototype weights to enable cross-domain activity sensing in scenarios with limited sample data, effectively distinguishing accurate from noisy data samples and streamlining the identification of new users and environments. The experimental results show an average accuracy of more than 93% (five-shot) across various scenarios; good cross-domain results are achieved even when the target domain has few data samples. Evaluation on the publicly available WiAR and Widar 3.0 datasets corroborates Wi-CHAR’s robust performance, with accuracy rates of 89.7% and 92.5%, respectively. In summary, Wi-CHAR delivers recognition outcomes on par with state-of-the-art methodologies while accommodating specific sensing environments and data constraints.
(This article belongs to the Special Issue Smart Sensing Technology for Human Activity Recognition)
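The prototype-weighting idea behind few-shot modules like Re-PN can be illustrated in miniature: build each class prototype as a weighted mean of its few support samples, so that noisy samples can be down-weighted, then classify a query by its nearest prototype. This is a generic few-shot sketch, not Wi-CHAR's actual implementation; all names and values are illustrative:

```python
import math

def weighted_prototypes(support, weights):
    """Build each class prototype as a weighted mean of its support
    feature vectors, letting noisy samples be down-weighted."""
    protos = {}
    for label, vecs in support.items():
        ws = weights[label]
        total = sum(ws)
        dim = len(vecs[0])
        protos[label] = [sum(w * v[i] for w, v in zip(ws, vecs)) / total
                         for i in range(dim)]
    return protos

def nearest_prototype(protos, query):
    """Assign the query feature vector to the nearest prototype (Euclidean)."""
    return min(protos, key=lambda lbl: math.dist(protos[lbl], query))
```

In the actual system, the weights would be produced dynamically by a learned module rather than supplied by hand.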

19 pages, 684 KiB  
Article
Exercise Promotion System for Single Households Based on Agent-Oriented IoT Architecture
by Taku Yamazaki, Tianyu Fan and Takumi Miyoshi
Sensors 2024, 24(7), 2029; https://doi.org/10.3390/s24072029 - 22 Mar 2024
Viewed by 864
Abstract
People living alone encounter well-being challenges because their personal situations go unnoticed. It is therefore essential to monitor their activities and encourage them to adopt healthy lifestyle habits without imposing a mental burden, aiming to enhance their overall well-being. To realize such a support system, its components should be simple and loosely coupled so that they can serve various internet of things (IoT)-based smart home applications. In this study, we propose an exercise promotion system that encourages individuals living alone to adopt good lifestyle habits. The system comprises autonomous IoT devices acting as agents and is realized using an agent-oriented IoT architecture. It estimates user activity via sensors and offers exercise advice based on recognized conditions, surroundings, and preferences, and it accepts user feedback to improve its status estimation accuracy and provide better advice. The proposed system was evaluated from three perspectives through experiments with subjects. First, we demonstrated the system’s operation through agent cooperation. Second, we showed that it adapts to user preferences within two weeks. Third, users expressed satisfaction with the detection accuracy of their stay-at-home status and the relevance of the advice provided; preliminary results of a subjective evaluation also indicated that they were motivated to engage in exercise.
(This article belongs to the Special Issue Smart Sensing Technology for Human Activity Recognition)

12 pages, 2304 KiB  
Article
Single Person Identification and Activity Estimation in a Room from Waist-Level Contours Captured by 2D Light Detection and Ranging
by Mizuki Enoki, Kai Watanabe and Hiroshi Noguchi
Sensors 2024, 24(4), 1272; https://doi.org/10.3390/s24041272 - 17 Feb 2024
Viewed by 540
Abstract
To develop socially assistive robots for monitoring older adults at home, a sensor is required that can identify residents and capture activities within a room without violating privacy. We focused on 2D Light Detection and Ranging (2D-LIDAR), which can robustly measure human contours in a room. While horizontal 2D contour data can provide a person’s location, identifying individuals and activities from these contours is challenging. To address this issue, we developed novel methods using deep learning techniques. This paper proposes methods for person identification and activity estimation in a room using contour point clouds captured by a single 2D-LIDAR at hip height. In this approach, human contours were extracted from 2D-LIDAR data using density-based spatial clustering of applications with noise (DBSCAN). Subsequently, the person and activity within a 10-s interval were estimated employing deep learning techniques. Two deep learning models, namely Long Short-Term Memory (LSTM) and image classification (VGG16), were compared. In the experiment, a total of 120 min of walking data and 100 min of additional activities (door opening, sitting, and standing) were collected from four participants. The LSTM-based and VGG16-based methods achieved accuracies of 65.3% and 89.7%, respectively, for person identification among the four individuals, and accuracies of 94.2% and 97.9%, respectively, for estimating the four activities. Although 2D-LIDAR point clouds at hip height contain only subtle gait-related features, the results indicate that the VGG16-based method can identify individuals and accurately estimate their activities.
(This article belongs to the Special Issue Smart Sensing Technology for Human Activity Recognition)
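The contour-extraction step relies on DBSCAN, which groups nearby 2D points into clusters and marks isolated points as noise. A minimal pure-Python version conveys the idea (illustrative only; the `eps` and `min_pts` values are not those used in the paper):

```python
import math

def dbscan(points, eps=0.5, min_pts=3):
    """Minimal DBSCAN: label each 2D point with a cluster id, or -1 for noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        cluster += 1                # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise becomes a border point; no expansion
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_seeds = neighbors(j)
            if len(j_seeds) >= min_pts:
                queue.extend(j_seeds)  # j is also a core point: keep expanding
    return labels
```

Each resulting cluster would correspond to one person's waist-level contour; the noise label discards stray reflections.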

18 pages, 4329 KiB  
Article
Advancing Human Motion Recognition with SkeletonCLIP++: Weighted Video Feature Integration and Enhanced Contrastive Sample Discrimination
by Lin Yuan, Zhen He, Qiang Wang and Leiyang Xu
Sensors 2024, 24(4), 1189; https://doi.org/10.3390/s24041189 - 11 Feb 2024
Viewed by 601
Abstract
This paper introduces ‘SkeletonCLIP++’, an extension of our prior work in human action recognition that emphasizes the use of semantic information beyond traditional label-based methods. The first innovation, ‘Weighted Frame Integration’ (WFI), shifts video feature computation from simple averaging to a weighted-frame approach, enabling a more nuanced representation of human movements in line with semantic relevance. Another key development, ‘Contrastive Sample Identification’ (CSI), introduces a novel discriminative task within the model: identifying the most similar negative sample among positive ones, which enhances the model’s ability to distinguish between closely related actions. Finally, ‘BERT Text Encoder Integration’ (BTEI) leverages the pre-trained BERT model as our text encoder to further refine performance. Empirical evaluations on the HMDB-51, UCF-101, and NTU RGB+D 60 datasets show positive improvements, especially on smaller datasets. ‘SkeletonCLIP++’ thus offers a refined approach to human action recognition, ensuring semantic integrity and detailed differentiation in video data analysis.
(This article belongs to the Special Issue Smart Sensing Technology for Human Activity Recognition)
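The Weighted Frame Integration idea — replacing a plain average of per-frame features with a similarity-weighted sum — can be sketched as follows. This simplified stand-in weights each frame by the softmax of its dot-product similarity to a text feature; it is not the authors' exact formulation:

```python
import math

def weighted_frame_integration(frame_feats, text_feat):
    """Combine per-frame feature vectors into one video feature,
    weighting each frame by the softmax of its dot-product similarity
    to a text feature (instead of a plain average)."""
    sims = [sum(f * t for f, t in zip(frame, text_feat)) for frame in frame_feats]
    mx = max(sims)
    exps = [math.exp(s - mx) for s in sims]   # numerically stable softmax
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(frame_feats[0])
    return [sum(w * frame[i] for w, frame in zip(weights, frame_feats))
            for i in range(dim)]
```

Frames whose features align with the action's semantic description dominate the video representation, while irrelevant frames are suppressed.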

14 pages, 1087 KiB  
Article
Wearable Sensor to Monitor Quality of Upper Limb Task Practice for Stroke Survivors at Home
by Na Jin Seo, Kristen Coupland, Christian Finetto and Gabrielle Scronce
Sensors 2024, 24(2), 554; https://doi.org/10.3390/s24020554 - 16 Jan 2024
Cited by 1 | Viewed by 870
Abstract
Many stroke survivors experience persistent upper extremity impairment that limits performance in activities of daily living. Upper limb recovery requires high repetitions of task-specific practice, so stroke survivors are often prescribed task practices at home to supplement rehabilitation therapy. Poor-quality task practice, such as the use of compensatory movement patterns, results in maladaptive neuroplasticity and suboptimal motor recovery, yet there is currently no tool for remotely monitoring the movement quality of stroke survivors’ task practices at home. The objective of this study was to evaluate the feasibility of classifying movement quality at home using a wearable IMU. Nineteen stroke survivors wore an IMU sensor on the paretic wrist and performed four functional upper limb tasks in the lab and later at home while video-recording themselves. The lab data served as reference data for classifying home movement quality using dynamic time warping; incorrect and correct movement quality was labeled by a therapist. Home task practice movement quality was classified with an accuracy of 92% and an F1 score of 0.95 for all tasks combined. Movement types contributing to misclassification were further investigated. The results support the feasibility of a home movement quality monitoring system to assist with upper limb rehabilitation post stroke.
(This article belongs to the Special Issue Smart Sensing Technology for Human Activity Recognition)
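Dynamic time warping, the technique used here to compare home recordings against lab references, measures the distance between two sequences while tolerating differences in timing and speed. A minimal sketch with 1-D sequences and a nearest-reference classifier (illustrative only, not the study's implementation):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    # cost[i][j] = DTW distance between a[:i] and b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]

def classify_movement(references, query):
    """Label a home recording by its nearest labeled lab reference under DTW."""
    return min(references, key=lambda label: dtw_distance(references[label], query))
```

Because DTW aligns sequences elastically, a correct movement performed slightly slower at home still matches the correct lab reference more closely than a compensatory one.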

17 pages, 3408 KiB  
Article
A Passive RF Testbed for Human Posture Classification in FM Radio Bands
by João Pereira, Eugene Casmin and Rodolfo Oliveira
Sensors 2023, 23(23), 9563; https://doi.org/10.3390/s23239563 - 1 Dec 2023
Viewed by 655
Abstract
This paper explores the opportunities and challenges of classifying human posture in indoor scenarios by analyzing the Frequency-Modulated (FM) radio broadcasting signal received at multiple locations. Specifically, we present a passive RF testbed operating in FM radio bands that allows experimentation with innovative human posture classification techniques. After introducing the details of the proposed testbed, we describe a simple methodology to detect and classify human posture, including a detailed study of feature engineering and the adoption of three traditional classification techniques. Implementing the proposed methodology on software-defined radio devices allows an evaluation of the testbed’s capability to classify human posture in real time. The evaluation results presented in this paper confirm that classification accuracy can reach approximately 90%, showing the effectiveness of the proposed testbed and its potential to support the development of future classification techniques based only on passively sensing FM bands.
(This article belongs to the Special Issue Smart Sensing Technology for Human Activity Recognition)

20 pages, 5818 KiB  
Article
Recognition of Grasping Patterns Using Deep Learning for Human–Robot Collaboration
by Pedro Amaral, Filipe Silva and Vítor Santos
Sensors 2023, 23(21), 8989; https://doi.org/10.3390/s23218989 - 5 Nov 2023
Cited by 1 | Viewed by 1299
Abstract
Recent advances in collaborative robotics aim to endow industrial robots with prediction and anticipation abilities. In many shared tasks, the robot’s ability to accurately perceive and recognize the objects being manipulated by the human operator is crucial for predicting the operator’s intentions. In this context, this paper proposes a novel learning-based framework that enables an assistive robot to recognize the object grasped by the human operator from the pattern of the hand and finger joints. The framework combines the strengths of the widely available MediaPipe software for detecting hand landmarks in an RGB image with a deep multi-class classifier that predicts the manipulated object from the extracted keypoints. This study focuses on a comparison between two deep architectures, a convolutional neural network and a transformer, in terms of prediction accuracy, precision, recall, and F1-score. We test the performance of the recognition system on a new dataset collected with different users and in different sessions. The results demonstrate the effectiveness of the proposed methods, while providing valuable insights into the factors that limit the generalization ability of the models.
(This article belongs to the Special Issue Smart Sensing Technology for Human Activity Recognition)

22 pages, 7126 KiB  
Article
Orientation-Independent Human Activity Recognition Using Complementary Radio Frequency Sensing
by Muhammad Muaaz, Sahil Waqar and Matthias Pätzold
Sensors 2023, 23(13), 5810; https://doi.org/10.3390/s23135810 - 22 Jun 2023
Cited by 2 | Viewed by 1429
Abstract
RF sensing offers an unobtrusive, user-friendly, and privacy-preserving method for detecting accidental falls and recognizing human activities. Contemporary RF-based HAR systems generally employ a single monostatic radar to recognize human activities. However, a single monostatic radar cannot detect the motion of a target, e.g., a moving person, orthogonal to the boresight axis of the radar. Owing to this inherent physical limitation, a single monostatic radar fails to efficiently recognize orientation-independent human activities. In this work, we present a complementary RF sensing approach that overcomes this limitation to robustly recognize orientation-independent human activities and falls. Our approach uses a distributed mmWave MIMO radar system set up as two separate monostatic radars placed orthogonal to each other in an indoor environment. These two radars illuminate the moving person from two different aspect angles and consequently produce two time-variant micro-Doppler signatures. We first compute the mean Doppler shifts (MDSs) from the micro-Doppler signatures and then extract statistical and time- and frequency-domain features. We adopt feature-level fusion to combine the extracted features and a support vector machine to classify orientation-independent human activities. To evaluate our approach, we used an orientation-independent human activity dataset collected from six volunteers, consisting of more than 1350 activity trials of five different activities performed in different orientations. The proposed complementary RF sensing approach achieved an overall classification accuracy of 98.31% to 98.54%, overcoming the inherent limitations of a conventional single monostatic radar-based HAR system and outperforming it by 6%.
(This article belongs to the Special Issue Smart Sensing Technology for Human Activity Recognition)
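The mean Doppler shift (MDS) is the power-weighted average Doppler frequency of each time frame of a micro-Doppler spectrogram. A minimal sketch, under the assumption that the spectrogram is given as rows of per-frequency-bin power (illustrative only; the paper's signal chain is more involved):

```python
def mean_doppler_shift(spectrogram, freqs):
    """For each time frame of a micro-Doppler spectrogram (a list of
    rows, each holding power per Doppler-frequency bin), compute the
    power-weighted mean Doppler frequency."""
    mds = []
    for frame in spectrogram:
        total = sum(frame)
        if total == 0:
            mds.append(0.0)  # no return power: no measurable shift
        else:
            mds.append(sum(p * f for p, f in zip(frame, freqs)) / total)
    return mds
```

The resulting 1-D MDS time series is what the statistical and time- and frequency-domain features are extracted from before fusion and SVM classification.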

Other


19 pages, 435 KiB  
Perspective
The Lifespan of Human Activity Recognition Systems for Smart Homes
by Shruthi K. Hiremath and Thomas Plötz
Sensors 2023, 23(18), 7729; https://doi.org/10.3390/s23187729 - 7 Sep 2023
Cited by 1 | Viewed by 1242
Abstract
With the growing interest in smart home environments and in providing seamless interactions with various smart devices, robust and reliable human activity recognition (HAR) systems are becoming essential. Such systems provide automated assistance to residents or longitudinally monitor their daily activities for health and well-being assessments, as well as for tracking (long-term) behavior changes, thereby contributing to an understanding of residents’ health and continued well-being. Smart homes are personalized settings in which residents engage in everyday activities in their own idiosyncratic ways. Toward a fully functional HAR system that requires minimal supervision, we provide a systematic analysis and a technical definition of the lifespan of activity recognition systems for smart homes. This lifespan lays out the different phases of building a HAR system, motivated by an application scenario typically observed in the home setting. For each phase, we detail the technical solutions that need to be developed so that the HAR system can be derived and continuously improved through data-driven procedures. The detailed lifespan can be used as a framework for designing state-of-the-art procedures corresponding to the different phases.
(This article belongs to the Special Issue Smart Sensing Technology for Human Activity Recognition)
