
Sensors for Human Activity Recognition II

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Wearables".

Deadline for manuscript submissions: closed (29 February 2024) | Viewed by 12318

Special Issue Editors


Guest Editor
Computer Science, Faculty of Mathematics/Informatics, University of Bremen, 28359 Bremen, Germany
Interests: biosignal processing; human-centered man–machine interfaces; user states and traits modeling; machine learning; interfaces based on muscle and brain activities; automatic speech recognition; silent speech interfaces; brain interfaces

Guest Editor
Cognitive Systems Laboratory, Faculty of Mathematics/Informatics, University of Bremen, 28359 Bremen, Germany
Interests: biosignal processing; feature selection and feature space reduction; human activity recognition; real-time recognition systems; knee bandage; machine learning

Guest Editor
Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, Lisboa, Portugal
Interests: instrumentation; signal processing; machine learning; human activity recognition (HAR)

Special Issue Information

Dear Colleagues,

Human activity recognition (HAR) has been playing an increasingly important role in the digital age.

High-quality sensor observations suitable for recognizing users' activities, whether acquired through external or wearable sensing technology, depend on the sophisticated design and appropriate application of sensors.

Traditional sensors suitable for human activity recognition, including external sensors for smart homes, optical sensors such as cameras for capturing video, and bioelectrical and biomechanical sensors for wearable applications, have been studied and validated thoroughly. They continue to be researched in depth for more effective and efficient use, and the areas of life facilitated by sensor-based HAR keep expanding.

Meanwhile, innovative sensor research for HAR remains highly active in the academic community, encompassing entirely new sensor types suited to HAR, new designs and applications of the traditional sensors mentioned above, and the introduction of sensor types not traditionally associated with HAR into HAR tasks, among others.

This Special Issue aims to provide researchers in related fields with a platform to present their unique insights and late-breaking achievements. Authors are encouraged to submit their state-of-the-art research and contributions on sensors for HAR.

The main topics for this Issue include but are not limited to:

  • Sensor design and development;
  • Embedded signal processing;
  • Biosignal instrumentation;
  • Mobile sensing and mobile-phone-based signal processing;
  • Wearable sensors and body sensor networks;
  • Printable sensors;
  • Implants;
  • Behavior recognition;
  • Applications to healthcare, sports, edutainment, and others;
  • Sensor-based machine learning;
  • HAR in XR.

Prof. Dr. Tanja Schultz
Dr. Hui Liu
Prof. Dr. Hugo Gamboa
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (9 papers)


Research


12 pages, 61978 KiB  
Article
Adopting Graph Neural Networks to Analyze Human–Object Interactions for Inferring Activities of Daily Living
by Peng Su and Dejiu Chen
Sensors 2024, 24(8), 2567; https://doi.org/10.3390/s24082567 - 17 Apr 2024
Viewed by 230
Abstract
Human Activity Recognition (HAR) refers to a field that aims to identify human activities by adopting multiple techniques. In this field, different applications, such as smart homes and assistive robots, are introduced to support individuals in their Activities of Daily Living (ADL) by analyzing data collected from various sensors. Apart from wearable sensors, the adoption of camera frames to analyze and classify ADL has emerged as a promising trend. To accomplish this, existing approaches typically rely on object classification with pose estimation using the image frames collected from cameras. Given the inherent correlations between human–object interactions and ADL, further efforts are often needed to leverage these correlations for more effective and well-justified decisions. To this end, this work proposes a framework where Graph Neural Networks (GNN) are adopted to explicitly analyze human–object interactions for more effectively recognizing daily activities. By automatically encoding the correlations among the various interactions detected in collected relational data, the framework infers the existence of different activities alongside their corresponding environmental objects. As a case study, we use the Toyota Smart Home dataset to evaluate the proposed framework. Compared with conventional feed-forward neural networks, the results demonstrate significantly superior performance in identifying ADL, allowing for the classification of different daily activities with an accuracy of 0.88. Furthermore, the incorporation of encoded information from relational data enhances object-inference performance compared to the GNN without joint prediction, increasing accuracy from 0.71 to 0.77.
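
For readers unfamiliar with the graph-based formulation, the minimal sketch below illustrates the general idea of classifying an activity from a human–object interaction graph with a GNN in PyTorch Geometric. It is an illustration only, not the authors' implementation; the two-layer GCN, the mean pooling, and all sizes are assumptions.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class InteractionGNN(torch.nn.Module):
    """Classify a daily activity from a graph whose nodes are the person and
    detected objects and whose edges are observed human-object interactions."""

    def __init__(self, num_node_features: int, hidden: int, num_activities: int):
        super().__init__()
        self.conv1 = GCNConv(num_node_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, num_activities)

    def forward(self, x, edge_index, batch):
        # Two rounds of message passing propagate interaction context.
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        h = global_mean_pool(h, batch)  # one embedding per interaction graph
        return self.head(h)             # logits over ADL classes
```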

24 pages, 5574 KiB  
Article
MeshID: Few-Shot Finger Gesture Based User Identification Using Orthogonal Signal Interference
by Weiling Zheng, Yu Zhang, Landu Jiang, Dian Zhang and Tao Gu
Sensors 2024, 24(6), 1978; https://doi.org/10.3390/s24061978 - 20 Mar 2024
Viewed by 430
Abstract
Radio frequency (RF) technology has been applied to enable advanced behavioral sensing in human–computer interaction, owing to its device-free sensing capability and wide availability on Internet of Things devices. However, enabling finger-gesture-based identification with high accuracy can be challenging due to low RF signal resolution and user heterogeneity. In this paper, we propose MeshID, a novel RF-based user identification scheme that enables identification through finger gestures with high accuracy. MeshID significantly improves the sensing sensitivity on RF signal interference, and hence is able to extract subtle individual biometrics through velocity distribution profiling (VDP) features from less-distinct finger motions such as drawing digits in the air. We design an efficient few-shot model retraining framework based on a first component reverse module, achieving high model robustness and performance in complex environments. We conduct comprehensive real-world experiments, and the results show that MeshID achieves a user identification accuracy of 95.17% on average in three indoor environments. The results indicate that MeshID outperforms the state-of-the-art in identification performance at lower cost.
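
As rough intuition for velocity distribution profiling, the sketch below builds a normalized histogram over per-frame velocity estimates for one gesture window. This is a simplified stand-in under assumed inputs (a vector of velocity magnitudes), not the paper's actual VDP pipeline or its first-component-reverse retraining; the bin count and velocity cap are illustrative.

```python
import numpy as np

def vdp_features(velocities: np.ndarray, bins: int = 16, v_max: float = 2.0) -> np.ndarray:
    """Normalized histogram of per-frame velocity estimates over one gesture
    window: a simplified stand-in for a velocity distribution profile."""
    hist, _ = np.histogram(np.clip(velocities, 0.0, v_max), bins=bins, range=(0.0, v_max))
    return hist / max(hist.sum(), 1)

# Hypothetical velocity magnitudes recovered from RF signal interference.
window = np.abs(np.random.default_rng(1).normal(0.5, 0.3, size=200))
print(vdp_features(window))  # 16-dim feature vector for an identification model
```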

22 pages, 592 KiB  
Article
Empowering Participatory Research in Urban Health: Wearable Biometric and Environmental Sensors for Activity Recognition
by Rok Novak, Johanna Amalia Robinson, Tjaša Kanduč, Dimosthenis Sarigiannis, Sašo Džeroski and David Kocman
Sensors 2023, 23(24), 9890; https://doi.org/10.3390/s23249890 - 18 Dec 2023
Cited by 1 | Viewed by 1023
Abstract
Participatory exposure research, which tracks behaviour and assesses exposure to stressors like air pollution, traditionally relies on time-activity diaries. This study introduces a novel approach, employing machine learning (ML) to empower laypersons in human activity recognition (HAR), aiming to reduce dependence on manual recording by leveraging data from wearable sensors. Recognising complex activities such as smoking and cooking presents unique challenges due to specific environmental conditions. In this research, we combined wearable environment/ambient and wrist-worn activity/biometric sensors for complex activity recognition in an urban stressor exposure study, measuring parameters like particulate matter concentrations, temperature, and humidity. Two groups, Group H (88 individuals) and Group M (18 individuals), wore the devices and manually logged their activities at hourly and minute-level resolution, respectively. Prioritising accessibility and inclusivity, we selected three classification algorithms: k-nearest neighbours (IBk), decision trees (J48), and random forests (RF), based on: (1) proven efficacy in the existing literature, (2) understandability and transparency for laypersons, (3) availability on user-friendly platforms like WEKA, and (4) efficiency on basic devices such as office laptops or smartphones. Accuracy improved with finer temporal resolution and detailed activity categories. However, compared to other published human activity recognition research, our accuracy rates, particularly for less complex activities, were not as competitive. Misclassifications were higher for vague activities (resting, playing), while well-defined activities (smoking, cooking, running) had few errors. Including environmental sensor data increased accuracy for all activities, especially playing, smoking, and running. Future work should consider exploring other explainable algorithms available on diverse tools and platforms. Our findings underscore ML’s potential in exposure studies, emphasising its adaptability and significance for laypersons while also highlighting areas for improvement.
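
The three classifiers were chosen partly for availability on platforms like WEKA; the sketch below reproduces the same comparison with their common scikit-learn analogues on synthetic stand-in features. The dataset shape and hyperparameters are illustrative, not the study's settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for sensor-derived feature windows (assumed shape).
X, y = make_classification(n_samples=500, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)

models = {
    "IBk (k-nearest neighbours)": KNeighborsClassifier(n_neighbors=5),
    "J48 (C4.5-style decision tree)": DecisionTreeClassifier(random_state=0),
    "RF (random forest)": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold accuracy
    print(f"{name}: {scores.mean():.2f}")
```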

26 pages, 5411 KiB  
Article
Decoding Mental Effort in a Quasi-Realistic Scenario: A Feasibility Study on Multimodal Data Fusion and Classification
by Sabrina Gado, Katharina Lingelbach, Maria Wirzberger and Mathias Vukelić
Sensors 2023, 23(14), 6546; https://doi.org/10.3390/s23146546 - 20 Jul 2023
Cited by 1 | Viewed by 1308
Abstract
Humans’ performance varies due to the mental resources that are available to successfully pursue a task. To monitor users’ current cognitive resources in naturalistic scenarios, it is essential to not only measure demands induced by the task itself but also consider situational and environmental influences. We conducted a multimodal study with 18 participants (nine female, M = 25.9, SD = 3.8 years). In this study, we recorded respiratory, ocular, and cardiac activity, as well as brain activity using functional near-infrared spectroscopy (fNIRS), while participants performed an adapted version of the warship commander task with concurrent emotional speech distraction. We tested the feasibility of decoding the experienced mental effort with a multimodal machine learning architecture. The architecture comprised feature engineering, model optimisation, and model selection to combine multimodal measurements in a cross-subject classification. Our approach reduces possible overfitting and reliably distinguishes two different levels of mental effort. These findings contribute to the prediction of different states of mental effort and pave the way toward generalised state monitoring across individuals in realistic applications.
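
A cross-subject classification of this kind hinges on keeping each participant's data out of the folds used to train on it. The sketch below shows one common way to set that up with grouped cross-validation; the per-modality feature dimensions and the logistic-regression classifier are placeholders, not the paper's architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 360  # hypothetical number of task windows across all participants
X = np.hstack([rng.normal(size=(n, 8)),    # respiratory features (assumed dims)
               rng.normal(size=(n, 6)),    # ocular features
               rng.normal(size=(n, 4)),    # cardiac features
               rng.normal(size=(n, 20))])  # fNIRS features
y = rng.integers(0, 2, size=n)             # low vs. high mental effort
subjects = np.repeat(np.arange(18), n // 18)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# GroupKFold guarantees no participant appears in both train and test folds.
scores = cross_val_score(clf, X, y, groups=subjects, cv=GroupKFold(n_splits=6))
print(f"cross-subject accuracy: {scores.mean():.2f}")
```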

12 pages, 469 KiB  
Article
Exploring Regularization Methods for Domain Generalization in Accelerometer-Based Human Activity Recognition
by Nuno Bento, Joana Rebelo, André V. Carreiro, François Ravache and Marília Barandas
Sensors 2023, 23(14), 6511; https://doi.org/10.3390/s23146511 - 19 Jul 2023
Cited by 1 | Viewed by 1114
Abstract
The study of Domain Generalization (DG) has gained considerable momentum in the Machine Learning (ML) field. Human Activity Recognition (HAR) inherently encompasses diverse domains (e.g., users, devices, or datasets), rendering it an ideal testbed for exploring Domain Generalization. Building upon recent work, this paper investigates the application of regularization methods to bridge the generalization gap between traditional models based on handcrafted features and deep neural networks. We apply various regularizers, including sparse training, Mixup, Distributionally Robust Optimization (DRO), and Sharpness-Aware Minimization (SAM), to deep learning models and assess their performance in Out-of-Distribution (OOD) settings across multiple domains using homogenized public datasets. Our results show that Mixup and SAM are the best-performing regularizers. However, they are unable to match the performance of models based on handcrafted features. This suggests that while regularization techniques can improve OOD robustness to some extent, handcrafted features remain superior for domain generalization in HAR tasks.
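
Of the regularizers compared, Mixup is the simplest to reproduce: training proceeds on convex combinations of example pairs and their labels. The sketch below shows a standard Mixup batch transform in PyTorch; the accelerometer window shapes and class count are illustrative, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup_batch(x: torch.Tensor, y_onehot: torch.Tensor, alpha: float = 0.2):
    """Return convex combinations of example pairs and their labels (Mixup)."""
    lam = float(np.random.beta(alpha, alpha))   # mixing coefficient
    idx = torch.randperm(x.size(0))             # random pairing within the batch
    x_mixed = lam * x + (1 - lam) * x[idx]
    y_mixed = lam * y_onehot + (1 - lam) * y_onehot[idx]
    return x_mixed, y_mixed

# Illustrative batch: 32 accelerometer windows, 3 axes, 128 samples, 6 classes.
x = torch.randn(32, 3, 128)
y = F.one_hot(torch.randint(0, 6, (32,)), num_classes=6).float()
x_mixed, y_mixed = mixup_batch(x, y)  # train with a soft-label (e.g., CE) loss
```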

19 pages, 6496 KiB  
Article
Classification and Analysis of Human Body Movement Characteristics Associated with Acrophobia Induced by Virtual Reality Scenes of Heights
by Xiankai Cheng, Benkun Bao, Weidong Cui, Shuai Liu, Jun Zhong, Liming Cai and Hongbo Yang
Sensors 2023, 23(12), 5482; https://doi.org/10.3390/s23125482 - 10 Jun 2023
Viewed by 1262
Abstract
Acrophobia (fear of heights), a prevalent psychological disorder, elicits profound fear and evokes a range of adverse physiological responses in individuals when exposed to heights, which can place individuals in real danger at actual heights. In this paper, we explore the behavioral influences, in terms of movements, in people confronted with virtual reality scenes of extreme heights and develop an acrophobia classification model based on human movement characteristics. To this end, we used a wireless miniaturized inertial navigation sensor (WMINS) network to obtain information on limb movements in the virtual environment. Based on these data, we constructed a series of feature-processing steps, proposed a system model for the classification of acrophobia and non-acrophobia based on human motion feature analysis, and realized the recognition of acrophobia versus non-acrophobia through the designed integrated learning model. The final accuracy of binary acrophobia classification based on limb motion information reached 94.64%, higher in both accuracy and efficiency than other existing research models. Overall, our study demonstrates a strong correlation between people’s mental state during fear of heights and their limb movements at that time.
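
The paper's "integrated learning" model appears to be an ensemble; the sketch below shows the generic pattern of soft-voting over several base classifiers on motion features. The base models and synthetic features are assumptions for illustration, not the authors' design.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for features extracted from the wireless IMU network.
X, y = make_classification(n_samples=300, n_features=24, n_informative=10,
                           random_state=0)  # y: acrophobia vs. non-acrophobia

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",  # average predicted probabilities across base models
)
print(f"binary accuracy: {cross_val_score(ensemble, X, y, cv=5).mean():.2f}")
```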

16 pages, 8327 KiB  
Article
Counting Activities Using Weakly Labeled Raw Acceleration Data: A Variable-Length Sequence Approach with Deep Learning to Maintain Event Duration Flexibility
by Georgios Sopidis, Michael Haslgrübler and Alois Ferscha
Sensors 2023, 23(11), 5057; https://doi.org/10.3390/s23115057 - 25 May 2023
Cited by 2 | Viewed by 1515
Abstract
This paper presents a novel approach for counting hand-performed activities using deep learning and inertial measurement units (IMUs). The particular challenge in this task is finding the correct window size for capturing activities with different durations. Traditionally, fixed window sizes have been used, which occasionally result in incorrectly represented activities. To address this limitation, we propose segmenting the time series data into variable-length sequences, using ragged tensors to store and process the data. Additionally, our approach utilizes weakly labeled data to simplify the annotation process and reduce the time needed to prepare annotated data for machine learning algorithms. Thus, the model receives only partial information about the performed activity. Therefore, we propose an LSTM-based architecture which takes into account both the ragged tensors and the weak labels. To the best of our knowledge, no prior study has attempted counting with variable-size IMU acceleration data at relatively low computational cost, using the number of completed repetitions of hand-performed activities as the label. Hence, we present the data segmentation method we employed and the model architecture we implemented to show the effectiveness of our approach. Our results are evaluated on the public Skoda dataset for human activity recognition (HAR) and demonstrate a repetition error of ±1 even in the most challenging cases. The findings of this study have applications in, and can be beneficial for, various fields, including healthcare, sports and fitness, human–computer interaction, robotics, and the manufacturing industry.
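
TensorFlow's ragged tensors make the variable-length formulation concrete: each activity window keeps its own length, and the recurrent layer consumes the ragged batch directly. The sketch below is a minimal illustration, not the paper's architecture; the 3-axis input, single LSTM layer, and count-regression head are assumptions.

```python
import tensorflow as tf

# Variable-length IMU windows stored as one ragged batch (illustrative values).
x = tf.ragged.constant([
    [[0.1, 0.0, 9.8], [0.2, 0.1, 9.7]],                    # short activity
    [[0.0, 0.3, 9.6], [0.1, 0.2, 9.8], [0.3, 0.1, 9.5]],   # longer activity
], ragged_rank=1)
y = tf.constant([1.0, 2.0])  # weak labels: completed repetition counts

model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 3), ragged=True),  # accepts ragged sequences
    tf.keras.layers.LSTM(32),                      # no padding or fixed window
    tf.keras.layers.Dense(1),                      # regress the count
])
model.compile(optimizer="adam", loss="mae")
model.fit(x, y, epochs=1, verbose=0)
```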

21 pages, 16467 KiB  
Article
SENS+: A Co-Existing Fabrication System for a Smart DFA Environment Based on Energy Fusion Information
by Teng-Wen Chang, Hsin-Yi Huang, Cheng-Chun Hong, Sambit Datta and Walaiporn Nakapan
Sensors 2023, 23(6), 2890; https://doi.org/10.3390/s23062890 - 7 Mar 2023
Viewed by 2947
Abstract
In factories, energy conservation is a crucial issue. The co-fabrication space is a modern-day equivalent of a new factory type: it makes use of Internet of Things (IoT) devices, such as sensors, software, and online connectivity, to keep track of various building features, analyze data, and produce reports on usage patterns and trends that can be used to improve building operations and the environment. Co-fabrication users require dynamic and flexible space, which differs from conventional usage, and because the user composition in a co-fabrication space is dynamic and unstable, conventional approaches cannot be used to assess usage and rentals. Prototyping necessitates a specifically designed energy-saving strategy. The research uses a “seeing–moving–seeing” design thinking framework, which enables designers to more easily convey their ideas to others through direct observation of the outcomes of their intuitive designs and the representation of their works through design media. This work focuses primarily on three components: human behavior, physical fabrication, and digital interaction. The computing system that connects the physical machine is created through communication between the designer and the digital interface, giving the designer control over the physical machine; this forms an interactive fabrication process shaped by behavior. The Sensible Energy System+ combines existing technology, the prototype fabrication machine, and SENS into an interactive fabrication process in which the virtual and the real coexist. This process analyzes each step of fabrication and its energy use, fits it into the computing system that controls the prototype fabrication machine, and reduces the friction between virtual and physical fabrication as well as energy consumption.

Review


15 pages, 718 KiB  
Review
Understanding Naturalistic Facial Expressions with Deep Learning and Multimodal Large Language Models
by Yifan Bian, Dennis Küster, Hui Liu and Eva G. Krumhuber
Sensors 2024, 24(1), 126; https://doi.org/10.3390/s24010126 - 26 Dec 2023
Cited by 2 | Viewed by 1359
Abstract
This paper provides a comprehensive overview of affective computing systems for facial expression recognition (FER) research in naturalistic contexts. The first section presents an updated account of user-friendly FER toolboxes incorporating state-of-the-art deep learning models and elaborates on their neural architectures, datasets, and performances across domains. These sophisticated FER toolboxes can robustly address a variety of challenges encountered in the wild such as variations in illumination and head pose, which may otherwise impact recognition accuracy. The second section of this paper discusses multimodal large language models (MLLMs) and their potential applications in affective science. MLLMs exhibit human-level capabilities for FER and enable the quantification of various contextual variables to provide context-aware emotion inferences. These advancements have the potential to revolutionize current methodological approaches for studying the contextual influences on emotions, leading to the development of contextualized emotion models.
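
As a taste of how compact such toolbox APIs have become, the sketch below runs emotion recognition on a single image with the open-source DeepFace library. Whether DeepFace is among the toolboxes covered by this review is an assumption; the image path is a placeholder, and return formats can vary across library versions.

```python
# pip install deepface  (third-party FER toolbox; assumed example, see lead-in)
from deepface import DeepFace

# "face.jpg" is a placeholder path to an image containing a face.
results = DeepFace.analyze(img_path="face.jpg", actions=["emotion"])
# Recent versions return a list with one dict per detected face.
print(results[0]["dominant_emotion"])  # e.g., "happy"
```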
