Sensors for Activity Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (15 November 2020) | Viewed by 38150

Special Issue Editors


Prof. Dr. Michael Beigl
Guest Editor
Institute of Telematics, Campus Süd, Karlsruhe Institute of Technology (KIT), 76131 Karlsruhe, Germany
Interests: wearable sensors; sensor systems; ubiquitous computing; IoT; machine learning for sensor systems

Dr. Anja Exler
Guest Editor
Karlsruhe Institute of Technology (KIT), TECO/Pervasive Computing Systems, 76131 Karlsruhe, Germany
Interests: human-computer interaction; experience sampling; context recognition; user perception of smartphone notifications; interruptibility-aware management of smartphone notifications

Special Issue Information

Dear Colleagues,

Activity recognition is a long-standing research topic, with papers dating back more than 20 years. With the advent of novel on-body devices, the field has recently gained rapidly rising attention. This Special Issue intends to collect the newest developments in this area.

This Special Issue of the MDPI journal "Sensors" seeks contributions on sensors for activity recognition. It welcomes all facets of research in sensor-based activity recognition, including novel sensing systems, methods and algorithms for model learning, applications and case studies, as well as open challenges and future research directions. Particular emphasis will be given to the evaluation of these sensors and models, including objectives, metrics, tools, procedures, and methodologies.

Topics of the Special Issue include, but are not limited to:

  • Sensing devices, systems, and hardware for activity recognition (e.g., using mobile, wearable, or environmental sensors)
  • Sensor fusion in activity recognition
  • Tools and algorithms for activity recognition with a focus on sensing
  • Real-world applications, experiences, and evaluations of sensors for activity recognition
  • Dataset acquisition, annotation, and validation based on sensors for activity recognition, including specific topics such as noise robustness
  • Crowdsourcing and participatory sensing in activity recognition
  • AI-based methods for sensor-based activity recognition (e.g., feature extraction, selection, and evaluation; transfer learning; (un/semi-)supervised learning)
  • Performance evaluation of sensors and sensing in activity recognition
  • ...
Authors of selected papers presented at the Conference on Activity and Behavior Computing (ABC), the Workshop on Human Activity Sensing Corpus and Applications (HASCA), and the Workshop on Earable Computing (EarComp) are invited to submit extended versions to this Special Issue of the journal "Sensors". All submitted papers will undergo peer review. Accepted papers will be published in open access format in "Sensors" and collected together on the Special Issue website.

The original conference paper should be cited and noted on the first page of the submission; authors are asked to disclose in their cover letter that the manuscript extends a conference paper and to include a statement of what has been changed. Please note that the extended paper should contain at least 50% new content (e.g., technical extensions, more in-depth evaluations, or additional use cases) and no more than 30% text copied from the conference paper.

Prof. Dr. Michael Beigl
Dr. Anja Exler
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (7 papers)


Research


16 pages, 2129 KiB  
Article
ExerSense: Physical Exercise Recognition and Counting Algorithm from Wearables Robust to Positioning
by Shun Ishii, Anna Yokokubo, Mika Luimula and Guillaume Lopez
Sensors 2021, 21(1), 91; https://doi.org/10.3390/s21010091 - 25 Dec 2020
Cited by 21 | Viewed by 5047
Abstract
Wearable devices are currently popular for fitness tracking. However, these general-usage devices can only track a limited set of prespecified exercises. In our previous work, we introduced ExerSense, which segments, classifies, and counts multiple physical exercises in real time based on a correlation method. It can also track user-specified exercises from only a single motion collected in advance. This paper is an extension of that work. We collected acceleration data for five types of regular exercises using four different wearable devices. To find the most accurate device and position for multiple-exercise recognition, we conducted 50 random validation runs. Our results show the robustness of ExerSense, which works well with various devices. Among the four general-usage devices, the chest-mounted sensor is best for our target exercises, with the upper-arm-mounted smartphone a close second. The wrist-mounted smartwatch is third, and the ear-mounted sensor performs worst.
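As a concrete illustration of the correlation method, the following minimal Python sketch counts exercise repetitions by sliding a single pre-recorded one-repetition acceleration template over the incoming stream. The Pearson-correlation matching, the fixed 0.8 threshold, and the non-overlapping match rule are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def normalized_xcorr(window: np.ndarray, template: np.ndarray) -> float:
    """Pearson correlation between a signal window and a one-rep template."""
    w = (window - window.mean()) / (window.std() + 1e-8)
    t = (template - template.mean()) / (template.std() + 1e-8)
    return float(np.dot(w, t) / len(t))

def count_repetitions(signal: np.ndarray, template: np.ndarray,
                      threshold: float = 0.8) -> int:
    """Slide the template over the acceleration stream and count
    non-overlapping matches whose correlation exceeds the threshold."""
    n, m = len(signal), len(template)
    reps, i = 0, 0
    while i + m <= n:
        if normalized_xcorr(signal[i:i + m], template) >= threshold:
            reps += 1
            i += m  # skip a full repetition before matching again
        else:
            i += 1
    return reps

# Synthetic check: ten noisy sine-shaped "repetitions" of 50 samples each
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 2 * np.pi, 50))
stream = np.concatenate([template + 0.05 * rng.standard_normal(50)
                         for _ in range(10)])
print(count_repetitions(stream, template))  # -> 10
```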

16 pages, 5932 KiB  
Article
Fusion of Multiple Lidars and Inertial Sensors for the Real-Time Pose Tracking of Human Motion
by Ashok Kumar Patil, Adithya Balasubramanyam, Jae Yeong Ryu, Pavan Kumar B N, Bharatesh Chakravarthi and Young Ho Chai
Sensors 2020, 20(18), 5342; https://doi.org/10.3390/s20185342 - 18 Sep 2020
Cited by 22 | Viewed by 5252
Abstract
Today, enhancements in sensing technology enable the use of multiple sensors to track human motion and activity precisely. Tracking human motion has various applications, such as fitness training, healthcare, rehabilitation, human-computer interaction, virtual reality, and activity recognition. The fusion of multiple sensors therefore creates new opportunities to develop and improve existing systems. This paper proposes a pose-tracking system that fuses multiple three-dimensional (3D) light detection and ranging (lidar) and inertial measurement unit (IMU) sensors. The initial step estimates the human skeletal parameters proportional to the target user’s height by extracting the point cloud from the lidars. Next, IMUs are used to capture the orientation of each skeleton segment and estimate the respective joint positions. In the final stage, the displacement drift in the position is corrected by fusing the data from both sensors in real time. The installation setup is relatively effortless, flexible with regard to sensor locations, and delivers results comparable to state-of-the-art pose-tracking systems. We evaluated the proposed system regarding its accuracy in the user’s height estimation, full-body joint position estimation, and reconstruction of the 3D avatar, using a publicly available dataset for the experimental evaluation wherever possible. The results reveal that the accuracy of the height and position estimation is well within an acceptable range of ±3–5 cm. The reconstruction of the motion based on both the publicly available dataset and our own data is precise and realistic.
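The drift-correction stage invites a short sketch: a constant-gain complementary filter in which the smooth but drifting IMU motion dominates short-term updates, while the absolute but noisy lidar positions slowly pull the estimate back. The filter form and its gain are assumptions for illustration, not the authors' exact fusion scheme.

```python
import numpy as np

def fuse_position(imu_pos: np.ndarray, lidar_pos: np.ndarray,
                  alpha: float = 0.98) -> np.ndarray:
    """Blend per-step IMU motion (smooth, drifting) with lidar positions
    (noisy, absolute); alpha close to 1 trusts the IMU short-term."""
    fused = np.empty_like(imu_pos)
    fused[0] = lidar_pos[0]                      # initialize from lidar
    for k in range(1, len(imu_pos)):
        imu_step = imu_pos[k] - imu_pos[k - 1]   # relative IMU motion
        fused[k] = alpha * (fused[k - 1] + imu_step) + (1 - alpha) * lidar_pos[k]
    return fused

# Synthetic check: a circular joint trajectory; the IMU track drifts
# linearly, the lidar track carries white noise
t = np.linspace(0, 10, 500)[:, None]
truth = np.hstack([np.sin(t), np.cos(t), 0.0 * t])
imu = truth + 0.02 * t                           # accumulating drift
lidar = truth + 0.05 * np.random.default_rng(1).standard_normal(truth.shape)
est = fuse_position(imu, lidar)
print(np.abs(est - truth).mean())                # residual error stays small
```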

24 pages, 3764 KiB  
Article
Data Quality and Reliability Assessment of Wearable EMG and IMU Sensor for Construction Activity Recognition
by Srikanth Sagar Bangaru, Chao Wang and Fereydoun Aghazadeh
Sensors 2020, 20(18), 5264; https://doi.org/10.3390/s20185264 - 15 Sep 2020
Cited by 27 | Viewed by 5227
Abstract
The workforce shortage is one of the significant problems in the construction industry. To overcome the challenges of the workforce shortage, various researchers have proposed wearable sensor-based systems for construction safety and health. Although sensors provide rich and detailed information, not all sensors can be used for construction applications. This study evaluates the data quality and reliability of the forearm electromyography (EMG) and inertial measurement unit (IMU) of an armband sensor for construction activity classification. To this end, forearm EMG and IMU data collected from eight participants while performing construction activities such as screwing, wrenching, lifting, and carrying on two different days were used to analyze data quality and reliability for activity recognition through seven different experiments. The results of these experiments show that the armband sensor’s data quality is comparable to that of conventional EMG and IMU sensors, with excellent relative and absolute reliability between trials for all five activities. The activity classification results were highly reliable, with minimal change in classification accuracies across the two days. Moreover, the results indicate that combined EMG and IMU models classify activities with higher accuracy than individual sensor models.
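The combined-versus-individual comparison can be sketched as feature-level fusion: per-window features from each modality are concatenated before classification. The feature choices (EMG root-mean-square, IMU mean and standard deviation), the random forest, and the synthetic data below are illustrative stand-ins for the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(emg: np.ndarray, imu: np.ndarray) -> dict:
    """Per-window features: EMG RMS per channel; IMU mean and std per axis."""
    return {
        "emg": np.sqrt((emg ** 2).mean(axis=0)),
        "imu": np.concatenate([imu.mean(axis=0), imu.std(axis=0)]),
    }

# Synthetic stand-in: 200 windows, 8 EMG channels and 6 IMU axes of
# 100 samples each, 4 activity classes
rng = np.random.default_rng(2)
y = rng.integers(0, 4, 200)
feats = [window_features(rng.standard_normal((100, 8)) * (1 + c),
                         rng.standard_normal((100, 6)) + c)
         for c in y]
X_emg = np.stack([f["emg"] for f in feats])
X_imu = np.stack([f["imu"] for f in feats])
X_both = np.hstack([X_emg, X_imu])              # feature-level fusion

for name, X in [("EMG", X_emg), ("IMU", X_imu), ("EMG+IMU", X_both)]:
    acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
    print(f"{name}: {acc.mean():.2f}")
```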

28 pages, 15652 KiB  
Article
Facial Muscle Activity Recognition with Reconfigurable Differential Stethoscope-Microphones
by Hymalai Bello, Bo Zhou and Paul Lukowicz
Sensors 2020, 20(17), 4904; https://doi.org/10.3390/s20174904 - 30 Aug 2020
Cited by 7 | Viewed by 3436
Abstract
Many human activities and states are related to the actions of the facial muscles: from the expression of emotions, stress, and non-verbal communication, through health-related actions such as coughing and sneezing, to nutrition and drinking. In this work, we describe in detail the design and evaluation of a wearable system for facial muscle activity monitoring based on a reconfigurable differential array of stethoscope-microphones. In our system, six stethoscopes are placed at locations that could easily be integrated into the frame of smart glasses. The paper describes the detailed hardware design and the selection and adaptation of appropriate signal processing and machine learning methods. For the evaluation, we asked eight participants to imitate a set of facial actions, such as expressions of happiness, anger, surprise, sadness, upset, and disgust, and gestures like kissing, winking, sticking the tongue out, and taking a pill. We evaluated a complete dataset of 2640 events with a 66% training and 33% testing split. Although we encountered high variability in the volunteers’ expressions, our approach achieves a recall of 55%, a precision of 56%, and an F1-score of 54% in the user-independent scenario (9% chance level). On a user-dependent basis, our worst result has an F1-score of 60% and our best an F1-score of 89%, with a recall of at least 60% for expressions like happiness, anger, kissing, sticking the tongue out, and neutral (null class).
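The differential-array idea admits a brief sketch: paired stethoscope channels are subtracted so that sound common to both (ambient noise) cancels while local muscle sounds remain, after which simple spectral features can be computed. The channel pairing, band limits, and band-energy feature below are hypothetical, not the paper's processing chain.

```python
import numpy as np

def differential_channels(x: np.ndarray, pairs) -> np.ndarray:
    """x: (samples, channels) audio; return one differential signal per pair."""
    return np.stack([x[:, a] - x[:, b] for a, b in pairs], axis=1)

def band_energy(sig: np.ndarray, fs: int, lo: float, hi: float) -> np.ndarray:
    """Spectral energy in the [lo, hi] Hz band, per differential channel."""
    spec = np.abs(np.fft.rfft(sig, axis=0)) ** 2
    freqs = np.fft.rfftfreq(sig.shape[0], 1 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spec[mask].sum(axis=0)

# Six stethoscope channels at 8 kHz; the left/right pairing is hypothetical
fs = 8000
x = np.random.default_rng(3).standard_normal((fs, 6))
pairs = [(0, 3), (1, 4), (2, 5)]
feats = band_energy(differential_channels(x, pairs), fs, 20, 500)
print(feats.shape)  # (3,) -- one band-energy feature per pair
```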

23 pages, 2372 KiB  
Article
A Method for Sensor-Based Activity Recognition in Missing Data Scenario
by Tahera Hossain, Md. Atiqur Rahman Ahad and Sozo Inoue
Sensors 2020, 20(14), 3811; https://doi.org/10.3390/s20143811 - 08 Jul 2020
Cited by 21 | Viewed by 5215
Abstract
Sensor-based human activity recognition has various applications in healthcare, smart homes for the elderly, sports, etc. There are numerous works in this field that recognize various human activities from sensor data. However, those works assume clean data with almost no missing values, whereas missing data are a genuine concern for real-life healthcare centers. To address this problem, we explored sensor-based activity recognition when some data are lost in a random pattern. In this paper, we propose a novel method to improve activity recognition in the presence of missing data without any data recovery. We considered data missing in a random pattern, which is realistic for sensor data collection. Initially, we created different percentages of random missing data only in the test data, while training was performed on good-quality data. In our proposed approach, we explicitly induce different percentages of missing data randomly in the raw sensor data to train the model with missing data. Learning with missing data prepares the model to handle missing data during the classification of various activities in the test module. This approach keeps the machine learning model plausible, as it learns and predicts within an identical domain. We exploited several time-series statistical features to extract better features for recognizing various human activities. We explored both support vector machines and random forests as machine learning models for activity classification. We developed a synthetic dataset for empirical evaluation and show that the method can effectively improve recognition accuracy from 80.8% to 97.5%. Afterward, we tested our approach on activities from two challenging benchmark datasets: the Human Activity Sensing Consortium (HASC) dataset and a single chest-mounted accelerometer dataset. We examined the method for different missing percentages, varied window sizes, and diverse window sliding widths. Our explorations demonstrate improved recognition performance even in the presence of missing data.
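The core training trick, injecting random missingness into the raw training signal so the model is exposed to the same corruption it will face at test time, can be sketched as below. The NaN-tolerant statistical features and the random forest stand in for the paper's fuller feature set and classifier pair.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def induce_missing(x: np.ndarray, rate: float, rng) -> np.ndarray:
    """Replace a random fraction `rate` of samples with NaN."""
    x = x.copy()
    x[rng.random(x.shape) < rate] = np.nan
    return x

def stat_features(window: np.ndarray) -> np.ndarray:
    """NaN-tolerant statistics per axis: mean, std, and range."""
    return np.concatenate([np.nanmean(window, axis=0),
                           np.nanstd(window, axis=0),
                           np.nanmax(window, axis=0) - np.nanmin(window, axis=0)])

# Synthetic stand-in: 300 windows of 3-axis acceleration, 3 classes
rng = np.random.default_rng(4)
y = rng.integers(0, 3, 300)
raw = [rng.standard_normal((128, 3)) + c for c in y]

# Train with 20% induced missingness; test with 40% missing
split = 200
X_train = np.stack([stat_features(induce_missing(w, 0.2, rng)) for w in raw[:split]])
X_test = np.stack([stat_features(induce_missing(w, 0.4, rng)) for w in raw[split:]])
clf = RandomForestClassifier(random_state=0).fit(X_train, y[:split])
print(clf.score(X_test, y[split:]))
```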

17 pages, 4717 KiB  
Article
Dynamic Hand Gesture Recognition Based on a Leap Motion Controller and Two-Layer Bidirectional Recurrent Neural Network
by Linchu Yang, Ji’an Chen and Weihang Zhu
Sensors 2020, 20(7), 2106; https://doi.org/10.3390/s20072106 - 08 Apr 2020
Cited by 35 | Viewed by 7510
Abstract
Dynamic hand gesture recognition is one of the most significant tools for human–computer interaction. To improve the accuracy of dynamic hand gesture recognition, this paper proposes a two-layer bidirectional recurrent neural network for recognizing dynamic hand gestures from a Leap Motion Controller (LMC). In addition, an efficient way to capture dynamic hand gestures with the LMC is identified, with gestures represented by sets of feature vectors from the LMC. The proposed system was tested on American Sign Language (ASL) datasets with 360 and 480 samples, and on the Handicraft-Gesture dataset. On the ASL dataset with 360 samples, the system achieves accuracies of 100% and 96.3% on the training and testing sets; on the ASL dataset with 480 samples, 100% and 95.2%; and on the Handicraft-Gesture dataset, 100% and 96.7%. In addition, 5-fold, 10-fold, and leave-one-out cross-validation were performed on these datasets, yielding accuracies of 93.33%, 94.1%, and 98.33% (ASL, 360 samples); 93.75%, 93.5%, and 98.13% (ASL, 480 samples); and 88.66%, 90%, and 92% (Handicraft-Gesture), respectively. The developed system demonstrates similar or better performance compared to other approaches in the literature.
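A two-layer bidirectional recurrent classifier over per-frame LMC feature vectors can be sketched in a few lines of PyTorch. The LSTM cell, layer sizes, and the 30-dimensional per-frame feature vector are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class TwoLayerBiRNN(nn.Module):
    def __init__(self, n_features: int = 30, hidden: int = 64,
                 n_classes: int = 10):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # fwd + bwd states

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, n_features) -- per-frame LMC feature vectors
        out, _ = self.rnn(x)           # (batch, frames, 2 * hidden)
        return self.head(out[:, -1])   # classify from the final time step

# A batch of 8 gestures, 60 frames each, 30 features per frame
model = TwoLayerBiRNN()
print(model(torch.randn(8, 60, 30)).shape)  # torch.Size([8, 10])
```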

Other


14 pages, 591 KiB  
Letter
A Comparative Analysis of Hybrid Deep Learning Models for Human Activity Recognition
by Saedeh Abbaspour, Faranak Fotouhi, Ali Sedaghatbaf, Hossein Fotouhi, Maryam Vahabi and Maria Linden
Sensors 2020, 20(19), 5707; https://doi.org/10.3390/s20195707 - 07 Oct 2020
Cited by 49 | Viewed by 5560
Abstract
Recent advances in artificial intelligence and machine learning (ML) have led to effective methods and tools for analyzing human behavior. Human Activity Recognition (HAR) is one of the fields that has seen explosive research interest in the ML community due to its wide range of applications. HAR is one of the most helpful technologies for supporting the daily life of the elderly and helping people suffering from cognitive disorders, Parkinson’s disease, dementia, etc. It is also very useful in areas such as transportation, robotics, and sports. Deep learning (DL) is a branch of ML based on complex Artificial Neural Networks (ANNs) that has demonstrated high accuracy and performance in HAR. Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are two types of DL models widely used in recent years to address the HAR problem. The purpose of this paper is to investigate the effectiveness of their integration in recognizing daily activities, e.g., walking. We analyze four hybrid models that integrate CNNs with four powerful RNNs, i.e., LSTMs, BiLSTMs, GRUs, and BiGRUs. The outcomes of our experiments on the PAMAP2 dataset indicate that our proposed hybrid models achieve an outstanding level of performance with respect to several indicative measures, e.g., F-score, accuracy, sensitivity, and specificity.
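One representative hybrid, a one-dimensional convolutional front-end feeding a bidirectional GRU, can be sketched as follows. Kernel sizes, channel counts, and the PAMAP2-like input shape (40 sensor channels) are assumptions for illustration, not the paper's exact architectures.

```python
import torch
import torch.nn as nn

class CNNBiGRU(nn.Module):
    def __init__(self, n_channels: int = 40, n_classes: int = 12):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),                   # halve the time axis
        )
        self.gru = nn.GRU(64, 64, bidirectional=True, batch_first=True)
        self.head = nn.Linear(128, n_classes)  # fwd + bwd GRU states

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -- one raw sensor window
        z = self.cnn(x)                        # (batch, 64, time / 2)
        z = z.transpose(1, 2)                  # time-major for the GRU
        out, _ = self.gru(z)
        return self.head(out[:, -1])           # classify from the last step

# A batch of 16 windows, 40 channels, 100 time steps
model = CNNBiGRU()
print(model(torch.randn(16, 40, 100)).shape)  # torch.Size([16, 12])
```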
