
Multi-sensor for Human Activity Recognition: 2nd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 15 September 2024 | Viewed by 1736

Special Issue Editors


Dr. Athina Tsanousa
Guest Editor
Centre for Research and Technology Hellas, Information Technologies Institute, 6th Km Charilaou-Thermi, 57001 Thessaloniki, Greece
Interests: activity recognition; wearable sensors; accelerometers; context-awareness; context modeling; ubiquitous computing

Dr. Georgios Meditskos
Guest Editor
School of Informatics, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
Interests: knowledge representation; semantic web; context-based multisensor reasoning and fusion; semantic dialogue management; knowledge-driven decision making

Dr. Stefanos Vrochidis
Guest Editor

Dr. Ioannis Yiannis Kompatsiaris
Guest Editor
Centre for Research and Technology Hellas, Information Technologies Institute, 6th Km Charilaou-Thermi, 57001 Thessaloniki, Greece
Interests: semantic multimedia analysis; indexing and retrieval; social media and big data analysis; knowledge structures; reasoning and personalization for multimedia applications; e-health and environmental applications

Special Issue Information

Dear Colleagues,

Human activity recognition (HAR) has made significant progress in recent years, attracting growing attention across a number of disciplines and application domains. However, whether video-based or sensor-based, data-driven or knowledge-driven, HAR still faces many issues and challenges that motivate the development of new recognition techniques able to improve accuracy under more realistic conditions. Key challenges in the domain include, among others, difficult feature extraction, scarce annotated data, data heterogeneity, the recognition of concurrent, overlapping, and multi-occupant activities, increased computational costs, temporal imperfections, noise, context-based and high-level interpretability, and non-invasive activity sensing and privacy.

This Special Issue focuses on the current state of the art in HAR approaches, with special emphasis on multi-sensor environments, where information is typically collected from multiple sources and complementary modalities, such as multimedia streams (e.g., video analysis and speech recognition), lifestyle sensors, and environmental sensors. The main objective is to stimulate original, unpublished research that addresses the above challenges through the concurrent use of multiple sensors and innovative fusion schemes, frameworks, algorithms, and platforms. Surveys are also welcome.

Authors are invited to submit original contributions or survey papers for publication in the open access Sensors journal. Topics of interest include (but are not limited to) the following:

  • Modeling and analysis of multi-sensors for human activity recognition;
  • Knowledge-driven multi-sensor fusion frameworks for human activity recognition;
  • Data-driven and machine-learning-driven multi-sensor fusion frameworks for human activity recognition;
  • Distributed sensor networks and IoT for human activity recognition;
  • Interoperability frameworks and semantic situational awareness for high-level human activity recognition and decision making;
  • Multi-sensor human activity recognition under uncertainty, noise, and incomplete data;
  • Multi-sensor human activity recognition in healthcare;
  • Multi-sensor human activity recognition in security and surveillance applications;
  • Multi-sensor human activity recognition in ambient assisted living; 
  • Multi-sensor human activity recognition to assist human–computer interaction;
  • Multi-sensor human activity recognition in augmented/virtual reality applications;
  • Security and privacy issues in multi-sensor human activity recognition.

Dr. Athina Tsanousa
Dr. Georgios Meditskos
Dr. Stefanos Vrochidis
Dr. Ioannis Yiannis Kompatsiaris
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • multi-sensor human activity recognition
  • multi-sensor fusion and interpretation
  • IoT networks and interoperability
  • data- and knowledge-driven multi-sensor human activity recognition
  • security and privacy in multi-sensor monitoring
  • surveys on multi-sensor human activity recognition


Published Papers (2 papers)


Research

20 pages, 9609 KiB  
Article
Development of Wearable Devices for Collecting Digital Rehabilitation/Fitness Data from Lower Limbs
by Yu-Jung Huang, Chao-Shu Chang, Yu-Chi Wu, Chin-Chuan Han, Yuan-Yang Cheng and Hsian-Min Chen
Sensors 2024, 24(6), 1935; https://doi.org/10.3390/s24061935 - 18 Mar 2024
Viewed by 559
Abstract
Lower extremity exercises are considered a standard and necessary treatment for rehabilitation and a well-rounded fitness routine, which builds strength, flexibility, and balance. The efficacy of rehabilitation programs hinges on meticulous monitoring of both adherence to home exercise routines and the quality of performance. However, in a home environment, patients often tend to inaccurately report the number of exercises performed and overlook the correctness of their rehabilitation motions, lacking quantifiable and systematic standards, thus impeding the recovery process. To address these challenges, there is a crucial need for a lightweight, unbiased, cost-effective, and objective wearable motion capture (Mocap) system designed for monitoring and evaluating home-based rehabilitation/fitness programs. This paper focuses on the development of such a system to gather exercise data into usable metrics. Five radio frequency (RF) inertial measurement unit (IMU) devices (RF-IMUs) were developed and strategically placed on the calves, thighs, and abdomen. A two-layer long short-term memory (LSTM) model was used for fitness activity recognition (FAR) with an average accuracy of 97.4%. An intelligent smartphone algorithm was developed to track motion, recognize activity, and calculate key exercise variables in real time for squat, high knees, and lunge exercises. Additionally, a 3D avatar in the smartphone app allows users to observe and track their progress in real time or by replaying their exercise motions. A dynamic time warping (DTW) algorithm was also integrated into the system for scoring the similarity between two motions. The system's adaptability shows promise for applications in medical rehabilitation and sports.
(This article belongs to the Special Issue Multi-sensor for Human Activity Recognition: 2nd Edition)
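As a concrete illustration of the motion-scoring step mentioned in the abstract above, the sketch below computes a classic dynamic time warping (DTW) distance between two motion sequences, such as a recorded exercise repetition compared against a reference recording. It is a minimal sketch, assuming per-frame feature vectors and a Euclidean frame-to-frame cost; the function name and the synthetic example data are illustrative and not taken from the paper.

```python
# Minimal DTW sketch (assumptions: per-frame feature vectors, Euclidean cost).
import numpy as np

def dtw_distance(ref: np.ndarray, test: np.ndarray) -> float:
    """Accumulated DTW cost between two sequences of shape (T, D)."""
    n, m = len(ref), len(test)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - test[j - 1])  # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # skip a reference frame
                                 cost[i, j - 1],      # skip a test frame
                                 cost[i - 1, j - 1])  # align the two frames
    return float(cost[n, m])

# Example: a reference movement cycle vs. a slightly time-shifted repetition.
t = np.linspace(0, 2 * np.pi, 100)
reference = np.stack([np.sin(t), np.cos(t)], axis=1)   # two joint angles per frame
attempt = np.stack([np.sin(t + 0.3), np.cos(t + 0.3)], axis=1)
print(f"DTW distance: {dtw_distance(reference, attempt):.3f}")
```

A lower accumulated cost indicates a closer match between the two motions, which can then be mapped onto a similarity score for feedback to the user.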

27 pages, 5246 KiB  
Article
On the Evaluation of Diverse Vision Systems towards Detecting Human Pose in Collaborative Robot Applications
by Aswin K. Ramasubramanian, Marios Kazasidis, Barry Fay and Nikolaos Papakostas
Sensors 2024, 24(2), 578; https://doi.org/10.3390/s24020578 - 17 Jan 2024
Cited by 1 | Viewed by 920
Abstract
Tracking human operators working in the vicinity of collaborative robots can improve the design of safety architecture, ergonomics, and the execution of assembly tasks in a human–robot collaboration scenario. Three commercial spatial computation kits were used along with their Software Development Kits that provide various real-time functionalities to track human poses. The paper explored the possibility of combining the capabilities of different hardware systems and software frameworks that may lead to better performance and accuracy in detecting the human pose in collaborative robotic applications. This study assessed their performance in two different human poses at six depth levels, comparing the raw data and noise-reducing filtered data. In addition, a laser measurement device was employed as a ground truth indicator, together with the average Root Mean Square Error as an error metric. The obtained results were analysed and compared in terms of positional accuracy and repeatability, indicating the dependence of the sensors' performance on the tracking distance. A Kalman-based filter was applied to fuse the human skeleton data and then to reconstruct the operator's poses considering their performance in different distance zones. The results indicated that at a distance less than 3 m, Microsoft Azure Kinect demonstrated better tracking performance, followed by Intel RealSense D455 and Stereolabs ZED2, while at ranges higher than 3 m, ZED2 had superior tracking performance.
(This article belongs to the Special Issue Multi-sensor for Human Activity Recognition: 2nd Edition)
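To illustrate the kind of sensor fusion and error metric referred to in the abstract above, the minimal sketch below fuses two noisy per-frame estimates of a single joint coordinate with a one-dimensional Kalman filter and reports the root mean square error against a ground-truth trace. The constant-position motion model, the noise variances, and all identifiers are assumptions for illustration; the paper's actual fusion scheme and sensor setup are not reproduced here.

```python
# Minimal 1D Kalman fusion sketch (assumed motion model and noise levels).
import numpy as np

def fuse_kalman(z_a, z_b, r_a=0.04, r_b=0.09, q=1e-3):
    """Fuse two noisy measurement streams of one joint coordinate (metres)."""
    x, p = float(z_a[0]), 1.0                # initial state estimate and variance
    fused = []
    for za, zb in zip(z_a, z_b):
        p += q                               # predict: constant-position model plus process noise
        for z, r in ((za, r_a), (zb, r_b)):  # sequential update with each sensor
            k = p / (p + r)                  # Kalman gain
            x += k * (z - x)
            p *= 1.0 - k
        fused.append(x)
    return np.asarray(fused)

rng = np.random.default_rng(0)
truth = np.linspace(0.5, 1.5, 200)                    # true joint depth (m)
sensor_a = truth + rng.normal(0.0, 0.2, truth.shape)  # std 0.2 m -> variance 0.04
sensor_b = truth + rng.normal(0.0, 0.3, truth.shape)  # std 0.3 m -> variance 0.09
fused = fuse_kalman(sensor_a, sensor_b)
rmse = np.sqrt(np.mean((fused - truth) ** 2))
print(f"RMSE of fused track: {rmse:.3f} m")
```

Weighting each sensor by its measurement variance lets the less noisy stream dominate the fused estimate, which mirrors the idea of favouring whichever sensor performs better in a given distance zone.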
