
Human Activity Recognition (HAR) in Healthcare

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Applied Biosciences and Bioengineering".

Deadline for manuscript submissions: closed (30 August 2023) | Viewed by 18636

A printed edition of this Special Issue is available.

Special Issue Editors


Guest Editor: Dr. Luigi Bibbò
Department of Civil, Energy, Environmental and Materials Engineering (DICEAM), Mediterranean University of Reggio Calabria, Reggio Calabria, Italy
Interests: biomedical signal processing and sensors; photonics; optical fibers; MEMS; metamaterials; nanotechnology; artificial intelligence; neural network; virtual reality; augmented reality; indoor navigation

Guest Editor: Prof. Dr. Marley M. B. R. Vellasco
Departamento de Engenharia Elétrica, Pontifícia Universidade Católica do Rio de Janeiro, Rio de Janeiro 22451-000, Brazil
Interests: artificial intelligence; neural network; evolutionary algorithm; computational intelligence

Special Issue Information

Dear Colleagues,

Technological advances, including those in the medical field, have improved patients' quality of life. One result is a growing elderly population with a greater demand for healthcare, a demand that is difficult to meet because caregivers are scarce and expensive. Advances in artificial intelligence, wireless communication systems, and nanotechnologies make it possible to build intelligent human health monitoring systems that avoid hospitalization and contain costs. Human activity recognition (HAR) is fundamental to such health monitoring systems. HAR systems are based either on data collected through sensors or on images captured by cameras. They can provide activity recognition, monitoring of vital functions, traceability, fall detection with safety alarms, and cognitive assistance. The rapid development of the Internet of Things (IoT) supports research into a wide range of automated and interconnected solutions to improve the quality of life and independence of older people. With IoT, it is possible to create innovative solutions in Ambient Intelligence (AmI) and Ambient Assisted Living (AAL).

Dr. Luigi Bibbò
Prof. Dr. Marley M.B.R. Vellasco
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human activity recognition
  • machine learning
  • wearable sensor
  • internet of things
  • ambient assisted living
  • ambient intelligence


Published Papers (12 papers)


Editorial


9 pages, 246 KiB  
Editorial
Human Activity Recognition (HAR) in Healthcare
by Luigi Bibbò and Marley M. B. R. Vellasco
Appl. Sci. 2023, 13(24), 13009; https://doi.org/10.3390/app132413009 - 6 Dec 2023
Cited by 1 | Viewed by 1330
Abstract
Developments in the medical and technological fields have led to a longer life expectancy [...] Full article
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare)

Research


18 pages, 849 KiB  
Article
MLPs Are All You Need for Human Activity Recognition
by Kamsiriochukwu Ojiako and Katayoun Farrahi
Appl. Sci. 2023, 13(20), 11154; https://doi.org/10.3390/app132011154 - 11 Oct 2023
Cited by 2 | Viewed by 937
Abstract
Convolution, recurrent, and attention-based deep learning techniques have produced the most recent state-of-the-art results in multiple sensor-based human activity recognition (HAR) datasets. However, these techniques have high computing costs, restricting their use in low-powered devices. Different methods have been employed to increase the efficiency of these techniques; however, this often results in worse performance. Recently, pure multi-layer perceptron (MLP) architectures have demonstrated competitive performance in vision-based tasks with lower computation costs than other deep-learning techniques. The MLP-Mixer is a pioneering pure-MLP architecture that produces competitive results with state-of-the-art models in computer vision tasks. This paper shows the viability of the MLP-Mixer in sensor-based HAR. Furthermore, experiments are performed to gain insight into the Mixer modules essential for HAR, and a visual analysis of the Mixer’s weights is provided, validating the Mixer’s learning capabilities. As a result, the Mixer achieves F1 scores of 97%, 84.2%, 91.2%, and 90% on the PAMAP2, Daphnet Gait, Opportunity Gestures, and Opportunity Locomotion datasets, respectively, outperforming state-of-the-art models in all datasets except Opportunity Gestures. Full article
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare)
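As a rough illustration of the Mixer idea described above, here is a toy NumPy sketch of one Mixer block applied to a sensor window. This is not the authors' implementation: the window length, channel count, hidden width, and random weights are all hypothetical, and training is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_norm(x, eps=1e-5):
    # Normalize each token (row) to zero mean and unit variance.
    mu = x.mean(axis=-1, keepdims=True)
    sd = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sd + eps)

def mlp(x, w1, w2):
    # Simple two-layer perceptron with ReLU activation.
    return np.maximum(x @ w1, 0.0) @ w2

def mixer_block(x, tok_w1, tok_w2, ch_w1, ch_w2):
    # x: (T, C) window of T time steps x C sensor channels.
    # Token mixing: an MLP applied across the time axis, shared over channels.
    y = x + mlp(layer_norm(x).T, tok_w1, tok_w2).T
    # Channel mixing: an MLP applied across the channel axis, shared over time.
    return y + mlp(layer_norm(y), ch_w1, ch_w2)

T, C, H = 128, 9, 32  # hypothetical window length, sensor channels, hidden width
x = rng.standard_normal((T, C))
out = mixer_block(
    x,
    rng.standard_normal((T, H)) * 0.1, rng.standard_normal((H, T)) * 0.1,
    rng.standard_normal((C, H)) * 0.1, rng.standard_normal((H, C)) * 0.1,
)
print(out.shape)  # same (T, C) shape as the input window
```

In the full model, such blocks are stacked and followed by pooling and a classification head; the point here is only that both mixing steps are plain MLPs, which is what keeps the compute cost low.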

13 pages, 2652 KiB  
Article
An Evaluation Study on the Analysis of People’s Domestic Routines Based on Spatial, Temporal and Sequential Aspects
by Aitor Arribas Velasco, John McGrory and Damon Berry
Appl. Sci. 2023, 13(19), 10608; https://doi.org/10.3390/app131910608 - 23 Sep 2023
Viewed by 686
Abstract
The concept of collecting data on people’s domestic routines is not novel. However, the methods and processes used to decipher these raw data and transform them into useful and appropriate information (i.e., sequence, duration, and timing derived from monitoring domestic routines) have presented challenges and are the focus of numerous research groups. But how are the results of the decoded transposition received, interpreted and used by the various professionals (e.g., occupational therapists and architects) who consume the information? This paper describes the inclusive evaluation process undertaken, which involved a selected group of stakeholders including health carers, engineers and end-users (not the occupants themselves, but rather the care team managing the occupant). Finally, our study suggests that making key spatial and temporal aspects derived from people’s domestic routines accessible can be of great value to different professionals, shedding light on how a systematic approach for collecting, processing and mapping low-level sensor data into higher forms and representations can be a valuable source of knowledge for improving the domestic living experience. Full article
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare)

22 pages, 4831 KiB  
Article
Human Action Recognition Based on Hierarchical Multi-Scale Adaptive Conv-Long Short-Term Memory Network
by Qian Huang, Weiliang Xie, Chang Li, Yanfang Wang and Yanwei Liu
Appl. Sci. 2023, 13(19), 10560; https://doi.org/10.3390/app131910560 - 22 Sep 2023
Viewed by 927
Abstract
Recently, human action recognition has gained widespread use in fields such as human–robot interaction, healthcare, and sports. With the popularity of wearable devices, we can easily access sensor data of human actions for human action recognition. However, extracting spatio-temporal motion patterns from sensor data and capturing fine-grained action processes remain a challenge. To address this problem, we proposed a novel hierarchical multi-scale adaptive Conv-LSTM network structure called HMA Conv-LSTM. The spatial information of sensor signals is extracted by hierarchical multi-scale convolution with finer-grained features, and the multi-channel features are fused by adaptive channel feature fusion to retain important information and improve the efficiency of the model. The dynamic channel-selection-LSTM based on the attention mechanism captures the temporal context information and long-term dependence of the sensor signals. Experimental results show that the proposed model achieves Macro F1-scores of 0.68, 0.91, 0.53, and 0.96 on four public datasets: Opportunity, PAMAP2, USC-HAD, and Skoda, respectively. Our model demonstrates competitive performance when compared to several state-of-the-art approaches. Full article
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare)

22 pages, 16394 KiB  
Article
Attention-Based Hybrid Deep Learning Network for Human Activity Recognition Using WiFi Channel State Information
by Sakorn Mekruksavanich, Wikanda Phaphan, Narit Hnoohom and Anuchit Jitpattanakul
Appl. Sci. 2023, 13(15), 8884; https://doi.org/10.3390/app13158884 - 1 Aug 2023
Cited by 10 | Viewed by 1604
Abstract
The recognition of human movements is a crucial aspect of AI-related research fields. Although methods using vision and sensors provide more valuable data, they come at the expense of inconvenience to users and social limitations including privacy issues. WiFi-based sensing methods are increasingly being used to collect data on human activity due to their ubiquity, versatility, and high performance. Channel state information (CSI), a characteristic of WiFi signals, can be employed to identify various human activities. Traditional machine learning approaches depend on manually designed features, so recent studies propose leveraging deep learning capabilities to automatically extract features from raw CSI data. This research introduces a versatile framework for recognizing human activities by utilizing CSI data and evaluates its effectiveness on different deep learning networks. A hybrid deep learning network called CNN-GRU-AttNet is proposed to automatically extract informative spatial-temporal features from raw CSI data and efficiently classify activities. The effectiveness of a hybrid model is assessed by comparing it with five conventional deep learning models (CNN, LSTM, BiLSTM, GRU, and BiGRU) on two widely recognized benchmark datasets (CSI-HAR and StanWiFi). The experimental results demonstrate that the CNN-GRU-AttNet model surpasses previous state-of-the-art techniques, leading to an average accuracy improvement of up to 4.62%. Therefore, the proposed hybrid model is suitable for identifying human actions using CSI data. Full article
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare)

24 pages, 345 KiB  
Article
Virtually Connected in a Multiverse of Madness?—Perceptions of Gaming, Animation, and Metaverse
by Abílio Oliveira and Mónica Cruz
Appl. Sci. 2023, 13(15), 8573; https://doi.org/10.3390/app13158573 - 25 Jul 2023
Cited by 5 | Viewed by 1678
Abstract
Few studies analyze what are the common representations of the metaverse. Regarding what has been said about this concept, our research aims to verify how adults perceive and represent the metaverse. We carried out a study with focus groups, having as participants Portuguese adults all considered habitual gamers (or users of digital games). The objectives for this study were seven: verify how the metaverse is being represented and characterized; identify which technologies stimulate the immersion experience; identify the main dimensions that influence the acceptance of the metaverse concept; understand the perceptions of the metaverse and virtual reality regarding socialization and wellbeing; verify the perceptions of a gamer’s daily life regarding the metaverse, virtual reality, and gaming concepts; understand the impact of social representations on the gaming concept; and to understand the perceived role of animation regarding the metaverse, virtual reality, and gaming concepts. Our results reveal a common understanding of the metaverse, despite some confusion about this concept. We also verified the high importance of wellbeing and social dimensions in metaverse immersive experiences provided by technology or gaming characteristics. This exploratory study gave us essential findings about the perceptions of the metaverse and a deep understanding of the relations between the metaverse, virtual reality, animation, and gaming. Full article
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare)
23 pages, 5223 KiB  
Article
Predicting Adherence to Home-Based Cardiac Rehabilitation with Data-Driven Methods
by Dimitris Filos, Jomme Claes, Véronique Cornelissen, Evangelia Kouidi and Ioanna Chouvarda
Appl. Sci. 2023, 13(10), 6120; https://doi.org/10.3390/app13106120 - 16 May 2023
Cited by 1 | Viewed by 1711
Abstract
Cardiac rehabilitation (CR) focuses on the improvement of health or the prevention of further disease progression after an event. Despite the documented benefits of CR programs, participation remains suboptimal. Home-based CR programs have been proposed to improve uptake and adherence. The goal of this study was to apply an end-to-end methodology including machine learning techniques to predict the 6-month adherence of cardiovascular disease (CVD) patients to a home-based telemonitoring CR program, combining patients’ clinical information with their actual program participation during a short familiarization phase. Fifty CVD patients participated in such a program for 6 months, enabling personalized guidance during a phase III CR study. Clinical, fitness, and psychological data were measured at baseline, whereas actual adherence, in terms of weekly exercise session duration and patient heart rate, was measured using wearables. Hierarchical clustering was used to identify different groups based on (1) patients’ clinical baseline characteristics, (2) exercise adherence during the familiarization phase, and (3) the whole program adherence, whereas the output of the clustering was determined using repetitive decision trees (DTs) and random forest (RF) techniques to predict long-term adherence. Finally, for each cluster of patients, network analysis was applied to discover correlations of their characteristics that link to adherence. Based on baseline characteristics, patients were clustered into three groups, with differences in behavior and risk factors, whereas adherent, non-adherent, and transient adherent patients were identified during the familiarization phase. Regarding the prediction of long-term adherence, the most common DT showed higher performance compared with RF (precision: 80.2 ± 19.5% and 71.8 ± 25.8%, recall: 94.5 ± 14.5% and 71.8 ± 25.8% for DT and RF, respectively).
The analysis of the DT rules and the analysis of the feature importance of the RF model highlighted the significance of non-adherence during the familiarization phase, as well as that of the baseline characteristics to predict future adherence. Network analysis revealed different relationships in different clusters of patients and the interplay between their behavioral characteristics. In conclusion, the main novelty of this study is the application of machine learning techniques combining patient characteristics before the start of the home-based CR programs with data during a short familiarization phase, which can predict long-term adherence with high accuracy. The data used in this study are available through connected health technologies and standard measurements in CR; thus, the proposed methodology can be generalized to other telerehabilitation programs and help healthcare providers to improve patient-tailored enrolment strategies and resource allocation. Full article
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare)

17 pages, 16347 KiB  
Article
Human Activity Recognition by the Image Type Encoding Method of 3-Axial Sensor Data
by Changmin Kim and Woobeom Lee
Appl. Sci. 2023, 13(8), 4961; https://doi.org/10.3390/app13084961 - 14 Apr 2023
Viewed by 1429
Abstract
HAR technology uses computer and machine vision to analyze human activity and gestures by processing sensor data. The 3-axis acceleration and gyro sensor data are particularly effective in measuring human activity as they can calculate movement speed, direction, and angle. Our paper emphasizes the importance of developing a method to expand the recognition range of human activity, since the many types of activities and similar movements can result in misrecognition. The proposed method uses 3-axis acceleration and gyro sensor data to visually define human activity patterns and improve recognition accuracy, particularly for similar activities. The method involves converting the sensor data into an image format, removing noise using time series features, generating visual patterns of waveforms, and standardizing geometric patterns. The resulting data (1D, 2D, and 3D) are processed simultaneously: pattern features are extracted using parallel convolution layers, and classification is performed by applying two fully connected layers in parallel to the merged output of the three convolution layers. The proposed neural network model achieved 98.1% accuracy and recognized 18 types of activities, three times more than previous studies, with a shallower layer structure due to the enhanced input data features. Full article
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare)
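The encoding step described above, turning a tri-axial sensor window into a fixed-size image-like array, might be sketched as follows. This is a simplified illustration rather than the paper's method: the `window_to_image` helper, the output width, the nearest-neighbor resampling, and the per-axis min-max scaling are all assumptions.

```python
import numpy as np

def window_to_image(window, out_px=64):
    # window: (T, 3) tri-axial samples. Resample to a fixed width, then
    # min-max scale each axis to the 0..255 range of an 8-bit image.
    T, C = window.shape
    idx = np.linspace(0, T - 1, out_px).round().astype(int)  # crude resample
    w = window[idx]
    lo, hi = w.min(axis=0), w.max(axis=0)
    span = np.where(hi - lo == 0, 1.0, hi - lo)   # guard constant channels
    scaled = (w - lo) / span * 255.0
    return scaled.T.astype(np.uint8)              # (3, out_px) grayscale strip

img = window_to_image(np.random.default_rng(1).standard_normal((500, 3)))
print(img.shape, img.dtype)
```

An array like this (one row per axis) can then be fed to convolutional layers exactly as a small grayscale image would be.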

17 pages, 4183 KiB  
Article
Leg-Joint Angle Estimation from a Single Inertial Sensor Attached to Various Lower-Body Links during Walking Motion
by Tsige Tadesse Alemayoh, Jae Hoon Lee and Shingo Okamoto
Appl. Sci. 2023, 13(8), 4794; https://doi.org/10.3390/app13084794 - 11 Apr 2023
Cited by 4 | Viewed by 2292
Abstract
Gait analysis is important in a variety of applications such as animation, healthcare, and virtual reality. So far, high-cost experimental setups employing special cameras, markers, and multiple wearable sensors have been used for indoor human pose-tracking and gait-analysis purposes. Since locomotive activities such as walking are rhythmic and exhibit a kinematically constrained motion, fewer wearable sensors can be employed for gait and pose analysis. One of the core parts of gait analysis and pose-tracking is lower-limb-joint angle estimation. Therefore, this study proposes a neural network-based lower-limb-joint angle-estimation method from a single inertial sensor unit. As proof of concept, four different neural-network models were investigated, including bidirectional long short-term memory (BLSTM), convolutional neural network, wavelet neural network, and unidirectional LSTM. Not only could the selected network affect the estimation results, but also the sensor placement. Hence, the waist, thigh, shank, and foot were selected as candidate inertial sensor positions. From these inertial sensors, two sets of lower-limb-joint angles were estimated. One set contains only four sagittal-plane leg-joint angles, while the second includes six sagittal-plane leg-joint angles and two coronal-plane leg-joint angles. After the assessment of different combinations of networks and datasets, the BLSTM network with either shank or thigh inertial datasets performed well for both joint-angle sets. Hence, the shank and thigh parts are the better candidates for a single inertial sensor-based leg-joint estimation. Consequently, a mean absolute error (MAE) of 3.65° and 5.32° for the four-joint-angle set and the eight-joint-angle set were obtained, respectively. Additionally, the actual leg motion was compared to a computer-generated simulation of the predicted leg joints, which proved the possibility of estimating leg-joint angles during walking with a single inertial sensor unit. 
Full article
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare)

18 pages, 11455 KiB  
Article
Device Orientation Independent Human Activity Recognition Model for Patient Monitoring Based on Triaxial Acceleration
by Sara Caramaschi, Gabriele B. Papini and Enrico G. Caiani
Appl. Sci. 2023, 13(7), 4175; https://doi.org/10.3390/app13074175 - 24 Mar 2023
Cited by 2 | Viewed by 1287
Abstract
Tracking a person’s activities is relevant in a variety of contexts, from health and group-specific assessments, such as elderly care, to fitness tracking and human–computer interaction. In a clinical context, sensor-based activity tracking could help monitor patients’ progress or deterioration during their hospitalization time. However, during routine hospital care, devices could face displacements in their position and orientation caused by incorrect device application, patients’ physical peculiarities, or patients’ day-to-day free movement. These aspects can significantly reduce an algorithm’s performance. In this work, we investigated how shifts in orientation could impact Human Activity Recognition (HAR) classification. To reach this purpose, we propose an HAR model based on a single three-axis accelerometer that can be located anywhere on the participant’s trunk, is capable of recognizing activities from multiple movement patterns, and, thanks to data augmentation, can deal with device displacement. Developed models were trained and validated using acceleration measurements acquired in fifteen participants, and tested on twenty-four participants, of which twenty were from a different study protocol for external validation. The obtained results highlight the impact of changes in device orientation on a HAR algorithm and the potential of simple wearable sensor data augmentation for tackling this challenge. When applying small rotations (<20 degrees), the error of the baseline non-augmented model increased steeply. On the contrary, even when considering rotations ranging from 0 to 180 degrees along the frontal axis, our model reached an F1-score of 0.85±0.11, against a baseline-model F1-score of 0.49±0.12. Full article
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare)
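The rotation-based augmentation the abstract refers to can be illustrated with a small NumPy sketch: synthesizing what a tilted sensor would have recorded by rotating each acceleration sample with a rotation matrix. This is a generic Rodrigues-rotation example, not the authors' code; the axis choice and angle are illustrative.

```python
import numpy as np

def rotate_about_axis(acc, axis, angle_deg):
    # Rodrigues' formula: rotate each (x, y, z) sample about a unit axis.
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    a = np.deg2rad(angle_deg)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)
    return acc @ R.T

# A still sensor measures gravity along z; rotating 90 degrees about the
# frontal (x) axis moves gravity onto the negative y axis.
acc = np.tile([0.0, 0.0, 9.81], (100, 1))       # (samples, 3) resting signal
aug = rotate_about_axis(acc, axis=[1, 0, 0], angle_deg=90)
print(np.round(aug[0], 2))  # gravity moved: approximately [0, -9.81, 0]
```

Sampling such angles at random during training exposes the classifier to displaced-device readings without collecting any extra data, which is the effect the study evaluates.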

22 pages, 3141 KiB  
Article
Emotional Health Detection in HAR: New Approach Using Ensemble SNN
by Luigi Bibbò, Francesco Cotroneo and Marley Vellasco
Appl. Sci. 2023, 13(5), 3259; https://doi.org/10.3390/app13053259 - 3 Mar 2023
Cited by 4 | Viewed by 2064
Abstract
Computer recognition of human activity is an important area of research in computer vision. Human activity recognition (HAR) involves identifying human activities in real-life contexts and plays an important role in interpersonal interaction. Artificial intelligence usually identifies activities by analyzing data collected using different sources. These can be wearable sensors, MEMS devices embedded in smartphones, cameras, or CCTV systems. As part of HAR, computer vision technology can be applied to the recognition of the emotional state through facial expressions using facial positions such as the nose, eyes, and lips. Human facial expressions change with different health states. Our application is oriented toward the detection of the emotional health of subjects using a self-normalizing neural network (SNN) in cascade with an ensemble layer. We identify the subjects’ emotional states through which the medical staff can derive useful indications of the patient’s state of health. Full article
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare)

12 pages, 314 KiB  
Article
Association between Body Mass Index and the Use of Digital Platforms to Record Food Intake: Cross-Sectional Analysis
by Héctor José Tricás-Vidal, María Concepción Vidal-Peracho, María Orosia Lucha-López, César Hidalgo-García, Sofía Monti-Ballano, Sergio Márquez-Gonzalvo and José Miguel Tricás-Moreno
Appl. Sci. 2022, 12(23), 12144; https://doi.org/10.3390/app122312144 - 28 Nov 2022
Viewed by 1309
Abstract
An inadequate diet has been shown to be a cause of obesity. Nowadays, digital resources are replacing traditional methods of recording food consumption. Thus, the objective of this study was to analyze a sample of United States of America (USA) residents to determine if the usage of any meal tracker platform to record food intake was related to an improved body mass index (BMI). An analytical cross-sectional study that included 896 subjects with an Instagram account who enrolled to participate in an anonymous online survey was performed. Any meal tracker platform used to record food intake over the last month was employed by 34.2% of the sample. A total of 85.3% of the participants who had tracked their food intake were women (p < 0.001), and 33.3% (p = 0.018) had a doctorate degree. Participants who used any meal tracker platform also had higher BMIs (median: 24.9 (Q1: 22.7–Q3: 27.9), p < 0.001), invested more hours a week on Instagram looking over nutrition or physical activity (median: 2.0 (Q1: 1.0–Q3: 4.0), p = 0.028) and performed more minutes per week of strong physical activity (median: 240.0 (Q1: 135.0–Q3: 450.0), p = 0.007). Conclusions: USA residents with an Instagram account who had been using any meal tracker platform to record food intake were predominantly highly educated women. They had higher BMIs despite the fact they were engaged in stronger exercise and invested more hours a week on Instagram looking over nutrition or physical activity. Full article
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare)