Article

Using a Hybrid Neural Network and a Regularized Extreme Learning Machine for Human Activity Recognition with Smartphone and Smartwatch

by Tan-Hsu Tan, Jyun-Yu Shih, Shing-Hong Liu, Mohammad Alkhaleefah, Yang-Lang Chang and Munkhjargal Gochoo
1 Department of Electrical Engineering, National Taipei University of Technology, Taipei 10608, Taiwan
2 Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung 413310, Taiwan
3 Department of Computer Science and Software Engineering, United Arab Emirates University, Al-Ain 15551, United Arab Emirates
* Author to whom correspondence should be addressed.
Sensors 2023, 23(6), 3354; https://doi.org/10.3390/s23063354
Submission received: 26 February 2023 / Revised: 19 March 2023 / Accepted: 20 March 2023 / Published: 22 March 2023
(This article belongs to the Special Issue Recent Developments in Wireless Network Technology)

Abstract: Mobile health (mHealth) utilizes mobile devices, mobile communication techniques, and the Internet of Things (IoT) to improve not only traditional telemedicine and monitoring and alerting systems, but also fitness and medical information awareness in daily life. In the last decade, human activity recognition (HAR) has been extensively studied because of the strong correlation between people’s activities and their physical and mental health. HAR can also be used to care for elderly people in their daily lives. This study proposes an HAR system for classifying 18 types of physical activity using data from sensors embedded in smartphones and smartwatches. The recognition process consists of two parts: feature extraction and HAR. To extract features, a hybrid structure consisting of a convolutional neural network (CNN) and a bidirectional gated recurrent unit (BiGRU) was used. For activity recognition, a single-hidden-layer feedforward neural network (SLFN) with a regularized extreme learning machine (RELM) algorithm was used. The experimental results show an average precision of 98.3%, recall of 98.4%, an F1-score of 98.4%, and accuracy of 98.3%, results that are superior to those of existing schemes.

1. Introduction

In 2019, the World Health Organization (WHO) proposed guidelines for digital health interventions which provide information on the potential benefits, harms, feasibility, and resources required for such interventions [1]. Digital health techniques include mobile health (mHealth) and electronic health (eHealth) and have been recognized as important tools for combating pandemic diseases [2,3]. mHealth employs mobile devices, mobile communication techniques, and the Internet of Things (IoT) to enhance healthcare in various areas, including traditional telemedicine, healthcare monitoring and alerting systems, drug-delivery programs, and medical information awareness, detection, and prevention [4,5,6].
Presently, smartphones and smartwatches are the most important mobile devices in mHealth [7,8]. They are equipped with various sensors and have many applications in the monitoring, prevention, and detection of diseases. In more advanced services, they can even provide basic diagnostic support in areas such as cardiology [9,10], diabetes [11,12], obesity [13,14], smoking cessation [15], and chronic diseases [16]. Health and fitness applications (apps), which can detect the number of steps walked and stairs climbed in a day using accelerometers and gyroscopes, are the most popular apps. These physical activities are used to calculate the number of calories burned. Over the past decade, recognition of physical activities has been applied to prevent falls among the elderly [17,18,19]. However, with the COVID-19 pandemic and an aging society, monitoring quarantined or elderly individuals has become a major issue in mHealth. Numerous studies have shown that people’s activities have strong correlations with their physical and mental health [20,21]. Therefore, recognizing physical activities using the accelerometers and gyroscopes embedded in smartphones and smartwatches is a critical challenge in mHealth.
In recent years, deep learning (DL) and machine learning (ML) have been widely applied in mHealth [22,23,24,25]. In these studies, DL and ML models are used not only for diagnosing, estimating, mining, and delivering physiological signals, but also for preventing chronic diseases. However, in mHealth, large volumes of data must be delivered to servers, such as those of hospitals or health management centers. Therefore, telecommunication and navigation technologies, to which artificial intelligence has also been applied, are equally important [26,27]. Stefanova-Pavlova et al. proposed a refined generalized net (GN) to track users’ locations [28]. Silva et al. used Petri nets to model the reliability and availability of wireless sensor networks in a smart hospital [29]. Ruiz et al. proposed a tele-rehabilitation system to assist with physical rehabilitation during the COVID-19 pandemic [30].
Convolutional neural networks (CNNs) can extract features from signals, while long short-term memory (LSTM) can recognize time-sequential features. Therefore, some studies have proposed deep neural networks that combine CNNs and LSTM to recognize physical activities [31,32]. Li et al. utilized bidirectional LSTM (BiLSTM) for continuous human activity recognition (HAR) and fall detection with soft feature fusion between the signals measured by wearable sensors and radar [33]. The extreme learning machine (ELM) has shown excellent results in classification tasks with extremely fast learning speed [34]. Chen et al. proposed an ensemble ELM algorithm for HAR using smartphone sensors [35]. Their results showed that its performance was better than those of other methods, such as artificial neural networks (ANNs), support vector machines (SVMs), random forests (RFs), and deep LSTM. To improve the accuracy of HAR systems, more complex deep learning models have been proposed. Tan et al. used smartphone sensors for HAR; they proposed an ensemble learning algorithm (ELA) that combined a gated recurrent unit (GRU), a hybrid CNN+GRU, and a multilayer neural network, then fused their outputs with three fully connected layers [36]. In 2020, the International Data Corporation (IDC) reported that wearable devices were being used more frequently to monitor health during the COVID-19 pandemic, resulting in a 35.1% increase in smartwatch sales [37]. Thus, more activities could be classified, and higher accuracies achieved, if smartphones and smartwatches were used synchronously for HAR. Weiss et al. used smartphone and smartwatch sensors for HAR with an RF algorithm [38]. Mekruksavanich et al. also used smartphone and smartwatch sensors for HAR with a hybrid deep learning model called CNN+LSTM [39]. These prior studies have shown that adding the hand-movement signals measured by smartwatch sensors can enhance the accuracy of HAR.
To improve the accuracy of HAR systems further, more complex deep learning models are necessary. Thus, this study focuses on recognizing 18 different physical activities, including body, hand, and eating movements, utilizing data from sensors embedded in smartphones and smartwatches. The recognition process involves two steps: feature extraction and HAR. To extract features, a hybrid structure was used that consisted of a CNN and a recurrent neural network (RNN), while a multilayer perceptron neural network (MPNN) was used for activity recognition. The RNN was instantiated with various models, namely LSTM, GRU, BiLSTM, and bidirectional GRU (BiGRU), to optimize the hybrid structure. The MPNN was trained separately using backpropagation (BP), the ELM, and the regularized ELM (RELM). The HAR dataset used in this study was the WISDM smartphone and smartwatch activity and biometrics dataset obtained from the UCI Machine Learning Repository [40]. According to the experimental results, the proposed HAR system demonstrated superior performance when compared to the systems developed in existing studies.

2. Materials and Methods

The proposed HAR system has three components: a data processing unit, a feature extraction unit, and a classification unit, as illustrated in Figure 1. Physical activity signals are captured by a smartphone and a smartwatch and are subsequently sampled, segmented, and reshaped for further processing. The sensor data features are extracted using a hybrid CNN+RNN model. Finally, an MPNN is employed to classify the 18 types of physical activities.

2.1. UCI-WISDM Dataset

The UCI-WISDM dataset [40] comprises tri-axial accelerometer and gyroscope data obtained from 51 volunteer subjects. The subjects carried an Android phone (a Google Nexus 5/5X or a Samsung Galaxy S5) in a front pocket of their pants and wore an Android watch (an LG G Watch) on the wrist while performing eighteen activities. These were categorized as body movements (walking, jogging, walking up stairs, sitting, and standing), which have been included in many previous studies; hand movements (kicking, dribbling, catching, typing, writing, clapping, brushing teeth, and folding clothes), which represent activities of daily life; and eating movements (eating pasta, drinking soup, eating a sandwich, eating chips, and drinking from a cup), which were included to investigate the feasibility of automatic food-tracking applications [38]. The data were sampled at a rate of 20 Hz, and the 12 signals were segmented into fixed-width sliding windows of 6.4 s with 50% overlap between them. Each sample therefore contained 12-channel signals, with each channel comprising 128 points. Samples containing two activities were removed. The numbers of training and testing samples were 34,316 and 14,707, respectively, and the sample numbers for each of the eighteen activities are presented in Table 1.
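As an illustration of this preprocessing, the following minimal sketch (a hypothetical helper, not the authors' code; NumPy assumed) segments a continuous 12-channel recording into 128-point windows with 50% overlap:

```python
import numpy as np

def segment_windows(signals, window_len=128, overlap=0.5):
    # signals: (n_samples, n_channels) continuous recording, e.g., 20 Hz x 12 channels.
    # Returns (n_windows, n_channels, window_len), one window per training sample.
    step = int(window_len * (1.0 - overlap))  # 64 points at 50% overlap
    starts = range(0, signals.shape[0] - window_len + 1, step)
    return np.stack([signals[s:s + window_len].T for s in starts])

# Example: 60 s of synthetic 12-channel data at 20 Hz -> 17 windows of shape (12, 128)
data = np.random.randn(1200, 12)
print(segment_windows(data).shape)  # (17, 12, 128)
```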

2.2. Feature-Extraction Model

Figure 2 illustrates the feature-extraction model, which employs a hybrid CNN and RNN to extract features from the sensor signals. A fully connected network consisting of three layers is used to classify the 18 types of physical activities. After training, the RNN outputs for the training samples serve as the feature samples used to train the activation-classification models. Since the human movements in each activity occur in chronological order, the sensor signals are time-sequential data. To address this, a time-distributed layer comprising four 1D CNNs (i.e., four weight-sharing stacks, each with three convolutional layers followed by a maximal pooling layer) is placed before the RNN. This layer separates a sample into four segments, with each segment containing 32 points. In the convolutional layers, the number of filters is 64; the kernel sizes are 3, 5, and 13; the stride is 1; and the padding is 4. In the pooling layer, the kernel size is 2 and the stride is 2. The activation function employed is ReLU. The RNN is instantiated as the LSTM, BiLSTM, GRU, or BiGRU, with the unit numbers of the LSTM and GRU set to 128 and those of the BiLSTM and BiGRU set to 256. The batch size is set to 32, with the reset and update gates using a sigmoid function and the hidden state using a tanh function. The node numbers of the three fully connected layers are 128, 64, and 18, respectively, with ReLU used as the activation function in the hidden layers and softmax in the output layer. The loss function is the categorical cross-entropy (CE) function, and the Adam optimizer is used [41], with the learning rate set to 0.0001. Equation (1) is the formula for the categorical CE:
$$\mathrm{CE} = -\log\left(\frac{\exp(a_k)}{\sum_{i=1}^{M}\exp(a_i)}\right) \quad (1)$$
where M is 18, a_k is the softmax input score for the positive class, and a_i is the score inferred by the network for each class.
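As a concrete illustration, the following PyTorch sketch implements one plausible reading of this architecture; the per-layer padding and the exact layout of the time-distributed stack are our assumptions, since the text leaves them open (here each segment passes through three length-preserving convolutions with kernel sizes 3, 5, and 13 before pooling):

```python
import torch
import torch.nn as nn

class CNNBiGRU(nn.Module):
    # Sketch of the hybrid CNN+BiGRU feature extractor. Each (12, 128) window is
    # split into 4 segments of 32 points; a shared 1D-CNN stack encodes each
    # segment, and the 4-step sequence feeds a BiGRU (128 units per direction).
    def __init__(self, n_channels=12, n_classes=18):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=13, padding=6), nn.ReLU(),
            nn.MaxPool1d(kernel_size=2, stride=2),      # 32 -> 16 points
        )
        self.bigru = nn.GRU(64 * 16, 128, batch_first=True, bidirectional=True)
        self.head = nn.Sequential(                       # 256 -> 128 -> 64 -> 18
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):                                # x: (batch, 12, 128)
        b = x.size(0)
        segs = x.view(b, 12, 4, 32).permute(0, 2, 1, 3)  # (b, 4, 12, 32)
        feats = self.cnn(segs.reshape(b * 4, 12, 32)).reshape(b, 4, -1)
        out, _ = self.bigru(feats)                       # (b, 4, 256)
        features = out[:, -1, :]                         # 256-dim feature vector
        return self.head(features), features
```

In training, the head's logits would feed nn.CrossEntropyLoss with the Adam optimizer (learning rate 1e-4), and after training the 256-dimensional features feed the SLFN classifier described next.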

2.3. Activation-Classification Model

The activation-classification model is a single-hidden-layer feedforward neural network (SLFN) trained with the ELM algorithm [42]. Its advantages are a convergence time shorter than that of the BP method and the absence of convergence to local minima. For an SLFN, consider a training set S = {(X_r, Y_r) | X_r = (x_{r1}, x_{r2}, ..., x_{rn})^T ∈ R^n, Y_r = (y_{r1}, y_{r2}, ..., y_{rm})^T ∈ R^m, r = 1, ..., N}, where X_r denotes the rth input vector and Y_r represents the rth target vector. The output o of an SLFN with l hidden neurons can be expressed as:
$$o_{rk} = \sum_{j=1}^{l}\beta_{kj}\,f(W_j \cdot X_r + b_j), \quad k = 1,\ldots,m, \quad (2)$$
where f(·) is the activation function in the hidden layer, W_j = (w_{j1}, w_{j2}, ..., w_{jn}) ∈ R^n is the weight vector from the input layer to the jth hidden node, b_j is the bias of the jth hidden node, β_{kj} is the weight from the jth hidden node to the kth output node, and l is the number of hidden nodes. In the ELM, the activation functions are nonlinear functions that provide nonlinear mapping for the system. O_r denotes the rth output vector. The mean square error (MSE) is the objective function:
$$\mathrm{MSE} = \sum_{r=1}^{N}\lVert Y_r - O_r\rVert^2, \quad (3)$$
where N is the number of samples. The MSE approaches 0 as the number of hidden nodes approaches infinity, in which case the output o of the SLFN equals the target output y. Thus, Equation (2) can be rewritten as follows:
$$y_{rk} = \sum_{j=1}^{l}\beta_{kj}\,f(W_j \cdot X_r + b_j), \quad k = 1,\ldots,m, \quad (4)$$
or, in matrix form,
$$Y = H\beta, \quad (5)$$
where Y is the target output matrix, H is the hidden-layer activation matrix, and β is the weight matrix from the hidden nodes to the output layer. The ELM uses random parameters W_j and b_j in its hidden layer, and they remain frozen during the whole training process. The output weights are obtained as:
$$\beta = H^{\dagger}Y, \quad (6)$$
where H† is the Moore–Penrose generalized inverse of H. The residual, ε_r, is the difference between the target and output values of the rth sample.
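The following NumPy sketch captures the essence of ELM training under these definitions; the sigmoid activation and the uniform initialization range are our assumptions:

```python
import numpy as np

def train_elm(X, Y, n_hidden=6000, seed=0):
    # X: (N, d) feature samples; Y: (N, m) one-hot targets.
    # Random, frozen hidden layer; output weights via the Moore-Penrose inverse.
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                   # hidden activations
    beta = np.linalg.pinv(H) @ Y                             # Equation (6)
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)                       # predicted class labels
```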
However, the ELM risks producing an over-fitted model because it is based on the empirical risk minimization principle [43]. Deng et al. proposed a regularized ELM (RELM) that introduces a weight factor γ for the empirical risk [44]:
$$\min_{\beta}\ \tfrac{1}{2}\lVert\beta\rVert^2 + \tfrac{1}{2}\gamma\lVert\varepsilon\rVert^2. \quad (7)$$
In order to obtain a robust estimate that weakens outlier interference, ε_r can be weighted by a factor v_r, so that Equation (7) becomes:
$$\min_{\beta}\ \tfrac{1}{2}\lVert\beta\rVert^2 + \tfrac{1}{2}\gamma\lVert D\varepsilon\rVert^2, \quad (8)$$
where D = diag(v_1, v_2, ..., v_N) and ε = [ε_1, ε_2, ..., ε_N]. The method of Lagrange multipliers is used to search for the optimal solution of Equation (8):
$$L(\beta,\varepsilon,\alpha) = \tfrac{1}{2}\lVert\beta\rVert^2 + \tfrac{\gamma}{2}\lVert D\varepsilon\rVert^2 - \alpha(H\beta - Y - \varepsilon), \quad (9)$$
where α is the Lagrange multiplier with the equality constraints of Equation (9). Setting the gradients of L(β,ε,α) equal to zero gives the following Karush–Kuhn–Tucker (KKT) optimality conditions [44,45]:
$$\alpha = \gamma\,(H\beta - Y)^{T}, \quad (10)$$
$$\beta = \left(\tfrac{I}{\gamma} + H^{T}D^{2}H\right)^{-1}H^{T}D^{2}Y, \quad (11)$$
$$\varepsilon_r = \frac{\alpha_r}{\gamma}, \quad r = 1, 2, \ldots, N. \quad (12)$$
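A minimal sketch of the closed-form solution in Equation (11), assuming the hidden-layer matrix H has already been computed as in the ELM sketch above (D defaults to the identity when no sample weighting is used):

```python
import numpy as np

def train_relm(H, Y, gamma=5e-4, v=None):
    # H: (N, l) hidden-layer activations; Y: (N, m) one-hot targets.
    # Implements beta = (I/gamma + H^T D^2 H)^(-1) H^T D^2 Y, with D = diag(v).
    N, l = H.shape
    v2 = np.ones(N) if v is None else np.asarray(v) ** 2   # squared sample weights
    A = np.eye(l) / gamma + H.T @ (v2[:, None] * H)        # I/gamma + H^T D^2 H
    return np.linalg.solve(A, H.T @ (v2[:, None] * Y))     # output weights beta
```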

2.4. Experimental Protocol

The hardware used in this study comprised an Intel Core i7-8700 CPU and a GeForce GTX 1080 GPU. The operating system was Ubuntu 16.04 LTS, with development conducted in Anaconda 3 for Python 3.7. The deep learning framework was PyTorch 1.10, and Jupyter Notebook was used as the development environment. To assess the proposed method’s performance, we evaluated the optimal feature-extraction model and the activation-classification model for HAR separately.
In the feature-extraction model, the RNN was instantiated as the LSTM, BiLSTM, GRU, and BiGRU, separately. The training samples were used to adjust the parameters of the hybrid CNN+RNN, while the testing samples were used to evaluate the performance of each RNN. For the feature-extraction model that achieved the best performance, the RNN outputs for all training and testing samples were used as the new training and testing samples with which to evaluate the activation-classification model.
In the activation-classification model, a multilayer perceptron neural network (MPNN) was used to classify the 18 physical activities. The output number of the MPNN was 18, and the input number depended on the number of RNN outputs. The training algorithms used were BP, the ELM, and the RELM. The number (l) of hidden nodes and the regularization parameter (γ) of the RELM were tuned with the grid-search method to find the optimal values, as sketched below.
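The following self-contained sketch illustrates this grid search over l and γ; the synthetic random data stand in for the 256-dimensional BiGRU features, and the grids shown are abbreviated versions of those explored in Tables 4 and 5:

```python
import numpy as np

rng = np.random.default_rng(0)
X_tr, X_te = rng.standard_normal((500, 256)), rng.standard_normal((200, 256))
y_tr, y_te = rng.integers(0, 18, 500), rng.integers(0, 18, 200)
Y_tr = np.eye(18)[y_tr]                                    # one-hot targets

def hidden(X, W, b):                                       # sigmoid hidden layer
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

best = (0.0, None, None)
for l in [256, 1000, 2000]:                                # hidden-node grid
    W = rng.uniform(-1, 1, (256, l)); b = rng.uniform(-1, 1, l)
    H_tr, H_te = hidden(X_tr, W, b), hidden(X_te, W, b)
    for gamma in [1.0, 1e-2, 5e-4]:                        # regularization grid
        beta = np.linalg.solve(np.eye(l) / gamma + H_tr.T @ H_tr, H_tr.T @ Y_tr)
        acc = (np.argmax(H_te @ beta, axis=1) == y_te).mean()
        if acc > best[0]:
            best = (acc, l, gamma)
print(best)  # (best accuracy, best l, best gamma)
```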

2.5. Statistical Analysis

According to the proposed method, a sample was considered a true positive (TP) when the classification activity was correctly recognized, as a false positive (FP) when the classification activity was incorrectly recognized, as a true negative (TN) when the activity classification was correctly rejected, and as a false negative (FN) when the activity classification was incorrectly rejected. In this work, the performance of the proposed method was evaluated using the measures given by Equations (13)–(16):
$$\mathrm{Precision}\ (\%) = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}} \times 100\% \quad (13)$$
$$\mathrm{Recall}\ (\%) = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}} \times 100\% \quad (14)$$
$$F1\text{-}\mathrm{score}\ (\%) = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \times 100\% \quad (15)$$
$$\mathrm{Accuracy}\ (\%) = \frac{\mathrm{TP} + \mathrm{TN}}{\mathrm{TP} + \mathrm{TN} + \mathrm{FP} + \mathrm{FN}} \times 100\% \quad (16)$$
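A brief sketch of how these per-class counts and macro-averaged measures can be computed from predicted labels (assuming, as in Table 1, that every class appears in the test set and is predicted at least once):

```python
import numpy as np

def macro_metrics(y_true, y_pred, n_classes=18):
    # Build the confusion matrix, then derive per-class TP/FP/FN and the
    # macro-averaged measures of Equations (13)-(16).
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp              # predicted as the class, but wrong
    fn = cm.sum(axis=1) - tp              # belonged to the class, but missed
    precision = np.mean(tp / (tp + fp)) * 100
    recall = np.mean(tp / (tp + fn)) * 100
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum() * 100
    return precision, recall, f1, accuracy
```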

3. Results

In order to evaluate the effectiveness of the proposed method, we will present three sets of results: those for the feature-extraction model, the activation-classification model, and the training times of the models.

3.1. Analysis of the Feature-Extraction Model

The learning curves for the hybrid CNN+LSTM model are depicted in Figure 3, where (a) and (b) represent the accuracy and loss curves, respectively; in Figures 3–6, the blue lines correspond to the training data and the orange lines to the validation data. The optimal values of the accuracy and loss function were achieved at epoch 29. On the testing data, the model achieved an average precision, recall, F1-score, and accuracy of 93.8%, 93.8%, 93.8%, and 94.1%, respectively, with a total training time of 130.26 s. Figure 4 presents the learning curves for the hybrid CNN+GRU model, whose optimum occurred at epoch 28; on the testing data, it achieved an average precision, recall, F1-score, and accuracy of 92.6%, 92.6%, 92.5%, and 92.2%, respectively, with a total training time of 98.67 s. Figure 5 displays the learning curves for the hybrid CNN+BiLSTM model, whose optimum occurred at epoch 30; on the testing data, it achieved an average precision, recall, F1-score, and accuracy of 95.3% in all four measures, with a total training time of 138.86 s. Figure 6 presents the learning curves for the hybrid CNN+BiGRU model, whose optimum occurred at epoch 29; on the testing data, it achieved an average precision, recall, F1-score, and accuracy of 95.7%, 95.4%, 95.5%, and 95.2%, respectively, with a total training time of 108.69 s. Table 2 summarizes the performances of the four feature-extraction models. Although the hybrid structures with BiLSTM and BiGRU require more training time per epoch than those with LSTM and GRU (4.60 s vs. 4.49 s and 3.74 s vs. 3.52 s, respectively), their testing accuracies are superior (95.3% vs. 94.1% and 95.2% vs. 92.2%). Given that the hybrid structure with BiGRU saves about 19% of training time per epoch compared to BiLSTM and that their accuracies are very similar (95.2% vs. 95.3%), the feature-extraction model based on the hybrid CNN+BiGRU structure was chosen for building the HAR system.

3.2. Analysis of the Activation-Classification Model

To classify the 18 types of physical activities, an MPNN was utilized, with the numbers of input and output nodes set to 256 and 18, respectively. The MPNN was trained using three activation-classification algorithms: BP, the ELM, and the RELM. The performance of the ELM and RELM was influenced by two parameters: the regularization index (γ) and the number of hidden nodes (l).

3.2.1. Performance of the MPNN with the BP Algorithm

The MPNN with the BP algorithm had two hidden layers with 128 and 64 nodes, respectively, where ReLU was used as the activation function in the hidden layers and softmax in the output layer. Table 3 shows the performances of the MPNN with the BP algorithm for 18 physical activities on the testing data. The model achieved an average precision of 97.1%, an average recall of 97.2%, an average F1-score of 97.2%, and an accuracy of 97.2%. The total training time was 10.563 s. Among the 18 activities, the worst F1-scores were obtained for the eating pasta, catching a ball, and eating a sandwich activities, which all involve hand and eating movements.

3.2.2. The Optimal Parameters of the RELM

The SLFN was trained with both the ELM and RELM algorithms, and the optimal parameters of the RELM were determined using a grid-search method. First, the regularization index (γ) was fixed at 5 × 10−4, and the number of hidden nodes (l) was gradually increased from 256 to 8000. Table 4 displays the testing accuracies and training times for the various numbers of hidden nodes. The highest accuracy of 98.25%, with a training time of 3.80 s, was achieved with 6000 hidden nodes. Then, with l fixed at 6000, γ was gradually increased from 5 × 10−4 to 4. Table 5 shows the testing accuracies and training times for the different regularization indexes. The most accurate results (and the longest training time) were obtained when γ was set to 5 × 10−4. In Equation (7), the empirical risk, ||ε||², is weighted by γ; because the optimal γ is very small, the performances of the ELM and RELM were expected to be close in this study.

3.2.3. Performances of the SLFN with the ELM and RELM Algorithms

For the ELM algorithm, the SLFN had one hidden layer with 6000 nodes. Figure 7 shows the confusion matrix for the classification of the eighteen activities. The performances for the writing, clapping, brushing teeth, eating chips, and drinking from a cup activities were better than those obtained with the BP algorithm. Table 6 presents the performances of the SLFN with the ELM algorithm on the testing data. The model achieved an average precision of 97.9%, a recall of 97.9%, an F1-score of 97.9%, and an accuracy of 97.8%. The total training time was 7.52 s. The F1-scores for the eating pasta, catching a ball, and eating a sandwich activities rose to 98.0%, 96.4%, and 98.1%, respectively.
For the RELM algorithm, l was set to 6000 for the SLFN, and γ was set to 5 × 10−4. Figure 8 shows the confusion matrix for the classification of the eighteen activities. The eating pasta activity was easily confused with the drinking soup and drinking from a cup activities, and catching a ball was easily confused with kicking a ball. Table 7 shows the performances of the SLFN with the RELM algorithm on the testing data. The model achieved an average precision of 98.3%, a recall of 98.4%, an F1-score of 98.4%, and an accuracy of 98.3%. The total training time was 3.59 s. The F1-scores for the eating pasta, catching a ball, and eating a sandwich activities rose to 98.1%, 97.6%, and 99.2%, respectively.

4. Discussion

The proposed HAR system uses a hybrid CNN+RNN model to extract activation features from the accelerometers and gyroscopes in smartphones and smartwatches. This approach was originally proposed by Tan et al. [36]. Since the accelerometer and gyroscope signals of the activities are time-sequential, the performance of different RNN models can vary for HAR. In this study, the LSTM, GRU, BiLSTM, and BiGRU were explored, and the classification performances of the BiLSTM and BiGRU were found to be very similar. However, the BiGRU had a shorter training time than the BiLSTM (108.69 s vs. 138.86 s) and was therefore used to extract the activation features. To enhance the performance of the classifier, the SLFN with the RELM algorithm was used. The ELM algorithm, which trains an SLFN with random hidden weights and biases, was proposed by Huang et al. [46,47]. The ELM has an extremely fast training time and good generalization performance. Deng et al. proposed the RELM, which is based on the structural risk minimization principle of statistical learning theory and overcomes the drawbacks of the ELM [44]. Table 8 summarizes the total performances of the activation-classification models, namely, the MPNN with BP and the SLFN with the ELM and RELM. The classification performances of the ELM and RELM were very similar (97.8% vs. 98.2% accuracy) because of the very small regularization weight, γ. The training time of the ELM was shorter than that of the MPNN with BP (7.52 s vs. 10.56 s), and that of the RELM was shorter still (3.59 s). Overall, the RELM exhibited the best performance for HAR despite its longer total testing time (feature extraction plus classification) compared to the ELM (0.038 s vs. 0.025 s).
Table 9 presents a comparative analysis of our proposed method against other studies that utilized the UCI-WISDM smartphone and/or smartwatch activity and biometrics dataset for six or eighteen activities. Previous studies [36,48,49,50,51,52] classified only six activities, while the studies [38,39] classified eighteen activities. As shown, the proposed HAR system using the hybrid CNN+BiGRU model and the SLFN with the RELM achieved an F1-score of 98.4% and an accuracy of 98.2%, which are among the best results reported in the literature.
In the open HAR datasets, the sensors, which are all accelerometers and gyroscopes, are embedded in smartphones or smartwatches or are body-worn [38,52]. In general, the greater the number of sensors, the higher the accuracy of HAR. Table 10 displays the F1-scores of the 18 physical activities obtained with different subsets of the accelerometers and gyroscopes embedded in the smartphones and smartwatches. When only the smartphone sensors or only the smartwatch sensors were used for HAR, the average F1-scores were 90.7% and 89.1%, respectively. When only the accelerometers or only the gyroscopes of both devices were used, the average F1-scores were 94.1% and 76.0%, respectively. These results suggest that the accelerometers provide more information than the gyroscopes for HAR.

5. Conclusions

The proposed deep learning model utilizes the hybrid CNN+BiGRU for feature extraction from the signals of sensors embedded in smartphones and smartwatches and the SLFN with the RELM algorithm for the classification of 18 physical activities, including body, hand, and eating movements. The experimental results demonstrate that the proposed model outperforms other existing schemes that utilize deep learning or machine learning methods in terms of F1-score and accuracy. Notably, the worst F1-score was found in the classification of brushing teeth. Our investigation shows that using different models for feature extraction and classification in the training phase can effectively increase recognition accuracy and reduce training time. Moreover, since the data are recorded by smartphones and smartwatches, the proposed method has the potential to be used for real-time mHealth in environments without embedded wireless sensor networks. A weakness of this study is that it ignores the signals recorded during transitions between two activities; we will explore this problem in future work.

Author Contributions

Conceptualization, T.-H.T. and S.-H.L.; Data curation, J.-Y.S.; Investigation, T.-H.T.; Methodology, J.-Y.S.; Project administration, T.-H.T.; Software, J.-Y.S.; Supervision, T.-H.T. and M.G.; Validation, M.A. and Y.-L.C.; Writing—original draft, T.-H.T. and S.-H.L.; Writing—review and editing, S.-H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Science and Technology Council, Taiwan, under grants NSTC 111-2221-E-324-003-MY3 and NSTC 111-2221-E-027-134.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

References

1. WHO. WHO Guideline: Recommendations on Digital Interventions for Health System Strengthening. Available online: https://www.who.int/publications/i/item/9789241550505/ (accessed on 1 December 2021).
2. Wang, Q.; Su, M.; Zhang, M.; Li, R. Integrating Digital Technologies and Public Health to Fight COVID-19 Pandemic: Key Technologies, Applications, Challenges and Outlook of Digital Healthcare. Int. J. Environ. Res. Public Health 2021, 18, 6053.
3. Lupton, D. Critical Perspectives on Digital Health Technologies. Sociol. Compass 2014, 8, 1344–1359.
4. Zuehlke, P.; Li, J.; Talaei-Khoei, A.; Ray, P. A functional specification for mobile health (mHealth) systems. In Proceedings of the 11th International Conference on e-Health Networking, Applications and Services, Sydney, NSW, Australia, 16–18 December 2009; pp. 74–78.
5. Pires, I.M.; Marques, G.; Garcia, N.M.; Flórez-Revuelta, F.; Ponciano, V.; Oniani, S. A research on the classification and applicability of the mobile health applications. J. Pers. Med. 2020, 10, 11.
6. Pattichis, C.S.; Kyriacou, E.; Voskarides, S.; Pattichis, M.S.; Istepanian, R.; Schizas, C.N. Wireless telemedicine systems: An overview. IEEE Antennas Propag. Mag. 2002, 44, 143–153.
7. Woodward, B.; Istepanian, R.S.H.; Richards, C.I. Design of a telemedicine system using a mobile telephone. IEEE Trans. Inf. Technol. Biomed. 2001, 5, 13–15.
8. Pires, I.M.; Garcia, N.M.; Pombo, N.; Flórez-Revuelta, F.; Rodríguez, N.D. Validation techniques for sensor data in mobile health applications. J. Sens. 2016, 2016, 2839372.
9. Bisio, I.; Lavagetto, F.; Marchese, M.; Sciarrone, A. A smartphone-centric platform for remote health monitoring of heart failure. Int. J. Commun. Syst. 2015, 28, 1753–1771.
10. Fayn, J.; Rubel, P. Toward a personal health society in cardiology. IEEE Trans. Inf. Technol. Biomed. 2010, 14, 401–409.
11. Sieverdes, J.C.; Treiber, F.; Jenkins, C. Improving diabetes management with mobile health technology. Am. J. Med. Sci. 2013, 345, 289–295.
12. Kirwan, M.; Vandelanotte, C.; Fenning, A.; Duncan, J.M. Diabetes self-management smartphone application for adults with type 1 diabetes: Randomized controlled trial. J. Med. Internet Res. 2013, 15, e235.
13. Lopes, I.; Silva, B.; Rodrigues, J.; Lloret, J.; Proenca, M. A mobile health monitoring solution for weight control. In Proceedings of the 2011 International Conference on Wireless Communications and Signal Processing, Nanjing, China, 9–11 November 2011; pp. 1–5.
14. Zhu, F.; Bosch, M.; Woo, I.; Kim, S.; Boushey, C.J.; Ebert, D.S.; Delp, E.J. The use of mobile devices in aiding dietary assessment and evaluation. IEEE J. Sel. Top. Signal Process. 2010, 4, 756–766.
15. Whittaker, R.; Dorey, E.; Bramley, D.; Bullen, C.; Denny, S.; Elley, R.C.; Maddison, R.; McRobbie, H.; Parag, V.; Rodgers, A.; et al. A theory-based video messaging mobile phone intervention for smoking cessation: Randomized controlled trial. J. Med. Internet Res. 2011, 13, e10.
16. Chiarini, G.; Ray, P.; Akter, S.; Masella, C.; Ganz, A. mHealth technologies for chronic diseases and elders: A systematic review. IEEE J. Sel. Areas Commun. 2013, 31, 6–18.
17. Liu, S.-H.; Cheng, W.-C. Fall detection with the support vector machine during scripted and continuous unscripted activities. Sensors 2012, 12, 12301–12316.
18. Shen, V.R.L.; Lai, H.-Y.; Lai, A.-F. The implementation of a smartphone-based fall detection system using a high-level fuzzy Petri net. Appl. Soft Comput. 2015, 26, 390–400.
19. Casilari, E.; Oviedo-Jiménez, M.A. Automatic fall detection system based on the combined use of a smartphone and a smartwatch. PLoS ONE 2015, 10, e0140929.
20. Nyboe, L.; Lund, H. Low levels of physical activity in patients with severe mental illness. Nord. J. Psychiatry 2013, 67, 43–46.
21. Oliveira, J.; Ribeiro, F.; Gomes, H. Effects of a home-based cardiac rehabilitation program on the physical activity levels of patients with coronary artery disease. J. Cardiopulm. Rehabil. Prev. 2008, 28, 392–396.
22. Said, A.B.; Al-Sa’d, M.F.; Tlili, M.; Abdellatif, A.A.; Mohamed, A.; Elfouly, T.; Harras, K.; O’Connor, M.D. A deep learning approach for vital signs compression and energy efficient delivery in mHealth systems. IEEE Access 2018, 6, 33727–33739.
23. Triantafyllidis, A.; Kondylakis, H.; Katehakis, D.; Kouroubali, A.; Koumakis, L.; Marias, K.; Alexiadis, A.; Votis, K.; Tzovaras, D. Deep learning in mHealth for cardiovascular disease, diabetes, and cancer: Systematic review. JMIR Mhealth Uhealth 2022, 10, e32344.
24. Huang, T.; Huang, L.; Yang, R.; He, N.; Feng, A.; Li, L.; Lyu, J. Machine learning models for predicting survival in patients with ampullary adenocarcinoma. Asia Pac. J. Oncol. Nurs. 2022, 9, 100141.
25. Istepanian, R.S.H.; Al-Anzi, T. m-Health 2.0: New perspectives on mobile health, machine learning and big data analytics. Methods 2018, 151, 34–40.
26. Pacis, D.M.M.; Subido, E.D.C.; Bugtai, N.T. Trends in telemedicine utilizing artificial intelligence. AIP Conf. Proc. 2018, 1933, 040009.
27. Bhaskar, S.; Bradley, S.; Sakhamuri, S.; Moguilner, S.; Chattu, V.K.; Pandya, S.; Schroeder, S.; Ray, D.; Banach, M. Designing futuristic telemedicine using artificial intelligence and robotics in the COVID-19 era. Front. Public Health 2020, 8, 556789.
28. Stefanova-Pavlova, M.; Andonov, V.; Stoyanov, T.; Angelova, M.; Cook, G.; Klein, B.; Vassilev, P.; Stefanova, E. Modeling telehealth services with generalized nets. Recent Contrib. Intell. Syst. 2016, 657, 279–290.
29. Silva, F.A.; Brito, C.; Araújo, G.; Fé, I.; Tyan, M.; Lee, J.-W.; Nguyen, T.A.; Maciel, P.R.M. Model-driven impact quantification of energy resource redundancy and server rejuvenation on the dependability of medical sensor networks in smart hospitals. Sensors 2022, 22, 1595.
30. Ruiz, I.; Contreras, J.; Garcia, J. Towards a physical rehabilitation system using a telemedicine approach. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2020, 8, 671–680.
31. Xia, K.; Huang, J.; Wang, H. LSTM-CNN architecture for human activity recognition. IEEE Access 2020, 8, 56855–56866.
32. Deep, S.; Zheng, X. Hybrid model featuring CNN and LSTM architecture for human activity recognition on smartphone sensor data. In Proceedings of the 20th International Conference on Parallel and Distributed Computing, Applications and Technologies, Gold Coast, QLD, Australia, 5–7 December 2019; pp. 259–264.
33. Li, H.; Shrestha, A.; Heidari, H.; Kernec, J.L.; Fioranelli, F. Bi-LSTM network for multimodal continuous human activity recognition and fall detection. IEEE Sens. J. 2020, 20, 1191–1201.
34. Wang, D.; Huang, G.B. Protein sequence classification using extreme learning machine. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, Montreal, QC, Canada, 31 July–4 August 2005; pp. 1406–1411.
35. Chen, Z.; Jiang, C.; Xie, L. A novel ensemble ELM for human activity recognition using smartphone sensors. IEEE Trans. Ind. Inform. 2019, 15, 2691–2699.
36. Tan, T.-H.; Wu, J.-Y.; Liu, S.-H.; Gochoo, M. Human activity recognition using an ensemble learning algorithm with smartphone sensor data. Electronics 2022, 11, 322.
37. IDC. Shipments of Wearable Devices Leap to 125 Million Units, Up 35.1% in the Third Quarter. 2020. Available online: https://www.idc.com/getdoc.jsp?containerId=prUS47067820 (accessed on 1 July 2021).
38. Weiss, G.M.; Yoneda, K.; Hayajneh, T. Smartphone and smartwatch-based biometrics using activities of daily living. IEEE Access 2019, 7, 133190–133202.
39. Mekruksavanich, S.; Jitpattanakul, A.; Youplao, P.; Yupapin, P. Enhanced hand-oriented activity recognition based on smartwatch sensor data using LSTMs. Symmetry 2020, 12, 1570.
40. WISDM Smartphone and Smartwatch Activity and Biometrics Dataset. University of California, Irvine Machine Learning Repository, 2019. Available online: https://archive.ics.uci.edu/ml/datasets/WISDM+Smartphone+and+Smartwatch+Activity+and+Biometrics+Dataset+ (accessed on 1 July 2021).
41. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016.
42. Wang, J.; Wang, S.-H.; Zhang, Y.-D. A review on extreme learning machine. Multimed. Tools Appl. 2022, 81, 41611–41660.
43. Vapnik, V.N. The Nature of Statistical Learning Theory; Springer: New York, NY, USA, 1995.
44. Deng, W.; Zheng, Q.; Chen, L. Regularized extreme learning machine. In Proceedings of the 2009 IEEE Symposium on Computational Intelligence and Data Mining, Nashville, TN, USA, 30 March–2 April 2009; pp. 389–395.
45. Karush–Kuhn–Tucker Conditions. Available online: https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_condition (accessed on 25 February 2023).
46. Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing 2005, 70, 489–501.
47. Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: A new learning scheme of feedforward neural networks. In Proceedings of the International Joint Conference on Neural Networks, Budapest, Hungary, 25–29 July 2004.
48. Andrey, I. Real-time human activity recognition from accelerometer data using convolutional neural networks. Appl. Soft Comput. 2017, 62, 915–922.
49. Varamin, A.A.; Abbasnejad, E.; Shi, Q.; Ranasinghe, D.C.; Rezatoghi, H. Deep auto-set: A deep auto-encoder-set network for activity recognition using wearables. In Proceedings of the 15th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, New York, NY, USA, 5–7 November 2018; pp. 246–253.
50. Nair, R.; Ragab, M.; Mujallid, O.A.; Mohammad, K.A.; Mansour, R.F.; Viju, G.K. Impact of wireless sensor data mining with hybrid deep learning for human activity recognition. Wirel. Commun. Mob. Comput. 2022, 2022, 9457536.
51. Dong, Y.; Li, X.; Dezert, J.; Zhou, R.; Zhu, C.; Wei, L.; Ge, S.S. Evidential reasoning with hesitant fuzzy belief structures for human activity recognition. IEEE Trans. Fuzzy Syst. 2021, 29, 3607–3617.
52. Thakur, D.; Biswas, S.; Ho, E.S.L.; Chattopadhyay, S. ConvAE-LSTM: Convolutional autoencoder long short-term memory network for smartphone-based human activity recognition. IEEE Access 2022, 10, 4137–4156.
Figure 1. Structural diagram of the proposed HAR system, including the data processing unit, the feature extraction unit, and the classification unit.
Figure 2. Structural diagram of the feature-extraction model.
Figure 3. The learning curves for the hybrid CNN+LSTM model: (a) the accuracy and (b) loss curves.
Figure 4. The learning curves for the hybrid CNN+GRU model: (a) the accuracy and (b) loss curves.
Figure 5. The learning curves for the hybrid CNN+BiLSTM model: (a) the accuracy and (b) loss curves.
Figure 6. The learning curves for the hybrid CNN+BiGRU model: (a) the accuracy and (b) loss curves.
Figure 7. The confusion matrix for the classification of eighteen activities for the ELM algorithm.
Figure 8. The confusion matrix for the classification of eighteen activities for the RELM algorithm.
Table 1. Sample numbers of eighteen activities for model training and testing with the UCI-WISDM dataset.

| Activity | Training Number | Testing Number |
|---|---|---|
| Walking | 1921 | 807 |
| Jogging | 1901 | 827 |
| Walking up stairs | 1920 | 808 |
| Sitting | 1895 | 833 |
| Standing | 1891 | 837 |
| Kicking (soccer ball) | 1932 | 797 |
| Dribbling (basketball) | 1906 | 822 |
| Catching (tennis ball) | 1893 | 835 |
| Typing | 1885 | 843 |
| Writing | 1880 | 766 |
| Clapping | 1945 | 783 |
| Brushing teeth | 1876 | 852 |
| Folding clothes | 1919 | 809 |
| Eating pasta | 1915 | 814 |
| Drinking soup | 1928 | 800 |
| Eating a sandwich | 1950 | 778 |
| Eating chips | 1898 | 830 |
| Drinking from a cup | 1861 | 866 |
Table 2. The performances of the feature-extraction models with LSTM, GRU, BiLSTM, and BiGRU, separately.

| RNN | Precision (%) | Recall (%) | F1-Score (%) | Accuracy (%) | Training Time (s/epoch) |
|---|---|---|---|---|---|
| LSTM | 93.8 | 93.8 | 93.1 | 94.1 | 4.49 |
| GRU | 92.6 | 92.6 | 92.5 | 92.2 | 3.52 |
| BiLSTM | 95.3 | 95.3 | 95.3 | 95.3 | 4.60 |
| BiGRU | 95.7 | 95.4 | 95.5 | 95.2 | 3.74 |
Table 3. The performances of the MPNN with the BP algorithm for 18 types of physical activities. The Accuracy column gives the overall accuracy.

| Activity | Precision (%) | Recall (%) | F1-Score (%) | Accuracy (%) |
|---|---|---|---|---|
| Walking | 97.2 | 98.0 | 97.6 | 97.2 |
| Jogging | 97.3 | 98.8 | 98.0 | |
| Stairs | 97.7 | 97.0 | 97.3 | |
| Sitting | 98.0 | 97.2 | 97.6 | |
| Standing | 98.6 | 98.0 | 98.3 | |
| Kicking | 95.8 | 96.4 | 96.1 | |
| Dribbling | 96.6 | 97.1 | 96.8 | |
| Catching a ball | 95.9 | 95.0 | 95.4 | |
| Typing | 98.8 | 99.1 | 98.9 | |
| Writing | 99.0 | 98.5 | 98.7 | |
| Clapping | 97.5 | 98.0 | 97.7 | |
| Brushing teeth | 97.3 | 97.3 | 97.3 | |
| Folding clothes | 98.0 | 99.1 | 98.5 | |
| Eating pasta | 95.0 | 94.9 | 94.9 | |
| Drinking soup | 96.6 | 96.6 | 96.6 | |
| Eating a sandwich | 95.1 | 96.2 | 95.6 | |
| Eating chips | 96.8 | 96.9 | 96.8 | |
| Drinking from a cup | 96.7 | 96.3 | 96.5 | |
| Average | 97.1 | 97.2 | 97.2 | |
Table 4. The testing accuracies and training times for various numbers of hidden nodes (l) with γ set at 5 × 10−4.

| l | Accuracy (%) | Training Time (s) | l | Accuracy (%) | Training Time (s) |
|---|---|---|---|---|---|
| 256 | 97.10 | 2.49 | 2000 | 97.85 | 2.654 |
| 300 | 97.33 | 2.402 | 2500 | 97.88 | 2.757 |
| 400 | 97.54 | 2.414 | 3000 | 97.91 | 2.892 |
| 500 | 97.60 | 2.393 | 3500 | 97.97 | 2.989 |
| 600 | 97.60 | 2.444 | 4000 | 97.98 | 3.093 |
| 700 | 97.65 | 2.492 | 4500 | 98.01 | 3.223 |
| 800 | 97.70 | 2.528 | 5000 | 98.05 | 3.317 |
| 900 | 97.74 | 2.506 | 5500 | 98.15 | 3.466 |
| 1000 | 97.76 | 2.603 | 6000 | 98.25 | 3.802 |
| 1100 | 97.78 | 2.592 | 6500 | 98.02 | 3.826 |
| 1200 | 97.81 | 2.600 | 7000 | 98.05 | 3.886 |
| 1300 | 97.81 | 2.617 | 7500 | 97.99 | 3.894 |
| 1400 | 97.82 | 2.622 | 8000 | 98.05 | 4.116 |
| 1500 | 97.83 | 2.624 | | | |
Table 5. The testing accuracies and training times for the different regularized indexes with l set at 6000.

| γ | Accuracy (%) | Training Time (s) |
|---|---|---|
| 4 | 50.69 | 3.530 |
| 2 | 96.95 | 3.348 |
| 1 | 97.70 | 3.359 |
| 5 × 10−1 | 97.80 | 3.414 |
| 1 × 10−1 | 97.82 | 3.616 |
| 5 × 10−2 | 97.85 | 3.484 |
| 1 × 10−2 | 97.86 | 3.512 |
| 5 × 10−3 | 97.92 | 3.607 |
| 1 × 10−3 | 98.04 | 3.584 |
| 5 × 10−4 | 98.25 | 3.802 |
Table 6. The performances of the SLFN with the ELM algorithm for 18 types of physical activities. The Accuracy column gives the overall accuracy.

| Activity | Precision (%) | Recall (%) | F1-Score (%) | Accuracy (%) |
|---|---|---|---|---|
| Walking | 99.1 | 99.3 | 99.2 | 97.8 |
| Jogging | 100.0 | 100.0 | 100.0 | |
| Stairs | 98.2 | 97.9 | 98.0 | |
| Sitting | 97.8 | 98.2 | 98.0 | |
| Standing | 98.5 | 99.1 | 98.8 | |
| Kicking | 95.2 | 96.8 | 96.0 | |
| Dribbling | 98.3 | 97.8 | 98.0 | |
| Catching a ball | 96.6 | 96.2 | 96.4 | |
| Typing | 98.0 | 97.8 | 97.9 | |
| Writing | 99.1 | 97.0 | 98.0 | |
| Clapping | 97.2 | 98.1 | 97.6 | |
| Brushing teeth | 95.4 | 97.1 | 96.2 | |
| Folding clothes | 99.0 | 97.5 | 98.2 | |
| Eating pasta | 98.3 | 97.7 | 98.0 | |
| Drinking soup | 97.6 | 98.5 | 98.0 | |
| Eating a sandwich | 97.7 | 98.5 | 98.1 | |
| Eating chips | 96.9 | 96.9 | 96.9 | |
| Drinking from a cup | 98.7 | 98.0 | 98.3 | |
| Average | 97.9 | 97.9 | 97.9 | |
Table 7. The performances of the SLFN with the RELM algorithm for 18 types of physical activities. The Accuracy column gives the overall accuracy.

| Activity | Precision (%) | Recall (%) | F1-Score (%) | Accuracy (%) |
|---|---|---|---|---|
| Walking | 99.1 | 99.2 | 99.1 | 98.25 |
| Jogging | 99.4 | 100.0 | 99.7 | |
| Stairs | 97.8 | 97.8 | 97.8 | |
| Sitting | 98.2 | 98.3 | 98.2 | |
| Standing | 99.0 | 99.7 | 99.3 | |
| Kicking | 97.3 | 97.0 | 97.1 | |
| Dribbling | 98.1 | 98.7 | 98.4 | |
| Catching a ball | 97.9 | 97.3 | 97.6 | |
| Typing | 99.1 | 99.0 | 99.0 | |
| Writing | 98.8 | 98.8 | 98.8 | |
| Clapping | 98.0 | 97.0 | 97.5 | |
| Brushing teeth | 96.3 | 97.6 | 96.9 | |
| Folding clothes | 99.1 | 98.8 | 98.9 | |
| Eating pasta | 98.0 | 98.3 | 98.1 | |
| Drinking soup | 97.7 | 97.9 | 97.8 | |
| Eating a sandwich | 100.0 | 98.4 | 99.2 | |
| Eating chips | 97.9 | 98.0 | 97.9 | |
| Drinking from a cup | 98.5 | 99.1 | 98.8 | |
| Average | 98.3 | 98.4 | 98.4 | |
Table 8. Total performances of the activation-classification models: the MPNN with BP and the SLFN with the ELM and RELM.

| | MPNN with BP | SLFN with ELM | SLFN with RELM |
|---|---|---|---|
| Precision (%) | 97.1 | 97.9 | 98.3 |
| Recall (%) | 97.2 | 97.9 | 98.4 |
| F1-score (%) | 97.2 | 97.9 | 98.4 |
| Accuracy (%) | 97.2 | 97.8 | 98.2 |
| Training time (s) | 10.56 | 7.52 | 3.59 |
| Total testing time (s) | 0.103 | 0.025 | 0.038 |
Table 9. Comparative results of various methods using the UCI-WISDM dataset.

| Ref. | Classification Method | Activities/Wearable Devices | F1-Score (%) | Accuracy (%) |
|---|---|---|---|---|
| [36] | CNN+GRU | 6/phone | 91.7 | NA |
| [38] | Random forest | 18/phone and watch | NA | 94.4 |
| [39] | CNN+LSTM | 18/watch | 96.3 | 96.2 |
| [48] | CNN+handcrafted features | 6/phone | NA | 93.3 |
| [49] | ConvAS | 6/phone | NA | 94.9 |
| [50] | CNN+LSTM | 6/phone and watch | NA | 96.0 |
| [51] | Hesitant fuzzy belief structures | 6/phone and watch | NA | 95.82 |
| [52] | ConvAE-LSTM | 6/phone | 97.4 | 97.1 |
| Proposed method | Hybrid CNN+BiGRU, SLFN with RELM | 18/phone and watch | 98.4 | 98.2 |
Table 10. F1-scores (%) of the 18 physical activities using subsets of the accelerometers (Acce.) and gyroscopes (Gyro.) embedded in the smartphones and smartwatches.

| Activity | Phone Acce. | Phone Gyro. | Watch Acce. | Watch Gyro. | Phone | Watch | Acce. | Gyro. | All |
|---|---|---|---|---|---|---|---|---|---|
| Walking | 96.3 | 93.1 | 94.8 | 89.7 | 99.2 | 96.3 | 98.1 | 83.2 | 99.2 |
| Jogging | 97.0 | 97.1 | 98.5 | 94.1 | 97.8 | 98.7 | 97.6 | 96.7 | 99.7 |
| Stairs | 88.2 | 79.7 | 80.0 | 69.7 | 92.3 | 89.2 | 95.12 | 78.7 | 97.8 |
| Sitting | 83.6 | 40.5 | 80.6 | 55.0 | 91.4 | 87.5 | 94.3 | 68.4 | 98.3 |
| Standing | 88.3 | 58.1 | 89.2 | 61.8 | 93.7 | 90.7 | 93.1 | 68.8 | 99.3 |
| Kicking | 79.8 | 70.4 | 87.7 | 77.7 | 90.0 | 89.7 | 92.9 | 77.0 | 97.2 |
| Dribbling | 84.4 | 60.5 | 91.6 | 74.14 | 90.5 | 93.6 | 95.2 | 87.3 | 98.4 |
| Catching | 76.2 | 70.5 | 95.3 | 80.4 | 87.2 | 91.7 | 89.0 | 90.3 | 97.6 |
| Typing | 91.3 | 40.0 | 94.3 | 77.2 | 90.6 | 96.0 | 94.2 | 81.2 | 99.1 |
| Writing | 89.6 | 54.4 | 88.7 | 71.8 | 91.8 | 89.9 | 96.2 | 78.7 | 98.8 |
| Clapping | 89.0 | 77.4 | 96.0 | 83.3 | 91.7 | 96.5 | 94.3 | 94.4 | 97.5 |
| Brushing teeth | 87.6 | 61.7 | 95.5 | 75.5 | 88.39 | 97.0 | 92.5 | 89.9 | 97.0 |
| Folding clothes | 82.5 | 62.2 | 90.7 | 67.2 | 91.1 | 95.1 | 95.0 | 83.0 | 98.9 |
| Eating pasta | 85.5 | 25.5 | 77.3 | 54.5 | 89.6 | 85.4 | 90.9 | 63.1 | 98.2 |
| Drinking soup | 80.1 | 27.6 | 78.5 | 56.2 | 84.7 | 83.1 | 93.1 | 66.5 | 97.8 |
| Eating a sandwich | 83.6 | 15.6 | 48.7 | 26.3 | 90.6 | 69.8 | 94.8 | 42.2 | 99.2 |
| Eating chips | 81.4 | 21.0 | 66.8 | 41.4 | 81.9 | 71.7 | 89.3 | 50.0 | 98.0 |
| Drinking from a cup | 85.7 | 26.4 | 77.7 | 55.8 | 89.7 | 81.9 | 96.2 | 67.1 | 98.8 |
| Average | 86.2 | 54.6 | 85.2 | 67.4 | 90.7 | 89.1 | 94.1 | 76.0 | 98.4 |

