Article

Multiphase Identification Algorithm for Fall Recording Systems Using a Single Wearable Inertial Sensor

1 Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei 11221, Taiwan
2 Research Center for Information Technology Innovation, Academia Sinica, Taipei 11529, Taiwan
3 Department of Information Management, Minghsin University of Science and Technology, Hsinchu 30401, Taiwan
* Author to whom correspondence should be addressed.
Sensors 2021, 21(9), 3302; https://doi.org/10.3390/s21093302
Submission received: 22 March 2021 / Revised: 7 May 2021 / Accepted: 7 May 2021 / Published: 10 May 2021
(This article belongs to the Special Issue Wearable Sensors for Gait and Falls Monitoring)

Abstract:
Fall-related information can help clinical professionals make diagnoses and plan fall prevention strategies. This information includes characteristics of the different fall phases, such as falling time and landing responses. To provide information on these phases, this pilot study proposes an automatic multiphase identification algorithm for phase-aware fall recording systems. Seven young adults are recruited to perform the fall experiment. One inertial sensor is worn on the waist to collect body movement data, and a total of 525 trials are collected. The proposed multiphase identification algorithm combines machine learning techniques and a fragment modification algorithm to identify the pre-fall, free-fall, impact, resting, and recovery phases of a fall process. Five machine learning techniques, including support vector machine, k-nearest neighbor (kNN), naïve Bayesian, decision tree, and adaptive boosting, are applied to identify the five phases. The fragment modification algorithm uses rules to detect fragments whose results differ from those of their neighbors. The proposed multiphase identification algorithm using the kNN technique achieves the best performance, with 82.17% sensitivity, 85.74% precision, 73.51% Jaccard coefficient, and 90.28% accuracy. The results show that the proposed algorithm has the potential to provide automatic fine-grained fall information for clinical measurement and assessment.

1. Introduction

With the increase of life expectancy and the decrease of the fertility rate, the proportion of elders older than 64 years in the total population has increased rapidly. Falls are one of the major problems leading to physical injuries, functional decline, increased healthcare costs, and even death for elders [1]. In the United States (2018), 27.5% of adults aged more than 65 years reported at least one fall in the past year, and a higher percentage of women than men reported at least one fall or fall-related injury [2]. Furthermore, elders may develop anxiety, depression, and fear of repeated falling after fall events occur, which significantly affects their ability to live independently, increases social isolation, and reduces quality of life [3].
In recent years, advanced microelectromechanical systems (MEMS) and information and communication technology (ICT) have created new opportunities for fall-related healthcare applications [4,5], including fall detection and prevention. Various sensors (e.g., inertial sensors [6], pressure or seismic sensors [7,8], and cameras [9,10]) and machine learning techniques (e.g., support vector machine (SVM) and k-nearest neighbor (kNN)) have been successfully applied to fall-related applications [5,11,12,13,14]. These works have shown that fall events can be automatically detected. Furthermore, detailed information on the different fall phases (e.g., the way of falling, the fall direction, the activities performed before the fall event, falling time, and landing responses) can assist in planning preventive strategies and diagnostic approaches [12,15,16,17]. However, few works focus on developing automatic fall recording systems that obtain fall information at a fine-grained level for clinical evaluation and measurement.
Typically, there are two common approaches to record and analyze fall-related information in clinical practice. The first relies on self-reports by fallers and caregivers [18,19], but this approach suffers from misremembering, respondent interpretation, and cultural diversity [18,19]. The other approach is to install cameras in the potential faller’s house for long-term recording. If a fall accident happens, clinical professionals can manually analyze fall-related information from the video after the event [16]. However, several challenges limit the usability of manual fall analysis using cameras. First, manual operation and analysis of the whole fall event is time-consuming. Second, manual recording suffers from inter-rater bias and manual errors during fall pattern analysis [16]. These issues may decrease the reliability of the analyzed results. Furthermore, installing cameras in each potential faller’s house consumes considerable manpower and financial resources. To tackle these challenges, automatic fall recording systems are required to obtain objective, fine-grained fall information for fall analysis.
To support objective and reliable fine-grained fall monitoring, various analysis approaches for fall characteristics have been proposed in automatic fall recording systems to acquire fall information for planning fall prevention strategies [8,14,20]. These studies utilized inertial sensors and seismic sensors to acquire movement information while falling; machine learning techniques are then applied to classify fall directions, fall types, and fall positions. In fact, information from the other fall phases is also essential for clinical assessment and analysis. For example, a long resting duration after hitting the ground carries far greater attendant risks of dehydration, hypothermia, and even death [21]. Fall information from the different fall phases (e.g., the pre-fall, falling, impact, resting, and recovery phases) is important to help clinical professionals analyze and assess fine-grained fall information. Therefore, automatic segmentation and identification of multiple fall phases are required for objective assessment and evaluation.
The main purpose of this study is to automatically obtain five fall phases for automatic fall recording systems: the pre-fall, free-fall, impact, resting, and recovery phases. This pilot study proposes an automatic multiphase identification algorithm that combines machine learning techniques and a fragment modification algorithm to identify the fall phases, whereas most previous works focused only on the analysis of a single fall phase. In addition, seven types of falls emulated in a lab-based environment are conducted to validate the proposed algorithm.

2. Materials and Methods

2.1. Background

A fall model is defined as the temporal order of the phases in a fall process. A fall process is defined as the body coming to rest unintentionally on the ground or other lower level when performing an activity, followed by getting up from the ground, depending on the faller’s consciousness [1,22]. A multiphase fall model divides a fall process into several fall phases. Various types of fall models have been proposed for the analysis of fall events [17,23,24,25,26]. There are two common types. One is the four-phase fall model, comprising the pre-fall, critical, post-fall, and recovery phases [23]. The critical phase is the sudden body movement to the ground or other lower level that ends with the shock; it is utilized to disclose the significant features for detecting fall events. Another work [17] proposed the five-phase fall model to obtain more detailed information on fall events for clinical analysis, comprising the pre-fall, free-fall, impact, resting, and recovery phases. Compared to the four-phase model, the five-phase model divides the critical phase into the free-fall and impact phases. The free-fall phase can provide information on falling height and falling responses such as grabbing or stepping. In addition, the information in the impact phase is important to assess fall types and directions. To provide fine-grained fall information for clinical professionals, this study adopts the five-phase fall model. An acceleration-based signal of a fall process is shown in Figure 1.
These phases are defined as follows:
  • Pre-fall phase (green area): A pre-fall phase is defined as an activity before losing balance and hitting the ground, such as walking, standing, sit-to-stand, or stand-to-sit activities, which may strongly influence the fall biomechanics of the faller. Through identified pre-fall activities, fallers and caregivers can understand which activities easily lead to fall events.
  • Free-fall phase (red area): A free-fall phase is the process of sudden body movement toward the ground. No protective strategy can prevent a person from falling once the free-fall phase has begun. The duration of the free-fall phase depends on circumstances such as fall direction and fall height.
  • Impact phase (yellow area): An impact phase is the process of the person hitting the ground and can be determined by the abrupt shock in the acceleration signal. This phase is critical for fall detection algorithms and systems. Fall types and directions can be analyzed from the duration of the impact phase and the magnitude of the tri-axial acceleration in it.
  • Resting phase (purple area): A resting phase is defined as the faller remaining inactive on the ground after a fall occurs. The injury severity of the fall affects the duration of the resting phase. In some fall events, the duration of the resting phase is extremely short because the faller picks themselves up directly. Conversely, the duration may be long or unending if the faller is unable to rise. The situation in which the faller cannot get up is identified as a long-lie, meaning involuntarily remaining on the ground; in that case there is no recovery phase, and the fall event ends in the resting phase.
  • Recovery phase (blue area): A recovery phase is the last phase of a fall event if the faller is conscious and gets up from the ground. The resting and recovery phases are important for understanding the severity of the fall and whether the faller received immediate assistance. Previous studies [27,28] have shown a positive correlation between mortality rates and the waiting time for rescue after a fall. Rescuing the faller quickly can reduce the risks of hospitalization and death [23].

2.2. Developments on Multiphase Identification Algorithm

The functional diagram of the proposed multiphase identification algorithm is shown in Figure 2. Three main stages are included: data collection, multiphase identification, and multiphase information. Firstly, the data collected by an inertial sensor and the fall-related experimental protocol are described in Section 2.2.1. Secondly, the multiphase identification stage utilizes machine-learning-based classifiers and fragment modification to identify fall phases, as described in detail in Section 2.2.2. Finally, the multiphase information, including the starting point, ending point, and duration of each fall phase, is obtained.

2.2.1. Data Collection and Experimental Protocol

Seven young adults (three males and four females; age [mean ± standard deviation]: 20.86 ± 1.07 years; height: 1.68 ± 0.08 m; weight: 64.86 ± 18.23 kg) are recruited in this study. One inertial sensor (APDM, Inc., Portland, OR, USA) is worn on the waist to collect body movement data. The inertial sensor integrates a tri-axial accelerometer, gyroscope, and magnetometer. In this study, only the tri-axial acceleration data from the accelerometer and the tri-axial angular velocity data from the gyroscope are utilized to collect movement information, and the data are collected at a sampling rate of 128 Hz.
In the experiment, seven types and four directions of fall are executed, as presented in Table 1. In each trial, one type of fall in one direction is performed, and each type with one direction of fall is repeated three times. Between trials, the resting time depends on the physical condition of the subject. The subjects are asked to perform the instructed fall type and get up afterward. For example, when performing a forward fall while standing, the subject first stands in front of the soft mattress, then falls forward onto it, and finally gets up to stand in front of the mattress again. The elapsed time of each fall type and direction is shown in Table 2. The elapsed time of each trial is defined as the duration from performing the activity before losing balance (pre-fall phase) to getting up and standing in front of the mattress (recovery phase).
The experimental environment setting is shown in Figure 3. All experiments were performed on an 18 cm soft mattress to prevent injuries. A helmet, a waist support belt, and knee and elbow guards are worn to protect the subjects from harm during the experiment. The orientation and position of the sensor and a schematic view of a subject wearing the protectors are shown in Figure 4.
A camera embedded in a smartphone, synchronized with the inertial sensor, records video at 30 frames per second and is placed on the lateral side of the subjects during the entire experiment to provide ground truth labels. The researcher manually labels the initial and ending timestamps of each fall phase from the recorded videos. The performance of the proposed multiphase identification algorithm is evaluated against these ground truth labels.
Respecting research ethics and the participants’ privacy, a required consent form was signed by each subject, and personal identities were replaced with codes. This experiment was approved by the Institutional Review Board Committee of National Yang-Ming University (YM106066E).

2.2.2. Multiphase Identification

The multiphase identification includes four steps: sliding window, feature extraction, multiphase classification, and fragment modification. MATLAB R2019a is utilized to process the collected data and develop the multiphase identification algorithm. In the sliding window step, sensing data collected from the inertial sensor are divided into small segments of a given window size. The influence of window size on identification performance is well documented [29,30]. However, there is no clear rule for selecting the optimal window size for activity identification: a large window may span multiple activities, while a small window may split an activity into several segments. Therefore, an adequate window size must be investigated. In this study, five window sizes with a fixed sliding step of one sample are investigated: 8 samples (0.0625 s), 16 samples (0.125 s), 24 samples (0.1875 s), 32 samples (0.25 s), and 40 samples (0.3125 s).
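As an illustration, the sliding-window segmentation described above can be sketched in Python (the study itself used MATLAB R2019a; the function and variable names here are only illustrative):

```python
import numpy as np

def sliding_windows(signal, window_size, step=1):
    """Split a (samples x channels) signal into overlapping windows.

    Mirrors the paper's setting: window sizes of 8-40 samples with a
    fixed sliding step of one sample.
    """
    n = signal.shape[0]
    return np.stack([signal[i:i + window_size]
                     for i in range(0, n - window_size + 1, step)])

# 1 s of 128 Hz data with 6 channels (ax, ay, az, gx, gy, gz)
data = np.zeros((128, 6))
wins = sliding_windows(data, window_size=24)   # 0.1875 s windows
print(wins.shape)  # (105, 24, 6)
```

With a step of one sample, a signal of n samples yields n − window_size + 1 heavily overlapping windows, which is why each window is later classified individually and then smoothed by the fragment modification step.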
Each segment is transformed into a set of features by the feature extraction step. A total of 64 features are extracted for the multiphase identification, as listed in Table 3. Eight statistical features are extracted from each segment: mean, standard deviation (std), variance (var), maximum (max), minimum (min), range, kurtosis, and skewness. These statistical features are commonly utilized in the field of activity recognition and identification [11,12,14]. Eight signals are used for feature extraction: the x-, y-, and z-axes of acceleration (a_x, a_y, a_z), the x-, y-, and z-axes of angular velocity (g_x, g_y, g_z), the resultant acceleration (AR), and the resultant angular velocity (GR). AR and GR are calculated by Equations (1) and (2):
AR = √(a_x² + a_y² + a_z²)  (1)
GR = √(g_x² + g_y² + g_z²)  (2)
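A minimal Python sketch of this feature extraction step (8 statistics × 8 signals = 64 features), assuming the window layout [a_x, a_y, a_z, g_x, g_y, g_z]; the paper's implementation is in MATLAB, and the population (biased) formulas are used here for kurtosis and skewness since the text does not specify a variant:

```python
import numpy as np

def extract_features(window):
    """window: (samples, 6) array of columns [ax, ay, az, gx, gy, gz].

    Appends the resultants AR and GR, then computes the eight
    statistics per signal -> 8 signals x 8 statistics = 64 features.
    """
    ar = np.linalg.norm(window[:, 0:3], axis=1)   # Equation (1)
    gr = np.linalg.norm(window[:, 3:6], axis=1)   # Equation (2)
    signals = np.column_stack([window, ar, gr])   # (samples, 8)
    feats = []
    for s in signals.T:
        m, sd = s.mean(), s.std()
        feats += [m, sd, s.var(), s.max(), s.min(),
                  s.max() - s.min(),                 # range
                  ((s - m) ** 4).mean() / sd ** 4,   # kurtosis
                  ((s - m) ** 3).mean() / sd ** 3]   # skewness
    return np.array(feats)

print(extract_features(np.random.randn(24, 6)).shape)  # (64,)
```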
The multiphase classifier uses machine learning techniques to identify each phase, including the pre-fall, free-fall, impact, resting, and recovery phases. Because of the experimental protocol, the initial and ending activities are standing. Therefore, the multiphase classifier classifies seven phases: the initial-static, pre-fall, free-fall, impact, resting, recovery, and ending-static phases.
Five common machine learning techniques are adopted as the multiphase classifier: SVM, kNN, naïve Bayesian (NB), decision tree (DT), and adaptive boosting (AdaBoost). For all techniques, the training data are {(x_i, l_i)}, i = 1, …, N, where N is the total number of training samples, x_i is a feature vector, and the class labels are l_i ∈ {0, 1, 2, 3, 4, 5, 6} for the seven classes (0: initial-static, 1: pre-fall, 2: free-fall, 3: impact, 4: resting, 5: recovery, 6: ending-static). The machine learning techniques and the applied parameters are introduced as follows:
1. Support Vector Machine (SVM)
An SVM technique aims to find the optimal separating hyperplane with maximum margins in the n-dimensional feature space. Testing data are classified by the optimal hyperplane. Because the data distribution is unknown, various kernel functions are available for SVM, such as the linear, polynomial, sigmoid, hyperbolic tangent, and radial basis function (RBF) kernels. A multiclass SVM is implemented to classify the multiple phases of a fall event. In this study, the multiclass SVM classifier with the one-versus-one approach and the linear kernel function is adopted. The linear decision function f(x_i) = w·x_i + b defines the hyperplane, and the linear kernel function K(x, x_i) = xᵀx_i is used for the multiclass SVM classifier.
2. K-Nearest Neighbor (kNN)
kNN is a nonparametric algorithm for recognition and classification that stores all training data and classifies testing data by a plurality vote of the k nearest neighbors, determined by distance. Various distance functions can be utilized, such as the Euclidean, Manhattan, Minkowski, and Chebyshev distances. In this study, the Euclidean distance d_i = ‖x − x_i‖ is used. Because the parameter k depends highly on the data distribution, an odd constant from 1 to √n is commonly used, where n is the number of training data [31,32]. To find the best k for this study, numbers between 1 and 21 with a step of 2 are explored, and the best performance is obtained with k = 13.
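The kNN vote under the Euclidean distance with k = 13 can be sketched as follows (illustrative Python; `knn_predict` is a hypothetical helper, not code from the paper):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=13):
    """Classify one feature vector by a plurality vote of its k nearest
    training samples under the Euclidean distance d_i = ||x - x_i||
    (k = 13, as selected in the paper)."""
    d = np.linalg.norm(X_train - x, axis=1)   # distances to all training data
    nearest = y_train[np.argsort(d)[:k]]      # labels of the k closest samples
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]          # majority label wins

# toy usage: two well-separated clusters of 13 points each
X = np.vstack([np.zeros((13, 2)), np.full((13, 2), 5.0)])
y = np.array([0] * 13 + [1] * 13)
print(knn_predict(X, y, np.array([0.1, 0.1])))  # 0
```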
3. Naïve Bayesian (NB)
The NB technique is a conditional probability model that makes predictions using Bayes’ theorem under the assumption of conditional independence between all features. The posterior probability P(l|x) = P(x|l)P(l)/P(x) for each class is calculated from the prior probability P(l), the likelihood P(x|l), and the predictor prior probability P(x). The maximum a posteriori (MAP) decision rule argmax_l P(l|x) is used to obtain the most suitable class.
4. Decision Tree (DT)
The DT technique is a supervised machine learning algorithm for classification and regression. The aim is to create a model that classifies testing data by a set of decision rules inferred from the training data. Classification and regression tree (CART), one of the decision tree building methods, is implemented to train the multiphase classifier in this study. The splitting rule of CART is the Gini impurity I_G(x) = Σ_{l=0}^{c} p(l|x)(1 − p(l|x)) = 1 − Σ_{l=0}^{c} p(l|x)², where c is the largest class index.
5. Adaptive Boosting (AdaBoost)
Boosting methods combine a set of weak models h_t(x) into a strong model H(x) = sign(Σ_{t=1}^{T} α_t h_t(x)) by a weighted vote of the weak models, with weights α_t = ½ ln((1 − ε_t)/ε_t), where ε_t is the classification error of the t-th weak model. AdaBoost employs decision trees as weak classifiers and a weighted vote to create a stronger classifier. Each weak classifier reweights the training data of the previous weak classifier, with misclassified data assigned higher weights so that they are more likely to be classified correctly in the next round. This process repeats until the defined maximum number of iterations is reached, which is set to 10 in this study.
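For reference, the five classifiers with the parameters stated above could be instantiated roughly as follows with scikit-learn (an assumption: the study used MATLAB, and solver details beyond those stated are not specified in the text):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier

classifiers = {
    # one-versus-one multiclass SVM with a linear kernel
    "SVM": SVC(kernel="linear", decision_function_shape="ovo"),
    # k = 13 nearest neighbors under the Euclidean distance
    "kNN": KNeighborsClassifier(n_neighbors=13, metric="euclidean"),
    # Gaussian naive Bayes (a common choice for continuous features)
    "NB": GaussianNB(),
    # CART with the Gini impurity splitting rule
    "DT": DecisionTreeClassifier(criterion="gini"),
    # AdaBoost with decision-tree weak learners, 10 iterations
    "AdaBoost": AdaBoostClassifier(n_estimators=10),
}

# toy demonstration on two well-separated classes
X = np.vstack([np.random.randn(20, 3), np.random.randn(20, 3) + 10])
y = np.array([0] * 20 + [1] * 20)
for name, clf in classifiers.items():
    clf.fit(X, y)
```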
Finally, fragment modification is implemented to correct misclassification results from the multiphase classifier, which are inevitable in machine-learning-based techniques with a sliding window approach. As shown in Figure 5, if the identified phase of one, two, or three consecutive segments differs from that of the previous and following segments, and the previous and following segments share the same phase, those segments are considered misclassified. The misclassified segments are modified to match the surrounding segments by Algorithm 1, the pseudocode of the fragment modification algorithm. In Algorithm 1, each multiphase segment (phase_i) and modified multiphase segment (mphase_i) take values in SPHASE = {sphase_j | 1 ≤ j ≤ a}, where a is the number of defined semantic phases. There are seven defined semantic phases (a = 7) in this study: {‘initial-static’, ‘pre-fall’, ‘free-fall’, ‘impact’, ‘resting’, ‘recovery’, ‘ending-static’}. At the beginning of the fragment modification algorithm, the first segment is set to the initial-static phase (sphase_1) and the last three segments are set to the ending-static phase (sphase_7), referring to lines 1 to 4 and lines 15 to 18 in Algorithm 1. The main modification process, lines 5 to 14 in Algorithm 1, modifies one to three misclassified results between two segments with identical results.

2.2.3. Multiphase Information

A segment-based sequence of phases is obtained from the output of the fragment modification algorithm, and the sample-based sequence is restored from it. The multiphase information, including the starting and ending points of each phase, is obtained from the sample-based sequence. Furthermore, the duration of each phase can be derived from its starting and ending points.
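Deriving the starting point, ending point, and duration of each phase from the sample-based sequence can be sketched as follows (illustrative Python; 128 Hz sampling as in Section 2.2.1, `phase_boundaries` is a hypothetical name):

```python
def phase_boundaries(sample_phases, fs=128):
    """Derive (phase, start index, end index, duration in seconds) for
    each contiguous run of identical phases in a sample-based sequence."""
    info = []
    start = 0
    for i in range(1, len(sample_phases) + 1):
        # close the current run at the end of the sequence or a phase change
        if i == len(sample_phases) or sample_phases[i] != sample_phases[start]:
            info.append((sample_phases[start], start, i - 1, (i - start) / fs))
            start = i
    return info

# 0.5 s of phase 0, 1.0 s of phase 1, 0.25 s of phase 2
seq = [0] * 64 + [1] * 128 + [2] * 32
print(phase_boundaries(seq))
# [(0, 0, 63, 0.5), (1, 64, 191, 1.0), (2, 192, 223, 0.25)]
```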
Algorithm 1: Fragment modification algorithm in the multiphase identification stage
Input: An identified segment sequence FALL = (phase_i | i = 1, 2, 3, …, N); the ith multiphase segment phase_i; the total number of segments in the sequence N
Output: A modified and identified segment sequence MFALL = (mphase_i | i = 1, 2, 3, …, N); the ith modified multiphase segment mphase_i
1: phase_1 = sphase_1 // sphase_1 is the semantic phase of initial-static.
2: phase_N = sphase_7 // sphase_7 is the semantic phase of ending-static.
3: phase_{N−1} = sphase_7
4: phase_{N−2} = sphase_7
5: for i from 2 to N − 3 do
6:   if phase_i != phase_{i−1} && phase_{i−1} == phase_{i+1} then
7:     phase_i = phase_{i−1}
8:   else if phase_i != phase_{i−1} && phase_{i+1} != phase_{i−1} && phase_{i−1} == phase_{i+2} then
9:     phase_i = phase_{i−1}
10:  else if phase_i != phase_{i−1} && phase_{i+1} != phase_{i−1} && phase_{i+2} != phase_{i−1} && phase_{i−1} == phase_{i+3} then
11:    phase_i = phase_{i−1}
12:  end if
13:  mphase_i = phase_i
14: end for
15: mphase_1 = sphase_1
16: mphase_N = sphase_7
17: mphase_{N−1} = sphase_7
18: mphase_{N−2} = sphase_7
19: return MFALL
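A Python rendering of Algorithm 1, using 0-based indexing, with integer phase codes standing in for the semantic phases (illustrative only; the original is MATLAB pseudocode):

```python
def fragment_modification(phases, init_phase=0, end_phase=6):
    """Sketch of Algorithm 1: overwrite fragments of 1-3 segments whose
    phase disagrees with identical segments on both sides."""
    p = list(phases)
    n = len(p)
    p[0] = init_phase            # line 1: pin the first segment
    p[-3:] = [end_phase] * 3     # lines 2-4: pin the last three segments
    # lines 5-14 (1-indexed i = 2 .. N-3 maps to 0-indexed 1 .. n-4)
    for i in range(1, n - 3):
        if p[i] != p[i - 1] and p[i - 1] == p[i + 1]:
            p[i] = p[i - 1]                       # 1-segment fragment
        elif (p[i] != p[i - 1] and p[i + 1] != p[i - 1]
              and p[i - 1] == p[i + 2]):
            p[i] = p[i - 1]                       # 2-segment fragment
        elif (p[i] != p[i - 1] and p[i + 1] != p[i - 1]
              and p[i + 2] != p[i - 1] and p[i - 1] == p[i + 3]):
            p[i] = p[i - 1]                       # 3-segment fragment
    return p

# a single misclassified segment (5) between identical neighbors is repaired
print(fragment_modification([0, 0, 5, 0, 0, 3, 3, 6, 6, 6]))
# [0, 0, 0, 0, 0, 3, 3, 6, 6, 6]
```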

2.3. Performance Evaluation

Leave-one-subject-out cross validation (LOSOCV) is utilized to evaluate the performance of the proposed algorithm. It is a specific k-fold cross validation that uses one subject’s data as the testing set and the others’ as the training set in each fold. Therefore, the 75 trials performed by one subject are adopted as the testing set, and the 450 trials of the remaining subjects form the training set in each LOSOCV round. Because seven subjects are recruited in this study, LOSOCV iterates seven times so that each subject serves once as the testing set.
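The LOSOCV fold construction can be sketched as follows (illustrative Python; subject and trial counts as stated above):

```python
import numpy as np

def loso_folds(subject_ids):
    """Yield (train_idx, test_idx) pairs, holding out one subject per
    fold -- seven folds for the seven subjects in this study."""
    subject_ids = np.asarray(subject_ids)
    for s in np.unique(subject_ids):
        test = np.where(subject_ids == s)[0]
        train = np.where(subject_ids != s)[0]
        yield train, test

# 7 subjects x 75 trials each = 525 trials
ids = np.repeat(np.arange(7), 75)
folds = list(loso_folds(ids))
print(len(folds), len(folds[0][0]), len(folds[0][1]))  # 7 450 75
```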
The sample-based approach is commonly used to evaluate the reliability and performance of classification [33]. This approach counts true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) based on the sample-by-sample mapping between the ground truth and the output of the multiphase identification. For example, to evaluate the identification performance of the proposed algorithm on the impact phase, we define the impact phase as “positive” and the other phases as “negative”. A TP is a sample identified as the impact phase when the impact phase is actually performed. A TN is a sample correctly identified as another phase when the impact phase is not performed. A FP is a sample identified as the impact phase when the impact phase is not actually performed. A FN is a sample identified as another phase when the impact phase is actually performed. Four evaluation measures are utilized for performance evaluation: sensitivity, precision, Jaccard coefficient, and accuracy, calculated by Equations (3)–(6). Sensitivity, precision, and Jaccard coefficient are measured for each phase of the proposed multiphase identification algorithm. The Jaccard coefficient reveals the similarity between the identified phase and the ground truth phase; in this study, it is utilized to evaluate the accuracy of locating the starting and ending points. Accuracy evaluates the average performance of the multiclass classifier:
Sensitivity = TP / (TP + FN)  (3)
Precision = TP / (TP + FP)  (4)
Jaccard coefficient = TP / (TP + FP + FN)  (5)
Accuracy = (TP + TN) / (TP + FP + TN + FN)  (6)
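Equations (3)–(6) can be computed sample-by-sample for one target phase as follows (illustrative Python; `phase_metrics` is a hypothetical helper, with phase code 3 standing for the impact phase):

```python
def phase_metrics(truth, pred, target):
    """Sample-based one-vs-rest metrics for one target phase."""
    tp = sum(t == target and p == target for t, p in zip(truth, pred))
    tn = sum(t != target and p != target for t, p in zip(truth, pred))
    fp = sum(t != target and p == target for t, p in zip(truth, pred))
    fn = sum(t == target and p != target for t, p in zip(truth, pred))
    return {
        "sensitivity": tp / (tp + fn),            # Equation (3)
        "precision": tp / (tp + fp),              # Equation (4)
        "jaccard": tp / (tp + fp + fn),           # Equation (5)
        "accuracy": (tp + tn) / (tp + tn + fp + fn),  # Equation (6)
    }

truth = [3, 3, 3, 3, 0, 0, 0, 0]
pred  = [3, 3, 3, 0, 3, 0, 0, 0]
print(phase_metrics(truth, pred, target=3))
# {'sensitivity': 0.75, 'precision': 0.75, 'jaccard': 0.6, 'accuracy': 0.75}
```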
An example of a process in the multiphase identification and the definition of TP, TN, FP, and FN in the sensing stream data are shown in Figure 6, which is a case of a fall event while walking.

3. Results

In this study, a total of 525 trials (75 trials × 7 subjects) are collected. LOSOCV is applied, using 450 trials from six subjects as the training set and 75 trials from the remaining subject as the testing set, and iterating seven times. The average performance results of the multiphase identification algorithm using different machine learning techniques and window sizes are shown in Table 4. The overall performance of the proposed multiphase identification algorithm achieves 76.54% sensitivity, 80.89% precision, 66.45% Jaccard coefficient, and 87.05% accuracy. To summarize the machine learning techniques, the average performance is shown in Figure 7. The accuracy of all machine learning techniques with different window sizes is over 80%. The proposed algorithm using the kNN technique achieves the best performance, with 82.17% sensitivity, 85.74% precision, 73.51% Jaccard coefficient, and 90.28% accuracy, while the NB technique performs worse than the other techniques. To summarize the window sizes, the average performance is shown in Figure 8. The best performance of the proposed algorithm over the different window sizes in sensitivity, precision, Jaccard coefficient, and accuracy is 77.40% with a window size of 24 samples (0.1875 s), 81.59% with 24 samples (0.1875 s), 67.28% with 32 samples (0.25 s), and 87.52% with 40 samples (0.3125 s), respectively.
According to the best-performing machine learning technique, Table 5 shows the performance results of each phase using the kNN technique with different window sizes. In summary, the average performance of the proposed algorithm using the kNN technique with each window size is shown in Figure 9. The highest sensitivity, precision, Jaccard coefficient, and accuracy occur with window sizes of 24 samples (0.1875 s), 16 samples (0.125 s), 24 samples (0.1875 s), and 32 samples (0.25 s), respectively. For all window sizes, the sensitivity and Jaccard coefficient of the resting phase and the precision of the pre-fall phase outperform those of the other phases, except the static phases. The sensitivity, precision, and Jaccard coefficient of the free-fall phase are worse than those of the other phases.

4. Discussion

The proposed multiphase identification algorithm can automatically and objectively identify fall phases using a single wearable inertial sensor. It combines machine learning techniques and a fragment modification algorithm to provide fine-grained fall information, namely the starting point, ending point, and duration of each fall phase, for clinical professionals. The proposed multiphase identification algorithm using the kNN technique achieves the best performance in all measures, demonstrating that the kNN technique is most suitable for the proposed algorithm; moreover, kNN has the advantage of low computational complexity. The window size is an important parameter that may affect identification performance. Larger windows may include patterns and characteristics that cross phase boundaries, which hinder the machine learning techniques from building accurate models and lead to misidentification. Smaller windows may not capture the patterns and characteristics of a whole phase, which also confuses model building and easily leads to fragmented motions in the output of the trained models. Over the five window sizes, windows of 24, 16, 24, and 32 samples achieve the best average performance using the kNN technique in sensitivity, precision, Jaccard coefficient, and accuracy, respectively. Therefore, window sizes of 16, 24, and 32 samples are suitable for the proposed multiphase identification algorithm.
To obtain the initial and ending points of a whole fall event, the movement of the initial and ending postures was collected in this experiment; therefore, the initial- and ending-static phases are included in the proposed multiphase identification algorithm. The Jaccard coefficient is defined as the number of samples in the intersection divided by the number of samples in the union of the target phase between the ground truth and the identification result. The initial- and ending-static phases have the best Jaccard coefficient. The free-fall phase identification has the worst Jaccard coefficient because the free-fall phase has a short duration and is easily confused with daily activities. Therefore, the free-fall phase identification leaves room for improvement.
A summary of previous studies [8,14,16,17,20] on sensor types, techniques, and the fall-related information provided is shown in Table 6. Most studies focused on extracting characteristics of a whole fall to obtain information on fall directions, fall types, and fall positions. Hsieh et al. [20] proposed a machine-learning-based algorithm to detect fall directions with 97.34% accuracy. Hussain et al. [14] used machine-learning-based classifiers to detect fall types, achieving 96.82% accuracy with a random forest classifier. Clemente et al. [8] utilized seismic sensor signals to estimate fall positions using time-difference-of-arrival measurements, with a localization error smaller than 0.28 m. However, these studies extracted characteristics only from the entire fall signal, without extracting fine-grained information and characteristics.
Two approaches are commonly used to obtain fine-grained fall information by manually labeling the starting and ending points of fall phases. The first is self-reports by fallers and caregivers [18,19], in which detailed fall-related information, including the falling time, the activities before falling, and the fall direction, is recorded manually. The second is to install cameras in a potential faller's house for long-term recording; after a fall event occurs, clinical professionals can manually analyze the fall-related information from the videos [16]. However, both approaches may suffer from inter-rater bias and manual errors. Notably, the camera-based approach captured only 28% and 45% of fall events in two different care facilities. To the best of our knowledge, this is the first study aiming to automatically and objectively identify the fall phases within a fall process using machine learning techniques. Machine learning techniques have been applied to the automatic identification of activities, gestures, and movements in other applications [34,35,36,37], but obtaining fine-grained fall information still relies on manual execution [16,17]. Our study demonstrates the feasibility of multiphase identification for a phase-aware fall recording system.
The performance of the proposed algorithm is restricted by several challenges inherent to multiphase identification, such as motion variability, changes in the temporal order of phases, and boundary decisions. The identification of some phases, especially the free-fall phase, needs to be improved. We plan to examine other algorithms and more powerful learning techniques, such as hierarchical algorithms combining rule conditions with machine learning, convolutional neural networks (CNN), and long short-term memory (LSTM) networks. Another limitation is that only seven subjects with a narrow age distribution were recruited, and only seven types of fall were performed in the experimental environment to validate the proposed multiphase identification algorithm. In the future, datasets with more subjects over a wider age range, more types of fall, and real-world fall events will be investigated to validate the proposed algorithm.

5. Conclusions

To obtain and understand the phase information of fall events for clinical requirements, an automatic multiphase identification algorithm is proposed for phase-aware fall recording systems. The sliding window approach, several machine learning techniques, and a fragment modification algorithm are utilized to identify the phases in a fall event. The proposed multiphase identification algorithm using the kNN technique with a window size of 24 samples achieves the best performance: 83.09% sensitivity, 86.53% precision, 74.47% Jaccard coefficient, and 90.56% accuracy. The results show that the proposed system has the potential to provide automatic and reliable fine-grained fall information for clinical professionals.

Author Contributions

Conceptualization, C.-T.C. and S.J.-P.H.; Investigation, C.-Y.H., H.-Y.H. and K.-C.L.; Methodology, C.-Y.H., H.-Y.H., K.-C.L., C.-T.C. and S.J.-P.H.; Resources, C.-Y.H. and C.-P.L.; Software, C.-Y.H.; Supervision, C.-T.C. and S.J.-P.H.; Writing—original draft, C.-Y.H.; Writing—review & editing, C.-T.C. and S.J.-P.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Technology, Taiwan, grant number MOST 109-2221-E-010-005. The APC was funded by the same grant.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of National Yang-Ming University (protocol code: YM106066E and date of approval: 25 September 2017).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Acknowledgments

The authors would like to thank the volunteers who participated in the experiments for their efforts and time.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. World Health Organization, Ageing, and Life Course Unit. WHO Global Report on Falls Prevention in Older Age; World Health Organization: Geneva, Switzerland, 2008.
  2. Moreland, B.; Kakara, R.; Henry, A. Trends in nonfatal falls and fall-related injuries among adults aged ≥ 65 years—United States, 2012–2018. Morb. Mortal. Wkly. Rep. 2020, 69, 875.
  3. Sherrington, C.; Fairhall, N.; Kwok, W.; Wallbank, G.; Tiedemann, A.; Michaleff, Z.A.; Ng, C.A.; Bauman, A. Evidence on physical activity and falls prevention for people aged 65+ years: Systematic review to inform the WHO guidelines on physical activity and sedentary behaviour. Int. J. Behav. Nutr. Phys. Act. 2020, 17, 144.
  4. Montesinos, L.; Castaldo, R.; Pecchia, L. Wearable inertial sensors for fall risk assessment and prediction in older adults: A systematic review and meta-analysis. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 573–582.
  5. Özdemir, A.T.; Barshan, B. Detecting falls with wearable sensors using machine learning techniques. Sensors 2014, 14, 10691–10708.
  6. Singh, A.; Rehman, S.U.; Yongchareon, S.; Chong, P.H.J. Sensor technologies for fall detection systems: A review. IEEE Sens. J. 2020, 20, 6889–6919.
  7. Al Nahian, M.J.; Raju, M.H.; Tasnim, Z.; Mahmud, M.; Ahad, M.A.R.; Kaiser, M.S. Contactless fall detection for the elderly. In Contactless Human Activity Analysis; Springer: Berlin/Heidelberg, Germany, 2021; pp. 1–32.
  8. Clemente, J.; Song, W.; Valero, M.; Li, F.; Liy, X. Indoor person identification and fall detection through non-intrusive floor seismic sensing. In Proceedings of the 2019 IEEE International Conference on Smart Computing (SMARTCOMP), Washington, DC, USA, 12–15 June 2019; pp. 417–424.
  9. Shu, F.; Shu, J. An eight-camera fall detection system using human fall pattern recognition via machine learning by a low-cost android box. Sci. Rep. 2021, 11, 2471.
  10. Chen, W.; Jiang, Z.; Guo, H.; Ni, X. Fall detection based on key points of human-skeleton using openpose. Symmetry 2020, 12, 744.
  11. Pannurat, N.; Thiemjarus, S.; Nantajeewarawat, E. Automatic fall monitoring: A review. Sensors 2014, 14, 12900–12936.
  12. Hsieh, C.-Y.; Liu, K.-C.; Huang, C.-N.; Chu, W.-C.; Chan, C.-T. Novel hierarchical fall detection algorithm using a multiphase fall model. Sensors 2017, 17, 307.
  13. Chelli, A.; Pätzold, M. A machine learning approach for fall detection and daily living activity recognition. IEEE Access 2019, 7, 38670–38687.
  14. Hussain, F.; Hussain, F.; Ehatisham-ul-Haq, M.; Azam, M.A. Activity-aware fall detection and recognition based on wearable sensors. IEEE Sens. J. 2019, 19, 4528–4536.
  15. Woolrych, R.; Zecevic, A.; Sixsmith, A.; Sims-Gould, J.; Feldman, F.; Chaudhury, H.; Symes, B.; Robinovitch, S.N. Using video capture to investigate the causes of falls in long-term care. Gerontologist 2015, 55, 483–494.
  16. Robinovitch, S.N.; Feldman, F.; Yang, Y.; Schonnop, R.; Leung, P.M.; Sarraf, T.; Sims-Gould, J.; Loughin, M. Video capture of the circumstances of falls in elderly people residing in long-term care: An observational study. Lancet 2013, 381, 47–54.
  17. Becker, C.; Schwickert, L.; Mellone, S.; Bagalà, F.; Chiari, L.; Helbostad, J.; Zijlstra, W.; Aminian, K.; Bourke, A.; Todd, C. Proposal for a multiphase fall model based on real-world fall recordings with body-fixed sensors. Zeitschrift für Gerontologie und Geriatrie 2012, 45, 707–715.
  18. Garcia, P.A.; Dias, J.; Silva, S.L.; Dias, R.C. Prospective monitoring and self-report of previous falls among older women at high risk of falls and fractures: A study of comparison and agreement. Braz. J. Phys. Ther. 2015, 19, 218–226.
  19. Hale, W.A.; Delaney, M.J.; Cable, T. Accuracy of patient recall and chart documentation of falls. J. Am. Board Fam. Pract. 1993, 6, 239–242.
  20. Hsieh, C.-Y.; Shi, W.-T.; Huang, H.-Y.; Liu, K.-C.; Hsu, S.J.; Chan, C.-T. Machine learning-based fall characteristics monitoring system for strategic plan of falls prevention. In Proceedings of the 2018 IEEE International Conference on Applied System Invention (ICASI), Chiba, Japan, 13–17 April 2018; pp. 818–821.
  21. Fleming, J.; Brayne, C. Inability to get up after falling, subsequent time on floor, and summoning help: Prospective cohort study in people over 90. BMJ 2008, 337, a2227.
  22. Zijlstra, A.; Ufkes, T.; Skelton, D.; Lundin-Olsson, L.; Zijlstra, W. Do dual tasks have an added value over single tasks for balance assessment in fall prevention programs? A mini-review. Gerontology 2008, 54, 40–49.
  23. Noury, N.; Rumeau, P.; Bourke, A.; ÓLaighin, G.; Lundy, J. A proposal for the classification and evaluation of fall detectors. IRBM 2008, 29, 340–349.
  24. Bai, Y.-W.; Wu, S.-C.; Tsai, C.-L. Design and implementation of a fall monitor system by using a 3-axis accelerometer in a smart phone. IEEE Trans. Consum. Electron. 2012, 58, 1269–1275.
  25. Wu, Y.; Su, Y.; Feng, R.; Yu, N.; Zang, X. Wearable-sensor-based pre-impact fall detection system with a hierarchical classifier. Measurement 2019, 140, 283–292.
  26. Putra, I.; Brusey, J.; Gaura, E.; Vesilo, R. An event-triggered machine learning approach for accelerometer-based fall detection. Sensors 2018, 18, 20.
  27. Wild, D.; Nayak, U.; Isaacs, B. How dangerous are falls in old people at home? Br. Med. J. (Clin. Res. Ed.) 1981, 282, 266–268.
  28. Gurley, R.J.; Lum, N.; Sande, M.; Lo, B.; Katz, M.H. Persons found in their homes helpless or dead. N. Engl. J. Med. 1996, 334, 1710–1716.
  29. Alhammad, N.; Al-Dossari, H. Dynamic Segmentation for Physical Activity Recognition Using a Single Wearable Sensor. Appl. Sci. 2021, 11, 2633.
  30. Nurwulan, N.; Jiang, B.C. Window selection impact in human activity recognition. Int. J. Innov. Technol. Interdiscip. Sci. 2020, 3, 381–394.
  31. Bhattacharya, G.; Ghosh, K.; Chowdhury, A.S. An affinity-based new local distance function and similarity measure for kNN algorithm. Pattern Recognit. Lett. 2012, 33, 356–363.
  32. Biswas, N.; Chakraborty, S.; Mullick, S.S.; Das, S. A parameter independent fuzzy weighted k-nearest neighbor classifier. Pattern Recognit. Lett. 2018, 101, 80–87.
  33. Ward, J.A.; Lukowicz, P.; Gellersen, H.W. Performance metrics for activity recognition. ACM Trans. Intell. Syst. Technol. (TIST) 2011, 2, 1–23.
  34. Sprint, G.; Cook, D.J.; Weeks, D.L. Toward automating clinical assessments: A survey of the timed up and go. IEEE Rev. Biomed. Eng. 2015, 8, 64–77.
  35. Hsieh, C.-Y.; Huang, H.-Y.; Liu, K.-C.; Chen, K.-H.; Hsu, S.J.-P.; Chan, C.-T. Subtask Segmentation of Timed Up and Go Test for Mobility Assessment of Perioperative Total Knee Arthroplasty. Sensors 2020, 20, 6302.
  36. Huang, H.-Y.; Hsieh, C.-Y.; Liu, K.-C.; Hsu, S.J.-P.; Chan, C.-T. Fluid Intake Monitoring System Using a Wearable Inertial Sensor for Fluid Intake Management. Sensors 2020, 20, 6682.
  37. Chang, C.-Y.; Hsieh, C.-Y.; Huang, H.-Y.; Wu, Y.-T.; Chen, L.-C.; Chan, C.-T.; Liu, K.-C. Automatic Functional Shoulder Task Identification and Sub-Task Segmentation Using Wearable Inertial Measurement Units for Frozen Shoulder Assessment. Sensors 2021, 21, 106.
Figure 1. Diagram of an acceleration-based signal of a fall process.
Figure 2. Functional diagram of proposed multiphase identification algorithm.
Figure 3. Diagram of the experimental environment setting.
Figure 4. The sensor orientation, the wearing position of the sensor, and the protectors worn by the subject in the experiment. (a) Sensor orientation; (b) the sensor worn on the waist (lower back); (c,d) front and back views, respectively, of a subject wearing protectors.
Figure 5. Diagram of the proposed fragment modification algorithm: an example of modifying one (situation 1), two (situation 2), or three (situation 3) segments that differ from the previous and following segments. These misclassified segments are modified to match the previous or following segments.
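The fragment modification rule illustrated in Figure 5 can be sketched as follows. This is a simplified reading of the figure, not the authors' exact implementation: it only merges fragments flanked by identical labels on both sides, and absorbs them into the preceding segment's label.

```python
def modify_fragments(labels, max_len=3):
    """Relabel short runs (<= max_len windows) whose label differs from the
    identical labels on either side, as sketched in Figure 5. The fragment
    is merged into the preceding segment's label."""
    out = list(labels)
    i = 1
    while i < len(out) - 1:
        # Walk to the end of the run that differs from the preceding label.
        j = i
        while j < len(out) and out[j] != out[i - 1]:
            j += 1
        run = j - i
        if 0 < run <= max_len and j < len(out) and out[j] == out[i - 1]:
            out[i:j] = [out[i - 1]] * run  # absorb the misclassified fragment
            i = j
        else:
            i = j if j > i else i + 1
    return out

print(modify_fragments(["A", "A", "B", "A", "A"]))  # ['A', 'A', 'A', 'A', 'A']
print(modify_fragments(["A", "A", "B", "B", "B", "B", "A"]))  # run of 4: kept as-is
```

Runs longer than three windows are treated as genuine phase segments and left untouched, matching the three situations shown in the figure.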
Figure 6. An example of a process in the multiphase identification. The fragment modified results were compared against the ground truth in terms of TN, TP, FP and FN.
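The four evaluation measures can be derived from the TN, TP, FP and FN counts of Figure 6 with the standard formulas; the function name and the example counts below are hypothetical.

```python
def frame_metrics(tp, fp, fn, tn):
    """Frame-by-frame evaluation measures (in percent) from the counts
    obtained by comparing identification results against the ground
    truth, as in Figure 6. Sample-wise Jaccard reduces to TP/(TP+FP+FN)."""
    return {
        "sensitivity": 100 * tp / (tp + fn),
        "precision":   100 * tp / (tp + fp),
        "jaccard":     100 * tp / (tp + fp + fn),
        "accuracy":    100 * (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for one phase over a trial of 200 samples.
m = frame_metrics(tp=80, fp=10, fn=10, tn=100)
print(round(m["sensitivity"], 1), m["jaccard"])  # 88.9 80.0
```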
Figure 7. The average performance using different machine learning techniques.
Figure 8. The average performance using different window sizes.
Figure 9. The average performance of proposed algorithm using the kNN technique with each window size.
Table 1. Types and directions of fall.

| No. | Type | Direction | Trials |
|---|---|---|---|
| 1 | Fall while standing | Forward, backward, right lateral, and left lateral | 84 |
| 2 | Fall while standing up | Forward, backward, right lateral, and left lateral | 84 |
| 3 | Fall while sitting down | Forward, backward, right lateral, and left lateral | 84 |
| 4 | Fall while stooping down | Forward, backward, right lateral, and left lateral | 84 |
| 5 | Fall while walking | Forward, backward, right lateral, and left lateral | 84 |
| 6 | Fall while jumping | Forward, backward, right lateral, and left lateral | 84 |
| 7 | Fall while walking backward | Backward | 21 |
Table 2. Elapsed time of each fall type and direction (mean ± standard deviation, in seconds).

| Type | Forward (s) | Backward (s) | Right Lateral (s) | Left Lateral (s) |
|---|---|---|---|---|
| Fall while standing | 14.89 ± 2.03 | 14.91 ± 2.53 | 15.58 ± 2.32 | 15.35 ± 1.95 |
| Fall while standing up | 16.66 ± 3.01 | 17.00 ± 2.58 | 17.07 ± 1.83 | 17.33 ± 1.67 |
| Fall while sitting down | 16.44 ± 1.88 | 15.91 ± 1.85 | 15.90 ± 1.85 | 15.69 ± 1.72 |
| Fall while stooping down | 17.68 ± 1.35 | 20.56 ± 2.73 | 18.61 ± 2.13 | 19.07 ± 1.82 |
| Fall while walking | 15.52 ± 1.97 | 16.46 ± 1.93 | 15.65 ± 1.73 | 15.81 ± 1.55 |
| Fall while jumping | 19.04 ± 2.23 | 19.64 ± 2.62 | 19.11 ± 2.41 | 19.02 ± 2.36 |
| Fall while walking backward | -- | 19.17 ± 1.80 | -- | -- |
Table 3. The feature set for the multiphase classifier: F = (f1, f2, …, f64) ∈ R64.

| Features | Description |
|---|---|
| f1–f8 | mean, std, var, max, min, range, kurtosis and skewness of ax |
| f9–f16 | mean, std, var, max, min, range, kurtosis and skewness of ay |
| f17–f24 | mean, std, var, max, min, range, kurtosis and skewness of az |
| f25–f32 | mean, std, var, max, min, range, kurtosis and skewness of gx |
| f33–f40 | mean, std, var, max, min, range, kurtosis and skewness of gy |
| f41–f48 | mean, std, var, max, min, range, kurtosis and skewness of gz |
| f49–f56 | mean, std, var, max, min, range, kurtosis and skewness of AR |
| f57–f64 | mean, std, var, max, min, range, kurtosis and skewness of GR |
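The 64-dimensional feature vector of Table 3 (eight statistics over eight channels) can be computed per window as follows. This is a sketch with assumptions: the function name is hypothetical, AR and GR are taken to be the acceleration and angular-velocity resultants (vector magnitudes) supplied as the last two columns, and excess kurtosis is used.

```python
import numpy as np

def window_features(window):
    """Compute the eight statistics of Table 3 for each column of a window
    (columns assumed to be ax, ay, az, gx, gy, gz, AR, GR), giving
    8 stats x 8 channels = 64 features."""
    w = np.asarray(window, dtype=float)
    mean, std, var = w.mean(axis=0), w.std(axis=0), w.var(axis=0)
    mx, mn = w.max(axis=0), w.min(axis=0)
    rng = mx - mn
    z = (w - mean) / np.where(std == 0, 1, std)  # guard constant channels
    kurt = (z ** 4).mean(axis=0) - 3             # excess kurtosis
    skew = (z ** 3).mean(axis=0)
    return np.concatenate([mean, std, var, mx, mn, rng, kurt, skew])

# Hypothetical 24-sample window of 8 channels.
gen = np.random.default_rng(0)
feats = window_features(gen.standard_normal((24, 8)))
print(feats.shape)  # (64,)
```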
Table 4. The performance results of the multiphase identification algorithm using machine learning techniques versus window sizes (unit: %).

| Technique | Measure | 8 Samples (0.0625 s) | 16 Samples (0.125 s) | 24 Samples (0.1875 s) | 32 Samples (0.25 s) | 40 Samples (0.3125 s) | Overall |
|---|---|---|---|---|---|---|---|
| AdaBoost | Sensitivity | 73.25 | 75.07 | 77.31 | 78.06 | 77.87 | 76.31 |
| | Precision | 83.82 | 85.81 | 86.55 | 85.31 | 83.04 | 84.91 |
| | Jaccard coefficient | 65.23 | 67.35 | 69.73 | 70.26 | 69.83 | 68.48 |
| | Accuracy | 87.85 | 88.57 | 89.30 | 89.54 | 89.77 | 89.00 |
| SVM | Sensitivity | 72.47 | 74.44 | 75.70 | 77.62 | 78.22 | 75.69 |
| | Precision | 78.40 | 76.61 | 77.92 | 79.07 | 79.30 | 78.26 |
| | Jaccard coefficient | 61.81 | 61.44 | 63.05 | 64.91 | 65.37 | 63.32 |
| | Accuracy | 84.45 | 83.83 | 84.80 | 85.86 | 86.22 | 85.03 |
| kNN | Sensitivity | 81.17 | 82.46 | 83.09 | 82.65 | 81.49 | 82.17 |
| | Precision | 86.02 | 86.70 | 86.53 | 85.56 | 83.91 | 85.74 |
| | Jaccard coefficient | 72.33 | 73.83 | 74.47 | 74.05 | 72.86 | 73.51 |
| | Accuracy | 89.32 | 90.07 | 90.56 | 90.76 | 90.69 | 90.28 |
| DT | Sensitivity | 81.84 | 82.42 | 82.84 | 81.61 | 80.92 | 81.93 |
| | Precision | 83.57 | 83.33 | 83.51 | 82.19 | 81.28 | 82.78 |
| | Jaccard coefficient | 72.46 | 72.51 | 73.03 | 71.64 | 70.93 | 72.12 |
| | Accuracy | 89.75 | 89.56 | 89.86 | 89.36 | 89.26 | 89.56 |
| NB | Sensitivity | 65.80 | 67.44 | 68.07 | 66.92 | 64.76 | 66.60 |
| | Precision | 72.77 | 73.01 | 73.41 | 73.26 | 71.31 | 72.75 |
| | Jaccard coefficient | 53.55 | 54.93 | 55.64 | 55.56 | 54.36 | 54.81 |
| | Accuracy | 80.76 | 81.24 | 81.50 | 81.72 | 81.67 | 81.38 |
| Overall | Sensitivity | 74.91 | 76.37 | 77.40 | 77.37 | 76.65 | 76.54 |
| | Precision | 80.92 | 81.09 | 81.59 | 81.08 | 79.77 | 80.89 |
| | Jaccard coefficient | 65.08 | 66.01 | 67.19 | 67.28 | 66.67 | 66.45 |
| | Accuracy | 86.42 | 86.65 | 87.20 | 87.45 | 87.52 | 87.05 |
Table 5. The performance results of each phase using the kNN technique with different window sizes (unit: %).

| Window Size | Measure | Initial-Static | Pre-Fall | Free-Fall | Impact | Resting | Recovery | Ending-Static | Overall |
|---|---|---|---|---|---|---|---|---|---|
| 8 samples (0.0625 s) | Sensitivity | 99.70 | 62.96 | 51.01 | 79.64 | 96.33 | 80.12 | 98.46 | 81.17 |
| | Precision | 90.26 | 93.18 | 67.03 | 87.00 | 84.12 | 88.19 | 92.39 | 86.02 |
| | Jaccard coefficient | 90.03 | 60.17 | 40.20 | 71.07 | 81.50 | 72.31 | 91.05 | 72.33 |
| | Accuracy | -- | -- | -- | -- | -- | -- | -- | 89.32 |
| 16 samples (0.125 s) | Sensitivity | 99.70 | 64.34 | 54.42 | 81.21 | 96.74 | 82.38 | 98.45 | 82.46 |
| | Precision | 91.38 | 94.75 | 67.63 | 86.49 | 86.07 | 87.68 | 92.88 | 86.70 |
| | Jaccard coefficient | 91.14 | 62.11 | 42.73 | 71.94 | 83.62 | 73.79 | 91.51 | 73.83 |
| | Accuracy | -- | -- | -- | -- | -- | -- | -- | 90.07 |
| 24 samples (0.1875 s) | Sensitivity | 99.54 | 66.02 | 54.89 | 82.07 | 96.98 | 83.80 | 98.33 | 83.09 |
| | Precision | 92.50 | 95.73 | 64.50 | 85.12 | 87.31 | 87.52 | 93.03 | 86.53 |
| | Jaccard coefficient | 92.12 | 64.10 | 42.08 | 71.68 | 84.97 | 74.81 | 91.55 | 74.47 |
| | Accuracy | -- | -- | -- | -- | -- | -- | -- | 90.56 |
| 32 samples (0.25 s) | Sensitivity | 99.29 | 67.71 | 49.88 | 81.87 | 96.86 | 84.70 | 98.24 | 82.65 |
| | Precision | 93.44 | 96.18 | 57.93 | 82.93 | 88.07 | 87.49 | 92.89 | 85.56 |
| | Jaccard coefficient | 92.83 | 65.91 | 37.11 | 70.01 | 85.60 | 75.52 | 91.35 | 74.05 |
| | Accuracy | -- | -- | -- | -- | -- | -- | -- | 90.76 |
| 40 samples (0.3125 s) | Sensitivity | 99.05 | 68.82 | 41.97 | 80.79 | 96.34 | 85.22 | 98.28 | 81.49 |
| | Precision | 94.24 | 96.01 | 48.38 | 80.30 | 88.51 | 87.25 | 92.66 | 83.91 |
| | Jaccard coefficient | 93.40 | 66.91 | 29.82 | 67.38 | 85.61 | 75.75 | 91.15 | 72.86 |
| | Accuracy | -- | -- | -- | -- | -- | -- | -- | 90.69 |
Table 6. A summary of previous studies on sensor types, techniques, and provided fall-related information.

| Article (Year) [Reference] | Sensor Type | Technique (Method) | Provided Fall-Related Information |
|---|---|---|---|
| Becker et al. (2012) [17] | Inertial sensor | Manual labeling | Starting and ending points of fall phases |
| Robinovitch et al. (2013) [16] | Camera | Manual labeling | Causes of falling; activities before the fall event |
| Hsieh et al. (2018) [20] | Inertial sensor | Machine learning (SVM) | Fall directions (97.34% accuracy) |
| Hussain et al. (2019) [14] | Inertial sensor | Machine learning (kNN, SVM and random forest) | Fall types (96.82% accuracy using random forest classifier) |
| Clemente et al. (2019) [8] | Seismic sensor | Machine learning (SVM) | Fall positions (localization error smaller than 0.28 m) |
| This study | Inertial sensor | Machine learning (SVM, kNN, NB, DT and AdaBoost) | Starting and ending points of fall phases; duration of fall phases |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
