Article

Fall Direction Detection in Motion State Based on the FMCW Radar

School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(11), 5031; https://doi.org/10.3390/s23115031
Submission received: 28 April 2023 / Revised: 20 May 2023 / Accepted: 23 May 2023 / Published: 24 May 2023
(This article belongs to the Section Radar Sensors)

Abstract

Accurately detecting falls and providing a clear fall direction can greatly assist medical staff in promptly developing rescue plans and reducing secondary injuries during transportation to the hospital. To facilitate portability and protect people's privacy, this paper presents a novel method for detecting the fall direction during motion using the FMCW radar. We analyze the fall direction in motion based on the correlation between different motion states. The range–time (RT) features and Doppler–time (DT) features of the person from the motion state to the fallen state were obtained using the FMCW radar. We analyzed the different features of the two states and used a dual-branch convolutional neural network (CNN) to detect the falling direction of the person. To improve the reliability of the model, this paper presents a pattern feature extraction (PEF) algorithm that effectively eliminates noise and outliers in RT maps and DT maps. The experimental results show that the proposed method identifies different falling directions with an accuracy of 96.27%, which can improve the efficiency of rescue.

1. Introduction

Fall accidents pose a major threat to the health and life of the elderly. When a fall accident occurs, the elderly person may be seriously injured or lose consciousness. It is necessary to detect falls in time and provide detailed fall information to help medical staff quickly formulate a treatment plan [1,2,3].
The directions of falls in motion can be categorized into four types: forward, backward, left, and right. Different fall directions can result in injuries to different parts of the body [4,5]. In reality, elderly individuals often lose consciousness and roll unconsciously after a fall. When medical staff arrive at the scene, identifying the precise location of the injury can be challenging and time-consuming, which delays prompt and effective treatment. Therefore, it is essential to conduct a specialized study on this subject. By detecting the precise direction of the fall, paramedics receive valuable information to promptly formulate an effective treatment plan [6,7,8].
Researchers have explored different sensors for the detection and warning of fall accidents, such as wearable sensors and vision sensors [9,10,11,12,13,14,15]. In [16,17], the researchers used wearable sensors to detect the fall direction and determine the location of the fracture. In [18], the researchers proposed a relationship between the direction and severity of a fall during fall detection. They classified falls by using an accelerometer-based multi-classifier and trained five different models. In [19], the researchers investigated the correlation between fall direction and injury severity. They used the SisFall dataset and a support vector machine classifier to determine the direction and impact of falls. In [20], a further study of fall and non-fall actions was conducted, and a classification method was developed for falls of different directions and severities in the paper, which also included four common non-fall actions.
In [21], the researchers used camera equipment to extract gait differences between different movements to identify falls. They designed a multimodal feature fusion model that addresses the dependencies between spatial and temporal features in fall detection. The detection accuracy of the model was 95.80%. In [22], the researchers used camera equipment to detect the fall event and the posture after the fall. They designed a multi-scale skip connection segmentation network to obtain the human body contour from the camera. The accuracy of the model was 97.68%. In [23], the researchers utilized camera equipment to identify various human activities and enhanced the conventional long short-term memory network by introducing a new deep convolutional long short-term memory network (ConvLSTM). They determined the position of human bones from the images and combined motion features to create new features. The model achieved a detection accuracy of 98.89%. In [24], the researchers utilized a camera device to detect fall events. They employed the pose estimation method to acquire human joint information and used SVM for classification. Notably, this was the first instance of a visual sensor being employed to detect the direction of falls.
However, wearable sensors have limitations such as a high false alarm rate and inconvenience in terms of portability. Vision sensors have strict lighting requirements and privacy leakage problems, particularly in private areas such as bathrooms or bedrooms [25,26]. Therefore, we selected the FMCW radar sensor as the data acquisition device [27]. The FMCW radar offers all-weather monitoring, robustness to lighting changes, inherent privacy protection, and the ability to perceive changes in speed and range.
The researchers analyzed the influence of human motion on the echo signal of the FMCW radar from different angles, such as the range–time (RT) feature, the Doppler–time (DT) feature, and the range–Doppler (RD) feature [28,29,30]. In [31], the researchers extracted the phase information from the intermediate frequency signal and obtained the distance change between the human head and eyes through this information. This method has a good effect on detecting small movements of the human body, such as blinking and micro head movements. Although a single feature can effectively identify human motion in most scenes, there are still problems with similar or indistinct features.
To address the issue of inadequate information in a single feature, researchers used a multi-feature fusion method to detect human motion. In [32], Branka Jokanovic et al. combined the time–frequency, time–range, and range–Doppler features. They used a deep neural network for fall detection. The experimental results showed that certain features were more beneficial than others in identifying specific motions. This further strengthens the advantages of motion classification that utilizes multiple features. In [33], Yuh-Shyan Chen et al. proposed the continuous human motion recognition (CHMR) algorithm. To distinguish highly similar human actions, they combined 2D features and 3D features, such as range, Doppler, angle, range–Doppler–time, and range–angle–time. In [34], Baris Erol et al. proposed a multi-linear subspace fall detection method. They used the slow time, fast time, and Doppler frequency features to construct a data cube for fall detection.
Subsequently, in [35], Feng Jin and others collected the point cloud and centroid information of the tested person and used a Hybrid Variational RNN Autoencoder for fall detection. In [36], Xingshuai Qiao et al. proposed a radar point clouds model. They used range–time, Doppler–time, and Doppler–range to construct an RPC cube and used a two-layer CMPCANet to detect fall events. In [37], Ahmed Zoha et al. used the micro-Doppler features of the FMCW radar to detect falls. In addition, they used a variety of machine learning algorithms and transfer learning algorithms to classify human actions.
Although the above studies have shown that fall events can be effectively identified through the various features of the FMCW radar, there has been less work on detecting and identifying the direction of falls in motion. Based on the aforementioned analysis, this study utilizes the FMCW radar to detect the direction of falls in motion among elderly individuals. We combine range–time features and Doppler–time features and use a dual-branch CNN to achieve fall direction detection and recognition. To improve the model’s reliability and reduce the system’s computational cost, this paper proposes a pattern feature extraction (PEF) algorithm to eliminate noise and outliers in the environment.
The rest of this paper is structured as follows. Section 2 introduces the materials and methods. Section 3 introduces the experimental platform. Section 4 provides the experimental results. Section 5 discusses related fall direction detection methods. Section 6 presents the conclusions, summarizing the research conducted and its implications for future work in the field.

2. Materials and Methods

2.1. Fall Direction Detection System

The fall direction detection system we designed is depicted in Figure 1. We collect human motion information using the FMCW radar and apply a signal processing algorithm to the raw radar data to obtain the range change and Doppler change from the motion state to the fallen state. We then use the pattern feature extraction (PEF) algorithm to remove noise and outliers from the environment and feed the processed data into the dual-branch CNN to identify the direction of the fall.

2.2. Radar Raw Data Processing

When the FMCW radar is working, the transmit and receive antennas are switched on synchronously. The frequencies of the transmitted and received signals are depicted in Figure 2. The transmitted signal can be written as
$x_T(t) = A_T \cos\!\left( 2\pi f_c t + \pi \frac{B}{T_c} t^2 + \varphi(t) \right),$
where $A_T$ is the transmitting power, $\varphi(t)$ is the phase noise of the transmitter, $f_c$ is the initial frequency of the chirp, $B$ is the bandwidth of the chirp, and $T_c$ is the duration of the chirp.
The point target within the effective range of the radar will generate a reflected signal which the radar receives after a brief delay. The delay can be written as
$t_d = 2d/c,$
where d is the distance of the measured target. The signal that the radar receives can be written as:
$x_R(t) = \alpha A_T \cos\!\left( 2\pi f_c (t - t_d) + \pi \beta (t - t_d)^2 \right),$
where $\alpha$ is the amplitude factor accounting for attenuation by the environment and scattering by the object. The beat frequency $f_b$ can be written as:
$f_b = \beta t_d,$
where $\beta$ is the slope of the chirp signal. We obtain the baseband I/Q signals and then the complex baseband data $Z(n)$ through the A/D converter.
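To make the relationship between target range and beat frequency concrete, the following Python sketch simulates the complex baseband beat signal of a single point target and recovers its range with a fast-time FFT. The chirp slope and sample rate are taken from Table 2, the target distance is a hypothetical value, and the ideal noise-free signal model is an illustrative assumption rather than the paper's processing chain.

```python
import numpy as np

# Minimal sketch: beat signal of one stationary point target.
c = 3e8                 # speed of light (m/s)
beta = 50.259e12        # chirp slope (Hz/s), 50.259 MHz/us from Table 2
fs = 5e6                # ADC sample rate (Hz), from Table 2
n_samples = 256         # samples per chirp
d_true = 3.0            # hypothetical target distance (m)

t = np.arange(n_samples) / fs
t_d = 2 * d_true / c                    # round-trip delay
f_b = beta * t_d                        # beat frequency f_b = beta * t_d
z = np.exp(1j * 2 * np.pi * f_b * t)    # ideal complex baseband beat signal

# Range FFT over fast time: the peak bin maps back to distance d = c*f_b/(2*beta).
spectrum = np.abs(np.fft.fft(z))
f_est = np.argmax(spectrum[: n_samples // 2]) * fs / n_samples
d_est = c * f_est / (2 * beta)
print(f"true range: {d_true:.2f} m, estimated range: {d_est:.2f} m")
```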
When all chirps are ordered by time, a matrix of radar raw data is generated that captures the critical motion information. The horizontal axis of the matrix represents the slow time dimension, and the vertical axis represents the fast time dimension. As shown in Figure 3, we used the STFT algorithm on the slow time dimension of $Z(n)$ to obtain the DT features. The STFT can be expressed as
$\mathrm{STFT}\{z(t)\} = s(\tau, f_d) = \int z(t)\, \omega(t - \tau)\, e^{-j 2\pi f_d t}\, dt,$
where $z(t)$ is the radar echo signal, $\omega(t)$ is the sliding window function, and $f_d$ represents the Doppler frequency. The distance equation of the measured object can be written as:
$d = \frac{c f_b}{2\beta}.$
We used the DFT algorithm on the fast time dimension of $Z(n)$ to obtain the RT features. The DFT can be written as:
$\mathrm{DFT}_N\{x(n)\} = \sum_{n=0}^{N-1} x(n)\, e^{-j 2\pi k n / N},$
where $x(n)$ is the original signal.
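As an illustration of this processing chain, the sketch below builds an RT map with a fast-time DFT and a DT map with a slow-time STFT from one frame of baseband data. The window length, the overlap, and the choice of the strongest range bin as the STFT input are assumptions made for the sketch, not the paper's exact settings.

```python
import numpy as np
from scipy.signal import stft

def rt_dt_maps(Z, prf, nperseg=64):
    """Sketch of RT/DT feature extraction from one frame of baseband data.

    Z   : complex array (n_chirps, n_samples); rows = slow time, cols = fast time.
    prf : chirp repetition frequency (Hz), i.e., the slow-time sample rate.
    """
    # Range (fast-time) DFT per chirp -> range-time map over slow time.
    rt = np.fft.fft(Z, axis=1)                         # (n_chirps, n_range_bins)
    rt_map = 20 * np.log10(np.abs(rt) + 1e-12)         # power in dB

    # STFT along slow time at the strongest range bin -> Doppler-time map.
    strongest_bin = np.argmax(np.abs(rt).sum(axis=0))
    slow_time_signal = rt[:, strongest_bin]
    f_d, t_seg, S = stft(slow_time_signal, fs=prf, nperseg=nperseg,
                         noverlap=nperseg // 2, return_onesided=False)
    dt_map = 20 * np.log10(np.abs(np.fft.fftshift(S, axes=0)) + 1e-12)
    return rt_map, dt_map
```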
To improve the reliability of the experiment, we merged the four falling directions when close to the radar and the four falling directions when away from the radar into four falling directions in motion. Figure 4 shows the range–time map of different fall states. In addition, Figure 5 shows the Doppler–time map of different fall states.
From the range–time diagram in Figure 4, we can observe that when a person falls forward or to the left during motion, the distance from the motion state to the fallen state follows a similar trend. When approaching the radar, the distance between the person and the radar increases. In addition, when moving away from the radar, the distance between the person and the radar decreases. However, the change in distance for a forward fall is greater than that for a left fall. On the other hand, when a person falls backwards or to the right during motion, the distance from the motion state to the fall state changes in the opposite direction. When approaching the radar, the distance between the person and the radar first decreases and then increases. In addition, when moving away from the radar, the distance between the person and the radar first increases and then decreases. However, the change in distance for falling backwards is greater than for falling to the right. Therefore, we can identify different fall directions based on the range–time features.
From the Doppler–time graph in Figure 5, we can observe that when a person falls forward or to the left during motion, the Doppler change trend from the moving state to the falling state is the same. When approaching the radar, the Doppler information is negative. In addition, when far away from the radar, the Doppler information is positive. However, the Doppler change for a forward fall is greater than that for a left fall. On the other hand, when a person falls backward or to the right during motion, the Doppler change from the motion state to the fallen state is the opposite. When approaching the radar, the Doppler information is first negative and then positive. In addition, when far away from the radar, the Doppler information is first positive and then negative. However, the Doppler change for a backward fall is greater than that for a rightward fall. Therefore, we can identify different falling directions based on the Doppler-time features.
In summary, both the range–time features and the Doppler–time features can differentiate the direction of falls. However, to enhance the reliability of the model, we adopted a fusion of the two features for the recognition of the fall direction.

2.3. Pattern Feature Extraction

We can observe that there is a lot of noise and many outliers in the RT map and the DT map, which cause great interference in identifying the falling direction. Therefore, we propose a pattern feature extraction (PEF) algorithm to lessen computing costs and enhance the reliability of the model. Figure 6 shows the algorithm flowchart.
Different power values are chosen based on different feature maps. Regions in the feature matrix with high power values are selected. Points with power values higher than the threshold are considered valid feature regions, and the threshold formula can be written as:
$P_{th} = (1 - a)\,\bar{P} + a\left[ \max(P) + \min(P) \right],$
where $a \in (0, 1)$ is an adaptive parameter, and $\max(P)$, $\min(P)$, and $\bar{P}$ denote the highest, lowest, and average power values in the feature map, respectively. The power values vary from one feature map to another, so choosing an appropriate threshold for each feature map removes the ambient noise more effectively.
After thresholding, some discrete outliers remain in the feature map. These outliers are removed with a Hampel filter, whose calculation can be written as:
$m_k = \mathrm{median}\left( x_{k-K}, \ldots, x_k, \ldots, x_{k+K} \right),$
$S_k = 1.4826 \times \mathrm{median}\left( \left| x_{k-j} - m_k \right| \right), \quad j \in [-K, K],$
$y_k = \begin{cases} x_k, & \left| x_k - m_k \right| \le n_{th} S_k \\ m_k, & \left| x_k - m_k \right| > n_{th} S_k \end{cases}$
where $\mathrm{median}(\cdot)$ denotes the median over the sliding window and $n_{th}$ is the rejection threshold.
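A compact Python sketch of the two PEF steps, the adaptive power threshold followed by the Hampel filter, is given below. The parameter values a, K, and n_th, and the choice to floor sub-threshold points to the minimum power, are illustrative assumptions rather than the values used in the paper.

```python
import numpy as np

def pattern_feature_extraction(feature_map, a=0.3, K=5, n_th=3.0):
    """Sketch of the PEF idea: adaptive power threshold, then a Hampel filter."""
    # Step 1: adaptive threshold P_th = (1 - a) * mean + a * (max + min).
    p_mean, p_max, p_min = feature_map.mean(), feature_map.max(), feature_map.min()
    p_th = (1 - a) * p_mean + a * (p_max + p_min)
    masked = np.where(feature_map >= p_th, feature_map, p_min)   # suppress weak points

    # Step 2: Hampel filter along the time axis to remove isolated outliers.
    out = masked.copy()
    n_rows, n_cols = masked.shape
    for r in range(n_rows):
        row = masked[r]
        for k in range(n_cols):
            lo, hi = max(0, k - K), min(n_cols, k + K + 1)
            window = row[lo:hi]
            m_k = np.median(window)
            s_k = 1.4826 * np.median(np.abs(window - m_k))
            if np.abs(row[k] - m_k) > n_th * s_k:
                out[r, k] = m_k                                   # replace outlier
    return out
```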
Figure 7a,d shows the unprocessed RT map and DT map; the areas marked by the red circles are noise from the surrounding environment. Figure 7b,e shows the RT map and the DT map after processing with the power-threshold algorithm, with less noise and stronger features; the red circles in Figure 7b,e mark the remaining outliers. Figure 7c,f shows the results after the Hampel filter, which are more suitable as input for the CNN and improve the reliability of the model.

2.4. Dual-Branch CNN

The convolutional neural network (CNN) uses convolution operations to extract a wide range of input features, such as color, edges, and corners, which serve as the basis for classification. Because convolution reduces the cost of preprocessing, CNNs are widely used for classification tasks across many domains. To identify different falling directions, it is necessary to detect the changes in distance and Doppler before and after a person falls. Therefore, this experiment uses the RT map and the DT map as the inputs of the dual-branch CNN. The dual-branch CNN used in the experiments is deliberately simple and traditional; we aim to use the simplest network structure that can recognize the direction of the fall.
The network fuses motion information from the RT map and the DT map. Figure 8 shows the network model of the dual-branch CNN, and Table 1 shows the parameter settings of each convolutional layer, pooling layer, and fully connected layer.
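A minimal PyTorch sketch of such a dual-branch CNN, following the layer sizes in Table 1, is shown below. The 54 × 54 input size is back-calculated from the 1 × 1152 flatten size, and the ReLU activations and the interpretation of the final 1 × 2304 layer as a fully connected classifier over five fall states are assumptions; this is not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One branch of the dual-branch CNN, following the layer sizes in Table 1."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(16, 32, 3), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(32, 64, 3), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3), nn.ReLU(),
            nn.Flatten(),                     # 128 x 3 x 3 = 1152 for 54 x 54 inputs
        )

    def forward(self, x):
        return self.features(x)

class DualBranchCNN(nn.Module):
    """Fuses the PEF-RT and PEF-DT maps and classifies the fall state."""
    def __init__(self, num_classes=5):        # five fall states, assumed
        super().__init__()
        self.rt_branch = Branch()
        self.dt_branch = Branch()
        self.classifier = nn.Linear(2 * 1152, num_classes)

    def forward(self, rt_map, dt_map):
        fused = torch.cat([self.rt_branch(rt_map), self.dt_branch(dt_map)], dim=1)
        return self.classifier(fused)          # logits; pair with cross-entropy loss

if __name__ == "__main__":
    model = DualBranchCNN()
    rt = torch.randn(8, 3, 54, 54)             # hypothetical PEF-RT batch
    dt = torch.randn(8, 3, 54, 54)             # hypothetical PEF-DT batch
    print(model(rt, dt).shape)                 # torch.Size([8, 5])
```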

3. System Implementation and Experiment Platform

3.1. Experiment Platform

The FMCW radar in the experiment is an IWR6843 produced by Texas Instruments (Dallas, TX, USA) [38]. The FMCW radar operates from 60 to 64 GHz. In our experiments, the radar is started at 60 GHz with a chirp slope of 33 MHz/μs. The number of sampling points selected on each chirp is 256. The other radar parameters are shown in Table 2.
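For reference, the short calculation below derives the range resolution and maximum unambiguous range implied by the Table 2 chirp configuration. The values are taken from the table as listed; note that the text above quotes a 33 MHz/μs slope, so these numbers should be read as illustrative rather than definitive.

```python
# Sketch: quantities derived from the Table 2 chirp configuration.
c = 3e8                      # speed of light, m/s
bandwidth = 3.015e9          # chirp bandwidth, Hz
slope = 50.259e12            # chirp slope, Hz/s (50.259 MHz/us)
fs = 5e6                     # ADC sample rate, Hz
n_samples = 256              # samples per chirp

range_resolution = c / (2 * bandwidth)      # ~0.05 m
max_range = fs * c / (2 * slope)            # ~14.9 m (complex sampling assumed)
sampled_sweep_time = n_samples / fs         # ~51.2 us, within the 60 us ramp

print(f"range resolution: {range_resolution * 100:.1f} cm")
print(f"max unambiguous range: {max_range:.1f} m")
print(f"sampled sweep time: {sampled_sweep_time * 1e6:.1f} us")
```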

3.2. Experimental Environment

The experimental environment setup is shown in Figure 9. The radar sensor is installed approximately 6 m to the front left of the volunteer, at a height of about 1.5 m above the ground, and inclined downward at 15° to ensure that the radar can collect the overall motion information. The experiment considers five fall states: forward fall in motion, backward fall in motion, left fall in motion, right fall in motion, and fall in non-motion.
Figure 10 shows the data collection scene for some actions in the experiment. The blue arrow in the figure represents the motion direction of the volunteers, while the yellow arrow represents the fall direction of the volunteers.

3.3. Data Collection

Six volunteers participated in the experiment. The FMCW radar collected each action for 10 s. Each volunteer performed five actions, and each action was performed 80–100 times. A total of 573 pieces of data were collected. The experiment used the data sample generation method in [39], with the radar mode set to one transmission and four receptions. Through this method, we expanded the dataset to 7500 samples. Figure 11 shows the proportion of data collected for each action; each color corresponds to a fall direction category, and each category contains 1500 samples.

4. Results

This experiment uses the PyTorch framework. The learning rate of the CNN was 0.0005, the StepLR learning-rate update strategy was used, and the number of training epochs was 50. Figure 12 shows the changes in accuracy and loss of the dual-branch CNN during the training process. It can be observed that after 10 epochs, the network model tended toward a stable state, and after 20 epochs, the model's accuracy reached its peak.
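A minimal training-loop sketch matching this setup (learning rate 0.0005, StepLR schedule, 50 epochs) is given below. The optimizer choice (Adam), the StepLR step size and decay factor, and the dummy data loader are assumptions, since the text does not specify them; DualBranchCNN refers to the sketch in Section 2.4.

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import StepLR
from torch.utils.data import DataLoader, TensorDataset

# Dummy tensors standing in for the PEF-RT / PEF-DT dataset (shapes assumed).
dataset = TensorDataset(torch.randn(64, 3, 54, 54),
                        torch.randn(64, 3, 54, 54),
                        torch.randint(0, 5, (64,)))
train_loader = DataLoader(dataset, batch_size=16, shuffle=True)

model = DualBranchCNN()                                      # sketch from Section 2.4
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)    # optimizer choice assumed
scheduler = StepLR(optimizer, step_size=10, gamma=0.5)       # step size/gamma assumed

for epoch in range(50):                                      # 50 epochs, as in the text
    model.train()
    for rt_map, dt_map, label in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(rt_map, dt_map), label)
        loss.backward()
        optimizer.step()
    scheduler.step()
```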
In order to comprehensively evaluate the efficacy and accuracy of our proposed PEF algorithm, we utilized both the unaltered original dataset and the refined PEF dataset to detect the fall direction. Figure 13a shows the confusion matrix using the original dataset as input in the dual-branch CNN. In addition, Figure 13b shows the confusion matrix using the PEF dataset as input in the dual-branch CNN.
The experimental results reveal that, when using the original dataset, the network's recognition accuracy rates for forward fall in motion, backward fall in motion, left fall in motion, right fall in motion, and fall in non-motion were 92.40%, 91.73%, 89.73%, 89.74%, and 95.80%, respectively. When using the PEF dataset, the corresponding recognition accuracy rates were 96.02%, 95.49%, 94.44%, 94.84%, and 98.20%. The dual-branch CNN built in this experiment has an overall recognition rate of 96.27% for the direction of the fall. The proposed PEF algorithm improves the accuracy of the model by 3–4%.
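The per-class rates above are the true-positive rates (recalls) read from the confusion matrices in Figure 13. The short sketch below shows how per-class TPRs and the overall accuracy are computed from a confusion matrix; the matrix values are made-up placeholders, not the paper's data.

```python
import numpy as np

labels = ["forward", "backward", "left", "right", "non-motion"]
cm = np.array([                       # rows = true class, cols = predicted class
    [288,   4,   5,   2,   1],
    [  3, 286,   4,   5,   2],
    [  5,   4, 283,   6,   2],
    [  3,   5,   5, 285,   2],
    [  1,   2,   1,   1, 295],
])
tpr = np.diag(cm) / cm.sum(axis=1)    # per-class recall (TPR)
acc = np.trace(cm) / cm.sum()         # overall accuracy
for name, r in zip(labels, tpr):
    print(f"TPR {name:>10}: {r:.2%}")
print(f"overall accuracy: {acc:.2%}")
```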
Forward falls and left falls show the same trend in distance change, but the magnitude of the change differs. In order to analyze the influence of the height of the tested person on the experimental results, we conducted experiments using the respective datasets of the six volunteers. Table 3 lists the heights of the tested persons and the accuracy of the corresponding models.
The experimental results show that the taller the subject is, the easier it is to recognize the direction of the fall. This may be because the height of the person affects the distance change during the fall. The larger the distance variation, the more distinct features we acquire.
In order to further evaluate the benefit of the PEF algorithm, we compared CNNs with different structures on both the PEF dataset and the original dataset, including the dual-branch CNN, LeNet [40], AlexNet [41], and VGG [42]; the DT map or the PEF-DT map was used as the input to the single-branch networks. Table 4 and Table 5 show the fall direction detection results of the different CNNs.
The experimental results show that the fall direction recognition accuracy of the dual-branch CNN is higher than that of the three single-branch networks. This is because its input combines distance features and Doppler features, which contain more human motion information. When using the PEF dataset, LeNet, AlexNet, and VGG also show better precision and recall. These results indicate that the PEF algorithm performs well across CNNs with different structures.

5. Discussion

In this section, we review related work on fall direction detection and discuss its methods and results. Table 6 shows the results of using different sensors and models to identify different fall directions.
In [16], Farhad Hossain et al. utilized a single 3D accelerometer to detect forward fall, backward fall, left fall, right fall, and ADL. They employed several machine learning algorithms for analysis and demonstration. The experimental results showed that the classification results using the SVM algorithm were the most accurate. The accuracy of the model was 94.54%. In [17], Abbas Shah Syed et al. used 3D accelerometers and gyroscopes to detect forward fall, backward fall, and lateral fall. In addition, they divided the fall action in each direction into two categories: hard fall and soft fall. They proposed the XGB-CNN network. The accuracy of the model was 90.02%.
In [18], Ryan M. Gibson et al. employed accelerometers to identify seven types of fall events, including forward fall, backward fall, left fall, right fall, hard fall, soft fall, and fall in any situation. They used the CVM algorithm to conduct experiments, and the experimental results showed that the accuracy of the model was 98.3%. In [19], Abbas Shah Syed et al. utilized the IMU-sensor-collected SisFall dataset to determine the direction of falls. They used the inertial measurement sensor and SVM algorithm to identify falls in three directions: forward, backward, and lateral. The model accuracy for fall detection was 90.4%.
In [24], Chunmiao Yuan et al. introduced a novel video-based method for detecting the direction of falls. They calculated human body joint points using body posture and identified four types of falls (forward, backward, left, and right) using the SVM algorithm. The model achieved an accuracy of 97.52% on the Le2i dataset.
However, there are some limitations to this paper. Specifically, we only employed a basic dual-branch convolutional neural network to detect the direction of falls. In future research, we aim to develop a better-performing network architecture that improves both the accuracy and the overall performance of the system.

6. Conclusions

This paper presents a fall direction detection method in motion based on the FMCW radar. In the study, we extract the range–time features and Doppler–time features of people in motion and fall states through the FMCW radar to distinguish different fall directions. We propose a PEF algorithm to reduce noise and outliers in feature maps. We built a dual-branch CNN to detect different fall directions and analyzed the performance of the PEF algorithm in different network structures. This method achieves high accuracy in identifying different fall directions.
In future work, we will collect fall direction data from volunteers of different ages to make our dataset more diverse. Additionally, we plan to indicate the likely injured body part and the corresponding treatment method after detecting the direction of the fall. This will enable elderly individuals to receive quick and effective treatment, reducing the risk of secondary injuries.

Author Contributions

Conceptualization, X.L. and L.M.; methodology, L.M.; software, L.M.; validation, L.M.; formal analysis, L.M. and G.L.; investigation, L.M. and Y.C.; resources, G.L.; data curation, L.M. and Y.C.; writing—original draft preparation, L.M.; writing—review and editing, X.L. and L.M.; supervision, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to acknowledge the technical support of the Changchun University of Science and Technology (Jilin, China).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Close, J.; Ellis, M.; Hooper, R.; Glucksman, E.; Jackson, S.; Swift, C.J.T. Prevention of falls in the elderly trial (PROFET): A randomised controlled trial. Lancet 1999, 353, 93–97. [Google Scholar] [CrossRef] [PubMed]
  2. Tinetti, M.E.; Speechley, M.J. Prevention of falls among the elderly. N. Engl. J. Med. 1989, 320, 1055–1059. [Google Scholar] [PubMed]
  3. Cummings, S.R.; Nevitt, M.C.; Kidd, S.J. Forgetting falls: The limited accuracy of recall of falls in the elderly. J. Am. Geriatr. Soc. 1988, 36, 613–616. [Google Scholar] [CrossRef] [PubMed]
  4. Gratza, S.K.; Chocano-Bedoya, P.O.; Orav, E.; Fischbacher, M.; Freystätter, G.; Theiler, R.; Egli, A.; Kressig, R.; Kanis, J.A.; Bischoff-Ferrari, H.A. Influence of fall environment and fall direction on risk of injury among pre-frail and frail adults. Osteoporos. Int. 2019, 30, 2205–2215. [Google Scholar] [CrossRef]
  5. Lai, C.-F.; Chang, S.-Y.; Chao, H.-C.; Huang, Y.-M.J. Detection of cognitive injured body region using multiple triaxial accelerometers for elderly falling. IEEE Sens. J. 2010, 11, 763–770. [Google Scholar] [CrossRef]
  6. Bogner, J.; Brenner, L.; Kurowski, B.; Malec, J.; Yu, W.-Y.; Hwang, H.-F.; Lin, M.-R.J. Gender differences in personal and situational risk factors for traumatic brain injury among older adults. J. Head Trauma Rehabil. 2022, 37, 220–229. [Google Scholar]
  7. Komisar, V.; Robinovitch, S.N.J. The role of fall biomechanics in the cause and prevention of bone fractures in older adults. Curr. Osteoporos. Rep. 2021, 19, 381–390. [Google Scholar] [CrossRef]
  8. Tolkiehn, M.; Atallah, L.; Lo, B.; Yang, G.-Z. Direction sensitive fall detection using a triaxial accelerometer and a barometric pressure sensor. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 369–372. [Google Scholar]
  9. Chen, J.; Kwong, K.; Chang, D.; Luk, J.; Bajcsy, R. Wearable sensors for reliable fall detection. In Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, 17–18 January 2006; pp. 3551–3554. [Google Scholar]
  10. Nyan, M.; Tay, F.E.; Murugasu, E.J. A wearable system for pre-impact fall detection. J. Biomech. 2008, 41, 3475–3481. [Google Scholar] [CrossRef]
  11. Ojetola, O.; Gaura, E.I.; Brusey, J. Fall detection with wearable sensors--safe (Smart Fall Detection). In Proceedings of the 2011 Seventh International Conference on Intelligent Environments, Nottingham, UK, 25–28 July 2011; pp. 318–321. [Google Scholar]
  12. Kong, X.; Meng, L.; Tomiyama, H. Fall detection for elderly persons using a depth camera. In Proceedings of the 2017 International Conference on Advanced Mechatronic Systems (ICAMechS), Xiamen, China, 6–9 December 2017; pp. 269–273. [Google Scholar]
  13. Boudouane, I.; Makhlouf, A.; Harkat, M.A.; Hammouche, M.Z.; Saadia, N.; Ramdane Cherif, A.J. Computing, H. Fall detection system with portable camera. J. Ambient. Intell. Humaniz. Comput. 2020, 11, 2647–2659. [Google Scholar] [CrossRef]
  14. Bosch-Jorge, M.; Sánchez-Salmerón, A.-J.; Valera, Á.; Ricolfe-Viala, C.J. Fall detection based on the gravity vector using a wide-angle camera. Expert Syst. Appl. 2014, 41, 7980–7986. [Google Scholar] [CrossRef]
  15. Mirmahboub, B.; Samavi, S.; Karimi, N.; Shirani, S.J. Automatic monocular system for human fall detection based on variations in silhouette area. IEEE Trans. Biomed. Eng. 2012, 60, 427–436. [Google Scholar] [CrossRef] [PubMed]
  16. Hossain, F.; Ali, M.L.; Islam, M.Z.; Mustafa, H. A direction-sensitive fall detection system using single 3D accelerometer and learning classifier. In Proceedings of the 2016 International Conference on Medical Engineering, Health Informatics and Technology (MediTec), Dhaka, Bangladesh, 17–18 December 2016; pp. 1–6. [Google Scholar]
  17. Syed, A.S.; Sierra-Sosa, D.; Kumar, A.; Elmaghraby, A.J. A deep convolutional neural network-xgb for direction and severity aware fall detection and activity recognition. Sensors 2022, 22, 2547. [Google Scholar] [CrossRef] [PubMed]
  18. Gibson, R.M.; Amira, A.; Ramzan, N.; Casaseca-de-la-Higuera, P.; Pervez, Z.J. Multiple comparator classifier framework for accelerometer-based fall detection and diagnostic. Appl. Soft Comput. 2016, 39, 94–103. [Google Scholar] [CrossRef]
  19. Syed, A.S.; Kumar, A.; Sierra-Sosa, D.; Elmaghraby, A.S. Determining Fall direction and severity using SVMs. In Proceedings of the 2020 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Louisville, KY, USA, 9–11 December 2020; pp. 1–7. [Google Scholar]
  20. Syed, A.S.; Sierra-Sosa, D.; Kumar, A.; Elmaghraby, A.J.S. A hierarchical approach to activity recognition and fall detection using wavelets and adaptive pooling. Sensors 2021, 21, 6653. [Google Scholar] [CrossRef]
  21. Amsaprabhaa, M.J. Multimodal spatiotemporal skeletal kinematic gait feature fusion for vision-based fall detection. Expert Syst. Appl. 2023, 212, 118681. [Google Scholar]
  22. Mobsite, S.; Alaoui, N.; Boulmalf, M.; Ghogho, M.J. Semantic segmentation-based system for fall detection and post-fall posture classification. Eng. Appl. Artif. Intell. 2023, 117, 105616. [Google Scholar] [CrossRef]
  23. Yadav, S.K.; Tiwari, K.; Pandey, H.M.; Akbar, S.A.J. Skeleton-based human activity recognition using ConvLSTM and guided feature learning. Soft Comput. 2022, 26, 877–890. [Google Scholar] [CrossRef]
  24. Yuan, C.; Zhang, P.; Yang, Q.; Wang, J.J. Fall detection and direction judgment based on posture estimation. Discret. Dyn. Nat. Soc. 2022, 2022, e8372291. [Google Scholar] [CrossRef]
  25. Wang, X.; Ellul, J.; Azzopardi, G.J. Elderly fall detection systems: A literature survey. Front. Robot. AI 2020, 7, 71. [Google Scholar] [CrossRef]
  26. Cai, Y.; Li, X.; Li, J.J.S. Emotion Recognition Using Different Sensors, Emotion Models, Methods and Datasets: A Comprehensive Review. Sensors 2023, 23, 2455. [Google Scholar] [CrossRef]
  27. Erol, B.; Amin, M.; Ahmad, F.; Boashash, B. Radar fall detectors: A comparison. In Proceedings of the Radar Sensor Technology XX, Baltimore, MD, USA, 18–21 April 2016; pp. 349–357. [Google Scholar]
  28. Jokanović, B.; Amin, M.J. Fall detection using deep learning in range-Doppler radars. IEEE Trans. Aerosp. Electron. Syst. 2017, 54, 180–189. [Google Scholar] [CrossRef]
  29. Kim, Y.; Moon, T.J. Human detection and activity classification based on micro-Doppler signatures using deep convolutional neural networks. IEEE Geosci. Remote Sens. Lett. 2015, 13, 8–12. [Google Scholar] [CrossRef]
  30. Wang, M.; Zhang, Y.D.; Cui, G.J. Human motion recognition exploiting radar with stacked recurrent neural network. Digit. Signal Process. 2019, 87, 125–131. [Google Scholar] [CrossRef]
  31. Cardillo, E.; Sapienza, G.; Li, C.; Caddemi, A. Head motion and eyes blinking detection: A mm-wave radar for assisting people with neurodegenerative disorders. In Proceedings of the 2020 50th European Microwave Conference (EuMC), Utrecht, The Netherlands, 12–14 January 2021; pp. 925–928. [Google Scholar]
  32. Jokanovic, B.; Amin, M.; Erol, B. Multiple joint-variable domains recognition of human motion. In Proceedings of the 2017 IEEE Radar Conference (RadarConf), Seattle, WA, USA, 8–12 May 2017; pp. 0948–0952. [Google Scholar]
  33. Chen, Y.-S.; Cheng, K.-H.; Xu, Y.-A.; Juang, T.-Y.J. Multi-Feature Transformer-Based Learning for Continuous Human Motion Recognition with High Similarity Using mmWave FMCW Radar. Sensors 2022, 22, 8409. [Google Scholar] [CrossRef]
  34. Erol, B.; Amin, M.G. Radar Data Cube Analysis for Fall Detection. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 2446–2450. [Google Scholar]
  35. Jin, F.; Sengupta, A.; Cao, S.J. Engineering. mmfall: Fall detection using 4-d mmwave radar and a hybrid variational rnn autoencoder. IEEE Trans. Autom. Sci. Eng. 2020, 19, 1245–1257. [Google Scholar] [CrossRef]
  36. Qiao, X.; Feng, Y.; Liu, S.; Shan, T.; Tao, R.J. Radar Point Clouds Processing for Human Activity Classification using Convolutional Multilinear Subspace Learning. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5121117. [Google Scholar] [CrossRef]
  37. Shah, S.A.; Tahir, A.; Le Kernec, J.; Zoha, A.; Fioranelli, F.J. Data portability for activities of daily living and fall detection in different environments using radar micro-doppler. Neural Comput. Appl. 2022, 34, 7933–7953. [Google Scholar] [CrossRef]
  38. Instruments, T. IWR6843 IWR6443 Single-Chip 60-to 64-GHz mmWave Sensor. 2021. Available online: https://www.mouser.com/datasheet/2/405/1/iwr6843-1952538.pdf (accessed on 6 August 2022).
  39. Wang, B.; Guo, L.; Zhang, H.; Guo, Y.-X.J. A millimetre-wave radar-based fall detection method using line kernel convolutional neural network. IEEE Sens. J. 2020, 20, 13364–13370. [Google Scholar] [CrossRef]
  40. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P.J. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  41. Krizhevsky, A.; Sutskever, I.; Hinton, G.E.J. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  42. Simonyan, K.; Zisserman, A.J. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
Figure 1. Fall direction detection system.
Figure 2. The frequencies of the signal waveform.
Figure 3. The radar baseband data matrix generates the RT map and the DT map.
Figure 4. The RT maps of the different falling directions. (a) Forward fall in motion close to the radar, (b) backward fall in motion close to the radar, (c) left fall in motion close to the radar, (d) right fall in motion close to the radar, (e) fall in non-motion, (f) forward fall in motion away from the radar, (g) backward fall in motion away from the radar, (h) left fall in motion away from the radar, and (i) right fall in motion away from the radar.
Figure 5. The DT maps of different falling directions. (a) Forward fall in motion close to the radar, (b) backward fall in motion close to the radar, (c) left fall in motion close to the radar, (d) right fall in motion close to the radar, (e) fall in non-motion, (f) forward fall in motion away from the radar, (g) backward fall in motion away from the radar, (h) left fall in motion away from the radar, and (i) right fall in motion away from the radar.
Figure 6. The proposed pattern feature extraction algorithm.
Figure 7. (a) RT map of backward fall in motion; (b) RT map after threshold of backward fall in motion; (c) RT map after Hampel of backward fall in motion; (d) DT map of backward fall in motion; (e) DT map after threshold of backward fall in motion; and (f) DT map after Hampel of backward fall in motion.
Figure 8. Dual-branch CNN (yellow is the convolutional layer, red is the max pooling layer, light purple is the fully connected layer, and purple is the softmax classifier).
Figure 9. (a) Schematic diagram of different falling directions and (b) the experimental scene setup (right) and the FMCW radar (left).
Figure 10. (a) Forward fall in motion, (b) backward fall in motion, (c) left fall in motion, (d) right fall in motion, and (e) fall in non-motion.
Figure 11. Data samples corresponding to each action.
Figure 12. Accuracy and loss curves.
Figure 13. (a) Confusion matrix using the original dataset; (b) confusion matrix using the PEF dataset.
Table 1. Dual-branch CNN structure.

Layer | Channel 1 | Channel 2
Input | PEF-RT map | PEF-DT map
Convolution | (3, 16, 3) | (3, 16, 3)
Max Pooling | (2, 2) | (2, 2)
Convolution | (16, 32, 3) | (16, 32, 3)
Max Pooling | (2, 2) | (2, 2)
Convolution | (32, 64, 3) | (32, 64, 3)
Max Pooling | (2, 2) | (2, 2)
Convolution | (64, 128, 3) | (64, 128, 3)
Flatten | 1 × 1152 | 1 × 1152
Fully Connected (concatenated) | 1 × 2304 |
Sigmoid | - |
Table 2. Radar parameters.

Parameter | Quantity
Starting frequency | 60 GHz
Stop frequency | 63.015 GHz
Bandwidth | 3.015 GHz
Frequency slope | 50.259 MHz/μs
Sample rate | 5 MHz
Idle time between chirps | 100 μs
Ramp end time | 60 μs
Number of receiving antennas | 4
Number of transmitting antennas | 1
Frame time | 10 s
Number of samples per chirp | 256
Number of chirp loops | 128
Table 3. The recognition accuracy for volunteers of different heights.

Volunteer ID | Height (cm) | Accuracy
1 | 185 | 96.82%
2 | 181 | 96.55%
3 | 177 | 96.02%
4 | 175 | 95.86%
5 | 160 | 95.75%
6 | 158 | 95.75%
Table 4. Classification matrix using the original dataset.

Net | ACC | TPR-Forward Fall | TPR-Backward Fall | TPR-Left Fall | TPR-Right Fall | TPR-Fall Non-Motion
Dual-branch CNN | 92.00% | 92.40% | 91.73% | 89.73% | 89.74% | 95.80%
LeNet | 87.72% | 88.53% | 88.33% | 87.73% | 87.26% | 86.73%
AlexNet | 88.96% | 90.50% | 89.10% | 88.00% | 89.26% | 87.86%
VGG | 89.89% | 91.60% | 90.53% | 88.87% | 89.87% | 88.60%
Table 5. Classification matrix using the PEF dataset.

Net | ACC | TPR-Forward Fall | TPR-Backward Fall | TPR-Left Fall | TPR-Right Fall | TPR-Fall Non-Motion
Dual-branch CNN | 96.27% | 96.02% | 95.49% | 94.44% | 94.84% | 98.20%
LeNet | 93.52% | 93.86% | 93.46% | 91.73% | 92.53% | 96.00%
AlexNet | 92.64% | 92.13% | 93.26% | 91.73% | 91.26% | 94.80%
VGG | 94.86% | 93.46% | 94.46% | 94.53% | 94.40% | 97.46%
Table 6. Comparison of different fall direction detection methods.

References | Sensor Type | Model | Accuracy
[16] | Three-dimensional accelerometer | SVM | 94.54%
[17] | Three-dimensional accelerometer and gyroscope | XGB-CNN | 90.02%
[18] | Accelerometer | SVM | 98.3%
[19] | IMU sensor | SVM | 90.4%
[24] | Camera | SVM | 97.52%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
