Article

Research on the Application of Multi-Source Information Fusion in Multiple Gait Pattern Transition Recognition

1
Department of Mechanical Engineering, Beijing Institute of Technology, 5 South Zhongguancun Street, Haidian District, Beijing 100081, China
2
Institute of Advanced Technology, Beijing Institute of Technology, Beijing 100081, China
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(21), 8551; https://doi.org/10.3390/s22218551
Submission received: 11 October 2022 / Revised: 1 November 2022 / Accepted: 4 November 2022 / Published: 6 November 2022
(This article belongs to the Section Wearables)

Abstract

Multi-source information fusion is an information processing technology that comprehensively processes and exploits uncertain information from multiple sources. It is an effective way to solve complex pattern recognition problems and improve classification performance. This study aims to improve the accuracy and robustness of exoskeleton gait pattern transition recognition in complex environments. Based on the theory of multi-source information fusion, this paper explores a multi-source information fusion model for exoskeleton gait pattern transition recognition from two aspects: the multi-source information fusion strategy and multi-classifier fusion. For eight common gait pattern transitions (between level and stair walking and between level and ramp walking), we propose a hybrid fusion strategy that combines multi-source information at the feature level and the decision level. We first select an optimal feature subset through correlation-based feature extraction and a feature selection algorithm, and then fuse the features through the classifiers. We then study the construction of a multi-classifier fusion model, focusing on the selection of base classifiers and the multi-classifier fusion algorithm. By analyzing the classification performance and robustness of fusion models built from multiple classifier combinations and a number of fusion algorithms, we finally construct a multi-classifier fusion model based on D-S evidence theory and the combination of three SVM classifiers with different kernel functions (linear, RBF, polynomial). The hybrid feature-level and decision-level fusion strategy improves the anti-interference capability and fault tolerance of the model and yields higher accuracy and robustness in gait pattern transition recognition: the average recognition accuracy for the eight gait pattern transitions reaches 99.70%, 0.15% higher than the best average recognition accuracy of a single classifier, and the average recognition accuracy in the absence of different feature data reaches 97.47%, demonstrating good robustness.

1. Introduction

Fast, accurate, and stable recognition of gait pattern transitions is a prerequisite for the safe and smooth operation of wearable exoskeleton systems. In practical applications, whether military or civilian, the exoskeleton system faces complex and changeable environments. This requires the gait pattern transition recognition model based on the exoskeleton perception system to be reliable and robust while remaining accurate. However, there are few studies on gait pattern transition recognition in the field of gait recognition, and the robustness of gait pattern transition recognition models has been largely unexplored. How to design a stable and accurate recognition model for gait pattern transitions is therefore the focus of this paper.
With the development of sensor and computer technology, multi-source information fusion, as an important branch of intelligent information processing, has been widely applied in many fields. The purpose of information fusion is to overcome the shortcomings of a single information source by combining all relevant and available information to obtain a more comprehensive and accurate perception. At present, there is no unified definition of multi-source information fusion because the contents of information fusion research are extensive and diverse; in a general sense, it is an information processing technology that comprehensively processes and exploits uncertain information from multiple sources. Moreover, we found no published work applying multi-source information fusion to gait pattern transition recognition, so we consider its application to this problem here. The choice of data (information) fusion strategy affects the accuracy and stability of gait pattern transition recognition: a good fusion strategy can not only improve the accuracy and efficiency of the final recognition but also improve the anti-interference and fault-tolerance capability of the recognition system [1]. Data (information) fusion can be divided into data-level, feature-level, and decision-level fusion [2]. A typical exoskeleton sensor system includes several types of sensors, but data-level fusion is only applicable to sensors of the same type, which limits multi-sensor fusion at this level. Decision-level fusion is the least dependent on the sensors; it can effectively improve the reliability and robustness of the exoskeleton perception system (the gait recognition model) and enhance the security and stability of the exoskeleton in complex environments, but it may cause substantial information loss and therefore relatively poor performance. Feature-level fusion balances the advantages of data-level and decision-level fusion: such a compromise retains adequate important information while compressing the data as much as possible to improve real-time processing, although it requires careful preprocessing of the sensor data, including feature extraction and feature selection. Therefore, hybrid fusion of multi-sensor information at the feature level and the decision level is a strong candidate for exoskeleton gait pattern transition recognition.
In addition, according to [3], there are two main ways to improve the performance of a gait pattern transition recognition model: one is to further improve the performance of existing classification algorithms (classifiers); the other is classifier fusion. For the first approach, a main research direction is to construct a more accurate classification model by improving the classification algorithm or combining an appropriate feature selection algorithm with the classification algorithm [4,5,6,7]. However, there are many classification models for gait pattern recognition [8,9,10,11], their performance differs across datasets, and no single classifier is optimal for all problems; the performance of a classification method also depends on the feature dataset used for classification [12]. Multi-classifier fusion (also known as classifier ensembles) is now considered a practical and effective way to solve complex pattern recognition problems and improve classification performance. Different classifiers may provide complementary information about the pattern to be classified, enabling better classification performance [13] and lower sensitivity to outliers [14]. The authors of [14] also point out that classifier fusion does not necessarily improve classification performance, and in some cases cannot exceed the best single classifier. Therefore, how to select a suitable multi-classifier model for gait pattern transition recognition is the main concern of this paper. The two key factors that affect the performance of multi-classifier fusion are the diversity of the classifier combination and the fusion algorithm [13,14]. This leads to two important questions: (1) how to select classifiers so as to retain information and achieve diversity in the classifier set; and (2) how to fuse the classifier outputs to make the final decision [15]. Diversity among classifiers can be achieved by using different classification methods, different numbers and types of features, different training samples, and so on [13]. Considerable research has been carried out on fusing classifier outputs [15,16,17,18,19], in which D-S evidence theory has been widely studied and applied. In the field of gait and motion pattern recognition, there are also studies on multiple classifier ensembles [20,21,22]. In this paper, we select a variety of representative classifiers and combine them to achieve diversity, and we compare and analyze a relatively large number of representative fusion algorithms. We therefore study a model for exoskeleton gait pattern transition recognition in terms of both the multi-source information fusion strategy and multi-classifier fusion.
In this paper, a data fusion strategy based on hybrid feature-level and decision-level fusion is proposed for gait pattern transition recognition. First, the optimal feature subset is selected by feature engineering and the corresponding classifiers are used for feature-level fusion. The classifier selection and the fusion algorithm of the multi-classifier model used for decision-level fusion are then studied, and a near-optimal multi-classifier fusion model suitable for gait pattern transition recognition is constructed. The focus of this paper is the selection of classifiers for the multi-classifier fusion model and the multi-classifier fusion algorithm.

2. Materials and Methods

To select a better multi-classifier model for gait pattern transition recognition, we selected a relatively suitable decision level fusion method and classifier combination in two stages.
First, we compared the accuracy of multi-classifier models for gait pattern transition recognition under different multi-classifier fusion algorithms. For each fusion algorithm, we counted the number of fusion models whose accuracy was higher than or equal to the highest accuracy of a single classifier in the corresponding classifier combination, as well as the number of fusion models that achieved the highest accuracy for their combination. On this basis, we determined the most effective decision-level fusion algorithm and the candidate multi-classifier combinations.
Second, given the complex environments it faces, the perception system of an exoskeleton requires high reliability and robustness in practical applications. Based on the fusion algorithm selected in the first stage, we therefore analyzed the robustness of the multi-classifier models corresponding to the candidate combinations under that algorithm, and selected the suitable classifier combination accordingly. In this second-stage robustness analysis, we examined the classification performance of each multi-classifier model in the absence of different feature data, including the decrease in accuracy relative to the full-feature data, which multi-classifier model achieved the highest accuracy in each missing-feature case, and the average and standard deviation of the accuracy of each multi-classifier model across the missing-feature cases.
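To make the stage-one counting criterion concrete, the sketch below (a Python illustration, not the MATLAB code used in the paper) applies the two counts to two example rows taken from Table 6; the dictionary layout and variable names are ours.

```python
# Stage-one counting criterion, illustrated on two rows taken from Table 6
# (Majority Voting / Sum / D-S columns only); variable names are ours.
results = {
    # combination: (best single-classifier accuracy, {fusion algorithm: fusion accuracy})
    "3, 4":    (99.78, {"Majority Voting": 96.17, "Sum": 99.87, "D-S": 99.87}),
    "3, 4, 5": (99.55, {"Majority Voting": 99.61, "Sum": 99.70, "D-S": 99.70}),
}

ge_single = {}  # how often a fusion algorithm matches or beats the best single classifier
is_best = {}    # how often a fusion algorithm gives the highest accuracy in a combination
for combo, (single_max, by_alg) in results.items():
    top = max(by_alg.values())
    for alg, acc in by_alg.items():
        ge_single[alg] = ge_single.get(alg, 0) + (acc >= single_max)
        is_best[alg] = is_best.get(alg, 0) + (acc == top)

print("fusion >= best single classifier:", ge_single)
print("fusion with the highest accuracy:", is_best)
```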
All classification tests in this paper were performed in Matlab (R2019a, The MathWorks, Natick, MA, USA). The block diagram of the fusion strategy is shown in Figure 1.

2.1. Data Acquisition and Processing

We collected 22 kinematic parameters (Table 1) from 9 participants for 8 gait pattern transitions (Table 2) using a 3D motion capture system with 12 cameras (Motion Analysis, Raptor-4S). The kinematic parameters of both lower limbs were measured in the sagittal plane, and the trunk parameters were measured along the coronal axis in the coronal plane. The 3D motion capture system captured the motion trajectories (coordinate data) of markers on the participant through the cameras and calculated the body movement information using its own algorithms. Before the experiment, we calibrated the equipment to ensure the accuracy of data acquisition, and we used the Helen Hayes whole-body marker set, excluding the head. Details of the participants and the experimental platform can be found in our earlier paper [23]. Participants performed the gait transitions in their habitual manner. We collected five trials per participant for each transition gait at a sampling frequency of 100 Hz, preprocessed the data with the Cortex software (Butterworth filter, 7 Hz low-pass filtering), and removed unsatisfactory trials, finally obtaining a dataset of 350 trials. A transition gait starts from the last heel (toe) strike of the trailing limb (the limb that enters the new gait pattern second) in the former gait pattern and ends at the first heel (toe) strike of the trailing limb in the next gait pattern. For example, the level to up-stair transition was defined from the last heel strike of the trailing limb on the level ground to its first heel (toe) strike on the stair.
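The filtering was performed inside the Cortex software; purely as an illustration, an equivalent offline step could look like the following Python/SciPy sketch, which assumes a zero-phase, fourth-order Butterworth design (the paper specifies only a 7 Hz low-pass Butterworth filter, not the order or phase handling).

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0  # sampling frequency (Hz), as in the experiment
fc = 7.0    # low-pass cut-off frequency (Hz)

# Fourth-order, zero-phase low-pass Butterworth filter; the order and the
# zero-phase (forward-backward) filtering are assumptions.
b, a = butter(N=4, Wn=fc / (fs / 2.0), btype="low")

def lowpass(trajectory):
    """Zero-phase low-pass filtering of one kinematic parameter trajectory."""
    return filtfilt(b, a, trajectory)

# Example: filter a noisy 2 s trajectory sampled at 100 Hz.
t = np.arange(0.0, 2.0, 1.0 / fs)
raw = np.sin(2 * np.pi * t) + 0.1 * np.random.randn(t.size)
smooth = lowpass(raw)
```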

2.2. Feature Extraction and Feature Selection

As mentioned above, to achieve better feature-level fusion, the data must be preprocessed through feature extraction and feature selection, so that the data are compressed as much as possible while enough important information is retained and the efficiency of the gait pattern transition recognition model is improved. As also noted above, the accuracy of different classification models can be improved by proper feature extraction and feature selection. Feature extraction and feature selection are therefore key steps in multi-source information fusion.

2.2.1. Feature Extraction

We used Matlab (R2019a, The MathWorks, Natick, MA, USA) to extract features from the 22 kinematic parameters of each transition gait in the 350 trials. Seventeen common time-domain features (maximum, minimum, peak, peak-to-peak, mean, average amplitude, root amplitude, variance, standard deviation, root mean square, kurtosis, skewness, shape factor, peak factor, pulse factor, margin factor, clearance factor) and 4 common frequency-domain features (root mean square of frequency, mean frequency, root variance of frequency, gravity frequency) were extracted from each parameter, giving 462 time- and frequency-domain features (22 parameters × 21 features) per trial. The new dataset obtained after feature extraction contains 350 feature samples, as shown in Table 3.
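As an illustration, a few of these features can be computed as in the sketch below; the exact formulas used by the authors are not given in the paper, so the definitions here are common textbook ones and should be treated as assumptions.

```python
import numpy as np

def time_domain_features(x):
    """A few of the 17 time-domain features (common definitions, assumed)."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "maximum": x.max(),
        "minimum": x.min(),
        "peak_to_peak": x.max() - x.min(),
        "mean": x.mean(),
        "variance": x.var(),
        "standard_deviation": x.std(),
        "root_mean_square": rms,
        "kurtosis": np.mean((x - x.mean()) ** 4) / x.var() ** 2,
        "skewness": np.mean((x - x.mean()) ** 3) / x.std() ** 3,
        "shape_factor": rms / np.mean(np.abs(x)),
        "peak_factor": np.max(np.abs(x)) / rms,
    }

def frequency_domain_features(x, fs=100.0):
    """The 4 frequency-domain features (common definitions, assumed)."""
    x = np.asarray(x, dtype=float)
    amplitude = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    power = amplitude ** 2
    gravity = np.sum(freqs * power) / np.sum(power)  # spectral centroid
    return {
        "root_mean_square_of_frequency": np.sqrt(np.sum(freqs ** 2 * power) / np.sum(power)),
        "mean_frequency": np.mean(amplitude),
        "root_variance_of_frequency": np.sqrt(np.sum((freqs - gravity) ** 2 * power) / np.sum(power)),
        "gravity_frequency": gravity,
    }

# One trial yields 22 parameters x 21 features = 462 features.
```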

2.2.2. Feature Selection

In this paper, the MRMR (maximum relevance minimum redundancy)-BMSF (binary matrix shuffling filter) two-stage feature selection algorithm was used to select the features [24]. First, the MRMR algorithm screened, from the 462 features, 100 candidate features with the maximum correlation with the classes and the minimum redundancy among features. The BMSF algorithm then selected an optimal feature subset containing nine features, shown in Table 4.
MRMR is a feature selection criterion proposed by Peng et al. [25]; it uses mutual information to measure the correlation between features and classes and the redundancy among features, and it selects the feature with the greatest correlation with the classes and the least redundancy with the already-selected features. BMSF is a data-driven heuristic random search algorithm proposed by Zhang et al. [26,27], implemented by combining binary matrix shuffling with an SVM to perform the filtering. Although BMSF is a wrapper feature selection algorithm, it generalizes well and its advantages are not limited to the support vector machine (SVM) classifier. Since this paper focuses on the multi-classifier fusion model for gait pattern transition recognition, the MRMR and BMSF algorithms are not described in detail here.
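For reference, the MRMR stage can be sketched as a greedy relevance-minus-redundancy search using mutual information estimates from scikit-learn; this is a generic illustration rather than the authors' implementation, and the subsequent BMSF stage is omitted.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr(X, y, n_select=100, random_state=0):
    """Greedy MRMR (relevance minus average redundancy) over the columns of X."""
    relevance = mutual_info_classif(X, y, random_state=random_state)
    selected = [int(np.argmax(relevance))]  # seed with the most relevant feature
    remaining = [j for j in range(X.shape[1]) if j != selected[0]]

    while len(selected) < n_select and remaining:
        best_score, best_j = -np.inf, None
        for j in remaining:
            # Redundancy: mean mutual information with the already-selected features.
            redundancy = np.mean([
                mutual_info_regression(X[:, [j]], X[:, s], random_state=random_state)[0]
                for s in selected
            ])
            score = relevance[j] - redundancy
            if score > best_score:
                best_score, best_j = score, j
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```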
To analyze the classification performance of the multi-classifier models in the absence of different feature data, we regarded the time-domain and frequency-domain features derived from the same kinematic parameter as one type. The feature subset can thus be divided into five types of feature data: features related to the right hip angular velocity, the right thigh velocity, the left hip angular velocity, the left shank velocity, and the left ankle angular velocity. We then analyzed the classification performance of each multi-classifier model in the absence of one type of feature data, which simulates the reliability and robustness of the exoskeleton when one type of sensor fails in practical applications.
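A minimal sketch of this missing-feature-data simulation is shown below; the column indices follow the order of Table 4, but the grouping code itself is our illustration, not part of the paper.

```python
import numpy as np

# Columns of the 9-feature optimal subset grouped by source kinematic parameter;
# the indices follow the order of Table 4.
feature_groups = {
    "rt av": [0],        # right hip angular velocity
    "lt av": [1, 2, 8],  # left hip angular velocity
    "lf av": [3, 4, 5],  # left ankle angular velocity
    "ls v":  [6],        # left shank velocity
    "rt v":  [7],        # right thigh velocity
}

def drop_feature_group(X, group):
    """Return a copy of X without the columns derived from one kinematic parameter."""
    keep = [c for c in range(X.shape[1]) if c not in feature_groups[group]]
    return X[:, keep]

# Example: evaluate a model on the data without the right-hip-angular-velocity features.
# X_missing = drop_feature_group(X, "rt av")
```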

2.3. Multi-Classifier Fusion

2.3.1. Fusion Algorithm

As mentioned above, there is a large body of research on multi-classifier fusion algorithms, and they have also been applied in the field of gait and motion recognition. Here, we selected representative fusion algorithms for comparative analysis.
1.
D-S theory
D-S evidence theory is a typical reasoning method for the intelligent processing and fusion of uncertain information and has been widely used in various fields [28,29,30,31,32]. In D-S evidence theory, a complete set of mutually exclusive basic propositions is called the frame of discernment; it represents all possible answers to a question that has exactly one correct answer, and each subset of this frame is called a proposition. The degree of confidence assigned to a proposition is called the basic probability assignment (BPA, also known as the m function); m(A) is the basic probability assigned to A and reflects the degree of confidence in A. The belief function Bel(A) represents the exact degree of belief in proposition A, and the plausibility function Pl(A) represents the degree of belief that A is not false, that is, the extent to which A seems possible. The interval [Bel(A), Pl(A)] represents the uncertainty about A, [0, Bel(A)] is the supporting evidence interval of A, [0, Pl(A)] is the quasi-belief interval of A, and [Pl(A), 1] is the rejection evidence interval of A. Let m1 and m2 be the basic probability assignment functions derived from two independent evidence sources. Dempster's combination rule can then be used to calculate a new basic probability assignment function, produced by the interaction of the two pieces of evidence and reflecting the fused information. Dempster's combination rule is as follows:
$$m(\varnothing) = 0; \qquad m(A) = \frac{\sum_{A_1 \cap A_2 = A} m_1(A_1)\, m_2(A_2)}{\sum_{A_1 \cap A_2 \neq \varnothing} m_1(A_1)\, m_2(A_2)}, \qquad A \neq \varnothing$$
2.
Majority voting
The basic idea of the voting method is that "the minority is subordinate to the majority": all member classifiers participate in the vote, the candidates are all possible classification results, and the class with the most votes is the final output. Each member classifier is regarded as a completely equal voter. The majority voting method additionally requires the winning class to receive more than half of the valid votes.
3.
Decision Templates
The decision template (DT) method for multiple classifier fusion was proposed by Kuncheva et al. [16]. The DT method first builds the output matrix of the classifiers in the fusion system and a DT matrix for each class; the two are then compared and their similarity is calculated, with greater similarity indicating greater support for the class. Finally, the class with the highest similarity is selected as the decision.
To describe the following elementary fusion rules more easily, we assume that there are n classifiers X = (x1, x2, ..., xn) and m classes Y = (y1, y2, ..., ym), and that zij denotes the probability (score) assigned by classifier xi to class yj for the sample to be recognized (i = 1, 2, ..., n; j = 1, 2, ..., m). After the n classifiers are fused, a vector (e1, ..., em) is obtained, where ej represents the overall support for the sample belonging to class yj. Finally, the class with the maximum ej is assigned to the input pattern.
4.
Sum
The scores provided by each base classifier are summed and the class label with the highest score is assigned to a given input pattern.
$$e_j = \sum_{i=1}^{n} z_{ij}, \qquad j = 1, 2, \ldots, m$$
5.
Average
Find the average of the scores for each class across the classifiers and assign the input pattern to the class with the highest average score. This rule is equivalent to the sum rule.
$$e_j = \frac{1}{n} \sum_{i=1}^{n} z_{ij}, \qquad j = 1, 2, \ldots, m$$
6.
Maximum
Find the maximum score for each class between the classifiers and assign the input pattern to the class with the highest score among the maximum scores.
$$e_j = \max_{1 \le i \le n} \{ z_{ij} \}, \qquad j = 1, 2, \ldots, m$$
7.
Minimum
Find the minimum score for each class between the classifiers and assign the input pattern to the class with the highest score among the minimum scores.
$$e_j = \min_{1 \le i \le n} \{ z_{ij} \}, \qquad j = 1, 2, \ldots, m$$
8.
Product
The scores provided by each base classifier are multiplied and the class label with the highest score is assigned to a given input pattern.
$$e_j = \prod_{i=1}^{n} z_{ij}, \qquad j = 1, 2, \ldots, m$$
Due to space limitations, the theory of each fusion algorithm is not described in detail in this paper; a minimal code sketch of the combination rules is given below.
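To make the combination rules concrete, the following sketch implements Dempster's rule for BPAs restricted to singleton classes, the elementary score-based rules of the equations above, and majority voting. Treating each classifier's normalized class scores directly as a BPA is a common simplification that the paper does not state explicitly, so this is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule for two BPAs whose focal elements are singleton classes only."""
    joint = np.outer(m1, m2)                  # m1(A1) * m2(A2)
    agreement = np.diag(joint)                # A1 = A2 (non-empty intersection)
    conflict = joint.sum() - agreement.sum()  # A1 != A2 -> empty intersection
    if np.isclose(conflict, 1.0):
        raise ValueError("total conflict: Dempster's rule is undefined")
    return agreement / (1.0 - conflict)

def ds_fusion(scores):
    """Fuse an n x m matrix of classifier scores with Dempster's rule."""
    bpas = scores / scores.sum(axis=1, keepdims=True)  # normalize each row to a BPA
    fused = bpas[0]
    for m in bpas[1:]:
        fused = dempster_combine(fused, m)
    return int(np.argmax(fused))

def elementary_fusion(scores, rule="sum"):
    """Sum / average / maximum / minimum / product rules on an n x m score matrix."""
    ops = {"sum": np.sum, "average": np.mean, "maximum": np.max,
           "minimum": np.min, "product": np.prod}
    e = ops[rule](scores, axis=0)  # e_j, combined over the n classifiers
    return int(np.argmax(e))

def majority_voting(scores):
    """Each classifier votes for its top class; the most-voted class wins."""
    votes = np.argmax(scores, axis=1)
    return int(np.bincount(votes).argmax())

# Example: three classifiers, eight gait pattern transition classes.
scores = np.random.default_rng(0).random((3, 8))
print(ds_fusion(scores), elementary_fusion(scores, "product"), majority_voting(scores))
```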

2.3.2. Selection of Classifiers

There are many classifiers for gait recognition [8,10,11,33], such as support vector machines (SVM) [10], K-nearest neighbor (KNN), neural networks [4], Bayesian classifiers [8], etc.
We selected five commonly used, representative classifiers as the candidate base classifiers for the multi-classifier model of gait pattern transition recognition: SVMs with three different kernel functions (linear, RBF, polynomial), KNN (K = 1), and a BP neural network (one hidden layer with 10 neurons and the "tansig" activation function). To simplify the analysis, we numbered the five classifiers (as shown in Table 5) and refer to each classifier by its number hereinafter.
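For reference, scikit-learn equivalents of the five candidate base classifiers can be instantiated as below; the experiments in this paper were run in MATLAB, "tansig" is mapped to the hyperbolic tangent activation, and all unspecified hyperparameters are left at library defaults (an assumption).

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# The five candidate base classifiers of Table 5 (scikit-learn equivalents).
base_classifiers = {
    1: MLPClassifier(hidden_layer_sizes=(10,), activation="tanh",  # "tansig" ~ tanh
                     max_iter=2000, random_state=0),               # BP network
    2: KNeighborsClassifier(n_neighbors=1),                        # KNN, K = 1
    3: SVC(kernel="linear", probability=True, random_state=0),     # SVM (linear)
    4: SVC(kernel="rbf", probability=True, random_state=0),        # SVM (RBF)
    5: SVC(kernel="poly", probability=True, random_state=0),       # SVM (polynomial)
}
# probability=True exposes per-class scores, which the score-based fusion rules need.
```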

2.4. Evaluation Method

Most related studies take accuracy as the evaluation criterion for the performance of multi-classifier models, so this paper also takes accuracy as the evaluation criterion for the whole multi-classifier model. The recognition accuracies of each multi-classifier model, and of the single classifiers it comprises, under the different fusion algorithms were obtained by ten-fold cross-validation, and the corresponding accuracies in the absence of different feature data were obtained in the same way. This process was repeated 100 times, and the average accuracy of each multi-classifier model was reported.
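For reference, the evaluation protocol can be sketched as below; the stratified folds and the scikit-learn estimator interface are our assumptions, and `model` stands for any base classifier or fusion model wrapped as an estimator (the fusion wrapper itself is not shown).

```python
import numpy as np
from sklearn.base import clone
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

def repeated_cv_accuracy(model, X, y, n_repeats=100, n_splits=10):
    """Average accuracy over n_repeats runs of (stratified) 10-fold cross-validation."""
    accuracies = []
    for repeat in range(n_repeats):
        cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=repeat)
        for train_idx, test_idx in cv.split(X, y):
            clf = clone(model)
            clf.fit(X[train_idx], y[train_idx])
            accuracies.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
    return float(np.mean(accuracies))
```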

3. Results

For a clearer comparison, in the first stage we did not list the single-classifier and fusion accuracies of all multi-classifier models. Only the multi-classifier models whose accuracy was higher than or equal to the highest accuracy of the corresponding single classifiers are listed, together with the highest single-classifier accuracy, as shown in Table 6. Italics in the table indicate a multi-classifier fusion model whose accuracy was higher than or equal to the highest accuracy of the relevant single classifier, and bold indicates the multi-classifier fusion model with the highest accuracy for each classifier combination.
In the second stage, using the selected fusion algorithm, we analyzed the classification performance of the relevant multi-classifier fusion models under that algorithm in five different cases of missing feature data. Again, we list only the accuracies of the relevant multi-classifier fusion models and the highest single-classifier accuracies within these models, as shown in Table 7. To better compare the multi-classifier fusion models, Table 8 lists the fusion accuracies of the relevant models together with their averages and standard deviations across the missing-feature cases; bold indicates the highest accuracy among the fusion models in each case of missing feature data.

4. Discussion

First, the results in Table 6 show that, under the D-S algorithm, more multi-classifier models achieved accuracy higher than or equal to that of the best single classifier than under the other algorithms. Moreover, among these multi-classifier models, the highest accuracy was mostly achieved by models based on the D-S algorithm. We therefore adopted the D-S algorithm as the decision-level fusion algorithm of the multi-classifier model for gait pattern transition recognition and carried out the subsequent research accordingly.
To select a better classifier combination, we further compared and analyzed the robustness of the multi-classifier models whose accuracy was higher than the highest accuracy of the single classifier under the D-S algorithm. For most multi-classifier models, the classification accuracy decreased only slightly in the absence of the different feature data compared with the accuracy under full-feature data (Table 6 and Table 7). In addition, the fusion accuracy of almost all multi-classifier models remained higher than the highest accuracy of a single classifier. This illustrates the stability of the D-S algorithm in these multi-classifier models and its low sensitivity to anomalies; in other words, in the practical application of the exoskeleton, when some sensors of the perception system fail or data are missing, the gait pattern transition recognition system can still maintain good robustness and reliability. Additionally, across the five cases of missing feature data, the highest fusion accuracy was achieved twice by the combination of classifiers 3 and 5, twice by the combination of classifiers 3, 4, and 5, and once by the combination of classifiers 3 and 4 (Table 7).
Finally, Table 8 shows that, across the cases of missing feature data, the multi-classifier combination with the highest average accuracy was the combination of 3, 4, and 5 (97.47%); its standard deviation (0.0216) was not the smallest, being second only to that of the combination of 1 and 4 (0.0174). Overall, the combination of 3, 4, and 5 was therefore the multi-classifier combination most suitable for gait pattern transition recognition. As shown in Table 6, the average recognition accuracy of this model for the eight gait pattern transitions reached 99.70%, 0.15% higher than the highest average recognition accuracy of a single classifier (99.55%). In addition, the authors of [34] used kinematic data to compare various recent feature processing and dimensionality reduction methods and machine learning classifiers in search of an effective tool for recognizing locomotion mode transitions (LMTs); they reported an average recognition accuracy of 99.6% across all machine learning classifiers, whereas the multi-source information fusion model selected here achieves 99.7%. This is consistent with our finding that the average recognition accuracy of the model for the eight gait pattern transitions is 0.15% higher than the highest average recognition accuracy of a single classifier, and it further demonstrates the effectiveness of multi-source information fusion in improving the accuracy of gait pattern transition recognition.
It should be noted that we did not consider recognition efficiency when selecting the multi-classifier model. In terms of efficiency, all else being equal, the fewer the classifiers, the lower the computational complexity and the higher the efficiency of the multi-classifier model; however, the recognition efficiency of the individual classifiers also differs for gait pattern transition recognition. The efficiency of the final multi-classifier model therefore depends on both the number of classifiers and the recognition efficiency of each classifier, which is the subject of further study. In future work, we will consider using IMU devices to re-collect data and verify the effectiveness of the multi-source information fusion model constructed in this paper.

5. Conclusions

In this paper, we studied the application of multi-source information fusion technology to exoskeleton gait pattern transition recognition from two aspects: the multi-source information fusion strategy and multi-classifier fusion. A hybrid fusion strategy combining feature-level and decision-level data fusion was proposed for exoskeleton gait pattern transition recognition. The multi-classifier model for decision-level fusion was analyzed in terms of classifier selection and the multi-classifier fusion algorithm. A multi-classifier model based on D-S evidence theory and the combination of three SVM classifiers with different kernel functions (linear, RBF, polynomial) was then constructed, which achieved higher gait pattern transition recognition accuracy and robustness. This multi-source information fusion model not only improves the accuracy of gait pattern transition recognition but also, through the hybrid feature-level and decision-level fusion strategy, improves the anti-interference capability and fault tolerance of the exoskeleton's gait pattern transition recognition system. Finally, we verified the robustness of the model by analyzing its performance in the case of missing feature data.

Author Contributions

Conceptualization, C.G. and Q.S.; methodology, C.G.; investigation, C.G.; resources, C.G., Q.S. and Y.L.; data curation, C.G.; writing—original draft preparation, C.G.; writing—review and editing, C.G. and Y.L.; visualization, C.G.; supervision, Q.S.; project administration, Q.S. and Y.L.; funding acquisition, Q.S. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by grants from the Ministry of Science and Technology’s national key R&D program (Grant Number: 2017YFB1300500), the National Natural Science Foundation of China (Grant No. 51905035), and the Science and Technology Innovation Special Zone Project (Grant No. 2116312ZT00200202).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Ethics Committee of Beijing Sport University (protocol code: 2019007H and date of approval: 22 January 2019).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because they also form part of an ongoing study.

Acknowledgments

The authors would like to thank all the participants for their cooperation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yang, L.J.; Yu, H. Theory and Application of Multi-Source Information Fusion, 1st ed.; Beijing University of Posts and Telecommunications Press: Beijing, China, 2006; pp. 6–8. [Google Scholar]
  2. Kong, L.; Peng, X.; Chen, Y.; Wang, P.; Xu, M. Multi-sensor measurement and data fusion technology for manufacturing process monitoring: A literature review. Int. J. Extrem. Manuf. 2020, 2, 5–31. [Google Scholar] [CrossRef]
  3. Ruta, D.; Gabrys, B. An overview of classifier fusion methods. Comput. Inf. Syst. 2000, 7, 1–10. [Google Scholar]
  4. Woodward, R.B.; Spanias, J.A.; Hargrove, L.J. User Intent Prediction with a Scaled Conjugate Gradient Trained Artificial Neural Network for Lower Limb Amputees Using a Powered Prosthesis. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 6405–6408. [Google Scholar]
  5. Yin, Z.Y.; Zheng, J.B.; Huang, L.P.; Gao, Y.F.; Peng, H.H.; Yin, L.H. SA-SVM-Based Locomotion Pattern Recognition for Exoskeleton Robot. Appl. Sci. 2021, 11, 5573. [Google Scholar] [CrossRef]
  6. Huang, L.; Zheng, J.; Hu, H. A Gait Phase Detection Method in Complex Environment Based on DTW-MEAN Templates. IEEE Sens. J. 2021, 21, 15114–15123. [Google Scholar] [CrossRef]
  7. Davila, J.C.; Cretu, A.M.; Zaremba, M. Wearable Sensor Data Classification for Human Activity Recognition Based on an Iterative Learning Framework. Sensors 2017, 17, 1287. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Uriel, M.H.; Abbas, A.; Dehghani, S. Adaptive Bayesian inference system for recognition of walking activities and prediction of gait events using wearable sensors. Neural Netw. 2018, 102, 107–119. [Google Scholar]
  9. Figueiredo, J.; Santos, C.P.; Moreno, J.C. Automatic recognition of gait patterns in human motor disorders using machine learning: A review. Med. Eng. Phys. 2018, 53, 1–12. [Google Scholar] [CrossRef]
  10. Wen, S.; Gelan, Y.; Xiang, G.C.; Jie, J. Gait identification using fractal analysis and support vector machine. Soft Comput. 2019, 23, 9287–9297. [Google Scholar]
  11. Shi, L.F.; Qiu, C.X.; Xin, D.J.; Liu, G.X. Gait recognition via random forests based on wearable inertial measurement unit. J. Ambient. Intell. Humaniz. Comput. 2020, 11, 5329–5340. [Google Scholar] [CrossRef]
  12. Ali, S.; Smith, K.A. On learning algorithm selection for classification. Appl. Soft Comput. 2006, 6, 119–138. [Google Scholar] [CrossRef]
  13. Quost, B.; Masson, M.H.; Denoeux, T. Classifier fusion in the Dempster-Shafer framework using optimized t-norm based combination rules. Int. J. Approx. Reason. 2011, 52, 353–374. [Google Scholar] [CrossRef] [Green Version]
  14. Rothe, S.; Kudszus, B.; Söffker, D. Does Classifier Fusion Improve the Overall Performance? Numerical Analysis of Data and Fusion Method Characteristics Influencing Classifier Fusion Performance. Entropy 2019, 21, 866. [Google Scholar] [CrossRef] [Green Version]
  15. Yaghoubi, V.; Cheng, L.; Paepegem, W.V.; Kersemans, M. A novel multi-classifier information fusion based on dempster-shafer theory: Application to vibration-based fault detection. Struct. Health Monit. 2020, 21, 596–612. [Google Scholar] [CrossRef]
  16. Kuncheva, L.I.; Bezdek, J.C.; Duin, R.P.W. Decision templates for multiple classifier fusion: An experimental comparison. Pattern Recognit. 2001, 34, 299–314. [Google Scholar] [CrossRef]
  17. Ruta, D.; Gabrys, B. Classifier selection for majority voting. Inf. Fusion 2005, 6, 63–81. [Google Scholar] [CrossRef]
  18. Padfield, N.; Ren, J.; Qing, C.; Murray, P.; Zhao, H.; Zheng, J. Multi-segment majority voting decision fusion for mi eeg brain-computer interfacing. Cogn. Comput. 2021, 13, 1484–1495. [Google Scholar] [CrossRef]
  19. Pei, F.Q.; Li, D.B.; Tong, Y.F.; He, F. Process service quality evaluation based on Dempster-Shafer theory and support vector machine. PLoS ONE 2017, 12, e0189189. [Google Scholar] [CrossRef] [Green Version]
  20. Wang, L.; Li, Y.J.; Xiong, F.; Zhang, W.Y. Gait Recognition Using Optical Motion Capture: A Decision Fusion Based Method. Sensors 2021, 21, 3496. [Google Scholar] [CrossRef]
  21. Zheng, H.; Zhang, K. Decision Fusion Gait Recognition Based on Bayesian Rule and Support Vector Machine. Appl. Mech. Mater. 2013, 2700, 1287–1290. [Google Scholar] [CrossRef]
  22. Wu, Y.C.; Qi, S.F.; Hu, F.; Ma, S.B.; Mao, W.; Li, W. Recognizing activities of the elderly using wearable sensors: A comparison of ensemble algorithms based on boosting. Sens. Rev. 2019, 39, 743–751. [Google Scholar] [CrossRef]
  23. Guo, C.Y.; Liu, Y.L.; Song, Q.Z.; Liu, S.T. Research on Kinematic Parameters of Multiple Gait Pattern Transitions. Appl. Sci. 2021, 11, 6911. [Google Scholar] [CrossRef]
  24. Guo, C.Y.; Liu, Y.L.; Song, Q.Z.; Liu, S.T. Application of feature selection method based on Maximum Relevance Minimum Redundancy criterion and Binary Matrix Shuffling Filter in human gait pattern transitions recognition. Comput. Methods Programs Biomed. 2022. submitted. [Google Scholar]
  25. Peng, H.C.; Long, F.H.; Chris, D. Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1226–1238. [Google Scholar] [CrossRef] [PubMed]
  26. Zhang, H.Y.; Wang, H.Y.; Dai, Z.J.; Chen, M.S.; Yuan, Z.M. Improving accuracy for cancer classification with a new algorithm for genes selection. BMC Bioinform. 2012, 13, 298. [Google Scholar] [CrossRef] [PubMed]
  27. Sun, C.W.; Dai, Z.J.; Zhang, H.Y.; Li, L.Z.; Yuan, Z.M. Binary matrix shuffling filter for feature selection in neuronal morphology classification. Comput. Math. Methods Med. 2015, 2015, 626975. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Ahmed, A.A.; Mohamed, A.D. A New Technique for Combining Multiple Classifiers using The Dempster-Shafer Theory of Evidence. J. Artif. Intell. Res. 2002, 17, 333–361. [Google Scholar]
  29. Cao, Y.B.; Xie, X.P. Fusion Identification for Wear Particles Based on Dempster-Shafter Evidential Reasoning and Back-Propagation Neural Network. Key Eng. Mater. 2007, 46, 341–346. [Google Scholar] [CrossRef]
  30. Ballent, W.J. Dempster-Shafer Theory Applications in Structural Damage Assessment and Social Vulnerability Ranking. Ph.D. Thesis, University of Colorado at Boulder, Boulder, CO, USA, 2018. [Google Scholar]
  31. Limbourg, P.; Savic, R. Fault tree analysis in an early design stage using the dempster-shafer theory of evidence. In Proceedings of the European Safety and Reliability Conference 2007, ESREL, Stavanger, Norway, 25–27 June 2007; pp. 713–722. [Google Scholar]
  32. Li, J.P.; Pan, Q. A New Belief Entropy in Dempster–Shafer Theory Based on Basic Probability Assignment and the Frame of Discernment. Entropy 2020, 22, 691. [Google Scholar] [CrossRef]
  33. Negi, S.; Negi, P.C.B.S.; Sharma, S.; Sharma, N. Human Locomotion Classification for Different Terrains Using Machine Learning Techniques. Crit. Rev. Biomed. Eng. 2020, 48, 199–209. [Google Scholar] [CrossRef]
  34. Joana, F.; Carvalho, S.P.; Diogo, G.; Moreno, J.C.; Santos, C.P. Daily Locomotion Recognition and Prediction: A Kinematic Data-Based Machine Learning Approach. IEEE Access 2020, 8, 33250–33262. [Google Scholar]
Figure 1. The block diagram of the fusion strategy.
Table 1. Abbreviation and description of kinematic parameters.

No. | Abbr. | Description
1 | rt aa | right hip angular acceleration
2 | lt aa | left hip angular acceleration
3 | rs aa | right knee angular acceleration
4 | ls aa | left knee angular acceleration
5 | rf aa | right ankle angular acceleration
6 | lf aa | left ankle angular acceleration
7 | rt av | right hip angular velocity
8 | lt av | left hip angular velocity
9 | rs av | right knee angular velocity
10 | ls av | left knee angular velocity
11 | rf av | right ankle angular velocity
12 | lf av | left ankle angular velocity
13 | ls v | left shank velocity
14 | lt v | left thigh velocity
15 | ls a | left shank acceleration
16 | lt a | left thigh acceleration
17 | rt a | right thigh acceleration
18 | rs a | right shank acceleration
19 | rt v | right thigh velocity
20 | rs v | right shank velocity
21 | trunk a | trunk acceleration
22 | trunk v | trunk velocity
Table 2. Abbreviation and description of gait pattern transitions.

No. | Abbr. | Description
1 | L-R UP | level walking to up-ramp walking transition
2 | R-L DOWN | down-ramp walking to level walking transition
3 | L-S UP | level walking to up-stair walking transition
4 | S-L DOWN | down-stair walking to level walking transition
5 | R-L UP | up-ramp walking to level walking transition
6 | L-R DOWN | level walking to down-ramp walking transition
7 | S-L UP | up-stair walking to level walking transition
8 | L-S DOWN | level walking to down-stair walking transition
Table 3. New dataset.

Gait Transitions | Samples | Features
L-R UP | 45 | 462
R-L DOWN | 45 | 462
L-S UP | 44 | 462
S-L DOWN | 42 | 462
R-L UP | 44 | 462
L-R DOWN | 43 | 462
S-L UP | 44 | 462
L-S DOWN | 43 | 462
Table 4. The features of the feature subset.

Features: rt av Mean; lt av Mean; lt av Skewness; lf av Mean; lf av Kurtosis; lf av Skewness; ls v Shape Factor; rt v Peak-to-Peak; lt av Root Variance of Frequency
Table 5. The candidate base classifiers.

No. | Classifier
1 | BP (one hidden layer, 10 neurons, and "tansig" activation function)
2 | KNN (K = 1)
3 | SVM (linear)
4 | SVM (RBF)
5 | SVM (polynomial)
Table 6. The accuracies of the related multiple classifier fusion models (%).

Classifier Ensemble | Single Max | Majority Voting | Maximum | Sum | Minimum | Average | Product | Decision Template | D-S
1, 2 | 96.09 | 94.17 | 94.34 | 96.09 | 95.99 | 96.09 | 95.99 | 96.05 | 96.09
1, 4 | 94.60 | 93.23 | 94.60 | 96.43 | 94.69 | 96.43 | 97.45 | 96.91 | 97.45
2, 4 | 96.32 | 93.63 | 96.32 | 96.32 | 96.32 | 96.32 | 96.32 | 96.32 | 96.32
3, 4 | 99.78 | 96.17 | 99.83 | 99.87 | 98.39 | 99.87 | 99.74 | 99.87 | 99.87
3, 5 | 99.74 | 99.70 | 99.83 | 99.87 | 99.87 | 99.87 | 99.83 | 99.87 | 99.87
1, 2, 4 | 95.99 | 96.83 | 94.66 | 96.77 | 95.90 | 96.77 | 95.90 | 96.60 | 96.73
1, 3, 5 | 99.65 | 99.70 | 95.34 | 99.37 | 99.35 | 99.37 | 99.35 | 99.22 | 99.59
3, 4, 5 | 99.55 | 99.61 | 99.70 | 99.70 | 98.81 | 99.70 | 99.70 | 99.70 | 99.70
1, 3, 4, 5 | 99.74 | 99.74 | 95.02 | 99.72 | 98.88 | 99.72 | 99.66 | 99.65 | 99.80
The italics in the table indicated the multi-classifier fusion model whose accuracy was higher than or equal to the highest accuracy of the relevant single classifier; The bold indicated the multi-classifier fusion model with the highest accuracy for each multi-classifier combination.
Table 7. The accuracies (%) of the related multiple classifier fusion models in the absence of different feature data. Each cell gives the highest single-classifier accuracy / the D-S ensemble accuracy.

Classifier Ensemble | No rt av Features | No rt v Features | No lt av Features | No ls v Features | No lf av Features
1, 2 | 95.09 / 95.09 | 94.40 / 92.84 | 92.73 / 92.73 | 96.09 / 96.09 | 89.41 / 89.41
1, 4 | 93.63 / 96.55 | 94.79 / 96.50 | 93.59 / 94.24 | 94.67 / 97.27 | 90.14 / 92.60
2, 4 | 95.04 / 95.04 | 92.39 / 92.39 | 93.07 / 92.67 | 96.12 / 96.12 | 89.63 / 89.51
3, 4 | 98.39 / 98.96 | 98.85 / 98.55 | 93.62 / 95.84 | 99.46 / 99.65 | 90.24 / 93.89
3, 5 | 98.46 / 99.20 | 99.13 / 99.11 | 92.38 / 92.42 | 99.48 / 99.48 | 91.64 / 92.07
3, 4, 5 | 98.40 / 98.94 | 99.39 / 98.83 | 93.58 / 95.81 | 99.39 / 99.70 | 91.82 / 94.05
1, 3, 4, 5 | 98.40 / 99.15 | 99.20 / 98.75 | 93.89 / 95.40 | 99.78 / 99.09 | 91.50 / 93.61
The bold in the table indicated the highest accuracy of the multi-classifier fusion model in each case of missing feature data.
Table 8. The accuracies (%) and standard deviations of the related multiple classifier fusion models.

Classifier Ensemble | No rt av Features | No rt v Features | No lt av Features | No ls v Features | No lf av Features | Average | Standard Deviation
1, 2 | 95.09 | 92.84 | 92.73 | 96.09 | 89.41 | 93.23 | 0.0231
1, 4 | 96.55 | 96.50 | 94.24 | 97.27 | 92.60 | 95.43 | 0.0174
2, 4 | 95.04 | 92.39 | 92.67 | 96.12 | 89.51 | 93.15 | 0.0230
3, 4 | 98.96 | 98.55 | 95.84 | 99.65 | 93.89 | 97.38 | 0.0217
3, 5 | 99.20 | 99.11 | 92.42 | 99.48 | 92.07 | 96.46 | 0.0344
3, 4, 5 | 98.94 | 98.83 | 95.81 | 99.70 | 94.05 | 97.47 | 0.0216
1, 3, 4, 5 | 99.15 | 98.75 | 95.40 | 99.09 | 93.61 | 97.20 | 0.0228
The bold in the table indicated the highest accuracy of the multi-classifier fusion model in each case of missing feature data.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
