Article

Research on sEMG Feature Generation and Classification Performance Based on EBGAN

College of Mechatronics and Automobile Engineering, Chongqing Jiaotong University, No. 66 Xuefudadao, Nanan District, Chongqing 400074, China
*
Author to whom correspondence should be addressed.
Electronics 2023, 12(4), 1040; https://doi.org/10.3390/electronics12041040
Submission received: 24 December 2022 / Revised: 3 February 2023 / Accepted: 3 February 2023 / Published: 20 February 2023

Abstract

Surface electromyography (sEMG) recognition technology requires a large number of samples to ensure the accuracy of the training results. However, sEMG data are typically scarce, cumbersome to acquire and strongly affected by the acquisition environment, which hinders improvements in sEMG classification accuracy. To improve the accuracy of sEMG classification, this paper proposes, for the first time, an sEMG feature generation method based on an energy-based generative adversarial network (EBGAN). The concept of energy is introduced into the discriminator network in place of the traditional binary judgment, and the distribution of the real EMG dataset is learned and captured by multiple fully connected layers, so that similar sEMG data are generated. The experimental results show that, compared with other types of GAN networks, this method achieves a smaller maximum mean discrepancy between the generated data and the original data. Experiments with different typical classification models show that the proposed data augmentation method can effectively improve their classification accuracy, with an increase of 1~5%.

1. Introduction

With the aging of the population, stroke, hemiplegia and other diseases have a great impact on the lives of the elderly [1]. Rehabilitation training using an exoskeleton robot is an ideal rehabilitation method for such patients. However, a core problem to be solved in exoskeleton motion control is how to coordinate the motion of the exoskeleton with that of the body [2]. Using human–robot interaction technology to accurately obtain human motion intention is an important prerequisite for achieving such coordinated motion. Generally speaking, the signal sources for human–robot interaction include force, position and physiological signals (EEG, EMG) [3]. Among these, surface EMG signals precede the corresponding limb movements and can be acquired non-invasively, so they have become one of the most suitable control signal sources in human–robot interaction systems [4].
Surface electromyography (sEMG) is a research hotspot in myoelectric control and human–robot interaction. For example, an EMG control system uses the characteristic values of the EMG signal for action recognition and drives the lower limbs according to the patient's motion intention, encouraging the patient to actively participate in rehabilitation training and restore lower-limb muscle strength [5]. Other examples are the recognition of the body's movement intention through sEMG [6] and the control of exoskeleton robots using sEMG signals [7]. However, collecting EMG data is cumbersome: subjects must remain attentive for long periods and wear uncomfortable acquisition equipment. In the medical field, the data of stroke patients are scattered among different laboratories and difficult to share, which easily leads to a lack of EMG datasets and an imbalance in data categories; moreover, sharing medical data raises major privacy issues. All of these issues limit the accuracy of EMG classification and hinder the development of sEMG signal research. In order to study EMG-based rehabilitation systems in more depth, the amount of data must be increased.
The main contributions of this paper are centered on four aspects:
(1)
We propose an sEMG feature generation method based on an energy-based generative adversarial network (EBGAN), which explores the feasibility of a data enhancement method for improving EMG recognition technology and provides a new research idea for further research on machine learning in EMG recognition;
(2)
The concept of energy is introduced into the discriminator network to replace the traditional binary judgment; through multiple fully connected layers, the distribution of the real EMG dataset is learned and captured, and similar EMG data are generated;
(3)
Compared with other types of GANs, this method achieves a smaller maximum mean discrepancy with respect to the original data;
(4)
The experimental results of different typical classification models show that the proposed data enhancement method can effectively improve their classification accuracy, with the accuracy rate improving by 1~5%.

2. Related Work

For small and unbalanced EMG signal datasets, a neural network model is likely to overfit, its generalization performance is poor and its classification accuracy is not high, so augmenting the EMG training data is particularly important [8]. At present, there are machine-learning-based augmentation algorithms, such as the synthetic minority oversampling technique (SMOTE) [9] and adaptive synthetic sampling (ADASYN) [10], as well as augmentation methods based on rotating and translating the data. However, most of these studies focus on image generation, and not all of them are applicable to the one-dimensional time-series data of EMG signals. In 2014, Goodfellow et al. [11] proposed the generative adversarial network (GAN), which provides a new and effective framework for the study of time-series signal augmentation: its generative model can fit known datasets and produce realistic data that conform to the known data distribution, making it possible to expand small or unbalanced samples. Over the years, there have been many studies on GANs in the field of biological signal and medical data generation. Panwar et al. [12] used a WGAN (Wasserstein GAN) with a gradient penalty term to synthesize EEG data; the network solves the problems of frequency artifacts and training instability of the generated sequences, and its classification performance was tested, proving the effectiveness of the generated samples. Haradal et al. [13] successfully synthesized corresponding biological signals for electrocardiogram (ECG) and EEG datasets using a GAN model based on long short-term memory (LSTM) units. In 2022, Xiang et al. [14] proposed a generative adversarial network technique based on Gaussian coupling for the synthesis of structured electronic health records. However, because GAN research has traditionally concentrated on two-dimensional images, and because of the inherently low signal-to-noise ratio and non-stationary characteristics of EMG signals, the significance of EMG signal augmentation has not received widespread attention, and there are few related studies on the generation and augmentation of surface EMG data.
In view of the above problems, and given the importance of EMG signal features in pattern recognition, human–computer interaction and EMG control, this paper focuses on the enhancement of EMG signal features. The concept of energy [15] is introduced into the discriminator network to replace the traditional binary judgment, and a method of EMG signal feature generation based on an energy-based generative adversarial network is proposed. This paper covers the preprocessing of EMG signals, the construction and optimization of the EBGAN model, data enhancement and the verification of the classification of the enhanced data. It also explores the feasibility and effectiveness of the EMG feature generation method for improving the accuracy of EMG classification.

3. Signal Acquisition and Feature Extraction

In this section, we introduce surface EMG signal acquisition, preprocessing and feature extraction for the target actions, which together form the feature dataset for the generative network.

3.1. Experimental Data Acquisition and Preprocessing

The collection of human EMG signals mainly includes two types: invasive and non-invasive [16]. Invasive acquisition equipment must be inserted into the muscle; it can effectively detect the myoelectric signal deep in the muscle and has a good signal-to-noise ratio and resolution. However, because the needle must be inserted into the muscle, the invasive method damages the muscle and is poorly accepted by subjects. The non-invasive method attaches EMG detection electrodes to the skin without causing any damage to the body. Therefore, this paper adopted a non-invasive acquisition method to analyze surface EMG signals.
We utilized the OT BioLab surface EMG acquisition device, a wearable wireless EMG acquisition system developed by OT Bioeletronica s.n.c in Italy. The detection device includes a Due probe and electrode sheets, and it detects the strength of the surface EMG signal from the voltage difference between two electrodes. The electrode sheets use Ag/AgCl with electrode adhesive, which prevents skin sensitivity caused by long-term contact with the skin and has good conductivity and adhesion, preventing inaccurate surface EMG signals or data loss caused by electrode movement or detachment during acquisition. Table 1 shows the data parameters of the testers.
The surface EMG signal is a weak electrical signal with an amplitude between 100~1500 μV [17] and is easily affected by external noise, so appropriate measures should be taken to reduce measurement error before acquisition. The muscle area where the electrode sheet is attached should be wiped with medical alcohol to ensure good contact between the electrode and the skin, reduce the electrode–skin impedance and prevent the electrodes from detaching or shifting during the test. In the experiment, three muscles, the biceps femoris (BF), vastus lateralis (VL) and vastus medialis (VM), were selected for surface EMG signal acquisition, and five action modes were selected: flat walking, going upstairs, going downstairs, sitting down and standing up. In this paper, the data of tester No. 1 were selected for the subsequent work. The acquisition scenario is shown in Figure 1.
Since the effective signal range of sEMG is 0~500 Hz and the main energy is concentrated at 20~400 Hz, a notch filter was first used to remove the power-frequency interference, and a Butterworth filter was then used to band-pass filter the original signal at 20~400 Hz [18], eliminating high-frequency and low-frequency noise while retaining the required frequency components. Taking the flat walking action as an example, the filtered surface EMG signal is shown in Figure 2; the filtered surface EMG signals of the biceps femoris (BF), vastus lateralis and vastus medialis are shown from top to bottom.
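For illustration, the filtering pipeline described above could be implemented as follows in Python with SciPy; the sampling rate, power-line frequency, notch quality factor and filter order are assumptions, since the paper specifies only the 20~400 Hz passband and the use of a notch and Butterworth filter.

```python
# A minimal preprocessing sketch (sampling rate, power-line frequency,
# notch quality factor and filter order are assumed values).
import numpy as np
from scipy import signal

FS = 2000          # assumed sampling frequency in Hz
POWER_FREQ = 50.0  # assumed power-line frequency in Hz

def preprocess_semg(raw: np.ndarray) -> np.ndarray:
    """Remove power-frequency interference, then band-pass filter to 20~400 Hz."""
    # Notch filter at the power-line frequency.
    b_notch, a_notch = signal.iirnotch(w0=POWER_FREQ, Q=30.0, fs=FS)
    x = signal.filtfilt(b_notch, a_notch, raw)

    # 4th-order Butterworth band-pass filter, 20~400 Hz, applied forward-backward.
    sos = signal.butter(4, [20, 400], btype="bandpass", fs=FS, output="sos")
    return signal.sosfiltfilt(sos, x)
```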

3.2. Feature Extraction

At present, there are three methods for extracting features from EMG signals: the time domain method, the frequency domain method and the time–frequency domain method. Time-domain features are obtained directly from the time series of the original surface EMG signal and reflect how the signal changes over time; because no transform is required, they are easy to implement and computationally light, so they are the most commonly used [19]. Since frequency-domain features must be obtained by a Fourier transform, their real-time performance is relatively low. Time–frequency features are abstract and generally require many parameters to be set.
Frequency-domain features can only show the distribution of the signal across frequency bands and cannot reflect how the energy of the sEMG, that is, its strength, varies over time, so the eigenvalues obtained by the frequency domain method were not suitable for this study. Moreover, the extracted sEMG feature values are input as the real samples of the generative adversarial network so that the generator G can produce realistic fake samples, which makes time-related, time-domain features the more appropriate choice. Therefore, the time domain analysis method was used for feature value extraction.
Common time-domain features include the mean absolute value (MAV), root mean square (RMS) and variance (VAR). We extracted these three time-domain feature values from the surface EMG signal of each muscle as the real sample input of the model. The extraction of the three time-domain features is shown in Figure 3.
The mean absolute value (MAV) reflects the average change in the energy of the surface EMG signal during muscle movement and is expressed as
$F_{\mathrm{MAV}} = \frac{1}{N}\sum_{i=1}^{N} \left| x_i \right|$  (1)
The root mean square (RMS) reflects the change in the effective value of the EMG signal during movement and is expressed as
$F_{\mathrm{RMS}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^2}$  (2)
The variance (VAR) reflects the rate at which the energy of the EMG signal changes during movement and is expressed as
$F_{\mathrm{VAR}} = \frac{1}{N}\sum_{i=1}^{N} x_i^2$  (3)
In these formulas, $x_i$ is the time-varying surface EMG signal value and $N$ is the length of the time window used for each feature extraction. These amplitude features constitute the real samples $x$ input to the generative adversarial network.
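A minimal sketch of how these three features could be computed from a filtered channel is given below; the window length and the use of non-overlapping windows are assumptions, as the paper does not specify them here.

```python
# Sliding-window extraction of the three time-domain features defined above.
import numpy as np

def time_domain_features(x: np.ndarray, window: int = 256) -> np.ndarray:
    """Return per-window MAV, RMS and VAR of a filtered sEMG channel."""
    n_windows = len(x) // window
    feats = []
    for k in range(n_windows):
        seg = x[k * window:(k + 1) * window]
        mav = np.mean(np.abs(seg))        # mean absolute value, Equation (1)
        rms = np.sqrt(np.mean(seg ** 2))  # root mean square, Equation (2)
        var = np.mean(seg ** 2)           # variance-type feature, Equation (3)
        feats.append((mav, rms, var))
    return np.asarray(feats)
```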

4. Energy Generative Adversarial Network

This section introduces the basic principles of EBGAN as well as its model construction and optimization process in detail.

4.1. GAN Principle

Generative adversarial networks derive from game theory and consist of two components: a generator G and a discriminator D. The function of the generator is to learn from the input random noise $z$ so that it produces generated samples $G(z)$ close to the actual data distribution $p_{\mathrm{data}}$, which can fool the discriminator D. The discriminator D identifies the input samples and determines whether they are real samples $x$ or generated samples $G(z)$. These two networks continue to confront each other: the generator constantly produces samples that can confuse the discriminator, and the discriminator constantly learns, thereby improving its ability to distinguish real from fake. The two networks eventually reach a Nash equilibrium [13], meaning that the loss functions of the generator and discriminator are minimized and balanced, and the distribution of the samples generated by G becomes indistinguishable from that of the dataset. As a result, the generator G can produce generated samples $G(z)$ that are close to the actual distribution, and the discriminator D can no longer correctly distinguish the generated sample $G(z)$ from the real sample $x$. Figure 4 shows the architecture of the generative adversarial network.
The discriminator D distinguishes the generated sample $G(z)$ from the real sample $x$, which is a binary classification problem. When the input is $x$, the output probability should be as close to 1 as possible; when the input is $G(z)$, the output probability should be as close to 0 as possible. Therefore, the loss function of D is
$\max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p(z)}[\log(1 - D(G(z)))]$  (4)
where $D(x)$ denotes the probability that the discriminator judges a real sample to be real and $D(G(z))$ denotes the probability it assigns to a generated sample. The generator learns the true distribution $p_{\mathrm{data}}$ and constantly feeds high-quality generated samples into the discriminator to confuse its judgment, increasing the probability of $G(z)$ being recognized as a real sample, that is, pushing the output of D towards 1. So, the loss function of the generator is expressed as follows:
$\max_G V(D, G) = \mathbb{E}_{z \sim p(z)}[\log D(G(z))]$  (5)
The judgment probability of D is between 0 and 1; so, Equation (5) is equivalent to
$\min_G V(D, G) = \mathbb{E}_{z \sim p(z)}[\log(1 - D(G(z)))]$  (6)
Thus, Equation (7) represents the adversarial process between the discriminator D and the generator G:
$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p(z)}[\log(1 - D(G(z)))]$  (7)
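As a purely illustrative sketch (not code from the paper), the objectives in Equations (4)–(7) are commonly implemented with binary cross-entropy, assuming a discriminator that outputs a probability in (0, 1):

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()

def gan_d_loss(d_real, d_fake):
    # Equation (4): maximise E[log D(x)] + E[log(1 - D(G(z)))],
    # implemented as minimising the equivalent binary cross-entropy.
    return bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)

def gan_g_loss(d_fake):
    # Equation (5): maximise E[log D(G(z))], i.e. label generated samples as "real".
    return bce(tf.ones_like(d_fake), d_fake)
```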

4.2. EBGAN Model Construction and Optimization

A generative adversarial network is a probability-based model that is unstable during training and prone to vanishing and exploding gradients, which makes GANs difficult to train. The EBGAN regards the discriminator as an energy function and introduces the concept of energy into the discriminator network to replace the traditional binary judgment. It does not need to consider factors such as the generated distribution, the actual distribution or the distance between them. If the generated data are close to the real data distribution, the discriminator assigns them low energy; if the generated data are far from the real data, it assigns them high, unstable energy. Therefore, EBGAN places few restrictions on the model structure and loss function and has greater flexibility. Based on the energy model, the loss function of the EBGAN generator is expressed as follows:
$L_G(z) = D(G(z))$  (8)
The loss function of the discriminator is
$L_D(x, z) = D(x) + \left[ m - D(G(z)) \right]^{+}$  (9)
In these formulas, $m$ is a positive margin hyperparameter, $[\cdot]^{+} = \max(0, \cdot)$, $L_G$ is the loss function of the generator and $L_D$ is the loss function of the discriminator, which also serves as the energy function of the discriminator. The architecture of EBGAN is shown in Figure 5.
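A minimal sketch of these two losses is shown below, assuming, as described later in this section, that the discriminator is an autoencoder whose energy D(·) is the L2 reconstruction error; the margin value m = 10 is an assumed value, not one stated in the paper.

```python
import tensorflow as tf

def energy(autoencoder, x):
    # "Energy" assigned by the discriminator: L2 reconstruction error ||AE(x) - x||.
    return tf.norm(autoencoder(x) - x, ord="euclidean", axis=1)

def ebgan_d_loss(autoencoder, x_real, g_z, m=10.0):  # margin m is an assumed value
    # Equation (9): L_D = D(x) + [m - D(G(z))]^+
    g_z = tf.stop_gradient(g_z)  # do not update the generator on the discriminator step
    hinge = tf.maximum(m - energy(autoencoder, g_z), 0.0)
    return tf.reduce_mean(energy(autoencoder, x_real) + hinge)

def ebgan_g_loss(autoencoder, g_z):
    # Equation (8): L_G = D(G(z))
    return tf.reduce_mean(energy(autoencoder, g_z))
```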
In this paper, the EBGAN model was constructed from one-dimensional fully connected (Dense) layers; the architectures of the generator network G and the discriminator network D are shown in Table 2. The input to G is random noise, from which the distribution of the real EMG dataset is learned and captured through multiple fully connected layers, so that similar EMG data are generated. To prevent vanishing or exploding gradients during training, batch normalization was applied within G.
EBGAN's discriminator D is built on an autoencoder, which is first pre-trained with real data so that the deviation between the reconstructed data and the original data is reduced; after pre-training, the discriminator has recognition ability. The L2 norm of the error between the original input data and the reconstructed data is computed and used as the basis for judgment; this quantity is defined as the energy. In the initial stage, the reconstruction error of the real data is very small, that is, the energy is very low, while the reconstruction error of the generated data is large and the energy is high and unstable. Therefore, during training, the reconstruction error of the generated data must be continuously reduced; the smaller the energy difference, the closer the generated data are to the actual data distribution, which drives the iterative optimization. The discriminator is mainly composed of a three-layer encoder and a three-layer decoder, where the encoder module learns the implicit characteristics of the input data and the decoder module reconstructs the learned features back into the original input data. Compared with other unsupervised learning approaches, treating the discriminator as an energy function helps the model converge and effectively avoids mode collapse.
During model training, the Adam algorithm was selected as the optimizer for the generator network G and the discriminator network D [20]; the first-moment estimation parameter β1 was set to 0.9, the initial learning rate to 0.001, the number of epochs to 200 and the batch size to 64, and the discriminator and generator were trained alternately at a ratio of 3:1, that is, D was trained three times for each training of G. Figure 6 summarizes the EBGAN data generation and classification performance test process proposed in this paper, including data preprocessing, feature extraction, EBGAN training, data generation and classification verification.
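The following sketch, offered only as an illustration, builds the generator and autoencoder discriminator along the lines of Table 2 in Keras and alternates their updates using the stated Adam settings and 3:1 training ratio; the noise dimension, the choice of TensorFlow/Keras and the omission of the autoencoder pre-training step are assumptions, and ebgan_d_loss/ebgan_g_loss refer to the loss sketch above.

```python
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 100    # noise dimension (an assumption; not stated in the paper)
FEATURE_DIM = 1024  # generator output / discriminator input size, per Table 2

def build_generator():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(LATENT_DIM,)),
        layers.Dense(128), layers.LeakyReLU(), layers.BatchNormalization(),
        layers.Dense(256), layers.LeakyReLU(), layers.BatchNormalization(),
        layers.Dense(512), layers.LeakyReLU(), layers.BatchNormalization(),
        layers.Dense(FEATURE_DIM, activation="tanh"),
    ])

def build_discriminator():
    # Autoencoder discriminator: three-layer encoder followed by a three-layer decoder.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(FEATURE_DIM,)),
        layers.Dense(1024), layers.LeakyReLU(), layers.BatchNormalization(),
        layers.Dense(512), layers.LeakyReLU(), layers.BatchNormalization(),
        layers.Dense(256), layers.LeakyReLU(),
        layers.Dense(256), layers.LeakyReLU(), layers.BatchNormalization(),
        layers.Dense(512), layers.LeakyReLU(), layers.BatchNormalization(),
        layers.Dense(1024), layers.LeakyReLU(),
    ])

g_opt = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9)
d_opt = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9)

def train_step(generator, discriminator, x_real, batch_size=64, d_steps=3):
    # Alternate updates at a 3:1 discriminator-to-generator ratio.
    for _ in range(d_steps):
        z = tf.random.normal((batch_size, LATENT_DIM))
        with tf.GradientTape() as tape:
            loss_d = ebgan_d_loss(discriminator, x_real, generator(z))
        grads = tape.gradient(loss_d, discriminator.trainable_variables)
        d_opt.apply_gradients(zip(grads, discriminator.trainable_variables))
    z = tf.random.normal((batch_size, LATENT_DIM))
    with tf.GradientTape() as tape:
        loss_g = ebgan_g_loss(discriminator, generator(z))
    grads = tape.gradient(loss_g, generator.trainable_variables)
    g_opt.apply_gradients(zip(grads, generator.trainable_variables))
    return loss_d, loss_g
```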

5. Experimental Results

5.1. Authenticity of the Generated Data

We take the MAV feature of the EMG signal as an example to discuss the authenticity of the generated data. Figure 7 compares the sEMG feature data generated by EBGAN with the original sEMG feature data. It can be seen intuitively that the sEMG feature data generated by EBGAN are sufficiently realistic.
Figure 8 shows the values of the loss functions of the discriminator and generator after the proposed EBGAN was trained on the training set. In this experiment, the smaller the discriminator loss, the better the discriminator performs in identifying true and false samples. The smaller the generator loss, the better the quality of generated samples. It can be seen from the figure that, after 150 iterations of training, the losses of the generator and discriminator remain stable in a small range, which indicates that the network has converged at this time and the generator can produce high−quality samples.

5.2. Maximum Mean Discrepancy

The maximum mean discrepancy (MMD) [21,22] was used to measure the similarity between the distribution of the data generated by EBGAN and that of the real data. The MMD indexes of EBGAN were further compared with those of the traditional GAN, WGAN [23], DCGAN [24] and WGAN-GP [25]. The lower the value of this indicator, the closer the generated data are to the real data and the better the quality of the generated data. Taking the EMG signals of the three muscles during the flat walking motion as an example, the MMD values between the generated three time-domain features and the original data are shown in Table 3.
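A sketch of an MMD estimate between real and generated feature samples is given below; the Gaussian (RBF) kernel and its bandwidth are assumptions, since the paper does not state which kernel was used for Table 3.

```python
# Biased empirical MMD^2 estimate with a Gaussian kernel (kernel choice assumed).
import numpy as np

def gaussian_kernel(a: np.ndarray, b: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    # Pairwise squared distances, then the RBF kernel matrix.
    d2 = np.sum(a ** 2, axis=1)[:, None] + np.sum(b ** 2, axis=1)[None, :] - 2 * a @ b.T
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd(real: np.ndarray, generated: np.ndarray, sigma: float = 1.0) -> float:
    """Lower values mean the generated feature distribution is closer to the real one."""
    k_xx = gaussian_kernel(real, real, sigma).mean()
    k_yy = gaussian_kernel(generated, generated, sigma).mean()
    k_xy = gaussian_kernel(real, generated, sigma).mean()
    return k_xx + k_yy - 2 * k_xy
```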
It can be seen from the table that the MMD values of EBGAN are generally smaller than those of the traditional GAN, WGAN, DCGAN and WGAN-GP, indicating that the generated data are more realistic and that the EBGAN model outperforms these models.

5.3. Classification Performance of the Generated Data

This section presents the EMG signal classification task, taking the five movement modes of walking on flat ground, going upstairs, going downstairs, sitting down and standing up as the classification objects. The changes in the classification ability of various classification models before and after data generation are compared to verify the effectiveness of the data generation method, the quality of the generated samples and their universality across different models.

5.3.1. Quality and Universality of the Generated Sample Data

Firstly, according to the parameter settings of EBGAN in the previous section, the model was applied to the EMG feature dataset and adversarially trained, so that the G network gained the ability to generate EMG features. Secondly, the G network was used to generate a labelled synthetic sample set comparable in scale to the EMG feature dataset; these generated samples, together with the original EMG feature data samples, constitute the EBGAN synthetic dataset. Further, this paper selected five typical classification models, namely linear discriminant analysis (LDA), Gaussian Naive Bayes (GNB), support vector machine (SVM), k-nearest neighbor (KNN) and multilayer perceptron (MLP), trained on the original dataset and on the synthetic dataset [26]. The trained models were then applied to the test set for the classification test, with the training samples accounting for 80% of the data and the test samples for 20% (a minimal sketch of this procedure follows the model descriptions below). The classification accuracy is shown in Table 4.
The basic information of each classification model is as follows:
LDA projects multi-dimensional sample data into a low-dimensional space so as to minimize the variance within each category after projection while maximizing the variance between different categories. It has the advantages of not needing learning parameters to be tuned and of high efficiency.
GNB, which is applicable to continuous variables, assumes that each feature x_i follows a normal distribution under each category y. The probability density function of the normal distribution is used to calculate the probabilities inside the algorithm.
SVM has many unique advantages in solving small-sample, nonlinear and high-dimensional pattern recognition problems and can be extended to other machine learning problems, such as function fitting. In SVM, a hyperplane is selected that optimally separates the points in the input variable space according to their classes.
KNN is based on the idea that, if most of the k nearest samples in the feature space belong to a certain category, the sample also belongs to this category and shares the characteristics of the samples in that category. In this method, only the categories of the nearest sample or samples are used to make the classification decision.
MLP is a feedforward artificial neural network. In addition to the input and output layers, it can have multiple hidden layers in the middle, mapping a group of input vectors to a group of output vectors. It can solve nonlinear problems and learn in real time.
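A minimal sketch of this evaluation procedure with scikit-learn is shown below; hyperparameters are library defaults and the value of k for KNN is an assumption, since the paper does not state them.

```python
# Train the five typical classifiers and report test accuracy on an 80/20 split.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def evaluate_classifiers(X, y):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    models = {
        "LDA": LinearDiscriminantAnalysis(),
        "GNB": GaussianNB(),
        "SVM": SVC(),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "MLP": MLPClassifier(max_iter=1000),
    }
    return {name: accuracy_score(y_test, m.fit(X_train, y_train).predict(X_test))
            for name, m in models.items()}
```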
As shown in Table 4, each classification model achieved better classification results when trained on the EBGAN synthetic dataset than on the original training set, with the accuracy rate improving by 1~5%. The KNN algorithm had the highest accuracy, reaching 93.47%. The improvement for the MLP model was the most obvious; we believe this is because the MLP model adopts a fully connected structure with a large number of parameters and therefore easily overfits the small original training set. The experimental results show that the data generation method helps improve the training effect of typical machine learning models, which verifies the effectiveness of the data generation method and the generalization ability of the synthetic samples. At the same time, it also shows that the synthetic and real samples have similar characteristics and that the data quality is high.

5.3.2. Influence of Sample Generation Size on Classification Accuracy

For machine learning models, in addition to the quality of the data samples, the size of the training set also significantly affects the training effect. This section explores how the classification accuracy of each model changes under different sample generation scales. Specifically, the trained EBGAN model was obtained by following the steps in the previous section and was used to generate EMG feature samples at 0.5 to 3.0 times the size of the original dataset. Since the newly generated data in each mixed set was 0.5, 1.0, 2.0 or 3.0 times the size of the original dataset, the synthetic data ratios in the different mixed sets were 33%, 50%, 66% and 75%, respectively. These generated samples were then mixed with the original dataset to form synthetic datasets, denoted "synthetic dataset * 0.5" to "synthetic dataset * 3.0", where the asterisk * indicates the generation multiple relative to the original dataset. The five typical classification models mentioned in the previous section were then trained on the original dataset and on each synthetic dataset, and classification tests were conducted on the test set. The classification accuracy of each model is shown in Table 5. According to the results, most classification models achieved their best training results on the "synthetic dataset * 1.0" to "synthetic dataset * 2.0" sets, where the test classification accuracy was highest. This shows that the relationship between the model training effect and the data generation scale is not linear: introducing too many synthetic samples shifts the majority of the augmented dataset from real data to synthetic data, and since the characteristics of synthetic samples cannot be infinitely close to those of real samples, the classification accuracy declines.
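As an illustration of how the mixed sets could be assembled, the sketch below combines the real feature set with generated samples at a given multiple; generate_features is a hypothetical wrapper around the trained EBGAN generator that returns labelled synthetic samples.

```python
import numpy as np

def build_mixed_set(X_real, y_real, generate_features, scale: float):
    """Mix real features with generated ones at `scale` times the real size."""
    n_syn = int(scale * len(X_real))
    X_syn, y_syn = generate_features(n_syn)   # hypothetical labelled synthetic samples
    X_mix = np.concatenate([X_real, X_syn])
    y_mix = np.concatenate([y_real, y_syn])
    return X_mix, y_mix

# Synthetic fraction of the mixed set: scale / (1 + scale),
# e.g. 0.5 -> 33%, 1.0 -> 50%, 2.0 -> 66%, 3.0 -> 75%.
```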

5.3.3. Applicability and Superiority of the EBGAN Model Structure for EMG Feature Enhancement

In order to fully verify the applicability and superiority of the EBGAN model structure for EMG features, this section applies the synthetic datasets produced by the traditional GAN, WGAN, DCGAN, WGAN-GP and EBGAN to the five classification models described above. We compared the change in the classification ability of these typical classification models on the different synthetic datasets to quantify the effectiveness of the proposed EBGAN-based EMG feature generation method. Specifically, the GAN, WGAN, DCGAN, WGAN-GP and EBGAN models were each trained on the dataset [27] and used to double the size of the original dataset. The typical classification models were then trained on the dataset constructed by each generation model, and the classification results were compared on the test set. As shown in Table 6, the classification models trained on the EBGAN-augmented set achieved the best classification results.

6. Conclusions and Future Work

Aiming to solve the problem of low EMG classification accuracy caused by the small amount of data, the complicated collection process and the strong environmental influences on EMG signals, this paper proposed, for the first time, an EMG signal feature generation method based on an energy-based generative adversarial network, exploring the feasibility of a data enhancement method for improving EMG recognition technology and providing a new research path for further research on machine learning in EMG recognition. Research on EMG signal acquisition, feature extraction, EBGAN model construction and optimization, and the evaluation of the enhancement effect was carried out. Finally, EBGAN-based EMG data generation was objectively evaluated using quantifiable indicators and typical classification methods. The results show that the proposed method can effectively generate sufficiently realistic EMG features and that, compared with the traditional GAN, WGAN, DCGAN and WGAN-GP models, the data generated by EBGAN are more realistic, which helps to improve the classification performance of typical machine learning models and verifies the effectiveness of the proposed data generation method and the generalization ability of the synthetic samples. At the same time, such synthetic datasets do not pose major privacy or data-leakage risks for the original sensitive training data and have promising application prospects. The results not only promote the beneficial development of machine learning in EMG recognition and reduce the time and material cost of manually collecting EMG data, but also provide an important research basis for the in-depth exploration of EMG signaling mechanisms.
In this paper, we took the time-domain feature generation of EMG signals as the starting point and investigated the effectiveness of data enhancement technology in improving the accuracy of model recognition. Other types of EMG signal characteristics, such as frequency-domain, time–frequency and other nonlinear characteristics, have not yet been considered. In addition, due to the small number of subjects, the sample diversity may be insufficient, and the generalization performance of the generator across different subjects needs further study.

Author Contributions

X.Z. proposed the algorithm idea and experimental scheme, and M.M. was responsible for completing the experiment and writing the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant: 51505048), Natural Science Foundation Project of Chongqing, Chongqing Science and Technology Commission (Grant: cstc2019jcyj−msxmX0292), Science and Technology Project of the Chongqing Municipal Education Commission (Grant: KJZD−K201900702) and Chongqing Engineering Laboratory for Transportation Engineering Application Robot Open Fund (Grant: CEL−TEAR−KFKT−202101).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the ethics committee of Shinshu University Japan (protocol code cjur201905, approval date 10 May 2019).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Members, M.G.; Benjamin, E.; Go, A.; Arnett, D.; Blaha, M.; Cushman, M.; Das, S.; de Ferranti, S.; Després, J.-P. Executive summary: Heart disease and stroke statistics—2016 update: A report from the American Heart Association. Circulation 2016, 133, 447–454. [Google Scholar]
  2. Zhang, X.; Hu, J.; Luo, T.; Chen, R.; Hashimoto, M. Hybrid control method and stability of wearable walking assistant robot. Robot 2017, 39, 489–497. (In Chinese) [Google Scholar]
  3. Giovacchini, F.; Vannetti, F.; Fantozzi, M.; Cempini, M.; Cortese, M.; Parri, A.; Yan, T.; Lefeber, D.; Vitiello, N. A light-weight active orthosis for hip movement assistance. Robot. Auton. Syst. 2015, 73, 123–134. [Google Scholar] [CrossRef]
  4. Junbao, G.; Ning, W.; Lei, Z. Surface Electromyography (sEMG)-based Intention Recognition and Control Design for Human–Robot Interaction in Uncertain Environment. Sens. Mater. 2021, 33, 3153–3168. [Google Scholar]
  5. Kopke, J.V.; Ellis, M.D.; Hargrove, L.J. Determining user intent of partly dynamic shoulder tasks in individuals with chronic stroke using pattern recognition. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 28, 350–358. [Google Scholar] [CrossRef] [PubMed]
  6. Hui, C.J.; Wei, L.; Wei, L. Application of multi graph embedded representation in human motion pattern recognition. Comput. Sci. Explor. 2017, 11, 941–949. (In Chinese) [Google Scholar]
  7. Moly, A.; Costecalde, T.; Martel, F.; Martin, M.; Larzabal, C.; Karakas, S.; Verney, A.; Charvet, G.; Chabardes, S.; Benabid, A.L.; et al. An adaptive closed-loop ECoG decoder for long-term and stable bimanual control of an exoskeleton by a tetraplegic. J. Neural Eng. 2022, 19, 026021. [Google Scholar] [CrossRef]
  8. Baasch, G.; Rousseau, G.; Evins, R. A Conditional Generative adversarial Network for energy use in multiple buildings using scarce data. Energy AI 2021, 5, 100087. [Google Scholar] [CrossRef]
  9. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. AI Access Found. 2002, 16, 321–357. [Google Scholar] [CrossRef]
  10. He, H.; Bai, Y.; Garcia, E.A.; Li, S. ADASYN: Adaptive synthetic sampling approach for imbalanced learning. In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–8 June 2008; IEEE: New York, NY, USA, 2008. [Google Scholar]
  11. Goodfellow, I.J.; Pouget, A.J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. Adv. Neural Inf. Process. Syst. 2014, 3, 2672–2680. [Google Scholar] [CrossRef]
  12. Panwar, S.; Rad, P.; Jung, T.P.; Huang, Y. Modeling EEG data distribution with a wasserstein generative adversarial network to predict RSVP events. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 1720–1730. [Google Scholar] [CrossRef]
  13. Haradal, S.; Hayashi, H.; Uchida, S. Biosignal data augmentation based on generative adversarial networks. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; IEEE: New York, NY, USA, 2018; pp. 368–371. [Google Scholar]
  14. Xiang, X.; Wang, J.; Wang, Z.; Duan, S.; Pan, H.; Zhuang, R.; Han, P.; Liu, C. Medical simulation data generation method based on generation countermeasure network technology. J. Commun. 2022, 43, 211–224. (In Chinese) [Google Scholar]
  15. Zhao, J.; Mathieu, M.; Lecun, Y. Energy-based generative adversarial network. arXiv 2016, arXiv:1609.03126. [Google Scholar]
  16. Nasri, N.; Orts, E.S.; Gomez, D.F.; Cazorla, M. Inferring static hand poses from a low-cost non-intrusive sEMG sensor. Sensors 2019, 19, 371. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Jia, C.S.; Zeng, L.Z. Research on hand gesture EMG recognition based on long-term and short-term memory and convolutional neural network. J. Instrum. 2021, 42, 162–170. (In Chinese) [Google Scholar]
  18. Ping, X.; Li, W.X.; Hao, D.Y.; Chen, X.-L. Feature extraction method of surface electromyography based on self sorting entropy. Pattern Recognit. Artif. Intell. 2014, 27, 496–501. (In Chinese) [Google Scholar]
  19. Shanmuganathan, V.; Yesudhas, H.R.; Khan, M.S.; Khari, M.; Gandomi, A.H. R-CNN and wavelet feature extraction for hand gesture recognition with EMG signals. Neural Comput. Appl. 2020, 32, 16723–16736. [Google Scholar] [CrossRef]
  20. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  21. Xu, Q.; Huang, G.; Yuan, Y.; Guo, C.; Sun, Y.; Wu, F.; Weinberger, K. An empirical study on evaluation metrics of generative adversarial networks. arXiv 2018, arXiv:1806.07755. [Google Scholar]
  22. Jin, Q.; Lin, R.; Yang, F. E-WACGAN: Enhanced generative model of signaling data based on WGAN-GP and ACGAN. IEEE Syst. J. 2019, 14, 3289–3300. [Google Scholar] [CrossRef]
  23. Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein GAN. arXiv 2017, arXiv:1701.07875. [Google Scholar]
  24. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
  25. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A.C. Improved training of Wasserstein GANs. arXiv 2017, arXiv:1704.00028. [Google Scholar]
  26. Hui, S.K.; Ying, Z.; Wei, Z.J.; Xiaojie, Y. Structured data table generation model based on generative countermeasure network. Comput. Res. Dev. 2019, 56, 1832–1842. (In Chinese) [Google Scholar]
  27. An, Y.; Wei, W.W. Liver and liver tumor segmentation based on conditional energy antagonism network. Comput. Eng. Appl. 2021, 57, 179–184. (In Chinese) [Google Scholar]
Figure 1. Acquisition scenario.
Figure 2. Filtered surface EMG signal for each muscle.
Figure 3. Three time-domain feature diagrams.
Figure 4. Generative adversarial network architecture.
Figure 5. Architecture of the energy-based generative adversarial network.
Figure 6. Flowchart of the data generation and classification performance test.
Figure 7. Comparison between real data and generated data.
Figure 8. EBGAN loss value.
Table 1. Basic parameters of the testers.

Number   Gender   Age    Height/cm   Weight/kg
1        Male     25     172         60
2        Male     23     178         67
3        Female   24     168         52
4        Male     25     170         55
5        Male     26     182         76
Average           24.6   174.0       62.0
Table 2. One-dimensional EBGAN model structure.

Generator
Layer type            Output data dimension   Activation function
Dense1                (None, 128)             LeakyReLU
BatchNormalization1   (None, 128)             -
Dense2                (None, 256)             LeakyReLU
BatchNormalization2   (None, 256)             -
Dense3                (None, 512)             LeakyReLU
BatchNormalization3   (None, 512)             -
Dense4                (None, 1024)            tanh

Discriminator
Layer type            Output data dimension   Activation function
Encon1                (None, 1024)            LeakyReLU
BatchNormalization1   (None, 1024)            -
Encon2                (None, 512)             LeakyReLU
BatchNormalization2   (None, 512)             -
Encon3                (None, 256)             LeakyReLU
Decon1                (None, 256)             LeakyReLU
BatchNormalization3   (None, 256)             -
Decon2                (None, 512)             LeakyReLU
BatchNormalization4   (None, 512)             -
Decon3                (None, 1024)            LeakyReLU
Table 3. MMD values of various data generation methods.

          BF                        VM                        VL
Method    MAV    RMS    VAR         MAV    RMS    VAR         MAV    RMS    VAR
GAN       0.183  0.209  0.203       0.156  0.217  0.243       0.358  0.294  0.263
WGAN      0.182  0.205  0.191       0.154  0.216  0.204       0.325  0.246  0.227
DCGAN     0.179  0.202  0.190       0.144  0.198  0.205       0.314  0.196  0.204
WGAN-GP   0.180  0.205  0.192       0.150  0.201  0.202       0.319  0.225  0.216
EBGAN     0.177  0.200  0.189       0.132  0.188  0.197       0.276  0.175  0.185
Table 4. Classification accuracy of each classification model on the test set.

Method              LDA      GNB      SVM      KNN      MLP
Original dataset    59.82%   51.03%   85.54%   89.61%   78.98%
Synthetic dataset   61.14%   54.36%   89.70%   93.47%   83.66%
Table 5. Classification accuracy of each classification model under different generation scales.

Method                    LDA      GNB      SVM      KNN      MLP
Original dataset          59.83%   51.07%   85.54%   89.60%   78.94%
Synthetic dataset * 0.5   60.37%   52.35%   87.06%   90.71%   79.90%
Synthetic dataset * 1.0   61.15%   54.33%   89.73%   93.43%   83.66%
Synthetic dataset * 2.0   63.26%   52.98%   88.45%   93.10%   84.23%
Synthetic dataset * 3.0   63.02%   51.44%   88.57%   91.74%   84.01%
Table 6. Classification accuracy of each classification model under different generation models.

Method    LDA      GNB      SVM      KNN      MLP
GAN       59.41%   52.11%   88.25%   90.94%   80.89%
WGAN      60.20%   52.75%   88.63%   92.02%   81.35%
DCGAN     60.53%   53.14%   89.08%   92.11%   82.21%
WGAN-GP   60.68%   53.03%   88.66%   92.36%   81.90%
EBGAN     61.15%   54.35%   89.78%   93.45%   83.61%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
