Article

Personal Identification Using Long Short-Term Memory with Efficient Features of Electromyogram Biomedical Signals

Interdisciplinary Program in IT-Bio Convergence System, Department of Electronics Engineering, Chosun University, Gwangju 61452, Republic of Korea
*
Author to whom correspondence should be addressed.
Electronics 2023, 12(20), 4192; https://doi.org/10.3390/electronics12204192
Submission received: 9 August 2023 / Revised: 27 September 2023 / Accepted: 30 September 2023 / Published: 10 October 2023
(This article belongs to the Special Issue Advanced Technologies of Artificial Intelligence in Signal Processing)

Abstract
This study focuses on personal identification using a bidirectional long short-term memory (LSTM) network with efficient features extracted from electromyogram (EMG) biomedical signals. Personal identification is performed by comparing and analyzing features that can be identified stably and are not significantly affected by noise. For this purpose, 13 efficient features, such as enhanced wavelength, zero crossing, and mean absolute value, were obtained from EMG signals. These features were extracted from segmented signals of a fixed length. The bidirectional LSTM was then trained on the selected features as sequential data. The features were ranked based on their classification performance; the most effective features were selected, and the selected features were concatenated to achieve an improved classification rate. Two public EMG datasets were used to evaluate the proposed model. The first was acquired with eight-channel Myo bands and comprises EMG signals for 10 different motions performed by 50 individuals; the training and test sets contained 30,000 and 20,000 segments, respectively. The second consists of ten arm motions acquired from 40 individuals. A performance comparison on these datasets revealed that the proposed method exhibits good performance and efficiency compared with other well-known methods.

1. Introduction

Electromyogram (EMG) signals, which have mainly been used in hospitals to assess the health status of patients, have become widely available with the spread of personal smart devices, owing to the recent miniaturization of semiconductors. Their range of use is expanding because EMG signals can be measured easily. EMG signals capture and amplify the minute electrical activity that occurs when a person’s muscles move, and, like the electrocardiogram, iris, and face, they can serve as a biometric trait. EMG signals have mainly been used for health examinations; currently, with the development of pattern recognition technology, complex and irregular EMG signals can be analyzed, and studies on the control of assistive robots for amputees are being conducted. In addition, as with other biometric traits, research is being conducted on personal identification for authorizing access to smart systems. Because EMG signals can be measured only when a person contracts his/her muscles, they are more reliable for user identification than passively measurable traits, such as the face [1] or fingerprints [2,3], in authentication problems. Faces or fingerprints can be captured by others using noncontact capture devices, whereas EMG signals can be generated only while individuals are alive and are guided by their intentions. Moreover, unlike the face, EMG signals are not visually apparent from the outside, and replicating them with a camera or a similar device is therefore difficult [4,5,6,7,8,9,10]. Consequently, they provide a useful authentication modality.
Personal identification verifies the identity of a person from their personal information. With the development of information technology, personal identification has become automated; based on it, banking services can be used quickly and easily without visiting a bank. Therefore, the security, accuracy, and stability of personal identification methods have become important. Using a biosignal for personal identification reduces additional laborious work because the input signal for authentication comes from one’s own body. Wearable devices have recently undergone significant advancements in sensor size, functionality, and performance. These developments have enabled the effective collection of large amounts of environmental data, including biosignals. EMG signals hold a strong position in terms of security compared with other biosignals, particularly with regard to their physical characteristics: they can be generated only while a person is alive and intentionally controls their muscles, whereas the face, iris, and fingerprints can be captured by cameras without physical contact, which can compromise personal information. However, EMG signals vary considerably depending on the degree of force applied by the user, the location of the sensor, and the motion performed during measurement. To perform personal identification stably using EMG signals, the training data must be acquired with muscle force as similar as possible to that used during authentication. It follows that EMG signals are susceptible to noise contamination; therefore, a personal identification model should be designed to be robust against noise [11,12,13,14,15,16].
Several studies have been conducted on personal identification methods that use EMG signals. Kim et al. [17] extracted features of the modified mean absolute value and mean absolute value slope in the time domain and filter-based features in the frequency domain and performed personal identification with EMG signals by training a K-nearest neighbor model with these features. In [18], individual identification using the optimum-patch forest method was performed after extracting features from electrocardiograms (ECGs) and EMGs, respectively. Kim et al. [19] attached an EMG sensor to a subject’s legs and achieved personal identification using data acquired while walking. Various features were learned using a multilayer perceptron. These features included integrated EMG, Willison amplitude, mean absolute value, variance, zero crossing, modified mean absolute value, slope sign change, mean absolute value slope, root mean square, waveform length, and simple square integral. Subsequently, a comparative analysis of several features was conducted. In [20], the authors acquired EMG signals during several activities. After extracting the frequency-domain features [21] from EMG signals, the individuals were identified using the Mahalanobis distance. Yamaba et al. [22,23] performed personal identification by continuously analyzing EMG signals from personal devices. A Fourier transform was performed on the input EMG signals, and a support vector machine (SVM) was trained using the minimum and maximum times and corresponding amplitude values as features. Raurale et al. [24] divided EMG signals into several segments, extracted band power (BP) and root absolute sum square (RSS) features, and performed personal identification on the extracted features using a decision tree, multilayer perceptron, SVM, and radial basis function neural networks.
Furthermore, personal identification was performed by combining the majority voting decisions and multilayer perceptrons and by combining decision-making and radial basis function neural networks for RSS features with a lowered dimension via kernel LDA. In [25], the researchers analyzed a personal identification model combining a discrete wavelet transform and an extra tree classifier, and another personal identification model combining a continuous wavelet transform and a convolutional neural network, and finally suggested a personal identification model using a continuous wavelet transform, Siamese networks, and convolutional neural networks.
Several studies have been conducted on motion classification methods using EMG signals. Yoo et al. [26] conducted research on a system that allows humans to interact with unmanned aerial vehicles (UAVs) using hand motions in real time. The study found that image-based gesture systems can be challenging to use because they struggle to track multidimensional hand motions, making the algorithms for detection and tracking complex. To overcome this, a hybrid hand motion system that combines an inertial measurement unit (IMU)-based motion capture and a vision-based gesture system has been proposed. This system uses information from the thumb to determine the movement direction command, detects the hand, and recognizes its shape to determine basic commands. In [27], to address privacy concerns associated with vision-based systems, the authors conducted research on a hand gesture recognition system that utilizes a 60 GHz frequency-modulated continuous-wave (FMCW) radar. The system receives radar signals from hand gestures and converts them into range, velocity, and angular data. These data are then used to train and classify the gestures using long short-term memory (LSTM). Shioji et al. [28] conducted research on hand motion recognition using EMG signals obtained from the wrist. They used a bandpass filter to remove noise and implemented a convolutional neural network (CNN) to classify different hand motions. Similarly, Li et al. [29] studied the use of EMG signals for motion classification, with a focus on increasing the robustness of the system against variations in force. The proposed solution uses a common spatial pattern (CSP), a feature extraction technique that aims to maximize the variance between different classes of signals.
Because EMG signals are sensitive to noise, noise-robust features must be identified, and the neural network should be optimized according to these features to perform noise-robust personal identification stably. Hence, we comparatively analyzed EMG-based biometrics using LSTM with various efficient feature sets. First, the EMG signals were divided into segments of a fixed size, and features were extracted for each segment. The following 13 features of the EMG signals were considered: mean absolute value (MAV), wave length (WL), zero crossing (ZC), slope sign change (SSC), average amplitude change (AAC), log detector (LD), root mean square (RMS), difference absolute standard deviation value (DASDV), variance (VAR), modified mean absolute value (MMAV), modified mean absolute value 2 (MMAV2), enhanced mean absolute value (EMAV), and enhanced wave length (EWL) [30]. The extracted features were concatenated in sequence and used as inputs to the LSTM for training. A performance comparison of the LSTMs for various feature sets was conducted through multiple experiments to optimize the LSTM along with the features. A public EMG dataset was used for the experiments. The experimental results revealed that the proposed method performs well compared to previous methods.
Our motivations can be summarized as follows:
  • EMG signals are sensitive to noise. Therefore, robust features are important for EMG-based classification problems.
  • Conventional EMG features for motion classification were applied to EMG-based personal classification to identify robust features.
  • We compared the performance of EMG-based personal identification using LSTM with selected feature sets among 13 features and found the best feature list in the design of the LSTM.
Our contributions can be summarized as follows:
  • We demonstrate that conventional EMG features for motion classification can effectively achieve competitive performance in EMG-based personal classification.
  • We demonstrate good performance in less time and fewer trials using our feature selection approach for person identification based on EMG signals.
The remainder of this paper is organized as follows. Section 2 describes the 13 EMG features, their definitions, and related equations and provides details on the structure of the LSTM. Section 3 illustrates personal identification using EMG signals and explains the details of the proposed personal identification method, including the diagrams. Section 4 describes the EMG database used in the experiments and presents the experimental results from various aspects, including the performance based on features and ANOVA analysis. Finally, Section 5 concludes the paper.

2. Methods

2.1. Methods for Feature Extraction of Electromyogram Signals for Biometrics

Various conventional feature extraction methods exist for the analysis of EMG signals. Thirteen of these are described here, and detailed explanations are available elsewhere [30].
MAV is a popular feature in EMG signal analysis. It is a measure of the average magnitude of a signal over a specified period. It is defined as the average of the summation of the absolute values of the signal and is often used as a simple and robust feature for muscle activity analysis. In many cases, it correlates well with muscle force. MAV is defined in Equation (1), where x represents the wavelet coefficient and L represents the length of the coefficient [30].
$$ f_{\mathrm{MAV}} = \frac{1}{L}\sum_{i=1}^{L} \lvert x_i \rvert. \tag{1} $$
WL is a feature commonly used in the analysis of EMG signals. It measures the length of the muscle action potential waveform and is calculated as the cumulative sum of the absolute differences between consecutive samples. WL is defined as in Equation (2) [30].
$$ f_{\mathrm{WL}} = \sum_{i=2}^{L} \lvert x_i - x_{i-1} \rvert, \tag{2} $$
where x represents the wavelet coefficient and L represents the length of the coefficient.
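As a concrete illustration, the two features above can be sketched in a few lines of NumPy. This is a minimal sketch; the function and variable names are ours, not from the paper.

```python
import numpy as np

def mav(x):
    """Mean absolute value: average magnitude of the segment (Eq. 1)."""
    return np.mean(np.abs(x))

def wl(x):
    """Waveform length: cumulative absolute first difference (Eq. 2)."""
    return np.sum(np.abs(np.diff(x)))

# Toy segment standing in for one EMG channel window.
x = np.array([0.0, 1.0, -1.0, 2.0])
mav_val = mav(x)   # (0 + 1 + 1 + 2) / 4 = 1.0
wl_val = wl(x)     # |1| + |-2| + |3| = 6.0
```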
ZC is a feature used in the analysis of EMG signals that measures frequency information. It is the number of times that the signal crosses the zero level and is commonly used as an indicator of muscle activity. ZC is defined as in Equation (3) [30].
$$ f_{\mathrm{ZC}} = \sum_{i=1}^{L-1} f(x_i), \qquad f(x_i) = \begin{cases} 1, & \text{if } \left[(x_i > 0 \text{ and } x_{i+1} < 0) \text{ or } (x_i < 0 \text{ and } x_{i+1} > 0)\right] \text{ and } \lvert x_i - x_{i+1} \rvert \ge T \\ 0, & \text{otherwise}, \end{cases} \tag{3} $$
where x is the wavelet coefficient, L is the length of the coefficient, and T is the threshold. SSC is a traditional feature used in the analysis of EMG signals that determines the number of times that the slope of the waveform changes sign. It is calculated by counting the number of ZCs in the first derivative of the signal. SSC is defined in Equation (4) [30]:
$$ f_{\mathrm{SSC}} = \sum_{i=2}^{L-1} f(x_i), \qquad f(x_i) = \begin{cases} 1, & \text{if } \left[(x_i > x_{i-1} \text{ and } x_i > x_{i+1}) \text{ or } (x_i < x_{i-1} \text{ and } x_i < x_{i+1})\right] \text{ and } \left(\lvert x_i - x_{i+1} \rvert \ge T \text{ or } \lvert x_i - x_{i-1} \rvert \ge T\right) \\ 0, & \text{otherwise}, \end{cases} \tag{4} $$
where L is the length of the coefficient, x is the wavelet coefficient, and T is the threshold. AAC is a feature used in the analysis of EMG signals that measures the average change in the amplitude of the signal over a given period of time. AAC is defined as in Equation (5) [30].
$$ f_{\mathrm{AAC}} = \frac{1}{L}\sum_{i=1}^{L-1} \lvert x_{i+1} - x_i \rvert, \tag{5} $$
where x indicates the wavelet coefficient and L indicates the length of the coefficient.
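The three threshold-sensitive features above (ZC, SSC, AAC) can be sketched as follows. This is an illustrative implementation of Equations (3)–(5) under our reading of the piecewise conditions; the names are our own.

```python
import numpy as np

def zc(x, T=0.0):
    """Zero crossings whose amplitude jump reaches threshold T (Eq. 3)."""
    sign_change = (x[:-1] * x[1:]) < 0            # opposite signs, zeros excluded
    big_enough = np.abs(x[:-1] - x[1:]) >= T
    return int(np.sum(sign_change & big_enough))

def ssc(x, T=0.0):
    """Slope sign changes: local extrema with a jump of at least T (Eq. 4)."""
    count = 0
    for i in range(1, len(x) - 1):
        is_extremum = (x[i] > x[i-1] and x[i] > x[i+1]) or \
                      (x[i] < x[i-1] and x[i] < x[i+1])
        if is_extremum and (abs(x[i] - x[i+1]) >= T or abs(x[i] - x[i-1]) >= T):
            count += 1
    return count

def aac(x):
    """Average amplitude change: mean absolute first difference (Eq. 5)."""
    return np.sum(np.abs(np.diff(x))) / len(x)

x = np.array([0.5, -0.5, 1.0, -1.0, 0.2])
zc_val = zc(x, T=0.1)    # the sign flips at every step -> 4 crossings
ssc_val = ssc(x, T=0.1)  # interior extrema at i = 1, 2, 3 -> 3
aac_val = aac(x)         # (1 + 1.5 + 2 + 1.2) / 5 = 1.14
```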
LD is used in the analysis of EMG signals and is designed to estimate the exerted force. It is based on the logarithmic compression of the signal amplitude, which allows for a more sensitive and accurate measurement of low-amplitude signals. LD is defined as in Equation (6) [30].
$$ f_{\mathrm{LD}} = \exp\!\left(\frac{1}{L}\sum_{i=1}^{L} \log\left(\lvert x_i \rvert\right)\right), \tag{6} $$
where x represents the wavelet coefficient and L represents the length of the coefficient.
RMS is a feature commonly used in the analysis of EMG signals to describe muscle activity. RMS is defined in Equation (7), where x represents the wavelet coefficient, and L indicates the length of the coefficient [30].
$$ f_{\mathrm{RMS}} = \sqrt{\frac{1}{L}\sum_{i=1}^{L} x_i^2}. \tag{7} $$
DASDV is used in the analysis of EMG signals. This is a measure of the signal variability. DASDV is defined in Equation (8) [30]:
$$ f_{\mathrm{DASDV}} = \sqrt{\frac{1}{L-1}\sum_{i=1}^{L-1} (x_{i+1} - x_i)^2}, \tag{8} $$
where x is the wavelet coefficient, and L is the length of the coefficient.
VAR is a feature used in the analysis of EMG signals that provides information about the power of muscle activity. VAR is defined as in Equation (9) [30].
$$ f_{\mathrm{VAR}} = \frac{1}{L-1}\sum_{i=1}^{L} x_i^2, \tag{9} $$
where x represents the wavelet coefficient and L represents the length of the coefficient.
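The four amplitude/power features above (LD, RMS, DASDV, VAR) can likewise be sketched directly from Equations (6)–(9); the example values are toy numbers of our own choosing.

```python
import numpy as np

def ld(x):
    """Log detector: exp of the mean log-magnitude (Eq. 6)."""
    return np.exp(np.mean(np.log(np.abs(x))))

def rms(x):
    """Root mean square of the segment (Eq. 7)."""
    return np.sqrt(np.mean(x ** 2))

def dasdv(x):
    """Difference absolute standard deviation value (Eq. 8)."""
    return np.sqrt(np.sum(np.diff(x) ** 2) / (len(x) - 1))

def var(x):
    """Power estimate with 1/(L-1) scaling, as in Eq. (9)."""
    return np.sum(x ** 2) / (len(x) - 1)

x = np.array([1.0, 2.0, 4.0, 2.0])
ld_val = ld(x)        # exp(mean(log|x|)) = geometric mean = 2.0
rms_val = rms(x)      # sqrt((1 + 4 + 16 + 4) / 4) = 2.5
dasdv_val = dasdv(x)  # sqrt((1 + 4 + 4) / 3) = sqrt(3)
var_val = var(x)      # 25 / 3
```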
MMAV is an extension of the MAV feature used in EMG signal analysis. This is similar to MAV in that it provides information about the amplitude of muscle activity; however, it is calculated using a weight window function ( w i ). MMAV is defined in Equation (10) [30]:
$$ f_{\mathrm{MMAV}} = \frac{1}{L}\sum_{i=1}^{L} w_i \lvert x_i \rvert, \qquad w_i = \begin{cases} 1, & \text{if } 0.25L \le i \le 0.75L \\ 0.5, & \text{otherwise}, \end{cases} \tag{10} $$
where x represents the wavelet coefficient and L represents the length of the coefficient.
MMAV2 is another extension of the MAV feature used in EMG signal analysis. It is similar to MMAV in that it provides information on the amplitude of muscle activity and is calculated using a weight window function; however, it uses a continuous weight function ( c w i ). MMAV2 is defined in Equation (11), where x indicates the wavelet coefficient and L is the length of the coefficient [30].
$$ f_{\mathrm{MMAV2}} = \frac{1}{L}\sum_{i=1}^{L} cw_i \lvert x_i \rvert, \qquad cw_i = \begin{cases} 1, & \text{if } 0.25L \le i \le 0.75L \\ 4i/L, & \text{if } i < 0.25L \\ 4(i-L)/L, & \text{otherwise}. \end{cases} \tag{11} $$
EMAV is an extension of the MAV feature that uses not only the amplitude of the muscle activity but also its temporal dynamics. EMAV is defined in Equation (12) [30].
$$ f_{\mathrm{EMAV}} = \frac{1}{L}\sum_{i=1}^{L} \lvert (x_i)^p \rvert, \qquad p = \begin{cases} 0.75, & \text{if } 0.2L \le i \le 0.8L \\ 0.5, & \text{otherwise}, \end{cases} \tag{12} $$
where x represents the wavelet coefficient and L represents the length of the coefficient.
EWL is an extension of the WL feature that also considers the temporal dynamics of muscle activity. EWL is defined by Equation (13) [30].
$$ f_{\mathrm{EWL}} = \sum_{i=2}^{L} \lvert (x_i - x_{i-1})^p \rvert, \qquad p = \begin{cases} 0.75, & \text{if } 0.2L \le i \le 0.8L \\ 0.5, & \text{otherwise}, \end{cases} \tag{13} $$
where x represents the wavelet coefficient and L represents the length of the coefficient. Table 1 summarizes the features of EMG-based biometric systems.
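The four windowed variants above (MMAV, MMAV2, EMAV, EWL) can be sketched as below. One assumption on our part: for EMAV, $\lvert (x_i)^p \rvert$ with a fractional exponent is computed as $\lvert x_i \rvert^p$, so negative samples stay real-valued.

```python
import numpy as np

def mmav(x):
    """Modified MAV with a step weight window (Eq. 10)."""
    L = len(x)
    i = np.arange(1, L + 1)
    w = np.where((i >= 0.25 * L) & (i <= 0.75 * L), 1.0, 0.5)
    return np.sum(w * np.abs(x)) / L

def mmav2(x):
    """Modified MAV 2 with a continuous (ramped) weight window (Eq. 11)."""
    L = len(x)
    out = 0.0
    for idx in range(1, L + 1):
        if 0.25 * L <= idx <= 0.75 * L:
            cw = 1.0
        elif idx < 0.25 * L:
            cw = 4.0 * idx / L
        else:
            cw = 4.0 * (idx - L) / L
        out += cw * abs(x[idx - 1])
    return out / L

def emav(x):
    """Enhanced MAV: position-dependent exponent p on |x_i| (Eq. 12)."""
    L = len(x)
    i = np.arange(1, L + 1)
    p = np.where((i >= 0.2 * L) & (i <= 0.8 * L), 0.75, 0.5)
    return np.sum(np.abs(x) ** p) / L

def ewl(x):
    """Enhanced WL: position-dependent exponent on the differences (Eq. 13)."""
    L = len(x)
    out = 0.0
    for idx in range(2, L + 1):
        p = 0.75 if 0.2 * L <= idx <= 0.8 * L else 0.5
        out += abs(x[idx - 1] - x[idx - 2]) ** p
    return out

x = np.ones(4)          # constant toy segment, L = 4
mmav_val = mmav(x)      # (1 + 1 + 1 + 0.5) / 4 = 0.875
mmav2_val = mmav2(x)    # (1 + 1 + 1 + 0) / 4 = 0.75
emav_val = emav(x)      # |1|^p = 1 for all i -> 1.0
ewl_val = ewl(x)        # all differences are zero -> 0.0
```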

2.2. Long Short-Term Memory (LSTM)

An LSTM processes a sequence step by step: at each step, it combines the current input with its previous output, so the network's output is repeatedly fed back as part of the next step's input. Neural networks with this recurrent structure are known to be effective in finding regularities in continuous data. The LSTM, which is effective for sequence data analysis, has been successfully applied to speech, language, and action recognition. An LSTM consists of a forget gate, input gate, output gate, and cell state [31,32].
The model was designed using an LSTM layer. The first layer is the sequence input, the second layer is the bidirectional LSTM (Bi-LSTM) outputting the last time step of the sequence input, the third layer is fully connected, the fourth layer is softmax, and the last layer is the classification. In this study, the input size varied based on the number of features, and the number of hidden units varied from 300 to 900. The number of classes depends on the dataset; in this study, it was 50. Figure 1 shows the structure and flowchart of the Bi-LSTM used for individual identification. In Figure 1, sequential segments are input into the LSTM network. The Bi-LSTM calculates the output by repeatedly receiving and processing the input data. After the LSTM output is calculated, it is fully connected to a one-dimensional vector. This vector is multiplied by the weights to obtain the final classification score, and softmax is then applied.
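To make the gate structure concrete, the following is a minimal NumPy sketch of a single unidirectional LSTM time step (a Bi-LSTM runs one such pass forward and one backward over the sequence and concatenates the outputs). The weight layout and all names are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b hold the stacked parameters for the
    four gate blocks in the order [forget, input, candidate, output]."""
    z = W @ x_t + U @ h_prev + b
    H = h_prev.shape[0]
    f = sigmoid(z[0*H:1*H])        # forget gate
    i = sigmoid(z[1*H:2*H])        # input gate
    g = np.tanh(z[2*H:3*H])        # candidate cell state
    o = sigmoid(z[3*H:4*H])        # output gate
    c_t = f * c_prev + i * g       # cell state update
    h_t = o * np.tanh(c_t)         # hidden state (time-step output)
    return h_t, c_t

# Tiny example: input size 2, hidden size 3, small random weights.
rng = np.random.default_rng(0)
X, H = 2, 3
W = rng.standard_normal((4 * H, X)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x_t in np.ones((5, X)):        # run a length-5 constant sequence
    h, c = lstm_step(x_t, h, c, W, U, b)
```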

3. Biometrics Using Electromyogram Signals

This section describes the EMG-based biometric method using deep learning with various feature sets. Because EMG signals react sensitively to the body condition of the measured person, robust features must be used. Conventional feature extraction methods for EMG-based motion classification are used here to find robust features for EMG-based person identification. The features extracted from the EMG signals are MAV, WL, ZC, SSC, AAC, LD, RMS, DASDV, VAR, MMAV, MMAV2, EMAV, and EWL. Some feature extraction methods use a threshold to ignore small noise in the signal; this value should be determined based on the signal scale. A performance comparison of personal identification was conducted over various combinations of features and neural network parameters. The LSTM was trained by extracting the 13 features for each channel of the EMG signal and combining them into one sequence of data. LSTM has been effectively employed to analyze sequential data, featuring memory cells and forget, input, and output gates, and it effectively addresses long-term dependency issues; EMG signals are sequential signals obtained from the human body. When training was repeated for various LSTM parameters, changes in performance were observed. Each feature was individually converted into sequence data for LSTM training, and the validity of each feature was determined by observing the change in performance it produced. The optimal feature combination was then determined by configuring various feature sets from the reduced list of valid features [33]. Subsequently, the variation in performance with various LSTM parameters was observed to finalize the optimal feature combination. Figure 2 shows a diagram of the biometric model using LSTM, from the raw signal through feature extraction to classification. The input signal was segmented across all channels, and features were extracted for each segment.
These features were then concatenated within each segment and channel. The concatenated features were input into the LSTM network for training. Figure 3 depicts multiple instances of LSTM with diverse feature sets to discover an improved feature combination. Thirteen features were considered for the feature set of individual identification, and these features were grouped into various combinations to determine the most effective method to improve the feature set. The accuracy of individual identification is defined as follows [34]:
$$ \mathrm{Accuracy} = \frac{n_{\mathrm{CC}}}{n_{\mathrm{CC}} + n_{\mathrm{WC}}}, $$
where $n_{\mathrm{CC}}$ denotes the number of correct classifications and $n_{\mathrm{WC}}$ the number of wrong classifications.
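As a trivial illustration of this metric (the counts below are hypothetical, chosen only so the result matches a round percentage):

```python
def accuracy(n_cc, n_wc):
    """Classification accuracy: correct / (correct + wrong)."""
    return n_cc / (n_cc + n_wc)

acc = accuracy(n_cc=341, n_wc=59)  # hypothetical counts -> 341/400 = 0.8525
```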
Figure 4 shows the proposed model and the process of feature extraction from EMG signals for personal classification. As shown in Figure 4, in Step 1, the input consists of EMG signals with multiple channels. In Step 2, the input EMG signals are divided into several segments with the same period. In Step 3, feature extraction is performed for each segment. In Step 4, the features are rearranged into sequential data. In Step 5, the training process is performed using the Bi-LSTM from the sequenced feature sets. This process is repeated for each feature. In Step 6, the individual features are ranked according to their classification accuracy. In Step 7, individual features are selected as the optimal feature sets based on their classification accuracy. In Step 8, the optimal individual features are combined into sequence data. In Step 9, the Bi-LSTM is trained with a set of the best optimal features, ultimately resulting in the highest classification accuracy. This Bi-LSTM is the deep neural network shown in Figure 1.
Features are ranked by their classification accuracy when used individually, and combinations are formed by giving priority to the higher-ranked features. Because selection relies primarily on this individual ranking, a wider variety of feature combinations is not explored; combining features could potentially expose valuable feature interactions, although discovering such optimal combinations would require many further trials. However, a feature with poor individual accuracy tends to degrade the overall performance of any combination it joins. The ranking-based approach can therefore yield improved performance more efficiently, especially in challenging situations where numerous experiments would otherwise be required: in scenarios where training a deep neural network is time-consuming, both sequential feature selection (SFS) and sequential floating feature selection require a substantial number of repeated experiments.
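The ranking-based selection described above can be sketched in a few lines. The single-feature accuracies below are hypothetical placeholders; in the actual procedure, each score comes from training one Bi-LSTM per feature.

```python
# Hypothetical single-feature accuracies (feature name -> accuracy).
single_acc = {"AAC": 0.78, "RMS": 0.77, "EWL": 0.74, "WL": 0.73,
              "MAV": 0.71, "DASDV": 0.70, "LD": 0.65, "EMAV": 0.63}

def select_top_k(scores, k):
    """Ranking-based selection: keep the k features with the best
    individual accuracy, avoiding the combinatorial search that
    sequential (floating) selection would require."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]

chosen = select_top_k(single_acc, k=2)  # the two best-ranked features
```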

4. Experimental Results

This section describes the experimental results of feature extraction from EMG signals, feature learning using the LSTM, personal identification, and a performance comparison according to the parameters of the classification model for different feature sets. The experiments were performed on a computer equipped with an Intel Xeon E5-1650 v3 CPU (3.5 GHz), 32 GB of RAM, an NVIDIA GeForce GTX Titan X GPU, 64-bit Windows 10, and MATLAB 2022b.
Figure 5 presents a summary of the dataset. A public EMG dataset was used [35]. These data were acquired using an eight-channel Myo band, and EMG data for 10 motions of both arms were acquired from 50 individuals. Each motion was acquired five times for 3 s at a sampling rate of 200 Hz. The 10 motions were as follows: (1) neutral wrist, (2) pronation, (3) supination, (4) wrist extension, (5) wrist flexion, (6) ulnar deviation, (7) radial deviation, (8) fine pinch, (9) power grip, and (10) hand opening. For each motion, three of the five data points were used for learning and the remaining data were used for verification.
The EMG data were divided into segments of length 85 with an overlap of 12 samples, and no further preprocessing was performed. Features were extracted for each segment, and the extracted features were concatenated into sequence data and input to the LSTM. The LSTM was trained by extracting the MAV, WL, ZC, SSC, AAC, LD, RMS, DASDV, VAR, MMAV, MMAV2, EMAV, and EWL features for each segment. A Bi-LSTM trained using Adam was applied. Cross-entropy was used as the loss function, and the initial weights were set randomly. The initial learning rate was 0.01, with a momentum of 0.9 and a learning rate decay factor of 0.2 applied every five epochs. No separate data augmentation was used, but the learning results were examined while varying the minibatch and node sizes. Table 2 presents the accuracy of the LSTM on the test data with all 13 features. As can be observed in Table 2, having an excessive number of arbitrarily selected features is not beneficial: although the LSTM was trained with abundant features, consistently good performance could not be confirmed. This motivates efficient feature selection.
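The segmentation step above can be sketched as follows, assuming "length 85 with 12 overlaps" means windows of 85 samples overlapping by 12 samples (i.e., a stride of 73); that reading is our interpretation.

```python
import numpy as np

def segment(signal, length=85, overlap=12):
    """Split a 1-D channel into fixed-length windows with the given
    overlap (stride = length - overlap), dropping any incomplete tail."""
    stride = length - overlap
    n = (len(signal) - length) // stride + 1
    return np.stack([signal[k * stride : k * stride + length] for k in range(n)])

sig = np.arange(600.0)   # stand-in for one 3 s channel sampled at 200 Hz
segs = segment(sig)      # stride 73 -> 8 windows of length 85
```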
Similarly, the LSTM was trained while applying one feature at a time instead of extracting all 13 features from the EMG data simultaneously. The learning options were the same, the minibatch size was fixed at 500, the node size was 500, and the number of epochs was 600. Table 3 presents the accuracy of the LSTM on the test data according to a single feature. The performance of a single feature is critical for the proposed method when selecting a feature set. Figure 6 shows a bar chart of the accuracy of the LSTM on the test data according to a single feature. The x-axis of the chart indicates the features, and the y-axis indicates the classification accuracy for each feature. Table 4 presents the top eight performance features ordered by LSTM accuracy. Among the 13 features, only those with a recognition rate of 60% or more in the experiment of a single feature were considered valid, and the features were presented according to the recognition rate. Table 5 presents a comparison of the computation times for each feature. The times were measured while calculating the features of an 85-length segmental signal. The feature values were computed using a CPU, and computational time was allocated only to the feature calculation process. The time required for the training process was not included in this calculation, as it was considered a constant value given that only the features varied in the comparison. The resulting times were averaged over several experiments. The training time for the Bi-LSTM with a single feature was 245 s, which was computed using the GPU. The training parameters were a minibatch size of 300, 900 hidden units, and 300 epochs.
The performance was observed by training the LSTM by constructing various feature sets among the eight valid features. The training options were the same, the minibatch size was fixed at 500, the node size was 500, and the number of epochs was 600. For the feature set, the number of features was reduced one by one in the order of the low recognition rate, starting with the use of eight features. Table 6 presents the accuracy of the LSTM on the test data according to the feature set based on the feature order, showing good accuracy. The experimental results clearly indicate that the performance when using all eight valid features and using two valid features was similar, or that the latter was slightly higher.
In the experiment using valid features, the performance when using all eight features and when using the two features with the best single-feature performance was similar, with the latter slightly higher. Because minimizing the number of features is advantageous in terms of computational load and memory, the LSTM was trained using only the AAC and RMS features. The LSTM model was optimized by varying the minibatch size, node size, and number of epochs under the same learning options. Table 7 presents the accuracy of the LSTM on the test data according to the LSTM parameters. For the LSTM using the AAC and RMS features, with a minibatch size of 300 and a node size of 900, the accuracy reached 85.25%, at least 5.75% higher than that of the existing methods, as described in Table 8. Table 8 presents a comparison of the accuracy of the LSTM on the test data with that of existing methods. SFS is a well-established method; we also examined its performance in detail and compared it with our method, as presented in Table 9 and Table 10, which display the accuracy of sequential forward feature selection with feature lengths of 2 and 3, respectively.
Assuming that the training time for a neural network is D regardless of the number of features, with n = 13 features and a target feature length of 3, SFS requires 36D (13D + 12D + 11D), whereas our approach requires 15D (13D + D + D), a reduction of 21 (36 − 15) training runs. Table 11 summarizes this comparison of processing times. We recommend SFS when training time is not a constraint, because it is effective in achieving a more accurate system.
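The trial-count comparison above reduces to simple arithmetic, sketched here with our own helper names:

```python
def sfs_trials(n, target_len):
    """Trainings needed by sequential forward selection:
    n + (n-1) + ... down to (n - target_len + 1)."""
    return sum(n - k for k in range(target_len))

def ranking_trials(n, target_len):
    """Trainings for the ranking approach: n single-feature runs plus
    one run per additional combination length evaluated."""
    return n + (target_len - 1)

n, target = 13, 3
sfs = sfs_trials(n, target)       # 13 + 12 + 11 = 36
ours = ranking_trials(n, target)  # 13 + 1 + 1 = 15
saved = sfs - ours                # 21 fewer training runs
```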
Table 12 and Figure 7 present the ANOVA of AAC and RMS for the 50 identities. An ANOVA was employed to assess whether there were differences in the means of a set of response data based on the values (levels) of one or more factors. Small p-values indicated that intergroup variation had a statistically significant effect on individual identification. At the 95% confidence level, intergroup variation had a p-value < 0.05, indicating that intergroup variation had a statistically significant effect on individual identification. The ANOVA figure shows the data by group. The central mark represents the median and the lower and upper edges of the box represent the 25th and 75th percentiles, respectively. Outliers are displayed individually using the “+” marker symbol.
In the comparison of the accuracy of the LSTM on the test data with existing methods, the accuracy of the existing methods was poorer than that described in the original articles. We used a dataset different from that described in the original paper, which reportedly yielded higher performance. We used the same dataset and method to calculate the accuracy to evaluate the performance of both our method and the existing methods described in this paper. We input the data under the same conditions without preprocessing, except for signal segmentation. We implemented existing methods as closely as possible to the original approach. Feature extraction was performed by referencing the mathematical formulations in the original paper, whereas other common components such as the multilayer perceptron were implemented using open-source libraries.
Table 13 compares the biometric accuracy of the LSTM for each motion when the same motion was used for training and verification. The LSTM was trained with a minibatch size of 300, 300 epochs, and a node size of 900, using the RMS and AAC features. The motion with the second-highest accuracy was Motion3, which corresponded to supination. Notably, the highest accuracy was achieved by Motion1, even though it represents “wrist in neutral,” which produces a nearly flat signal. It should be noted that when a single motion is used for person identification, the available data are significantly limited; consequently, the reliability of these results is relatively low.
Table 14 presents a comparison of the biometric accuracy of the LSTM when the training and verification motions differed. The LSTM was trained with a minibatch size of 300, 300 epochs, and a node size of 900, using the RMS and AAC features.
To calculate additional metrics, namely precision, recall, F1-score, false rejection rate (FRR), and false acceptance rate (FAR), the LSTM model with the AAC and RMS features was retrained with a minibatch size of 300, 900 hidden units, and 600 epochs. Its accuracy was 87.00%, and the per-class metrics are listed in Table 15, Table 16, Table 17, Table 18 and Table 19.
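These per-class metrics can all be derived from a single confusion matrix. The sketch below uses one common per-class definition of FRR and FAR (the paper's exact normalization may differ) and a tiny synthetic label set rather than the paper's predictions:

```python
# Per-class precision, recall, F1, FRR, and FAR from a confusion matrix.
import numpy as np

def per_class_metrics(y_true, y_pred, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    metrics = {}
    for c in range(n_classes):
        tp = cm[c, c]
        fp = cm[:, c].sum() - tp   # other classes accepted as class c
        fn = cm[c, :].sum() - tp   # class-c samples rejected
        tn = cm.sum() - tp - fp - fn
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        frr = fn / (fn + tp) if fn + tp else 0.0  # genuine samples rejected
        far = fp / (fp + tn) if fp + tn else 0.0  # impostor samples accepted
        metrics[c] = dict(precision=precision, recall=recall,
                          f1=f1, frr=frr, far=far)
    return metrics

m = per_class_metrics([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 0], n_classes=3)
print(m[1])  # recall = 1.0, precision = 2/3
```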
Another dataset (sEMG) was utilized to evaluate the performance of the proposed method [36]. These data were acquired using a four-channel Biopac MP36 device, and EMG data for 10 motions were collected from 40 individuals. Each motion was recorded five times for 6 s at a sampling rate of 2000 Hz. The 10 motions were as follows: (1) rest, (2) wrist extension, (3) wrist flexion, (4) wrist ulnar deviation, (5) wrist radial deviation, (6) grip, (7) finger abduction, (8) finger adduction, (9) supination, and (10) pronation. We used the filtered signals from the raw data. Training on the sEMG dataset was performed in the same manner as in the previous experiment: the Bi-LSTM was trained on one feature at a time rather than on all 13 features simultaneously, and the learning options were the same as in the first experiment (minibatch, node, and epoch sizes of 300, 900, and 300, respectively). The best classification performance, 65.25%, was achieved by combining two features, WL and EWL. We attribute the lower accuracy compared with the previous experiment to the smaller number of channels in this dataset (four instead of eight).
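As an illustration of the two features that performed best on this dataset, a minimal sketch of WL and EWL extraction for one segment is given below. The EWL exponent schedule (p = 0.75 over the middle half of the segment, 0.5 elsewhere) follows our reading of the enhanced-feature formulation of Too et al. [30] and should be treated as an assumption:

```python
# Wavelength (WL) and enhanced wavelength (EWL) features for one EMG segment.
import numpy as np

def wavelength(x: np.ndarray) -> float:
    # Cumulative length of the waveform: sum of successive absolute differences.
    return float(np.sum(np.abs(np.diff(x))))

def enhanced_wavelength(x: np.ndarray) -> float:
    # Assumed weighting: exponent 0.75 in the middle portion of the segment,
    # 0.5 near the edges, applied to each successive absolute difference.
    n = len(x)
    diffs = np.abs(np.diff(x))
    idx = np.arange(1, n)  # index of the right-hand sample of each difference
    p = np.where((idx >= 0.25 * n) & (idx <= 0.75 * n), 0.75, 0.5)
    return float(np.sum(diffs ** p))

segment = np.sin(np.linspace(0, 4 * np.pi, 200))  # stand-in for one EMG segment
features = [wavelength(segment), enhanced_wavelength(segment)]
print(features)  # per-channel features would be concatenated across channels
```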

5. Conclusions

An EMG-based biometric system using a Bi-LSTM with an efficient EMG feature set was proposed. EMG signals are obtained by measuring and amplifying the minute electric currents generated by the movement of human muscles, and their shapes vary with the degree of activity of the muscles and the parts of the muscles involved. Biometric recognition using EMG signals increases reliability: because EMG signals are generated only when a person contracts his or her own muscles, surrogate authentication is impossible, and because the signals are not visually exposed, they are difficult to replicate. However, EMG signals are sensitive to noise, which degrades recognition performance. Therefore, features should be compared and analyzed to identify those that allow stable identification of each person while being less affected by noise. Public EMG datasets were used for this purpose. The experimental results revealed that, compared with conventional methods, the proposed method exhibited good classification performance. In the future, we plan to study a feature extraction method that can stably identify individuals from EMG signals, based on an analysis of the characteristics of the features that performed well in this study.

Author Contributions

Conceptualization, Y.-H.B. and K.-C.K.; Methodology, Y.-H.B. and K.-C.K.; Software, Y.-H.B. and K.-C.K.; Validation, Y.-H.B. and K.-C.K.; Formal Analysis, Y.-H.B. and K.-C.K.; Investigation, Y.-H.B. and K.-C.K.; Resources, K.-C.K.; Data Curation, K.-C.K.; Writing—Original Draft Preparation, Y.-H.B.; Writing—Review and Editing, K.-C.K.; Visualization, Y.-H.B. and K.-C.K.; Supervision and Administration, K.-C.K.; Funding Acquisition, K.-C.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program of the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2017R1A6A1A03015496).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xu, W.; Lee, E.J. A hybrid method based on dynamic compensatory fuzzy neural network algorithm for face recognition. Int. J. Control Autom. Syst. 2014, 12, 688–696. [Google Scholar] [CrossRef]
  2. Lin, C.; Kumar, A. Matching contactless and contact-based convolutional fingerprint images for biometrics identification. IEEE Trans. Image Process. 2018, 27, 2008–2021. [Google Scholar] [CrossRef] [PubMed]
  3. Jang, Y.K.; Kang, B.J.; Kang, R.P. A novel portable iris recognition system and usability evaluation. Int. J. Control Autom. Syst. 2010, 8, 91–98. [Google Scholar] [CrossRef]
  4. Mobarakeh, A.K.; Carrillo, J.A.C.; Aguilar, J.J.C. Robust face recognition based on a new supervised kernel subspace learning method. Symmetry 2019, 19, 1643. [Google Scholar] [CrossRef]
  5. Jain, A.K.; Arora, S.S.; Cao, K.; Best-Rowden, L.; Bhatnagar, A. Fingerprint recognition of young children. IEEE Trans. Inf. Forensics Secur. 2017, 12, 1505–1514. [Google Scholar] [CrossRef]
  6. Boles, W.W. A security system based on human iris identification using wavelet transform. In Proceedings of the First International Conference on Conventional and Knowledge Based Intelligent Electronics Systems, Adelaide, Australia, 21–23 May 1997; pp. 533–541. [Google Scholar]
  7. Wang, H.; Hu, J.; Deng, W. Compressing fisher vector for robust face recognition. IEEE Access 2017, 5, 23157–23165. [Google Scholar] [CrossRef]
  8. Zhang, Y.; Juhola, M. On biometrics with eye movements. IEEE J. Biomed. Health Inform. 2017, 21, 1360–1366. [Google Scholar] [CrossRef] [PubMed]
  9. Nguyen, B.P.; Tay, W.L.; Chui, C.K. Robust biometric recognition from palm depth images for gloved hands. IEEE Trans. Hum. Mach. Syst. 2015, 45, 799–804. [Google Scholar] [CrossRef]
  10. Pokhriyal, N.; Tayal, K.; Nwogu, I.; Govindaraju, V. Cognitive-biometric recognition from language usage: A feasibility study. IEEE Trans. Inf. Forensics Secur. 2017, 12, 134–143. [Google Scholar] [CrossRef]
  11. Zhang, L.; Cheng, Z.; Shen, Y.; Wang, D. Palmprint and palmvein recognition based on DCNN and a new large-scale contactless palmvein dataset. Symmetry 2018, 10, 78. [Google Scholar] [CrossRef]
  12. Hong, S.J.; Lee, H.S.; Tho, K.A.; Kim, E.T. Gait recognition using multi-bipolarized contour vector. Int. J. Control Autom. Syst. 2009, 7, 799–808. [Google Scholar] [CrossRef]
  13. Yang, J.; Sun, W.; Liu, N.; Chen, Y.; Wang, Y.; Han, S. A novel multimodal biometrics recognition model based on stacked ELM and CCA methods. Symmetry 2018, 10, 96. [Google Scholar] [CrossRef]
  14. Kim, M.J.; Kim, W.Y.; Paik, J.K. Optimum geometric transformation and bipartite graph-based approach to sweat pore matching for biometric identification. Symmetry 2018, 10, 175. [Google Scholar] [CrossRef]
  15. Tolosana, R.; Vera-Rodriguez, R.; Fierrez, J.; Ortega-Garcia, J. Exploring recurrent neural networks for on-line handwritten signature biometrics. IEEE Access 2018, 6, 5128–5138. [Google Scholar] [CrossRef]
  16. Korshunov, P.; Marcel, S. Impact of score fusion on voice biometrics and presentation attack detection in cross-database evaluations. IEEE J. Sel. Top. Signal. Process. 2017, 11, 695–705. [Google Scholar] [CrossRef]
  17. Kim, J.S.; Pan, S.B. EMG based two-factor security personal identification. In Proceedings of the Korean Institute of Information Technology Conference, Gwangju, Republic of Korea, 7–9 June 2018; pp. 35–36. [Google Scholar]
  18. Belgacem, N.; Fournier, R.; Nait-Ali, A.; Bereksi-Reguig, F. A novel biometric authentication approach using ECG and EMG signals. J. Med. Eng. Technol. 2015, 39, 226–238. [Google Scholar] [CrossRef]
  19. Kim, S.-H.; Ryu, J.-H.; Lee, B.-H.; Kim, D.-H. Human identification using EMG signal based artificial neural network. Korean J. IEIE 2016, 53, 622–628. [Google Scholar]
  20. He, J.; Jiang, N. Biometric from surface electromyogram: Feasibility of user verification and identification based on gesture recognition. Front. Bioeng. Biotechnol. 2020, 8, 58. [Google Scholar] [CrossRef]
  21. Jiayuan, H.; Zhang, D.; Sheng, X.; Meng, J.; Zhu, X. Improved discrete Fourier transform based spectral feature for surface electromyogram signal classification. In Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 6897–6900. [Google Scholar]
  22. Yamaba, H.; Kurogi, A.; Kubota, S.-I.; Katayama, T.; Park, M.; Okazaki, N. Evaluation of feature values of surface electro-myograms for user authentication on mobile devices. Artif. Life Robot. 2017, 22, 108–112. [Google Scholar] [CrossRef]
  23. Yamaba, H.; Kuroki, T.; Aburada, K.; Kubota, S.-I.; Katayama, T.; Park, M.; Okazaki, N. On applying support vector machines to a user authentication method using surface electromyogram signals. Artif. Life Robot. 2018, 23, 87–93. [Google Scholar] [CrossRef]
  24. Raurale, S.A.; Mcallister, J.; Rincon, J.M.D. EMG biometric systems based on different wrist-hand movements. IEEE Access. 2021, 9, 12256–12266. [Google Scholar] [CrossRef]
  25. Lu, L.; Mao, J.; Wang, W.; Ding, G.; Zhang, Z. A study of personal recognition method based on EMG signal. IEEE Trans. Biomed. Circuits Syst. 2020, 14, 681–691. [Google Scholar] [CrossRef]
  26. Yoo, M.; Na, Y.; Song, H.; Kim, G.; Yun, J.; Kim, S.; Moon, C.; Jo, K. Motion estimation and hand gesture recognition-based human-UAV interaction approach in real time. Sensors 2022, 22, 2513. [Google Scholar] [CrossRef] [PubMed]
  27. Jhaung, Y.-C.; Lin, Y.-M.; Zha, C.; Leu, J.-S.; Köppen, M. Implementing a hand gesture recognition system based on range-Doppler map. Sensors 2022, 22, 4260. [Google Scholar] [CrossRef]
  28. Shioji, R.; Ito, S.; Ito, M.; Fukumi, M. Personal authentication and hand motion recognition based on wrist EMG analysis by a convolutional neural network. In Proceedings of the IEEE International Conference on Internet of Things and Intelligence System, Bali, Indonesia, 1–3 November 2018; pp. 184–188. [Google Scholar]
  29. Li, X.; Fang, P.; Tian, L.; Li, G. Increasing the robustness against force variation in EMG motion classification by common spatial patterns. In Proceedings of the 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Jeju Island, Republic of Korea, 11–15 July 2017; pp. 406–409. [Google Scholar]
  30. Too, J.; Abdullah, A.R.; Saad, N.M. Classification of hand movements based on discrete wavelet transform and enhanced feature extraction. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 83–89. [Google Scholar] [CrossRef]
  31. Schuster, M.; Paliwal, K.K. Bidirectional recurrent neural networks. IEEE Trans. Signal Process. 1997, 45, 2673–2681. [Google Scholar] [CrossRef]
  32. Graves, A.; Fernandez, S.; Schmidhuber, J. Bidirectional LSTM networks for improved phoneme classification and recognition. Artif. Neural Netw. Form. Models Their Appl. 2005, 3697, 799–804. [Google Scholar]
  33. Byeon, Y.-H.; Kwak, G.-C. Individual identification by late information fusion of emgCNN and emgLSTM from electromyogram signals. Sensors 2022, 22, 6770. [Google Scholar] [CrossRef] [PubMed]
  34. Choi, H.S.; Lee, B.H.; Yoon, S.R. Biometric authentication using noisy electrocardiograms acquired by mobile sensors. IEEE Access 2016, 4, 1266–1273. [Google Scholar] [CrossRef]
  35. Ángeles, I.J.R.; Fernández, M.A.A. Multi-Channel Electromyography Signal Acquisition of Forearm. Mendeley Data 2018. Available online: https://data.mendeley.com/datasets/p77jn92bzg/1 (accessed on 10 December 2022).
  36. Ozdemir, M.A.; Kisa, D.H.; Guren, O.; Akan, A. Dataset for Multi-Channel Surface Electromyography (sEMG) Signals of Hand Gestures. Mendeley Data 2021. Available online: https://data.mendeley.com/datasets/ckwc76xr2z/2 (accessed on 18 January 2023).
Figure 1. Structure and flowchart of the bidirectional long short-term memory (LSTM) from the sequence feature input obtained from electromyogram (EMG) signals for individual identification.
Figure 2. Feature extraction and connection process obtained from EMG n-channel signals for bidirectional LSTM input.
Figure 3. Process of obtaining the best feature set from various feature combinations for bidirectional LSTM input.
Figure 4. The presented model and process of feature extraction from EMG signals for personal classification.
Figure 5. Components and partitioning method of the EMG database.
Figure 6. Classification performance of bidirectional LSTM from a single feature for test data.
Figure 7. ANOVA of average amplitude change (AAC) and RMS for the 50 identities.
Table 1. Summary of the feature list for EMG-based biometrics systems.
Full Name of Feature | Abbreviation | Extended Origin | Characteristic
Mean Absolute Value | MAV | - | Amplitude
Wave Length | WL | - | Amplitude
Zero Crossing | ZC | - | Frequency
Slope Sign Change | SSC | - | Frequency
Average Amplitude Change | AAC | - | Frequency
Log Detector | LD | - | Amplitude
Root Mean Square | RMS | - | Amplitude
Difference Absolute Standard Deviation Value | DASDV | - | Amplitude
Variance | VAR | - | Amplitude
Modified Mean Absolute Value | MMAV | MAV | Amplitude
Modified Mean Absolute Value2 | MMAV2 | MAV | Amplitude
Enhanced Mean Absolute Value | EMAV | MAV | Amplitude
Enhanced Wave Length | EWL | WL | Amplitude
Table 2. Accuracy of the LSTM on test data with 13 features.
Feature set: MAV, WL, ZC, SSC, AAC, LD, RMS, DASDV, VAR, MMAV, MMAV2, EMAV, EWL (all 13 features); epoch: 600.
Minibatch Size | Node Size | Test Accuracy
300 | 300 | 45.00
300 | 500 | 45.05
300 | 700 | 45.70
300 | 900 | 45.55
500 | 300 | 44.05
500 | 500 | 48.90
500 | 700 | 46.35
500 | 900 | 49.85
700 | 300 | 42.25
700 | 500 | 43.10
700 | 700 | 45.85
700 | 900 | 47.30
900 | 300 | 37.20
900 | 500 | 42.05
900 | 700 | 43.55
900 | 900 | 44.45
Table 3. Accuracy of the LSTM on test data according to a single feature.
Feature Set | Test Accuracy
MAV | 73.00
WL | 21.50
ZC | 42.40
SSC | 23.45
AAC | 79.45
LD | 62.20
RMS | 79.10
DASDV | 75.75
VAR | 42.10
MMAV | 71.70
MMAV2 | 69.70
EMAV | 75.55
EWL | 36.40
Table 4. Eight best-performing features ordered by the LSTM accuracy.
Table 4. Eight best-performing features ordered by the LSTM accuracy.
Rank12345678
FeatureAACRMSDASDVEMAVMAVMMAVMMAV2LD
Accuracy79.4579.1075.7575.5573.0071.7069.7062.20
Table 5. Comparison of computing time of each feature.
Feature | Computing Time (μs)
MAV | 4.6223
WL | 2.8940
ZC | 3.4667
SSC | 3.8094
AAC | 2.8638
LD | 4.6957
RMS | 4.4195
DASDV | 2.9657
VAR | 5.1020
MMAV | 2.7544
MMAV2 | 2.9031
EMAV | 8.6654
EWL | 8.2524
Table 6. Accuracy of LSTM on test data according to the feature set based on the feature order showing good accuracy.
Feature Set | Test Accuracy
AAC, RMS | 83.70
AAC, RMS, DASDV, EMAV | 83.55
MAV, AAC, RMS, DASDV, EMAV | 83.55
MAV, AAC, RMS, DASDV, MMAV, MMAV2, EMAV | 83.10
MAV, AAC, RMS, DASDV, MMAV, EMAV | 82.65
MAV, AAC, LD, RMS, DASDV, MMAV, MMAV2, EMAV | 82.50
AAC, RMS, DASDV | 82.45
Table 7. Accuracy of LSTM on test data according to the parameters of LSTM.
Feature set: AAC, RMS; epoch: 300.
Minibatch Size | Node Size | Test Accuracy
300 | 300 | 80.00
300 | 500 | 83.55
300 | 700 | 85.00
300 | 900 | 85.25
500 | 300 | 77.15
500 | 500 | 80.80
500 | 700 | 83.85
500 | 900 | 84.85
700 | 300 | 69.60
700 | 500 | 73.85
700 | 700 | 76.85
700 | 900 | 79.10
900 | 300 | 61.40
900 | 500 | 70.50
900 | 700 | 72.85
900 | 900 | 75.35
Table 8. Comparison of accuracy of the LSTM on test data with existing methods.
Method | Accuracy (%)
Proposed method | 85.25
BpRssLdaMlp [24] | 79.50
Table 9. Comparison of accuracy for sequential forward feature selection (feature length = 2).
Feature-1 | Feature-2 | Accuracy (%)
AAC | RMS | 85.25
AAC | MAV | 85.10
AAC | MMAV | 85.05
AAC | ZC | 84.95
AAC | LD | 83.50
AAC | EMAV | 83.10
AAC | SSC | 82.05
AAC | DASDV | 81.95
AAC | MMAV2 | 81.55
AAC | VAR | 53.30
AAC | EWL | 44.30
AAC | WL | 29.35
Table 10. Comparison of accuracy for sequential forward feature selection (feature length = 3).
Feature-1,2 | Feature-3 | Accuracy (%)
AAC, RMS | EMAV | 86.10
AAC, RMS | ZC | 85.95
AAC, RMS | MMAV | 85.80
AAC, RMS | DASDV | 85.70
AAC, RMS | MAV | 85.55
AAC, RMS | LD | 85.30
AAC, RMS | MMAV2 | 85.05
AAC, RMS | SSC | 82.50
AAC, RMS | VAR | 51.70
AAC, RMS | EWL | 46.10
AAC, RMS | WL | 30.50
Table 11. Comparison of processing time by method.
Method | Time
SFS | n × A + (n − 1) × B + (n − 2) × C + ⋯
Ours | n × A + B + C + ⋯
Table 12. ANOVA of AAC and RMS to 50 identities.
Source | Sum of Squares | Degrees of Freedom | Mean Squares | F-Statistic | p-Value
Intergroup variation | 3.35 × 10^6 | 49 | 68,320.30 | 762.06 | 0
Intragroup variation | 3.49 × 10^7 | 388,750 | 89.70 | - | -
Total | 3.82 × 10^7 | 388,799 | - | - | -
Table 13. Comparison of the biometric accuracy of the LSTM based on different motions.
Training | Verification | Accuracy (%)
Motion1—Wrist in neutral | Motion1—Wrist in neutral | 92.50
Motion2—Pronation | Motion2—Pronation | 57.00
Motion3—Supination | Motion3—Supination | 58.00
Motion4—Wrist extension | Motion4—Wrist extension | 37.50
Motion5—Wrist flexion | Motion5—Wrist flexion | 43.50
Motion6—Ulnar deviation | Motion6—Ulnar deviation | 49.50
Motion7—Radial deviation | Motion7—Radial deviation | 43.00
Motion8—Fine pinch | Motion8—Fine pinch | 45.00
Motion9—Power grip | Motion9—Power grip | 39.00
Motion10—Hand open | Motion10—Hand open | 44.00
Table 14. Comparison of the biometric accuracy of the LSTM when training and verification motions differ.
Training | Verification | Accuracy (%)
Motion2—Pronation | Motion3—Supination | 30.00
Motion2—Pronation | Motion4—Wrist extension | 9.50
Motion2—Pronation | Motion5—Wrist flexion | 17.00
Motion2—Pronation | Motion6—Ulnar deviation | 12.00
Motion2—Pronation | Motion7—Radial deviation | 15.00
Motion2—Pronation | Motion8—Fine pinch | 14.00
Motion2—Pronation | Motion9—Power grip | 9.00
Motion2—Pronation | Motion10—Hand open | 11.00
Table 15. Precision by class.
Class | Precision | Class | Precision | Class | Precision | Class | Precision
1 | 88.57 | 14 | 94.12 | 27 | 86.96 | 40 | 91.67
2 | 88.57 | 15 | 77.78 | 28 | 100.00 | 41 | 95.00
3 | 90.00 | 16 | 94.29 | 29 | 88.64 | 42 | 84.38
4 | 94.29 | 17 | 82.22 | 30 | 86.84 | 43 | 94.59
5 | 100.00 | 18 | 100.00 | 31 | 85.71 | 44 | 74.47
6 | 95.00 | 19 | 97.56 | 32 | 97.22 | 45 | 100.00
7 | 95.24 | 20 | 90.00 | 33 | 90.70 | 46 | 93.75
8 | 79.55 | 21 | 64.71 | 34 | 97.06 | 47 | 90.91
9 | 91.89 | 22 | 81.58 | 35 | 80.00 | 48 | 96.55
10 | 69.81 | 23 | 92.50 | 36 | 84.09 | 49 | 85.37
11 | 82.50 | 24 | 74.42 | 37 | 97.56 | 50 | 94.29
12 | 62.96 | 25 | 68.52 | 38 | 94.87
13 | 92.68 | 26 | 91.89 | 39 | 73.08
Table 16. Recall by class.
Class | Recall | Class | Recall | Class | Recall | Class | Recall
1 | 77.50 | 14 | 80.00 | 27 | 100.00 | 40 | 82.50
2 | 77.50 | 15 | 87.50 | 28 | 87.50 | 41 | 95.00
3 | 90.00 | 16 | 82.50 | 29 | 97.50 | 42 | 67.50
4 | 82.50 | 17 | 92.50 | 30 | 82.50 | 43 | 87.50
5 | 77.50 | 18 | 82.50 | 31 | 90.00 | 44 | 87.50
6 | 95.00 | 19 | 100.00 | 32 | 87.50 | 45 | 87.50
7 | 100.00 | 20 | 90.00 | 33 | 97.50 | 46 | 75.00
8 | 87.50 | 21 | 82.50 | 34 | 82.50 | 47 | 100.00
9 | 85.00 | 22 | 77.50 | 35 | 70.00 | 48 | 70.00
10 | 92.50 | 23 | 92.50 | 36 | 92.50 | 49 | 87.50
11 | 82.50 | 24 | 80.00 | 37 | 100.00 | 50 | 82.50
12 | 85.00 | 25 | 92.50 | 38 | 92.50
13 | 95.00 | 26 | 85.00 | 39 | 95.00
Table 17. F1-scores by class.
Class | F1-Score | Class | F1-Score | Class | F1-Score | Class | F1-Score
1 | 82.67 | 14 | 86.49 | 27 | 93.02 | 40 | 86.84
2 | 82.67 | 15 | 82.35 | 28 | 93.33 | 41 | 95.00
3 | 90.00 | 16 | 88.00 | 29 | 92.86 | 42 | 75.00
4 | 88.00 | 17 | 87.06 | 30 | 84.62 | 43 | 90.91
5 | 87.32 | 18 | 90.41 | 31 | 87.80 | 44 | 80.46
6 | 95.00 | 19 | 98.77 | 32 | 92.11 | 45 | 93.33
7 | 97.56 | 20 | 90.00 | 33 | 93.98 | 46 | 83.33
8 | 83.33 | 21 | 72.53 | 34 | 89.19 | 47 | 95.24
9 | 88.31 | 22 | 79.49 | 35 | 74.67 | 48 | 81.16
10 | 79.57 | 23 | 92.50 | 36 | 88.10 | 49 | 86.42
11 | 82.50 | 24 | 77.11 | 37 | 98.77 | 50 | 88.00
12 | 72.34 | 25 | 78.72 | 38 | 93.67
13 | 93.83 | 26 | 88.31 | 39 | 82.61
Table 18. False rejection rate (FRR) by class.
Class | FRR | Class | FRR | Class | FRR | Class | FRR
1 | 0.23 | 14 | 0.12 | 27 | 0.35 | 40 | 0.18
2 | 0.23 | 15 | 0.58 | 28 | 0.00 | 41 | 0.12
3 | 0.23 | 16 | 0.12 | 29 | 0.29 | 42 | 0.29
4 | 0.12 | 17 | 0.47 | 30 | 0.29 | 43 | 0.12
5 | 0.00 | 18 | 0.00 | 31 | 0.35 | 44 | 0.70
6 | 0.12 | 19 | 0.06 | 32 | 0.06 | 45 | 0.00
7 | 0.12 | 20 | 0.23 | 33 | 0.23 | 46 | 0.12
8 | 0.53 | 21 | 1.04 | 34 | 0.06 | 47 | 0.23
9 | 0.18 | 22 | 0.41 | 35 | 0.41 | 48 | 0.06
10 | 0.93 | 23 | 0.18 | 36 | 0.41 | 49 | 0.35
11 | 0.41 | 24 | 0.64 | 37 | 0.06 | 50 | 0.12
12 | 1.16 | 25 | 0.99 | 38 | 0.12
13 | 0.18 | 26 | 0.18 | 39 | 0.82
Table 19. False acceptance rate (FAR) by class.
Class | FAR | Class | FAR | Class | FAR | Class | FAR
1 | 22.50 | 14 | 20.00 | 27 | 0.00 | 40 | 17.50
2 | 22.50 | 15 | 12.50 | 28 | 12.50 | 41 | 5.00
3 | 10.00 | 16 | 17.50 | 29 | 2.50 | 42 | 32.50
4 | 17.50 | 17 | 7.50 | 30 | 17.50 | 43 | 12.50
5 | 22.50 | 18 | 17.50 | 31 | 10.00 | 44 | 12.50
6 | 5.00 | 19 | 0.00 | 32 | 12.50 | 45 | 12.50
7 | 0.00 | 20 | 10.00 | 33 | 2.50 | 46 | 25.00
8 | 12.50 | 21 | 17.50 | 34 | 17.50 | 47 | 0.00
9 | 15.00 | 22 | 22.50 | 35 | 30.00 | 48 | 30.00
10 | 7.50 | 23 | 7.50 | 36 | 7.50 | 49 | 12.50
11 | 17.50 | 24 | 20.00 | 37 | 0.00 | 50 | 17.50
12 | 15.00 | 25 | 7.50 | 38 | 7.50
13 | 5.00 | 26 | 15.00 | 39 | 5.00
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
