Article

Surface EMG Statistical and Performance Analysis of Targeted-Muscle-Reinnervated (TMR) Transhumeral Prosthesis Users in Home and Laboratory Settings

1 Department of Engineering, King’s College London, London WC2R 2LS, UK
2 Center for Bionic Medicine, Shirley Ryan AbilityLab, Chicago, IL 60611, USA
3 Department of Physical Medicine and Rehabilitation, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
4 Faculté de Médecine, Université de Kindu, Site de Lwama II, Kindu, Maniema, Congo
* Author to whom correspondence should be addressed.
Sensors 2022, 22(24), 9849; https://doi.org/10.3390/s22249849
Submission received: 26 October 2022 / Revised: 9 December 2022 / Accepted: 12 December 2022 / Published: 14 December 2022
(This article belongs to the Special Issue Electromyography (EMG) Signal Acquisition and Processing)

Abstract
A pattern-recognition (PR)-based myoelectric control system is the trend of future prostheses development. Compared with conventional prosthetic control systems, PR-based control systems provide high dexterity, with many studies achieving >95% accuracy in the last two decades. However, most research studies have been conducted in the laboratory. There is limited research investigating how EMG signals are acquired when users operate PR-based systems in their home and community environments. This study compares the statistical properties of surface electromyography (sEMG) signals used to calibrate prostheses and quantifies the quality of calibration sEMG data through separability indices, repeatability indices, and correlation coefficients in home and laboratory settings. The results demonstrate no significant differences in classification performance between home and laboratory environments in within-calibration classification error (home: 6.33 ± 2.13%, laboratory: 7.57 ± 3.44%). However, between-calibration classification errors (home: 40.61 ± 9.19%, laboratory: 44.98 ± 12.15%) were statistically different. Furthermore, the difference in all statistical properties of sEMG signals is significant (p < 0.05). Separability indices reveal that motion classes are more diverse in the home setting. In summary, differences in sEMG signals generated between home and laboratory only affect between-calibration performance.

1. Introduction

Limb amputation refers to the removal of all or part of an upper or lower extremity. When people lose their upper limbs, many activities of daily living are severely limited, since people interact with their surroundings and perform sophisticated tasks with their hands. According to hand and upper-limb reconstruction statistics provided by the NHS [1], the total number of amputees in the United Kingdom is estimated to be 250,000, increasing by 10,000 per year. One out of four people with limb loss is an upper-limb amputee.
Prostheses aim to replace lost limbs and restore functionality. Myoelectrically controlled prostheses are state-of-the-art devices that intuitively interpret muscle signals to control the prosthesis. Control schemes for these devices have evolved from the initial on–off control to the two most popular methods today, namely proportional amplitude control and pattern-recognition-based control [2]. Conventional proportional control schemes with two electrodes drive the prosthesis with one degree of freedom and vary the control voltage according to the amplitude of the sEMG signals, providing robust performance but limited functionality [3]. Both control schemes can provide reasonable controllability for prostheses. Despite advancements in myoelectric control of prostheses, the prosthetic abandonment rate has not changed significantly since 2007 [4]: an estimated 52.18% of amputees abandon their prostheses due to concerns about comfort and functionality.
Pattern-recognition-based control schemes increase functionality by mapping decoded sEMG signals to different motion patterns, but their ability to predict movements accurately deteriorates over time [5,6,7,8]. This is mainly caused by the limited amount of data used to train the systems, which does not include variability in sEMG caused by intrinsic and extrinsic factors such as muscle fatigue [9], skin impedance [10], electrode shift [11], mutual adaptation [12], etc. Extensive training data sets that include example data representative of these conditions are challenging to collect clinically. Consequently, tools such as mobile applications and device-supervised training routines have been developed to allow users to recalibrate their PR control systems. Whereas recalibration can be performed quickly, the necessity to do so should be reduced as much as possible and used as a tool to personalize control rather than accommodate factors such as poor socket fit. The need to excessively recalibrate the device could lead to an increased possibility of abandonment of the prosthesis.
Recently, pattern-recognition-based control has become a viable option for clinical application due to the great promise of improved dexterity and performance [13]. Recent studies have focused on improving the robustness of prosthetic control systems by simulating potential clinical influence factors so that laboratory results transfer to the clinic. Samuel et al. [14] suggested using a dual-stage sequential method, a hybrid strategy, and a multiscenario strategy based on accelerometer mechanomyography to mitigate the effect of mobility. Their results show a significant reduction in classification error compared with other traditional classifiers. To improve pattern recognition performance under force variation, Islam et al. [15] proposed an improved time-domain feature extraction method, achieving 97.93% accuracy for seven hand gestures of nine amputees. Gigli et al. [16] used a dynamic training protocol to reduce errors caused by different limb positions; these dynamically trained systems outperformed statically trained systems by a significant margin. In addition to using additional data, extracting invariant features, or training the system with a novel protocol, postprocessing can also improve system reliability. To enhance clinical classification performance, Bao et al. [17] developed convolutional neural networks (CNNs) with confidence scores for rejecting low-confidence classification results. In online testing, their proposed method achieved an average error 9.75% lower than a majority-voting CNN and an original CNN.
Although the above studies achieved promising results, these results remain in the context of the laboratory rather than a clinical or home setting. Home trials have only been conducted in a handful of research studies. Osborn et al. [18] conducted a nine-week home trial case study with an amputee with extensive prosthetic experience to understand how pattern-recognition-based prosthetic control systems are incorporated into daily life. Furthermore, Simon et al. [19] compared user performance with pattern recognition control and direct control in eight-week home trials. Their study and the study conducted by Resnik et al. [20] provided insight into the use of different motion patterns in a home compared to a laboratory. However, these studies did not provide a deep analysis of the divergence of the sEMG in different contexts, and there is a lack of knowledge about the usability of sEMG in home versus laboratory settings. The open question is how prosthesis users perform in a laboratory relative to home. In these two contexts, the quality of prosthesis control might relate to the amputees’ motivation or awareness of prosthesis use, and there could be a corresponding difference in EMG signals and signal quality. Hence, this study aims to compare the statistical properties of sEMG signals and quantify the difference between the calibration data (6–8 weeks) collected in home and laboratory settings using separability indices, repeatability indices, and correlation coefficients. Subsequently, we evaluate whether these metrics can be used to predict the usability of calibration data. In this study, we provide deep insight into the statistical properties of sEMG signals and analyse the feature space distribution to evaluate the calibration quality of sEMG signals in home and laboratory contexts.

2. Materials and Methods

2.1. Data Source

Surface EMG signals were obtained from [21], acquired from eight targeted-muscle-reinnervated (TMR) transhumeral amputees who used myoelectric prostheses over six to eight weeks at home and in the laboratory. However, only data for seven participants were available to us; of these, one lacked home-trial data, and another had a failed channel. Hence, we used the data of five participants in this study. The participants used custom-fabricated prostheses: a Boston Digital Elbow (Liberating Technologies Inc., Holliston, Massachusetts, USA), a Motion Control Wrist Rotator (Motion Control Inc., Salt Lake City, Utah, USA), and a single-degree-of-freedom terminal device (a powered split hook or hand). The prostheses were embedded with eight stainless-steel electrodes sampling at 1000 Hz. These eight electrodes were grid-arranged [22] and placed on the wall of the prosthesis liner.
Before and after the home trial, several tests were performed in the laboratory to evaluate the prosthetic control performance of each participant. The goal was to identify optimal electrode sites inside the socket and make the amputee confident about using the device. The user was then included in the trial and sent home with the device. During the home trial, participants were instructed to control the prosthesis to perform activities of daily living and to record the use frequency and activities performed using the prosthesis. Calibration sessions at home were at the discretion of the participants. They could calibrate after donning or any time they noticed a decrease in performance. On the other hand, laboratory calibration sessions were conducted as instructed by the occupational therapist during laboratory visits throughout the trial.
In each calibration, seven movements were recorded: elbow flexion, elbow extension, wrist pronation, wrist supination, hand open, chuck grip, and rest. Except for rest, each calibration motion was supposed to be performed twice, lasting three seconds each. After each calibration, sEMG signal data were stored in the memory of the embedded controller so that prosthesis usage data could be accessed after the home or laboratory trial. We used the calibration data of the whole 6–8 weeks of home and laboratory trials. Table 1 shows calibration times for each participant. In addition, because the number of calibrations varies between the laboratory and the home, we chose equal calibration times for the two settings based on the side with fewer calibrations, balancing the laboratory calibrations performed before and after the home trial. The selected data were as close in time as possible to minimize the effect of time, which could cause different body conditions, as well as differing familiarity with control of the prosthesis, resulting in different EMG signals.

2.2. Statistical Properties Calculation

We described the raw sEMG signals using the following statistical properties to understand how the signals differ between the home and the laboratory. We then averaged each property over all channels and motions for each calibration of each participant.
1. Root Mean Square (RMS)

$RMS = \sqrt{\dfrac{1}{n}\sum_{i} x_i^{2}}$

where $n$ is the number of samples, and $x_i$ is the amplitude of sample $i$.

2. Mean Frequency (MeanF) [23]

$MeanF = \dfrac{\sum_{i=0}^{M} P_i f_i}{\sum_{i=0}^{M} P_i}$

where $M$ is the number of frequency bins, $f_i$ is the frequency of the spectrum at bin $i$, and $P_i$ is the power spectrum at bin $i$.

3. Median Frequency (MedF) [23]

$\sum_{i=1}^{MedF} P_i = \sum_{i=MedF}^{M} P_i = \dfrac{1}{2}\sum_{i=1}^{M} P_i$

where $P_i$ is the power spectrum at bin $i$, and $M$ is the number of frequency bins. The total power spectrum is divided into two equal parts at the median frequency.

4. Variance

$Variance = \dfrac{1}{n-1}\sum_{i}(x_i - \bar{x})^{2}$

where $x_i$ is the amplitude of the signal at sample point $i$, $\bar{x}$ is the mean amplitude of the sEMG signal, and $n$ is the number of samples.
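The authors computed these properties in MATLAB; as an illustrative sketch only (function names are ours, not from the study), the four properties can be written in Python/NumPy as:

```python
import numpy as np

def rms(x):
    """Root mean square of a 1-D sEMG segment."""
    return np.sqrt(np.mean(np.square(x)))

def mean_freq(x, fs):
    """Mean frequency: power-weighted average of the spectrum."""
    p = np.abs(np.fft.rfft(x)) ** 2          # power spectrum P_i
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)  # frequency bins f_i
    return np.sum(p * f) / np.sum(p)

def median_freq(x, fs):
    """Median frequency: bin that splits the total power into equal halves."""
    p = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    cum = np.cumsum(p)
    return f[np.searchsorted(cum, cum[-1] / 2.0)]

def variance(x):
    """Sample variance (n - 1 denominator), i.e. sEMG signal power."""
    return np.var(x, ddof=1)
```

For a pure 50 Hz sinusoid sampled at 1000 Hz, both `mean_freq` and `median_freq` recover approximately 50 Hz, which is a quick sanity check for the spectral estimates.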

2.3. Signal Processing and Feature Extraction

The obtained sEMG signals were processed using MATLAB R2020b. We filtered the EMG signals between 20 and 500 Hz using a fourth-order Butterworth filter. Subsequently, filtered signals were segmented using overlapping windows of 200 ms, each with 30 ms increments. Hudgin’s feature set [24] with Willison amplitude was extracted in each window.
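A minimal Python/SciPy sketch of this preprocessing step (the study used MATLAB; the function names and the cap just below the Nyquist frequency are our assumptions, since a 500 Hz cutoff coincides with the Nyquist frequency at 1000 Hz sampling):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # electrode sampling rate (Hz)

def preprocess(emg, fs=FS):
    """Band-pass filter 20-500 Hz with a 4th-order Butterworth filter."""
    # 500 Hz equals the Nyquist frequency at fs = 1000 Hz, so cap just below it
    high = min(500.0, 0.99 * fs / 2.0)
    b, a = butter(4, [20.0 / (fs / 2.0), high / (fs / 2.0)], btype="bandpass")
    return filtfilt(b, a, emg)

def sliding_windows(emg, fs=FS, win_ms=200, inc_ms=30):
    """Segment a signal into overlapping 200 ms windows with 30 ms increments."""
    win = int(win_ms * fs / 1000)
    inc = int(inc_ms * fs / 1000)
    return np.stack([emg[s:s + win]
                     for s in range(0, len(emg) - win + 1, inc)])
```

Zero-phase filtering (`filtfilt`) avoids shifting the signal in time, which matters when windows are later mapped to motion labels.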

2.4. Calibration Quality Quantification

In our previous research [25], we demonstrated that quantification of feature change could effectively reflect how sEMG signals change over time. Hence, quantifying the feature space variation could be critical to evaluating changes in calibration data. We tested four separability indices, one repeatability index, and two correlation coefficients as signal quality quantification metrics.

2.4.1. Separability Indices

Separability indices between each motion were used to measure the diversity of each motion pattern in feature space based on statistical criteria for each calibration. These separability indices combine within- and between-class information to describe the classifiability of calibration data. Because some methods evaluate the separability between only two classes, we calculated these indices between each pair of motion classes (i.e., there were K two-class combinations) and averaged them for single calibration data. In this study, we used the following four separability indices:
  • Davies-Bouldin index (DBI) [26]
The DBI measures the worst-case separability of neighbouring classes in feature space by averaging the highest magnitude of overlap among them. Hence, a lower value of DBI indicates higher class separability. Equations (5)–(7) illustrate how it is computed:
$S_h = \frac{1}{N_h}\sum_{i=1}^{N_h}\sqrt{(x_i - \mu_h)^{T}(x_i - \mu_h)}, \quad x_i \in C_h$

$D_{hl} = \sqrt{(\mu_h - \mu_l)^{T}(\mu_h - \mu_l)}$

$R_{hl} = \frac{S_h + S_l}{D_{hl}}, \qquad DBI = \frac{1}{K}\sum_{k=1}^{K}\max_{h \neq l}(R_{hl})$

where $S_h$ is the diversity of features within a class; $C_h$ and $C_l$ are the $h$th and $l$th classes ($C_h \neq C_l$); $N_h$ is the number of feature vectors in the $h$th class; $x_i$ is the $i$th feature vector in the $h$th class; $D_{hl}$ is the distance between the centroids of the two classes; $\mu_h$ is the mean feature vector of the $h$th class; $R_{hl}$ combines $D_{hl}$ and $S_h$ to measure the overlap between two classes; and $K$ is the number of pairs of classes.
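As a hedged illustration (not the authors' code), the DBI computation could look like the following NumPy sketch, which follows the standard per-class worst-case form:

```python
import numpy as np

def davies_bouldin(features, labels):
    """DBI: mean over classes of the worst-case overlap R_hl with any neighbour."""
    classes = np.unique(labels)
    mu = {c: features[labels == c].mean(axis=0) for c in classes}
    # S_h: mean Euclidean distance of each class's feature vectors to its centroid
    S = {c: np.mean(np.linalg.norm(features[labels == c] - mu[c], axis=1))
         for c in classes}
    worst = []
    for h in classes:
        # R_hl = (S_h + S_l) / D_hl; keep the largest overlap for class h
        r = [(S[h] + S[l]) / np.linalg.norm(mu[h] - mu[l])
             for l in classes if l != h]
        worst.append(max(r))
    return float(np.mean(worst))
```

A lower value means tighter, better-separated classes, consistent with the description above.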
  • Simplified Silhouette value (SS) [27]
SS is a computationally efficient version of the silhouette value. It analyses the consistency of each point in its class and the diversity of each point from other classes. Summarizing SS of all data points enables determination of the level of separability between two classes. The range of SS is −1 to 1, with −1 representing the worst separability and 1 representing the best separability. Equations (8) and (9) illustrate how it is computed:
$a(i) = d_E(x_i, c_h), \quad b(i) = d_E(x_i, c_l), \quad x_i \in C_h, \; C_h \neq C_l$

$ss(i) = \frac{b(i) - a(i)}{\max(a(i), b(i))}, \qquad SS = \frac{1}{K}\sum_{k=1}^{K}\frac{1}{N_h}\sum_{i=1}^{N_h} ss(i)$

where $a(i)$ is the distance between a feature vector ($x_i$) and the centroid of its own class, $b(i)$ is the distance of $x_i$ to the centroid of the other class, $ss(i)$ is the SS of a single feature vector, and $N_h$ is the number of feature vectors in the $h$th class.
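A small NumPy sketch of the two-class simplified silhouette (naming is ours; as an implementation choice, we average ss(i) over the points of both classes):

```python
import numpy as np

def simplified_silhouette(A, B):
    """Simplified silhouette between two classes of feature vectors A and B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)  # class centroids

    def ss(points, own, other):
        a = np.linalg.norm(points - own, axis=1)    # distance to own centroid
        b = np.linalg.norm(points - other, axis=1)  # distance to other centroid
        return (b - a) / np.maximum(a, b)

    return float(np.mean(np.concatenate([ss(A, ca, cb), ss(B, cb, ca)])))
```

Well-separated classes yield values close to 1, while identical classes yield 0, matching the range described above.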
  • Fisher’s linear discriminate analysis index (FLDI) [28]
FLDI can be applied to a multiclass problem; it is the ratio between the traces of the between-class and within-class scatter matrices, as shown in Equations (10)–(12). A larger FLDI implies greater separability.

$S_b = \sum_{i=1}^{c}(\mu_i - \mu)(\mu_i - \mu)^{T}$

$S_w = \sum_{i=1}^{c}\sum_{j=1}^{N_i}(x_{ij} - \mu_i)(x_{ij} - \mu_i)^{T}$

$FLDI = \frac{\mathrm{trace}(S_b)}{\mathrm{trace}(S_w)}$

where $S_b$ is the between-class scatter matrix, $S_w$ is the within-class scatter matrix, $c$ is the number of classes, $N_i$ is the number of feature vectors in the $i$th class, $\mu_i$ is the mean feature vector of the $i$th class, $\mu$ is the mean over all classes, and $x_{ij}$ is the $j$th feature vector in the $i$th class.
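A NumPy sketch of the trace-ratio computation (a hypothetical helper, not from the paper; it exploits the fact that the trace of an outer product equals the corresponding inner product):

```python
import numpy as np

def fldi(features, labels):
    """Trace ratio of between-class to within-class scatter."""
    mu_all = features.mean(axis=0)
    sb = sw = 0.0
    for c in np.unique(labels):
        xc = features[labels == c]
        mu_c = xc.mean(axis=0)
        d = mu_c - mu_all
        sb += float(d @ d)                     # trace of (mu_c - mu)(mu_c - mu)^T
        sw += float(np.sum((xc - mu_c) ** 2))  # trace of the within-class scatter
    return sb / sw
```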
  • Separability index (SI) [29]
The SI measures distances between the centroid of the ellipse of each class and the nearest class averaged across all motion classes, as formulated in Equation (13). The higher the SI, the more separability there is between classes.
$SI = \frac{1}{N}\sum_{i=1}^{N}\min_{\substack{j=1,\ldots,N \\ j \neq i}} \frac{1}{2}\sqrt{(\mu_i - \mu_j)^{T} S_i^{-1} (\mu_i - \mu_j)}$

where $N$ is the number of motion classes; $\mu_i$ and $\mu_j$ are the centroids of the $i$th and $j$th classes, respectively; and $S_i^{-1}$ is the inverse of the covariance matrix of the $i$th class.
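The half-Mahalanobis SI could be sketched as follows (our implementation choices, including a pseudo-inverse to guard against singular covariance matrices):

```python
import numpy as np

def separability_index(features, labels):
    """Mean over classes of the half-Mahalanobis distance to the nearest class."""
    classes = np.unique(labels)
    mu = {c: features[labels == c].mean(axis=0) for c in classes}
    cov = {c: np.cov(features[labels == c], rowvar=False) for c in classes}
    si = []
    for i in classes:
        s_inv = np.linalg.pinv(cov[i])  # pseudo-inverse guards singular covariances
        d = [0.5 * np.sqrt((mu[i] - mu[j]) @ s_inv @ (mu[i] - mu[j]))
             for j in classes if j != i]
        si.append(min(d))  # distance to the nearest other class
    return float(np.mean(si))
```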

2.4.2. Repeatability Index and Correlation Coefficients

To investigate the performance of a trained classifier on other calibration data, we calculated the repeatability index and correlation coefficients between training and testing calibration data. The change in feature space distribution can reflect the temporal and spatial variation in EMG signals [25]. Therefore, the selected correlation coefficients are primarily used to determine whether the distributions differ, indicating the consistency of the calibrations. We concatenated all channels for each motion to obtain each feature space’s kernel-smoothed probability density functions (PDFs). Subsequently, correlation coefficients were calculated based on the PDFs. Equations (14)–(16) show how these values were calculated. Because correlation coefficients are computed between two single-feature distributions, we averaged them over features and motions.
  • Repeatability index (RI) [29]
The RI was previously explored in [29,30]. Both results showed that RI is an effective index to measure the consistency of EMG motion patterns in feature space generated in different trials. The RI is calculated as the distance between the centroid of the ellipse in one calibration and the class in another calibration, then averaged over all motion classes. It is formulated as in Equation (14).
$RI = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{2}\sqrt{(\mu_{Tr_i} - \mu_{Ts_i})^{T} S_{Tr_i}^{-1} (\mu_{Tr_i} - \mu_{Ts_i})}$

where $N$ is the number of motion classes; $\mu_{Tr_i}$ and $\mu_{Ts_i}$ are the centroids of the $i$th training and testing classes, respectively; and $S_{Tr_i}^{-1}$ is the inverse of the covariance matrix of the $i$th training class. A lower RI indicates more consistency between training and testing data.
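An illustrative NumPy version of the RI (naming ours, mirroring the SI but comparing matching classes across two calibrations):

```python
import numpy as np

def repeatability_index(train_feats, train_labels, test_feats, test_labels):
    """Half-Mahalanobis distance between matching train/test class centroids."""
    ri = []
    for c in np.unique(train_labels):
        tr = train_feats[train_labels == c]
        ts = test_feats[test_labels == c]
        diff = tr.mean(axis=0) - ts.mean(axis=0)
        s_inv = np.linalg.pinv(np.cov(tr, rowvar=False))  # training covariance
        ri.append(0.5 * np.sqrt(diff @ s_inv @ diff))
    return float(np.mean(ri))
```

Identical calibrations give RI = 0; the index grows as the motion patterns drift apart between calibrations.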
  • Two-Sample Kolmogorov–Smirnov Test statistics (K-S) [31]
K-S provides information on the similarity between two distributions as formulated in Equation (15). Data from training and testing tend to be well-correlated when the K-S is low.
$K\text{-}S = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{M}\sum_{j=1}^{M}\max_{x}\left|F_1(x_{ij}) - F_2(x_{ij})\right|$

where $F_1(\cdot)$ and $F_2(\cdot)$ are the cumulative distribution functions of the two feature distributions, $M$ is the number of features in the feature space, and $N$ is the number of motion classes.
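A self-contained sketch of the two-sample K-S statistic for one pair of feature samples, using empirical CDFs (the study computed the statistic on kernel-smoothed PDFs; this simplified version operates directly on samples):

```python
import numpy as np

def ks_statistic(sample1, sample2):
    """Two-sample K-S statistic: maximum gap between empirical CDFs."""
    grid = np.sort(np.concatenate([sample1, sample2]))
    # Empirical CDFs of each sample evaluated on the pooled grid
    f1 = np.searchsorted(np.sort(sample1), grid, side="right") / len(sample1)
    f2 = np.searchsorted(np.sort(sample2), grid, side="right") / len(sample2)
    return float(np.max(np.abs(f1 - f2)))
```

Identical samples give 0, and fully disjoint samples give 1, matching the interpretation that a low K-S indicates well-correlated training and testing data.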
  • Spearman correlations (rho) [32]
Rho measures how strongly two distributions are monotonically related, as shown in Equation (16). A rho of −1 indicates that the two feature distributions are completely discordant, whereas 1 represents the highest similarity between them.

$rho = 1 - \frac{6\sum d^{2}}{n(n^{2} - 1)}$

where $d$ is the difference between the two ranks of each probability density, and $n$ is the number of probability densities.
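Equation (16) can be sketched directly from ranks (assuming no ties; names are ours):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman correlation via the rank-difference formula (no ties assumed)."""
    n = len(x)
    rank = lambda v: np.argsort(np.argsort(v)) + 1  # 1-based ranks
    d = rank(np.asarray(x)) - rank(np.asarray(y))   # rank differences
    return 1.0 - 6.0 * np.sum(d.astype(float) ** 2) / (n * (n ** 2 - 1))
```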

2.5. Data Analysis

Linear discriminant analysis (LDA) was selected as the classifier. Classification can be divided into two parts. In the first part, called within-calibration classification (WCC), we used an eightfold cross-validation procedure to evaluate how the classifier performed when trained and tested within the same calibration. The second part estimated the between-calibration classification (BCC) performance using the leave-one-calibration-out cross-validation method. To determine whether there are statistically significant differences in classification performance between home and laboratory calibration data, we performed sign tests on both WCC and BCC errors. Furthermore, we applied linear regression between each separability index as an independent variable against WCC errors.
Similarly, linear regression was used between the repeatability index and each correlation coefficient as independent variables against BCC errors. The linearity between these indices and classification errors was represented by the p-value and R-squared value of each linear model to determine whether they are reasonable to describe calibration data viability. We used the sign test to determine statistical differences between home and laboratory settings for each evaluation metric.
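As an illustration of this regression step, a sketch using `scipy.stats.linregress` (the function name and inputs here are hypothetical, not the authors' code):

```python
import numpy as np
from scipy.stats import linregress

def fit_metric_to_error(metric_values, errors):
    """Regress classification error on a quality metric; report slope, R^2, p."""
    res = linregress(metric_values, errors)
    return {"slope": res.slope, "r_squared": res.rvalue ** 2, "p": res.pvalue}
```

The R-squared value measures how much of the variation in classification error the metric explains, and the p-value tests whether the linear relationship is significant.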

3. Results

3.1. Raw sEMG Signals

Figure 1 shows an example of the concatenated raw sEMG signals of seven motions of TH02 used to calibrate his prosthesis at home and in the laboratory.

3.2. Statistical Properties and Classification

The four statistical properties of sEMG from the home and laboratory settings for each participant are shown in Figure 2. The sign test revealed a significant difference in the RMS and the variance of sEMG, which were both larger in the laboratory than at home (p < 0.001). The mean and median frequencies were greater in the home than in the laboratory (p < 0.001). The sign test results for calibrations of all participants between home and laboratory are summarized in Table 2.
WCC and BCC errors are presented in Table 3. All BCC errors are larger than the WCC errors, with the lowest error of 28.40 ± 4.91% for BCC and 5.61 ± 1.55% for WCC. The global mean WCC and BCC errors in the laboratory are higher than at home, although only the BCC difference was significant (p < 0.05).

3.3. Metrics for Calibration Quality Quantification

For all metrics used to quantify the quality of signals, the line-fitting results across metrics and classification errors from all participants are summarized in Table 4. Figure 3 and Figure 4 show examples of how we fitted WCC with DBI and BCC with RI into linear regression models.
All separability indices have a high degree of linear relationship with WCC errors in home and lab contexts. WCC errors are lower with lower DBI and higher SS, FLDI, and SI. Additionally, RI has a linear relationship with BCC errors (higher RI with higher BCC errors) in home and lab calibration data. In contrast, K-S showed no linearity, and rho only weak linearity, with BCC error in lab calibration data. Based on the averaged index values across all calibrations and the sign test on all metrics, only DBI and SI indicate that home calibrations have better separability than laboratory calibrations.

4. Discussion

The aim of this study was to compare the calibration of sEMG signals between home and laboratory settings through analysis of the statistical properties of sEMG signals and to quantify the calibration quality in both contexts. The overall results show a better calibration quality at home than in the laboratory. In sEMG signals, RMS is related to the contraction forces, and variance represents sEMG signal power. The statistical analysis shows a significant difference between home and laboratory settings, which suggests that contraction levels vary between the two contexts. Because it is difficult for amputees to consistently produce contraction levels without proprioceptive and visual feedback [33], the force used to calibrate prostheses can vary each time. In the laboratory, amputees might have concentrated more (i.e., higher motivation or awareness) on performing motions, which resulted in higher RMS and variance values. In addition, intensive concentration can lead to mental fatigue, which alters the recruitment of muscle fibers when generating the same force and motion pattern [34], influencing the consistency of the EMG signal. On the other hand, contraction levels can be estimated from MedF and MeanF, but the estimation is affected by the type of contraction, the subject, and the muscle length [35]. MedF and MeanF are the gold standards for assessing muscle fatigue using surface EMG signals because muscle fatigue results in a downward frequency shift [23]. Given the significant differences between home and laboratory settings in MedF and MeanF, muscle fatigue could potentially have occurred when the participants calibrated their prostheses in the laboratory.
The WCC performance with the selected classifier and feature set obtained promising results with 6–8 weeks of home trial and lab calibration data. However, from the perspective of overall mean errors, the WCC errors in the lab are slightly higher than those in the home, despite no significant difference in the statistical test. In a study conducted by Waris et al. [8], LDA showed better performance and robustness than conventional classifiers on fluctuating sEMG signals over seven days. Hence, the potential reason for the lack of difference in the WCC could be that the LDA and selected feature sets are robust to the divergence of sEMG between home and laboratory settings. During home-trial recording, signal noise and user timing issues could be the main reasons for low-quality signals at home [19]. Signal noise issues include impedance change (when the skin’s temperature rises and sweat starts to form), intermittent electrode contact with the skin (due to muscle volume variation when performing contractions, socket movement, etc.), and poor wire condition. User timing issues include unexpected activity during resting, insufficient contraction time, and missed contractions. Like the home calibrations, the lab calibrations also contained signal noise and timing issues, even under supervision. Figure 5 shows a raw sEMG signal from the laboratory. In addition, we found that a large proportion of laboratory calibrations had insufficient contraction time, which mixed resting signals with other motions. A short contraction time results in low diversity between motion patterns and reduced classifiability. Furthermore, we used the resting-based threshold for WAMP to improve class separability [36]; spontaneous activity during resting perturbs this feature threshold and introduces unknown motion into the signal.
On the other hand, BCC errors are much larger than WCC errors due to the stochastic characteristics of sEMG. Although we chose the calibration data as close in time as possible, the time interval between two calibrations could be weeks, as the subjects calibrated their prostheses at home for 6–8 weeks. Increasing time gaps between training and testing data deteriorate classification performance [5,8]. Except for TH04, all subjects had overlapping home and laboratory trials, or the interval was not more than one week. TH04’s lab trial was performed one month after the last calibration of the home trial. Because TH04 had not used the prosthesis for an extended period, he could not produce consistent motion patterns across different calibrations in the lab. As a result, TH04 had the highest BCC error, along with a considerable difference in WCC error between home and lab settings.
Among the metrics for calibration quality quantification, DBI had the highest R-squared value, followed by SS and FLDI. With a reasonable degree of linearity, these three indices can be used as quality indices to assist a user in determining whether additional prosthesis calibrations are needed. Because the repeatability index and correlation coefficients reveal the consistency between two calibration data sets, they may be used to compare new calibration data against historical data with good motion patterns. Nathan et al. [37] developed a calibration quality feedback tool to increase the function of myoelectric prostheses; they used the separability index and repeatability index to evaluate calibration data, with a rating system and advice for subsequent recalibration.
The results of this study are encouraging in terms of home use of myoelectric prostheses. However, the study is limited, as it only compares signals without considering contextual factors.

5. Conclusions

In this study, we adopted a dedicated methodological approach to assess the quality of data recorded at home during prosthesis use, data recorded in a laboratory setting, and how the two contexts affect performance. Results obtained in this study indicate that the within-calibration classification results of the sEMG of TMR amputees between home and laboratory settings did not significantly differ, but the quality of calibrations was different, with home data providing better separability. However, the between-calibration performance was better at home than in the laboratory despite no statistical difference in the repeatability metrics. These results show that although the motivation and engagement of patients might differ between home and laboratory settings, they have no significant influence on the within-calibration performance.

Author Contributions

Conceptualization, B.W. and E.N.K.; methodology, B.W. and E.N.K.; machine learning implementation, B.W.; data analysis, B.W.; data resources, L.H.; writing—original draft preparation, B.W.; writing—review and editing, E.N.K., L.H. and X.B.; supervision, E.N.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the China Scholarship Council and King’s College London, grant number 202008440267.

Institutional Review Board Statement

This is a registered clinical trial published on 30 March 2017: NCT03097978 https://clinicaltrials.gov/ct2/show/NCT03097978?term=NCT03097978&rank=1 (accessed on 12 March 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data analyzed in this study are available from Levi Hargrove.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Specialised Commissioning Team; NHS England. Hand and Upper Limb Reconstruction Using Vascularised Composite Allotransplantation (HAUL-VCA); NHS England: Leeds, UK, 2015.
2. Geethanjali, P. Myoelectric control of prosthetic hands: State-of-the-art review. Med. Devices 2016, 9, 247–255.
3. Farina, D.; Jiang, N.; Rehbaum, H.; Holobar, A.; Graimann, B.; Dietl, H.; Aszmann, O.C. The extraction of neural information from the surface EMG for the control of upper-limb prostheses: Emerging avenues and challenges. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 797–809.
4. Salminger, S.; Stino, H.; Pichler, L.H.; Gstoettner, C.; Sturma, A.; Mayer, J.A.; Szivak, M.; Aszmann, O.C. Current rates of prosthetic usage in upper-limb amputees: Have innovations had an impact on device acceptance? Disabil. Rehabil. 2020, 44, 3708–3713.
5. Waris, A.; Niazi, I.K.; Jamil, M.; Gilani, O.; Englehart, K.; Jensen, W.; Shafique, M.; Kamavuako, E.N. The effect of time on EMG classification of hand motions in able-bodied and transradial amputees. J. Electromyogr. Kinesiol. 2018, 40, 72–80.
6. Sheng, X.; Lv, B.; Guo, W.; Zhu, X. Common spatial-spectral analysis of EMG signals for multiday and multiuser myoelectric interface. Biomed. Signal Process. Control 2019, 53, 101572.
7. Jaber, H.A.; Rashid, M.T.; Fortuna, L. Using the Robust High Density-surface Electromyography Features for Real-Time Hand Gestures Classification. IOP Conf. Ser. Mater. Sci. Eng. 2020, 745, 012020.
8. Waris, A.; Niazi, I.K.; Jamil, M.; Englehart, K.; Jensen, W.; Kamavuako, E.N. Multiday Evaluation of Techniques for EMG-Based Classification of Hand Motions. IEEE J. Biomed. Health Inform. 2019, 23, 1526–1534.
9. Díaz-Amador, R.; Mendoza-Reyes, M.A. Towards the reduction of the effects of muscle fatigue on myoelectric control of upper limb prostheses. Dyna 2019, 86, 110–116.
10. Sae-lim, W.; Phukpattaranont, P.; Thongpull, K. Effect of Electrode Skin Impedance on Electromyography Signal Quality. In Proceedings of the 2018 15th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), Chiang Rai, Thailand, 18–21 July 2018; pp. 748–751.
11. He, J.; Sheng, X.; Zhu, X.; Jiang, N. Position Identification for Robust Myoelectric Control Against Electrode Shift. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 3121–3128.
12. Hahne, J.M.; Markovic, M.; Farina, D. User adaptation in Myoelectric Man-Machine Interfaces. Sci. Rep. 2017, 7, 4437.
13. Atzori, M.; Muller, H. Control Capabilities of Myoelectric Robotic Prostheses by Hand Amputees: A Scientific Research and Market Overview. Front. Syst. Neurosci. 2015, 9, 162.
14. Samuel, O.W.; Li, X.; Geng, Y.; Asogbon, M.G.; Fang, P.; Huang, Z.; Li, G. Resolving the adverse impact of mobility on myoelectric pattern recognition in upper-limb multifunctional prostheses. Comput. Biol. Med. 2017, 90, 76–87.
15. Islam, M.J.; Ahmad, S.; Haque, F.; Reaz, M.B.I.; Bhuiyan, M.A.S.; Islam, M.R. Force-Invariant Improved Feature Extraction Method for Upper-Limb Prostheses of Transradial Amputees. Diagnostics 2021, 11, 843.
16. Gigli, A.; Gijsberts, A.; Castellini, C. The Merits of Dynamic Data Acquisition for Realistic Myocontrol. Front. Bioeng. Biotechnol. 2020, 8, 361.
17. Bao, T.; Zaidi, S.A.R.; Xie, S.Q.; Yang, P.; Zhang, Z.Q. CNN Confidence Estimation for Rejection-Based Hand Gesture Classification in Myoelectric Control. IEEE Trans. Hum.-Mach. Syst. 2022, 52, 99–109.
18. Osborn, L.E.; Moran, C.W.; Dodd, L.D.; Sutton, E.E.; Norena Acosta, N.; Wormley, J.M.; Pyles, C.O.; Gordge, K.D.; Nordstrom, M.J.; Butkus, J.A.; et al. Monitoring at-home prosthesis control improvements through real-time data logging. J. Neural Eng. 2022, 19, 036021.
19. Simon, A.; Turner, K.; Miller, L.; Potter, B.; Beachler, M.; Dumanian, G.; Hargrove, L.; Kuiken, T. User performance with a transradial multi-articulating hand prosthesis during pattern recognition and direct control home use. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 1.
20. Resnik, L.; Acluche, F.; Borgia, M. The DEKA hand: A multifunction prosthetic terminal device—Patterns of grip usage at home. Prosthet. Orthot. Int. 2018, 42, 446–454.
21. Hargrove, L.J.; Miller, L.A.; Turner, K.; Kuiken, T.A. Myoelectric Pattern Recognition Outperforms Direct Control for Transhumeral Amputees with Targeted Muscle Reinnervation: A Randomized Clinical Trial. Sci. Rep. 2017, 7, 13840.
22. Tkach, D.C.; Young, A.J.; Smith, L.H.; Rouse, E.J.; Hargrove, L.J. Real-Time and Offline Performance of Pattern Recognition Myoelectric Control Using a Generic Electrode Grid With Targeted Muscle Reinnervation Patients. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 727–734.
23. Phinyomark, A.; Thongpanja, S.; Hu, H.; Phukpattaranont, P.; Limsakul, C. The Usefulness of Mean and Median Frequencies in Electromyography Analysis; InTech: Singapore, 2012; pp. 195–220.
24. Hudgins, B.; Parker, P.; Scott, R.N. A new strategy for multifunction myoelectric control. IEEE Trans. Biomed. Eng. 1993, 40, 82–94.
25. Wang, B.; Kamavuako, E.N. Correlation between the stability of feature distribution and classification performance in sEMG signals. In Proceedings of the 2021 4th International Conference on Bio-Engineering for Smart Technologies (BioSMART), Salford, Manchester, UK, 8–10 December 2021; pp. 1–4.
26. Campbell, E.; Phinyomark, A.; Scheme, E. Current Trends and Confounding Factors in Myoelectric Control: Limb Position and Contraction Intensity. Sensors 2020, 20, 1613.
27. Wang, F.; Franco-Penya, H.-H.; Kelleher, J.; Pugh, J.; Ross, R. An Analysis of the Application of Simplified Silhouette to the Evaluation of k-means Clustering Validity; Springer: Berlin/Heidelberg, Germany, 2017.
28. Phinyomark, A.; Khushaba, R.; Ibáñez-Marcelo, E.; Patania, A.; Scheme, E.; Petri, G. Navigating Features: A Topologically Informed Chart of Electromyographic Features Space. J. R. Soc. Interface 2017, 14, 20170734.
29. Bunderson, N.E.; Kuiken, T.A. Quantification of Feature Space Changes With Experience During Electromyogram Pattern Recognition Control. IEEE Trans. Neural Syst. Rehabil. Eng. 2012, 20, 239–246.
30. He, J.; Zhang, D.; Jiang, N.; Sheng, X.; Farina, D.; Zhu, X. User adaptation in long-term, open-loop myoelectric training: Implications for EMG pattern recognition in prosthesis control. J. Neural Eng. 2015, 12, 046005.
31. Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification, 2nd ed.; Wiley-Interscience: Hoboken, NJ, USA, 2000.
32. Schober, P.; Boer, C.; Schwarte, L.A. Correlation Coefficients: Appropriate Use and Interpretation. Anesth. Analg. 2018, 126, 1763–1768.
33. Al-Timemy, A.H.; Khushaba, R.N.; Bugmann, G.; Escudero, J. Improving the Performance Against Force Variation of EMG Controlled Multifunctional Upper-Limb Prostheses for Transradial Amputees. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 24, 650–661.
34. Van Cutsem, J.; Marcora, S.; De Pauw, K.; Bailey, S.; Meeusen, R.; Roelands, B. The Effects of Mental Fatigue on Physical Performance: A Systematic Review. Sports Med. 2017, 47, 1569–1588.
35. Roman-Liu, D. The influence of confounding factors on the relationship between muscle contraction level and MF and MPF values of EMG signal: A review. Int. J. Occup. Saf. Ergon. 2016, 22, 77–91.
36. Waris, A.; Kamavuako, E.N. Effect of threshold values on the combination of EMG time domain features: Surface versus intramuscular EMG. Biomed. Signal Process. Control 2018, 45, 267–273.
37. Brantly, N.; Feuser, A.; Cummins, F.; Hargrove, L.J.; Lock, B.A. Pattern Recognition Myoelectric Control Calibration Quality Feedback Tool to Increase Function. In Proceedings of the MEC Symposium 2017, Fredericton, NB, Canada, 15–18 August 2017.
Figure 1. An example of TH02’s calibration data with all seven motions (a) at home and (b) in a laboratory setting. The red vertical lines separate different motions.
Figure 2. The average statistical properties of sEMG over the respective number of calibrations for each participant at home and in the lab: (a) root mean square values; (b) variances; (c) mean frequency; (d) median frequency.
Figure 3. Scatter plots of linear regression fitted models (in red), where WCC error is related to DBI. Each blue x represents a sample result.
Figure 4. Scatter plots of linear regression fitted models (in red), where BCC error is related to RI. Each blue x represents a sample result.
Figure 5. An example of laboratory sEMG signals with signal noise and user timing issues.
Table 1. Participants enrolled in the dataset study.

| Participant | Age | Time Since Amputation (Years) | Time Since TMR (Years) | Amputation Side | Etiology | Calibrations (Home) | Calibrations (Laboratory) |
|---|---|---|---|---|---|---|---|
| TH01 | 35 | 4 | 3 | Right | Trauma (military) | 7 | 28 |
| TH02 | 54 | 6 | <1 | Left | Trauma (military) | 78 | 20 |
| TH03 | 58 | 5 | 1 | Left | Sarcoma | 57 | 17 |
| TH04 | 31 | 8 | 7 | Left | Trauma (military) | 22 | 25 |
| TH05 | 27 | 2 | 1 | Right | Trauma (crushing) | 18 | 100 |
Table 2. The results of sign tests of calibrations for all participants in the laboratory and at home.

| Statistical Property | Home | Laboratory | p-Value |
|---|---|---|---|
| RMS | 0.33 ± 0.11 | 0.35 ± 0.11 | 6.27 × 10⁻⁴ |
| Variance | 0.19 ± 0.12 | 0.22 ± 0.13 | 4.08 × 10⁻⁵ |
| Mean F | 151.42 ± 10.81 | 145.16 ± 10.21 | 8.54 × 10⁻⁹ |
| Med F | 138.70 ± 11.27 | 131.95 ± 11.15 | 1.33 × 10⁻⁷ |
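The four statistical properties compared in Table 2 are standard sEMG descriptors. As a reference for how they are typically computed (the study's exact windowing, normalization, and sampling rate are not specified here, so the values below are illustrative assumptions), a minimal NumPy sketch:

```python
import numpy as np

def emg_features(window, fs=1000.0):
    """RMS, variance, mean frequency, and median frequency of one sEMG
    analysis window. `fs` is the sampling rate in Hz (assumed value)."""
    window = np.asarray(window, dtype=float)
    rms = np.sqrt(np.mean(window ** 2))
    var = np.var(window)

    # Power spectrum of the mean-removed window via the real FFT.
    spectrum = np.abs(np.fft.rfft(window - window.mean())) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)

    # Mean frequency: power-weighted average of the spectrum.
    total_power = spectrum.sum()
    mean_freq = float((freqs * spectrum).sum() / total_power)

    # Median frequency: where cumulative power first reaches half the total.
    cumulative = np.cumsum(spectrum)
    median_freq = float(freqs[np.searchsorted(cumulative, total_power / 2.0)])
    return rms, var, mean_freq, median_freq
```

A downward shift in mean/median frequency between settings, as Table 2 reports, is commonly read as a change in contraction or fatigue state [23,35].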
Table 3. Average classification error across calibrations.

| Participant | WCC Error, Home (%) | WCC Error, Lab (%) | BCC Error, Home (%) | BCC Error, Lab (%) |
|---|---|---|---|---|
| TH01 | 5.61 ± 1.55 | 5.80 ± 3.60 | 28.40 ± 4.91 | 33.14 ± 12.47 |
| TH02 | 6.72 ± 1.62 | 8.30 ± 3.14 | 21.10 ± 10.96 | 31.25 ± 10.94 |
| TH03 | 7.77 ± 2.42 | 8.66 ± 3.33 | 40.85 ± 9.64 | 43.84 ± 13.79 |
| TH04 | 6.73 ± 3.55 | 10.62 ± 4.37 | 54.49 ± 10.23 | 60.09 ± 10.36 |
| TH05 | 4.84 ± 1.49 | 4.55 ± 2.78 | 58.22 ± 10.21 | 56.59 ± 13.19 |
| Overall mean error (%) | 6.33 ± 2.13 | 7.57 ± 3.44 | 40.61 ± 9.19 | 44.98 ± 12.15 |
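Table 3 distinguishes within-calibration (WCC) errors, where training and test data come from the same calibration, from between-calibration (BCC) errors, where the classifier is trained on one calibration and tested on a later one. The sketch below illustrates that evaluation split with a simple nearest-centroid classifier as a stand-in (it does not reproduce the study's actual pattern-recognition classifier or features): a feature-space shift between calibrations inflates BCC error while leaving WCC error low.

```python
import numpy as np

def classification_error(train_X, train_y, test_X, test_y):
    """Error rate of a minimal nearest-centroid classifier.
    WCC: test_X/test_y are held-out data from the same calibration.
    BCC: test_X/test_y come from a different calibration."""
    classes = np.unique(train_y)
    centroids = np.array([train_X[train_y == c].mean(axis=0) for c in classes])
    # Distance from every test sample to every class centroid.
    dists = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
    pred = classes[np.argmin(dists, axis=1)]
    return float(np.mean(pred != test_y))
```

Under this scheme, any systematic drift in the signals between calibrations (electrode shift, impedance change, user adaptation) degrades BCC error even when each individual calibration remains internally consistent.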
Table 4. This table illustrates whether linearity exists (1) between separability indices and WCC errors and (2) between repeatability, correlation coefficient, and BCC errors. All R-squared values have p < 0.05, except for K-S in the laboratory. CC is the correlation coefficient. The p-value indicates whether there are significant differences between the home and laboratory settings for each metric.

| Group | Metric | R-Squared (Home) | R-Squared (Lab) | p-Value | Avg. across All Calibrations (Home) | Avg. across All Calibrations (Lab) |
|---|---|---|---|---|---|---|
| Separability indices | DBI | 0.89 | 0.65 | 0.011 | 3.06 ± 1.87 | 3.34 ± 1.87 |
| | SS | 0.81 | 0.84 | 0.063 | 0.31 ± 0.12 | 0.25 ± 0.14 |
| | FLDI | 0.86 | 0.72 | 0.156 | −7.17 ± 2.86 | −7.58 ± 2.68 |
| | SI | 0.54 | 0.85 | 0.012 | 6.96 ± 5.64 | 4.47 ± 2.99 |
| Repeatability index and CC ¹ | RI | 0.66 | 0.51 | 0.445 | 2.05 ± 1.48 | 2.16 ± 1.59 |
| | K-S | 0.29 | 0.00 | 0.156 | 0.19 ± 0.03 | 0.21 ± 0.04 |
| | rho | 0.46 | 0.12 | 0.913 | 0.89 ± 0.03 | 0.88 ± 0.04 |

¹ N/A indicates that the results could not be determined because there are no significant differences.
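Of the separability indices in Table 4, the Davies-Bouldin index (DBI) is the one most strongly related to WCC error in the home setting (R² = 0.89); lower DBI means more compact, better-separated motion classes in feature space. A minimal sketch of the standard DBI computation follows (the study's feature set and distance convention are assumptions here; the Euclidean distance is used):

```python
import numpy as np

def davies_bouldin(features, labels):
    """Davies-Bouldin index over motion classes (lower = more separable).
    `features`: (n_samples, n_features) array; `labels`: one class id per row."""
    classes = np.unique(labels)
    centroids = np.array([features[labels == c].mean(axis=0) for c in classes])
    # Within-class scatter: mean distance of samples to their class centroid.
    scatter = np.array([
        np.linalg.norm(features[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    ratios = []
    for i in range(len(classes)):
        # Worst-case scatter-to-separation ratio against any other class.
        worst = max(
            (scatter[i] + scatter[j]) / np.linalg.norm(centroids[i] - centroids[j])
            for j in range(len(classes)) if j != i
        )
        ratios.append(worst)
    return float(np.mean(ratios))
```

The lower home-setting DBI in Table 4 (3.06 vs. 3.34) is consistent with the paper's finding that motion classes were more diverse, i.e., better separated, at home.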
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Wang, B.; Hargrove, L.; Bao, X.; Kamavuako, E.N. Surface EMG Statistical and Performance Analysis of Targeted-Muscle-Reinnervated (TMR) Transhumeral Prosthesis Users in Home and Laboratory Settings. Sensors 2022, 22, 9849. https://doi.org/10.3390/s22249849


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
