Article

Classifying Muscle States with One-Dimensional Radio-Frequency Signals from Single Element Ultrasound Transducers

1 Fraunhofer Institute for Biomedical Engineering (IBMT), Joseph-von-Fraunhofer-Weg 1, 66280 Sulzbach, Germany
2 Chair of Embedded Intelligence, Technical University of Kaiserslautern, Gottlieb-Daimler-Straße 47, 67663 Kaiserslautern, Germany
* Author to whom correspondence should be addressed.
Sensors 2022, 22(7), 2789; https://doi.org/10.3390/s22072789
Submission received: 2 March 2022 / Revised: 30 March 2022 / Accepted: 2 April 2022 / Published: 5 April 2022
(This article belongs to the Collection Medical Applications of Sensor Systems and Devices)

Abstract

The reliable assessment of muscle states, such as contracted vs. non-contracted muscles or relaxed vs. fatigued muscles, is crucial in many sports and rehabilitation scenarios, such as the assessment of therapeutic measures. The goal of this work was to deploy machine learning (ML) models based on one-dimensional (1-D) sonomyography (SMG) signals to facilitate low-cost and wearable ultrasound devices. One-dimensional SMG is a non-invasive technique using 1-D ultrasound radio-frequency signals to measure muscle states and has the advantage of being able to acquire information from deep soft tissue layers. To mimic real-life scenarios, we did not emphasize the acquisition of particularly distinct signals. The ML models exploited muscle contraction signals of eight volunteers and muscle fatigue signals of 21 volunteers. We evaluated them with different schemes on a variety of data types, such as unprocessed or processed raw signals, and found that comparatively simple ML models, such as Support Vector Machines or Logistic Regression, yielded the best performance w.r.t. accuracy and evaluation time. We conclude that our framework for muscle contraction and muscle fatigue classification is very well-suited to facilitate low-cost and wearable devices based on ML models using 1-D SMG.

1. Introduction

The reliable assessment of muscle states, such as contracted vs. non-contracted muscles or relaxed vs. fatigued muscles, is crucial in many sports and rehabilitation scenarios. Signals from various non-invasive and wearable sensors, such as force sensors [1], inertial measurement units (IMUs) [2], mechanomyograms (MMGs) [3,4], surface electromyography (sEMG) [5,6], textile resistive pressure mapping sensors [7], or a combination of those can be used to determine muscle activities or muscle fatigue. More recently, mobile and wearable Electrical Impedance Tomography (EIT) has also been proposed as an imaging method for muscular activities [8].
However, an issue common to all of these techniques is that they only measure signals from the body surface without obtaining information from the deeper tissue layers that contain the muscles themselves. An alternative non-invasive technique relying on signals that extract information from structures within the body is sonomyography (SMG), which uses ultrasound (US) to obtain information about skeletal muscles. Two-dimensional Brightness mode (B-Mode) US images, which are produced with a US transducer array consisting of several elements, are often used for imaging in SMG-based approaches. A recent review lists 17 studies making use of B-Mode US for biomonitoring muscle and tendon dynamics during locomotion [9]. Even though B-Mode US has shown remarkable accuracies for the classification of various muscle activities and muscle states [9], this technique is not suitable for simple, low-cost and wearable solutions [10].

1.1. Related Work

In contrast to two-dimensional images, raw one-dimensional Amplitude Mode (A-Mode) radio frequency (RF) signals can be obtained from comparatively cheap single element US transducers and single channel transmit/receive electronics and do not require any sophisticated processing such as beamforming. A-Mode US is a much better option for low-cost and wearable muscle state recognition solutions and has already been exploited, partially in combination with other sensors, in previous works.
Pioneering work in the field of single element 1-D SMG was conducted by Guo et al. for the detection of dynamic thickness changes in skeletal muscles during contractions in 2008 [11] and skeletal muscle assessments for prosthetic controls in 2009 [12]. A follow-up study evaluated the feasibility of signals stemming from single element 1-D SMG for controlling a powered prosthesis with one degree of freedom [13]. In 2013, Machine Learning (ML) algorithms, such as Support Vector Machines (SVMs) and artificial neural networks (ANNs), were used for the first time on single element 1-D SMG signals to predict wrist angles [14]. Muscle fatigue states were first examined with single element 1-D SMG in 2017 by tracking thickness changes of the biceps brachii muscle during the fatigue process [15]. A comparative analysis published in 2018 concluded that the classification performance of US signals acquired with a single element of a conventional linear array transducer exceeded the classification performance of sEMG w.r.t. the recognition of six out of eight hand and wrist gestures but was significantly worse w.r.t. the recognition of the rest state [16]. Another publication from the same year found that angles and torques of elbows could be reconstructed from 1-D SMG signals using SVM models [17]. The A-Mode US signals of a commercially available system were examined in a 2016 study comprising 206 individuals, and it was found that a reliable body fat percentage estimation could be performed on the basis of those signals [18]. In a publication from 2019, A-Mode signals were examined w.r.t. their ability to identify acute changes in muscle thickness after four sets of biceps curls [19]. Another work published in 2020 made use of 1-D SMG signals to measure acoustic nonlinearity parameters and showed that they have the potential to represent skeletal muscles dynamically [20].
An additional publication from 2020 presented a single element wearable ultrasonic sensor, consisting of a transmitter and receiver, and a corresponding method to measure skeletal muscle contractile parameters [21].

1.2. Our Contribution

The goal of this work was to address the shortcomings of previous approaches by exploiting signals from deeper soft tissue layers acquired with a wearable system, which is only possible with other approaches to a limited extent. To this end, we present a comprehensive ML framework for the classification of 1-D SMG signals stemming from single element US transducers of healthy volunteers to quantify muscle contraction and muscle fatigue states. Our working hypothesis is that it is possible to create ML models discriminating between relaxed and contracted or fatigued signals with high accuracy. In contrast to previous works [11], we did not emphasize the careful selection of any single muscle or muscle group. Instead, we allowed a large degree of freedom w.r.t. the exact US transducer position within a previously defined rough body area, such as the gastrocnemius muscle or biceps brachii, to allow the usage of our methods in environments such as fitness monitoring in gyms or rehabilitation centers. In such environments, we do not expect the user to put any emphasis on transducer positioning for optimal ultrasound signals. Figure 1 illustrates the examined muscles or muscle groups, with the gastrocnemius muscle sketched on the left and the biceps brachii muscle sketched on the right. This work builds upon and extends previously published preliminary results [22,23].

2. Materials and Methods

2.1. Materials

We relied on 1-D US RF signals of healthy volunteers, which we acquired with the experimental setup shown in Figure 2. It consists of a single element transducer, connected to a custom-designed electronics board (approx. 90 mm × 30 mm × 13 mm), which can be battery powered with a power consumption of approx. 2.5 W while measuring. This board sent the acquired signals of each volunteer via a wireless connection to an Android smartphone with a custom-built app for storage and future processing. We used a Panametrics single element US transducer (with a 3.5 MHz center frequency) to obtain muscle contraction and muscle fatigue data in two different types of experiments.

2.2. Methods

Classifying 1-D RF signals poses a time series classification (TSC) task, which is a non-trivial challenge. A publication from 2019 states that TSC “is a hard problem that is not yet fully understood and numerous attempts have been made in the past to create generic and domain-specific classification methods” [25]. We analyzed the signals with a variety of ML methods, grouped into traditional ML algorithms, artificial neural networks (ANNs), and Gradient Boosting Machines (GBMs). The traditional ML algorithm was a 1-nearest-neighbor approach based on the dynamic time warping distance (1-NN DTW) [26]. The ANNs included a multilayer perceptron (MLP) [27,28], a fully convolutional network (FCN) [27], a radial basis function (RBF) neural network [29,30] and a 1-D residual neural network (ResNet) [27,28], as well as the networks based on random convolutional kernels, namely ROCKET, MINIROCKET and MultiRocket [31,32,33], and the more recent Transformer model [34,35]. The GBMs included CatBoost [36], LightGBM [37] and XGBoost [38]. We implemented the ANNs in Python with the ML frameworks Keras [30,39] and TensorFlow [40], used the DTAIDistance package [41] to deploy 1-NN DTW, and relied on Scikit-Learn [42] for splitting and standardization of the input data.
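The 1-NN DTW baseline can be sketched in a few lines. The following is a minimal, illustrative implementation of the classic dynamic-programming DTW distance (the paper itself relies on the much faster DTAIDistance package); the tiny templates and the query merely stand in for real A-scans:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) dynamic-programming DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn1_dtw_predict(train_X, train_y, query):
    """1-NN classification under the DTW distance."""
    dists = [dtw_distance(query, x) for x in train_X]
    return train_y[int(np.argmin(dists))]

# Toy demo: the query resembles the "contracted" template.
train_X = [np.zeros(4), np.ones(4)]
train_y = ["relaxed", "contracted"]
pred = knn1_dtw_predict(train_X, train_y, np.array([0.9, 1.0, 1.1, 1.0]))
```

The quadratic cost per distance computation is why 1-NN DTW scales poorly to thousands of long A-scans, a point that becomes relevant in the discussion of evaluation times below.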
To visualize the distribution of the acquired high-dimensional signals and deepen our understanding of them, we deployed dimensionality reduction techniques (DRTs), namely the linear method Principal Component Analysis (PCA) [43] and the non-linear approach t-distributed stochastic neighbor embedding (t-SNE) [44]. PCA attempts to increase the interpretability of the data by maximizing the retained variance and thereby minimizing information loss. t-SNE converts similarities between data points into joint probabilities and maps each high-dimensional object to a low-dimensional point such that, with high probability, similar objects are modeled by nearby points and dissimilar objects by distant points.
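As a sketch, both projections can be obtained with Scikit-Learn; the random matrix below merely stands in for the real high-dimensional A-scans, and the perplexity value is an illustrative choice for such a small sample:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Toy stand-in for high-dimensional A-scans: 60 "signals" of 500 samples each.
X = rng.normal(size=(60, 500))

# Linear projection onto the two directions of maximal variance.
X_pca = PCA(n_components=2).fit_transform(X)

# Non-linear embedding preserving local neighborhood structure.
X_tsne = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(X)
```

Each row of `X_pca` and `X_tsne` is then one dot in visualizations like Figures 6 to 14.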

2.3. Experimental Setup

We acquired signals from eight healthy participants for the muscle contraction classification experiments, in which we asked the subjects to perform squats to distinguish between contracted and non-contracted muscles. We asked all participants to position the US transducer anywhere above the gastrocnemius calf muscle, located on the back of the lower leg (see Figure 1), without instructing them on any specific calf muscle locations. This approach might have resulted in the acquisition of signals suffering from avoidable disturbances, such as interference from neighboring muscles or muscle groups. However, we consider this approach to mimic real-life scenarios, such as usage in a gym or in a rehabilitation facility, adequately.
We acquired data from 21 healthy participants for the muscle fatigue state classification experiments, in which we asked the subjects to lift weights, chosen according to their subjectively perceived fitness level, for as long as possible to induce muscle fatigue. Muscle fatigue is defined as an exercise-induced reduction in maximal voluntary contraction (MVC) [45]. To simulate real-life scenarios, we did not instruct participants to choose any muscle areas carefully and instead only asked them to put the US transducer on any fitting area above the biceps brachii muscle (see Figure 1). Table 1 and Table 2 provide an overview of the gender, the total amount of A-scans used for the classification task, the total amount of unique datasets, and the number of performed squats or the maximum weight lifted, respectively, for each subject. We provide the raw input data for the muscle contraction experiments [46] and the muscle fatigue experiments [47,48] online. We present the complete database for the muscle contraction classification in Table A1 (Appendix A). Table A2 and Table A3 (Appendix B) show the two respective study designs for the muscle fatigue state classification experiments. All subjects gave their informed consent for inclusion before they participated in the study.
We stored and processed all signals offline on a system suitable to perform the ML classification tasks.
All participants in the muscle contraction study annotated the signals manually by pushing a button during the experiment every time they performed a squat.
For the muscle fatigue signals, we annotated the signals by grouping all signals stemming from the first 10 s of each dataset into the “relaxed” category and all signals stemming from the last 10 s of each dataset into the “fatigue” category. Furthermore, we trimmed the signal sequences by removing the first and last two seconds of each dataset to account for any noise that might have stemmed from lifting or depositing the weight at the beginning and end of each dataset. We conducted two independent studies with the first group (study one) containing signals from all 21 female and male participants, and the second group (study two) containing only signals from a single male subject. This study design allowed us to analyze how the inclusion or exclusion of certain signal types (e.g., signals of subjects with different genders or arm positions) influenced the model performance. The reason for differentiating between genders is that previous small-scale studies have hinted at measurable differences in US B-Mode images of gastrocnemius muscles and tendons during calf raises [49] and differences in shear wave elastography measurements, showing higher passive biceps brachii muscle stiffness in the right arm for women in comparison to men [50]. Study one contains 19,677 annotated A-scans, while study two contains 13,160 annotated A-Scans.
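The trimming and window-based annotation of the fatigue datasets can be sketched as follows; `scans_per_second` is a hypothetical acquisition rate, and applying the 2 s trim before extracting the 10 s windows is one plausible ordering of the steps described above:

```python
import numpy as np

def label_fatigue_dataset(scans, scans_per_second):
    """Trim 2 s at both ends of a dataset, then label the first 10 s as
    'relaxed' and the last 10 s as 'fatigue'; the middle stays unlabeled."""
    trim = int(2 * scans_per_second)
    win = int(10 * scans_per_second)
    trimmed = scans[trim:len(scans) - trim]
    relaxed = trimmed[:win]   # first 10 s after trimming
    fatigue = trimmed[-win:]  # last 10 s before trimming
    return relaxed, fatigue

# Toy demo: 100 "A-scans" at a nominal 2 A-scans/s.
scans = np.arange(100)
relaxed, fatigue = label_fatigue_dataset(scans, scans_per_second=2)
```

With 2 A-scans/s, the trim removes 4 scans at each end, and each labeled window contains 20 scans.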
Since we suspected that truncated input data, with the parts belonging to the overdriven excitation signal removed, would lead to better results, we included truncated signals in our analysis as well. The raw or truncated signals served as input for our models. These input signals were either not processed at all, pre-processed with a bandpass filter, or transformed with the Fourier transform, Wavelet transform, or Hilbert transform. Figure 3 illustrates the raw 1-D US RF A-scans and transformed versions of a relaxed and a fatigue signal stemming from the same subject. Even after careful examination, it is very hard to perceive the small differences between the signals depicted in Figure 3 visually. This necessitates the usage of sophisticated ML models relying on large amounts of data to distinguish between different categories of signals.
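A sketch of these pre-processing variants using NumPy and SciPy; the sampling rate, band edges, and the synthetic A-scan are illustrative assumptions, and the Wavelet transform is omitted here since it would require an additional library:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 40e6                                  # assumed sampling rate (illustrative)
t = np.arange(2000) / fs
# Toy RF A-scan: a decaying tone at the transducer's 3.5 MHz center frequency.
rf = np.sin(2 * np.pi * 3.5e6 * t) * np.exp(-t * 2e5)

# Band-pass filtering around the center frequency (band edges assumed).
b, a = butter(4, [2e6, 5e6], btype="bandpass", fs=fs)
rf_bp = filtfilt(b, a, rf)

# Fourier transform: one-sided spectral magnitude.
spectrum = np.abs(np.fft.rfft(rf))

# Hilbert transform: analytic-signal envelope of the RF line.
envelope = np.abs(hilbert(rf))
```

Each of these arrays (`rf`, `rf_bp`, `spectrum`, `envelope`) would then serve as one of the alternative input data types fed to the classifiers.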
Additionally, we also included statistical, spectral, temporal features or a combination thereof in our analysis. We extracted those features with the help of the Time Series Feature Extraction Library for the Python programming language [51].

2.4. Evaluation

We only used muscle contraction signals stemming from different datasets from the same person and the same transducer position. For the muscle fatigue data, we distinguished between 12 evaluation modes to compare the impact of a variety of signals and their attributes on the results. Table 3 summarizes the considered evaluation modes.
We divided the available data into testing and training datasets, segregated them according to subject and the examined properties, and computed the average F1-score to compare the performance of each data type and ML model. For each testing dataset, we used the datasets of all other participants having the desired properties as training data.
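This leave-one-subject-out scheme can be sketched as follows; the random features, the number of hypothetical subjects, and the choice of Logistic Regression as classifier are illustrative stand-ins for the real data and model zoo:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 50))          # stand-in feature vectors
y = rng.integers(0, 2, size=120)        # binary muscle-state labels
subjects = np.repeat(np.arange(6), 20)  # 6 hypothetical subjects, 20 scans each

scores = []
for s in np.unique(subjects):
    train, test = subjects != s, subjects == s
    # Fit the scaler on training subjects only to avoid leakage into the held-out subject.
    scaler = StandardScaler().fit(X[train])
    clf = LogisticRegression(max_iter=1000).fit(scaler.transform(X[train]), y[train])
    pred = clf.predict(scaler.transform(X[test]))
    scores.append(f1_score(y[test], pred))

average_f1 = float(np.mean(scores))
```

Averaging the per-subject F1-scores in this way yields one number per data type/model combination, which is what Tables 4 and 5 report.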
For the muscle contraction data classification, we included the ML models MLP, FCN, ResNet, ROCKET, MINIROCKET, MultiRocket, CatBoost, XGBoost, LightGBM, Transformer, 1-NN DTW, SVM and Logistic Regression. For the muscle fatigue data classification, we omitted the models MLP, FCN, and ResNet as their inclusion would have led to a massive increase in computation time by several months for each model, while the expected performance improvement, judging from the results obtained for the muscle contraction classifications, was low [22]. This approach led to 252 possible combinations for the muscle contraction data and 2376 possible combinations for the muscle fatigue data. Figure 4 and Figure 5 show all possible and examined combinations of signal data, data types, and ML models for muscle contraction and muscle fatigue data, respectively.

3. Results

In this section, we present t-SNE visualizations to gain a better understanding of the high-dimensional distribution of the acquired signals. Those insights can help to interpret the results of the applied ML methods in Section 4 with a thorough discussion of the results. Please note that we omitted the axes of the figures below on purpose, as the t-SNE technique is meant to provide visualizations of the signal distribution and not quantitative results.

3.1. Muscle Contraction Signals Classifications

3.1.1. T-Distributed Stochastic Neighbor Embedding

Figure 6 and Figure 7 show t-SNE visualizations illustrating the low-dimensional signal distribution of all A-scans from all datasets with the same transducer position. Each dot represents a single A-Scan. Figure 6 is color-coded according to the muscle state (relaxed vs. contracted), while Figure 7 is color-coded according to the datasets the signals stem from. Comparing both figures to each other clearly indicates that the signals group together more strongly according to muscle state than according to the dataset they belong to.

3.1.2. Machine Learning

Table 4 depicts the five best-performing data type and ML model combinations based on the achieved average F1-scores.

3.2. Muscle Fatigue Signals Classifications

3.2.1. T-Distributed Stochastic Neighbor Embedding on Signals of Study One

Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 show t-SNE visualizations illustrating the low-dimensional signal distribution of all A-scans from study one. Each dot represents a single A-Scan. Figure 8 is color-coded according to the muscle state (normal vs. fatigue) and Figure 9 is color-coded according to the subjects the signals belong to. Figure 10 is color-coded according to genders (female vs. male), while Figure 11 is color-coded according to the arm the signals stem from (dominant vs. non-dominant). Finally, Figure 12 is color-coded according to the maximum weight lifted by each subject (2.5 kg, 5.0 kg, or 7.5 kg).
Figure 8 shows that there is no strict grouping of the signals according to muscle state, even though a tendency is visible.
Figure 9 shows that the signals have a very strong tendency to group together according to the subject they belong to.
Figure 10 shows that the signals tend to group according to the gender they are annotated with. However, this grouping is not very strict and shows only slight tendencies instead of rigorous borders.
Figure 11 shows that the signals also tend to group according to the arm position they are annotated with. However, this grouping is again not very strict and shows only slight tendencies instead of rigorous borders.
Figure 12 shows a slight tendency of the signals to group according to the maximum weight they have been annotated with.

3.2.2. T-Distributed Stochastic Neighbor Embedding on Signals of Study Two

Figure 13 and Figure 14 show t-SNE visualizations illustrating the low-dimensional signal distribution of all A-scans from study two. Each dot represents a single A-Scan. Figure 13 is color-coded according to the muscle state (normal vs. fatigue) and Figure 14 is color-coded according to the arm the signals stem from (dominant vs. non-dominant). In both figures, we removed outliers as they significantly distorted the visual representation.
Figure 13 shows that the signals of study two only have a slight tendency to group according to the muscle state they belong to.
Figure 14 shows that the signals of study two have a strong tendency to group according to the arm position they have been annotated with.

3.2.3. Machine Learning

Table 5 depicts the F1-scores and the time needed for training and evaluation of the best-performing ML model/data type combinations for the classification of muscle fatigue signals.
A Logistic Regression model relying on extracted spectral features of non-truncated A-scans and an SVM model relying on non-truncated A-scans transformed with the Wavelet transform achieved the best average F1-score of 86%. Both models required only 5 min or less to complete all training and evaluation computations.
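A pipeline in the spirit of the best-performing combination (spectral features fed to a simple classifier) might look like the sketch below; the feature definition, the random stand-in data, and the hyperparameters are illustrative assumptions, not the exact configuration used in this work:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def spectral_features(scans):
    """Illustrative spectral features: log-magnitude of the FFT per A-scan."""
    return np.log1p(np.abs(np.fft.rfft(scans, axis=1)))

rng = np.random.default_rng(2)
X_raw = rng.normal(size=(200, 256))   # stand-in for non-truncated A-scans
y = rng.integers(0, 2, size=200)      # relaxed (0) vs. fatigue (1)

# Standardize features, then fit an RBF-kernel SVM on a training split.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(spectral_features(X_raw[:150]), y[:150])
pred = model.predict(spectral_features(X_raw[150:]))
```

On real data, the held-out split would of course follow the per-subject scheme described in Section 2.4 rather than a simple slice.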

4. Discussion

4.1. Muscle Contraction Signals Classifications

This work only includes results for signals stemming from the same person and the same transducer position for the muscle contraction state classifications. These signals represent 22.55% of all available A-scans and 38.1% of all acquired datasets, and entail signals from 37.5% of all subjects. The inclusion of signals stemming from different persons and a variety of transducer positions did not lead to satisfying classification performances. The main reason for this is most probably an insufficient size and diversity of the database. The t-SNE visualizations colored by muscle state (see Figure 6) and by dataset (see Figure 7) show that the data points cluster more strongly according to muscle state than according to the dataset they belong to. After applying t-SNE, a tendency of the data points to cluster according to their respective annotation was visible, but a significant overlap remained (see Figure 6). This tendency allowed us to formulate the hypothesis that models discriminating between relaxed and contracted signals with a high accuracy are possible. Notably, an SVM model based on Hilbert-transformed A-scans achieved an average F1-score of ca. 88% in less than 10 min for training and evaluation. Hence, the SVM model trumped the performance of all ANN models and even 1-NN DTW, which has been the de-facto TSC benchmark for decades [28]. SVM outperforming even the most recent ANNs in terms of speed and accuracy is a remarkable result and paves the way for real-life applications allowing wearable devices to classify different muscle contraction states based on ML models using 1-D SMG signals within minutes.

4.2. Muscle Fatigue Signals Classification

The results presented above allow several interpretations of the muscle fatigue signal classifications. Firstly, no Gradient Boosting Machine ranked among the top-performing models. The 1-NN DTW model did not yield any competitive results either, even though this algorithm has been the de-facto TSC benchmark for decades [28], and it was prohibitively slow for the scenarios described in this work. The convolutional networks of the ROCKET family [31,32,33] achieved competitive results but always performed worse than SVM and Logistic Regression w.r.t. accuracy and time, regardless of the underlying data types. The relatively straightforward algorithms SVM and Logistic Regression consistently performed better than the other algorithms and even outperformed complex deep learning approaches; they are also, on average, among the fastest approaches. The deep learning Transformer models, which adopt the mechanism of attention, are the most recent ML architectures used in this work. Those models yielded competitive results in many scenarios but never outperformed all other models, and they required a prohibitive amount of time for training and evaluation.
SVM and Logistic Regression were superior approaches for all muscle fatigue state classification evaluation strategies. ML models based on raw A-scans usually outperformed models based on truncated A-Scans. Regardless of the best performing data type or model, the approaches based on signals of the dominant arm always outperformed approaches based on the non-dominant arm or signals of both arms in a mixed setting. Models based on all signals of only female subjects performed slightly worse than models based on all signals of only male subjects. A possible explanation is that less data was available for female subjects. An additional notable observation is that models based on signals of a single male subject performed, on average, worse than models based on different subjects. A possible explanation is that the inclusion of many diverse signals from subjects with different training levels allowed for easier signal discriminations in comparison to comparatively homogeneous signals stemming from a single subject with no change in training level. The training and evaluation times of the best-performing models always stayed below one hour for models based on signals of a certain arm. Only the best performing model based on all signals, mixing signals from the dominant and non-dominant arm, needed several hours to finish training and evaluation. These results show that scenarios requiring a near real-time classification of muscle fatigue states based on 1-D SMG signals are possible.

4.3. Future Work

Even though average F1-scores ranging from 70% to 88% would not qualify the presented algorithms for critical clinical use, they would still be suitable for use cases in sports and rehabilitation scenarios as mentioned above. We expect that improved performance will result from the availability of more data in the future. Real-life implementations might include libraries such as LIBSVM [52] to deploy the proposed framework directly on mobile devices. Using cloud computing resources might also be a solution for more complex models: these could be trained and evaluated remotely, while the local mobile devices would simply acquire the signals without processing them any further.
Additionally, alternative ML models might be used in the future to obtain even better accuracies. For example, TSC approaches based on transfer learning [53] have shown promising results in the past. Further work might also exploit data fusion from multiple modalities, as shown in previous works by others [54]. Adding more data, such as accelerometer measurements, age, or body mass index, could also be a solution to create more sophisticated ML models.
By openly releasing the datasets for this work [46,47,48], we encourage others to build on our work and hope to inspire further research in this promising field.

5. Conclusions

To the best of our knowledge, this work presents, for the first time, the implementation and evaluation of a unified framework for the classification of 1-D US RF signals of muscle contraction and muscle fatigue states. This is a crucial step towards mobile and wearable solutions, which could find applications in rehabilitation or fitness tracking scenarios. To this end, we simulated real-life scenarios as closely as possible by not asking participants to emphasize obtaining particularly distinct signals and not examining the skin surface with B-Mode US first to find strongly pronounced muscle areas.
We find that the amount, quality, and annotation strategies of our data allow us to build robust, accurate, and fast models. Even though training and initial evaluation of these models require a significant amount of time and computational power, the inference computations, which yield results by presenting previously unseen signals to the trained models, merely require milliseconds to complete.
Very complex and sophisticated ML models are not necessary to obtain reasonably robust models. The straightforward and well-tried algorithms SVM and Logistic Regression consistently outperform more sophisticated and complex approaches, such as ANNs, Gradient Boosting Machines, or 1-NN DTW.

Author Contributions

Conceptualization, L.B., H.H. and P.L.; methodology, L.B., H.H. and P.L.; software, L.B.; validation L.B. and H.H.; formal analysis, L.B.; investigation, L.B.; resources, L.B.; data curation, L.B.; writing—original draft preparation, L.B.; writing—review and editing, H.H. and P.L; visualization, L.B.; supervision, H.H. and P.L.; project administration, H.H. and P.L.; funding acquisition, H.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and in conformance with the Saarland medical council (Ärztekammer des Saarlandes).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found in [46,47,48].

Acknowledgments

We would like to thank all participants for kindly contributing their time to our experiments. Additionally, we also thank all members of the department of ultrasound of the Fraunhofer Institute for Biomedical Engineering IBMT for providing support in terms of highly valued advice and hardware.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1 shows the complete database of all signals acquired for the muscle contraction classification experiments. Eight healthy volunteers (7 male, 1 female) performed squats to induce muscle contractions. The volunteers placed the US transducer either in a unique position anywhere above the gastrocnemius calf muscle or kept the transducer in place for the acquisition of further datasets, which is indicated by non-unique transducer positions. Additionally, we also present the total amount of acquired A-scans and the average amount of A-scans acquired per second for each dataset.
Table A1. Complete muscle contraction signals database.
Dataset ID   Subject ID   Gender   Unique Transducer Position   # A-Scans   A-Scans/s
1            01           male     no                           3000        54.71
2            01           male     no                           3000        50.55
3            02           male     no                           1000        55.32
4            02           male     no                           1000        52.06
5            01           male     yes                          6000        56.17
6            01           male     yes                          50,000      33.95
7            03           female   yes                          10,000      48.56
8            03           female   yes                          8872        56.17
9            04           male     yes                          10,000      48.65
10           04           male     yes                          10,000      38.98
11           05           male     yes                          10,000      40.82
12           06           male     yes                          10,000      45.33
13           07           male     no                           10,000      58.73
14           07           male     no                           10,000      59.55
15           01           male     yes                          10,000      59.97
16           01           male     yes                          10,000      60.33
17           01           male     yes                          10,000      60.19
18           08           male     no                           10,000      47.58
19           08           male     no                           10,000      47.48
20           08           male     yes                          10,000      54.44
21           08           male     yes                          10,000      44.88

Appendix B

Table A2 and Table A3 show the two respective study designs for the muscle fatigue classification experiments. A total of 21 healthy volunteers (14 male, 7 female) lifted weights to induce an MVC resulting in muscle fatigue. The volunteers placed the US transducer on a unique position anywhere above the biceps brachii muscle of the dominant or non-dominant arm. We present the genders, the amount of datasets, the total amount of A-scans used for the classification task, and the maximum lifted weights for each subject. The maximum lifted weight was chosen according to the subjectively perceived fitness level of each test subject. For some subjects, the weights were adjusted in later datasets.
Table A2. Muscle fatigue signals database for all subjects (study one).

| Subject ID | Gender | # Datasets | # A-Scans | Max. Lifted Weight [kg] |
|---|---|---|---|---|
| 01 | female | 4 | 1390 | 5.0 |
| 02 | female | 3 | 1043 | 2.5 |
| 03 | male | 2 | 696 | 2.5 |
| 04 | male | 2 | 685 | 2.5 |
| 05 | male | 2 | 695 | 7.5 |
| 06 | male | 4 | 1386 | 7.5 |
| 07 | female | 4 | 1367 | 5.0 |
| 08 | male | 3 | 1044 | 5.0 |
| 09 | male | 10 | 3453 | 7.5 |
| 10 | female | 3 | 1044 | 5.0 |
| 11 | female | 2 | 696 | 5.0 |
| 12 | male | 2 | 695 | 5.0 |
| 13 | male | 3 | 1035 | 7.5 |
| 14 | female | 2 | 695 | 5.0 |
| 15 | male | 2 | 666 | 5.0 |
| 16 | male | 2 | 672 | 5.0 |
| 17 | male | 1 | 348 | 7.5 |
| 18 | male | 3 | 1035 | 5.0 |
| 19 | male | 1 | 342 | 7.5 |
| 20 | male | 1 | 345 | 7.5 |
| 21 | female | 1 | 345 | 2.5 |
Table A3. Muscle fatigue signals database for a single subject (study two).

| Subject ID | Gender | # Datasets | # A-Scans | Max. Lifted Weight [kg] |
|---|---|---|---|---|
| 09 | male | 42 | 13,160 | 7.5 |

References

  1. Lukowicz, P.; Hanser, F.; Szubski, C.; Schobersberger, W. Detecting and interpreting muscle activity with wearable force sensors. In Proceedings of the International Conference on Pervasive Computing, Pisa, Italy, 13–17 March 2006; pp. 101–116. [Google Scholar]
  2. Mokaya, F.; Lucas, R.; Noh, H.Y.; Zhang, P. Burnout: A wearable system for unobtrusive skeletal muscle fatigue estimation. In Proceedings of the 2016 15th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), Vienna, Austria, 11–14 April 2016; pp. 1–12. [Google Scholar]
  3. Islam, M.A.; Sundaraj, K.; Ahmad, R.B.; Ahamed, N.U. Mechanomyogram for muscle function assessment: A review. PLoS ONE 2013, 8, e58902. [Google Scholar] [CrossRef] [PubMed]
  4. Woodward, R.B.; Stokes, M.J.; Shefelbine, S.J.; Vaidyanathan, R. Segmenting mechanomyography measures of muscle activity phases using inertial data. Sci. Rep. 2019, 9, 5569. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Jang, M.H.; Ahn, S.J.; Lee, J.W.; Rhee, M.H.; Chae, D.; Kim, J.; Shin, M.J. Validity and reliability of the newly developed surface electromyography device for measuring muscle activity during voluntary isometric contraction. Comput. Math. Methods Med. 2018, 2018, 1–9. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Toro, S.F.D.; Santos-Cuadros, S.; Olmeda, E.; Álvarez-Caldas, C.; Díaz, V.; San Román, J.L. Is the use of a low-cost sEMG sensor valid to measure muscle fatigue? Sensors 2019, 19, 3204. [Google Scholar] [CrossRef] [Green Version]
  7. Zhou, B.; Sundholm, M.; Cheng, J.; Cruz, H.; Lukowicz, P. Measuring muscle activities during gym exercises with textile pressure mapping sensors. Pervasive Mob. Comput. 2017, 38, 331–345. [Google Scholar] [CrossRef]
  8. Gibas, C.; Grünewald, A.; Wunderlich, H.W.; Marx, P.; Brück, R. A wearable EIT system for detection of muscular activity in the extremities. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 2496–2499. [Google Scholar]
  9. Leitner, C.; Hager, P.A.; Penasso, H.; Tilp, M.; Benini, L.; Peham, C.; Baumgartner, C. Ultrasound as a tool to study muscle–tendon functions during locomotion: A systematic review of applications. Sensors 2019, 19, 4316. [Google Scholar] [CrossRef] [Green Version]
  10. Ma, C.Z.H.; Ling, Y.T.; Shea, Q.T.K.; Wang, L.K.; Wang, X.Y.; Zheng, Y.P. Towards wearable comprehensive capture and analysis of skeletal muscle activity during human locomotion. Sensors 2019, 19, 195. [Google Scholar] [CrossRef] [Green Version]
  11. Guo, J.Y.; Zheng, Y.P.; Huang, Q.H.; Chen, X. Dynamic monitoring of forearm muscles using one-dimensional sonomyography system. J. Rehabil. Res. Dev. 2008, 45, 187–196. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Guo, J.Y.; Zheng, Y.P.; Huang, Q.H.; Chen, X.; He, J.F.; Chan, H.L.W. Performances of one-dimensional sonomyography and surface electromyography in tracking guided patterns of wrist extension. Ultrasound Med. Biol. 2009, 35, 894–902. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Chen, X.; Zheng, Y.P.; Guo, J.Y.; Shi, J. Sonomyography (SMG) control for powered prosthetic hand: A study with normal subjects. Ultrasound Med. Biol. 2010, 36, 1076–1088. [Google Scholar] [CrossRef]
  14. Guo, J.Y.; Zheng, Y.P.; Xie, H.B.; Koo, T.K. Towards the application of one-dimensional sonomyography for powered upper-limb prosthetic control using machine learning models. Prosthet. Orthot. Int. 2013, 37, 43–49. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Sun, X.; Li, Y.; Liu, H. Muscle fatigue assessment using one-channel single-element ultrasound transducer. In Proceedings of the 2017 8th International IEEE/EMBS Conference on Neural Engineering (NER), Shanghai, China, 25–28 May 2017; pp. 122–125. [Google Scholar]
  16. He, J.; Luo, H.; Jia, J.; Yeow, J.T.; Jiang, N. Wrist and finger gesture recognition with single-element ultrasound signals: A comparison with single-channel surface electromyogram. IEEE Trans. Biomed. Eng. 2018, 66, 1277–1284. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Zhou, Y.; Liu, J.; Zeng, J.; Li, K.; Liu, H. Bio-signal based elbow angle and torque simultaneous prediction during isokinetic contraction. Sci. China Technol. Sci. 2019, 62, 21–30. [Google Scholar] [CrossRef] [Green Version]
  18. Bielemann, R.M.; Gonzalez, M.C.; Barbosa-Silva, T.G.; Orlandi, S.P.; Xavier, M.O.; Bergmann, R.B.; Assunção, M.C.F. Estimation of body fat in adults using a portable A-mode ultrasound. Nutrition 2016, 32, 441–446. [Google Scholar] [CrossRef] [PubMed]
  19. Kuehne, T.E.; Yitzchaki, N.; Jessee, M.B.; Graves, B.S.; Buckner, S.L. A comparison of acute changes in muscle thickness between A-mode and B-mode ultrasound. Physiol. Meas. 2019, 40, 115004. [Google Scholar] [CrossRef]
  20. Yan, J.; Yang, X.; Chen, Z.; Liu, H. Dynamically characterizing skeletal muscles via acoustic non-linearity parameter: In vivo assessment for upper arms. Ultrasound Med. Biol. 2020, 46, 315–324. [Google Scholar] [CrossRef] [PubMed]
  21. AlMohimeed, I.; Ono, Y. Ultrasound measurement of skeletal muscle contractile parameters using flexible and wearable single-element ultrasonic sensor. Sensors 2020, 20, 3616. [Google Scholar] [CrossRef] [PubMed]
  22. Brausch, L.; Hewener, H.; Lukowicz, P. Towards a wearable low-cost ultrasound device for classification of muscle activity and muscle fatigue. In Proceedings of the 23rd International Symposium on Wearable Computers, London, UK, 9–13 September 2019; pp. 20–22. [Google Scholar]
  23. Brausch, L.; Hewener, H. Classifying muscle states with ultrasonic single element transducer data using machine learning strategies. Proc. Meet. Acoust. 2019, 38, 022001. [Google Scholar]
  24. Mitsuhashi, N.; Fujieda, K.; Tamura, T.; Kawamoto, S.; Takagi, T.; Okubo, K. BodyParts3D: 3D structure database for anatomical concepts. Nucleic Acids Res. 2009, 37, D782–D785. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Wenninger, M.; Bayerl, S.P.; Schmidt, J.; Riedhammer, K. Timage—A robust time series classification pipeline. In International Conference on Artificial Neural Networks; Springer: Berlin/Heidelberg, Germany, 2019; pp. 450–461. [Google Scholar]
  26. Berndt, D.J.; Clifford, J. Using dynamic time warping to find patterns in time series. In Proceedings of the KDD Workshop, Seattle, WA, USA, 31 July–1 August 1994; Volume 10, pp. 359–370. [Google Scholar]
  27. Wang, Z.; Yan, W.; Oates, T. Time series classification from scratch with deep neural networks: A strong baseline. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 1578–1585. [Google Scholar]
  28. Ismail Fawaz, H.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P.A. Deep learning for time series classification: A review. Data Min. Knowl. Discov. 2019, 33, 917–963. [Google Scholar] [CrossRef] [Green Version]
  29. Schwenker, F.; Dietrich, C.; Kestler, H.A.; Riede, K.; Palm, G. Radial basis function neural networks and temporal fusion for the classification of bioacoustic time series. Neurocomputing 2003, 51, 265–275. [Google Scholar] [CrossRef]
  30. Vidnerova, P. RBF-Keras: An RBF Layer for Keras Library. 2020. Available online: https://github.com/PetraVidnerova/rbf_keras (accessed on 4 April 2022).
  31. Dempster, A.; Petitjean, F.; Webb, G.I. ROCKET: Exceptionally fast and accurate time series classification using random convolutional kernels. Data Min. Knowl. Discov. 2020, 34, 1454–1495. [Google Scholar] [CrossRef]
  32. Dempster, A.; Schmidt, D.F.; Webb, G.I. Minirocket: A very fast (almost) deterministic transform for time series classification. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery Data Mining, Singapore, 14–18 August 2021; pp. 248–257. [Google Scholar]
  33. Tan, C.W.; Dempster, A.; Bergmeir, C.; Webb, G.I. MultiRocket: Effective summary statistics for convolutional outputs in time series classification. arXiv 2021, arXiv:2102.00457. [Google Scholar]
  34. Liu, M.; Ren, S.; Ma, S.; Jiao, J.; Chen, Y.; Wang, Z.; Song, W. Gated Transformer Networks for Multivariate Time Series Classification. arXiv 2021, arXiv:2103.14438. [Google Scholar]
  35. Allam Jr, T.; McEwen, J.D. Paying Attention to Astronomical Transients: Photometric Classification with the Time-Series Transformer. arXiv 2021, arXiv:2105.06178. [Google Scholar]
  36. Prokhorenkova, L.; Gusev, G.; Vorobev, A.; Dorogush, A.V.; Gulin, A. CatBoost: Unbiased boosting with categorical features. Adv. Neural Inf. Processing Syst. 2018, 31, 1–11. [Google Scholar]
  37. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.Y. Lightgbm: A highly efficient gradient boosting decision tree. Adv. Neural Inf. Processing Syst. 2017, 30, 3149–3157. [Google Scholar]
  38. Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd Acm Sigkdd International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
  39. Chollet, F.; Zhu, Q.S.; Gardener, T.; Rahman, F.; Lee, T.; De Marmiesse, G.; Zabluda, O.; ChentaMS; Watson, M.; Santana, E.; et al. Keras. GitHub. 2015. Available online: https://github.com/fchollet/keras (accessed on 4 April 2022).
  40. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv 2016, arXiv:1603.04467. [Google Scholar]
  41. Meert, W.; Group, D.R. DTAIDistance. 2022. Available online: https://dtaidistance.readthedocs.io/ (accessed on 4 April 2022).
  42. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  43. Cunningham, J.P.; Ghahramani, Z. Linear dimensionality reduction: Survey, insights, and generalizations. J. Mach. Learn. Res. 2015, 16, 2859–2900. [Google Scholar]
  44. Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  45. Gandevia, S.C. Spinal and supraspinal factors in human muscle fatigue. Physiol. Rev. 2001, 81, 1725–1789. [Google Scholar] [CrossRef] [PubMed]
  46. Brausch, L.; Hewener, H.; Lukowicz, P. Muscle Contraction A-Scan data annotated by volunteers. 2019. Available online: https://www.openml.org/d/41971 (accessed on 4 April 2022).
  47. Brausch, L.; Hewener, H.; Lukowicz, P. Muscle Fatigue A-Scan data of 21 volunteers (study 1/2). 2021. Available online: https://www.openml.org/d/43075 (accessed on 4 April 2022).
  48. Brausch, L.; Hewener, H.; Lukowicz, P. Muscle Contraction A-Scan data of a single volunteer (study 2/2). 2021. Available online: https://www.openml.org/d/43076 (accessed on 4 April 2022).
  49. Zhou, G.Q.; Zheng, Y.P.; Zhou, P. Measurement of gender differences of gastrocnemius muscle and tendon using sonomyography during calf raises: A pilot study. BioMed Res. Int. 2017, 2017, 1–10. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Chen, J.; O’Dell, M.; He, W.; Du, L.J.; Li, P.C.; Gao, J. Ultrasound shear wave elastography in the assessment of passive biceps brachii muscle stiffness: Influences of sex and elbow position. Clin. Imaging 2017, 45, 26–29. [Google Scholar] [CrossRef] [PubMed]
  51. Barandas, M.; Folgado, D.; Fernandes, L.; Santos, S.; Abreu, M.; Bota, P.; Liu, H.; Schultz, T.; Gamboa, H. TSFEL: Time series feature extraction library. SoftwareX 2020, 11, 100456. [Google Scholar] [CrossRef]
  52. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27. [Google Scholar] [CrossRef]
  53. Fawaz, H.I.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P.A. Transfer learning for time series classification. In Proceedings of the 2018 IEEE International Conference On Big Data (Big Data), Seattle, WA, USA, 10–13 December 2018; pp. 1367–1376. [Google Scholar]
  54. Xia, W.; Zhou, Y.; Yang, X.; He, K.; Liu, H. Toward portable hybrid surface electromyography/a-mode ultrasound sensing for human–machine interface. IEEE Sens. J. 2019, 19, 5219–5228. [Google Scholar] [CrossRef]
Figure 1. Gastrocnemius muscle (red) and soleus muscle (green) on the left and biceps brachii muscle on the right (Reprinted with permission from [24]. 2008, BodyParts3D [CC BY-SA 2.1 JP]).
Figure 2. Experimental setup showing a participant lifting a weight, while a single element ultrasound transducer was attached to the body surface via a stretch armband. The signals were acquired with our custom-made acquisition hardware, which transfers them wirelessly to a mobile device.
Figure 3. Raw 1-D US RF A-scans and transformed versions of a relaxed and a fatigued signal stemming from the same subject.
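The transformed representations shown in Figure 3 (Hilbert envelope and Fourier magnitude of the raw RF A-scans) can be reproduced with standard signal-processing tools. The sketch below is a minimal illustration on a synthetic A-scan, not the authors' exact preprocessing; the signal length and the simulated echo are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic 1-D RF A-scan: a Gaussian-windowed burst standing in for a tissue
# echo (purely illustrative; real A-scans come from the single element transducer).
n = 1024
t = np.arange(n)
rf = np.exp(-((t - 300) / 60.0) ** 2) * np.sin(2 * np.pi * 0.1 * t)

# Hilbert-transformed A-scan: magnitude of the analytic signal (the envelope).
envelope = np.abs(hilbert(rf))

# Fourier-transformed A-scan: one-sided magnitude spectrum of the real signal.
spectrum = np.abs(np.fft.rfft(rf))

# A wavelet-transformed version could be computed analogously, e.g. with the
# PyWavelets package (pywt.wavedec); it is omitted here for brevity.
```

The envelope always bounds the rectified RF signal from above, which is why it is a popular smooth representation of echo amplitude.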
Figure 4. Hierarchical diagram illustrating all computed data input combinations for muscle contraction data.
Figure 5. Hierarchical diagram illustrating all computed data input combinations for muscle fatigue data.
Figure 6. t-SNE visualization illustrating the low-dimensional signal distribution of all datasets with the same transducer position, color-coded according to the muscle state (relaxed vs. contracted). The t-SNE parameters were set to a perplexity of 200, a learning rate of 200, and 10,000 iterations.
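The t-SNE embeddings in Figures 6–14 can be generated with scikit-learn. The sketch below uses random placeholder data and a reduced perplexity so that it runs quickly; the figures themselves were computed with a perplexity of 200, a learning rate of 200, and 10,000 iterations on the real A-scan data.

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholder for the A-scan feature matrix: 300 signals x 128 samples.
# (The study used the acquired RF A-scans; this random data is only a stand-in.)
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 128))
labels = rng.integers(0, 2, size=300)  # e.g. relaxed (0) vs. contracted (1)

# The paper's settings were perplexity=200, learning_rate=200, 10,000 iterations;
# perplexity must stay below the number of samples, so it is scaled down here.
tsne = TSNE(n_components=2, perplexity=30, learning_rate=200.0,
            init="random", random_state=0)
embedding = tsne.fit_transform(X)  # shape: (n_samples, 2)
```

The 2-D `embedding` can then be scatter-plotted and color-coded by muscle state, dataset, subject, gender, arm, or lifted weight, as in the figures.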
Figure 7. t-SNE visualization illustrating the low-dimensional signal distribution of all datasets with the same transducer position, color-coded according to corresponding datasets. The t-SNE parameters were set to a perplexity of 200, a learning rate of 200, and 10,000 iterations.
Figure 8. t-SNE visualization illustrating the low-dimensional signal distribution of all datasets from the muscle fatigue study one, color-coded according to the muscle state (normal vs. fatigue). The t-SNE parameters were set to a perplexity of 200, a learning rate of 200, and 10,000 iterations.
Figure 9. t-SNE visualization illustrating the low-dimensional signal distribution of all datasets from the muscle fatigue study one, color-coded according to corresponding subjects. The t-SNE parameters were set to a perplexity of 200, a learning rate of 200, and 10,000 iterations.
Figure 10. t-SNE visualization illustrating the low-dimensional signal distribution of all datasets from the muscle fatigue study one, color-coded according to gender (female vs. male). The t-SNE parameters were set to a perplexity of 200, a learning rate of 200, and 10,000 iterations.
Figure 11. t-SNE visualization illustrating the low-dimensional signal distribution of all datasets from the muscle fatigue study one, color-coded according to the arm position (dominant vs. non-dominant). The t-SNE parameters were set to a perplexity of 200, a learning rate of 200, and 10,000 iterations.
Figure 12. t-SNE visualization illustrating the low-dimensional signal distribution of all datasets from the muscle fatigue study one, color-coded according to the maximum lifted weight (2.5 kg, 5.0 kg, or 7.5 kg). The t-SNE parameters were set to a perplexity of 200, a learning rate of 200, and 10,000 iterations.
Figure 13. t-SNE visualization illustrating the low-dimensional signal distribution of all datasets from the muscle fatigue study two, color-coded according to the muscle state (normal vs. fatigue). The t-SNE parameters were set to a perplexity of 200, a learning rate of 200, and 10,000 iterations.
Figure 14. t-SNE visualization illustrating the low-dimensional signal distribution of all datasets from the muscle fatigue study two, color-coded according to the arm position (dominant vs. non-dominant). The t-SNE parameters were set to a perplexity of 200, a learning rate of 200, and 10,000 iterations.
Table 1. Muscle contraction signals database (summary).

| Subject ID | Gender | # A-Scans | # Datasets | # Squats |
|---|---|---|---|---|
| 1 | Male | 92,000 | 7 | 154 |
| 2 | Male | 2000 | 2 | 8 |
| 3 | Female | 18,872 | 2 | 35 |
| 4 | Male | 20,000 | 2 | 49 |
| 5 | Male | 10,000 | 1 | 27 |
| 6 | Male | 10,000 | 1 | 21 |
| 7 | Male | 20,000 | 2 | 88 |
| 8 | Male | 40,000 | 4 | 133 |
Table 2. Muscle fatigue signals database (summary).

| Subject ID | Gender | # A-Scans | # Datasets | Max. Weight [kg] |
|---|---|---|---|---|
| 01 | Female | 18,283 | 4 | 5.0 |
| 02 | Female | 15,155 | 3 | 2.5 |
| 03 | Male | 16,161 | 2 | 2.5 |
| 04 | Male | 18,863 | 2 | 2.5 |
| 05 | Male | 4302 | 2 | 7.5 |
| 06 | Male | 27,112 | 4 | 7.5 |
| 07 | Female | 13,585 | 4 | 5.0 |
| 08 | Male | 8809 | 3 | 5.0 |
| 09 | Male | 109,964 | 51 | 7.5 |
| 10 | Female | 7967 | 3 | 5.0 |
| 11 | Female | 3326 | 2 | 5.0 |
| 12 | Male | 4349 | 2 | 5.0 |
| 13 | Male | 14,691 | 3 | 7.5 |
| 14 | Female | 6950 | 2 | 5.0 |
| 15 | Male | 12,817 | 2 | 5.0 |
| 16 | Male | 10,218 | 2 | 5.0 |
| 17 | Male | 3792 | 1 | 7.5 |
| 18 | Male | 14,005 | 3 | 5.0 |
| 19 | Male | 5236 | 2 | 7.5 |
| 20 | Male | 4635 | 2 | 7.5 |
| 21 | Female | 11,817 | 2 | 2.5 |
Table 3. Evaluation modes for muscle fatigue signal classifications.

| Evaluation Mode | Study | Signals Taken from Study [%] |
|---|---|---|
| Leave-one-out cross-validation (LOOCV) on all signals | 1 | 100 |
| LOOCV on signals from dominant arm only | 1 | 52.48 |
| LOOCV on signals from non-dominant arm only | 1 | 47.52 |
| LOOCV on signals from female subjects only | 1 | 33.44 |
| LOOCV on signals from male subjects only | 1 | 66.56 |
| LOOCV on signals from dominant arm of female subjects only | 1 | 17.54 |
| LOOCV on signals from non-dominant arm of female subjects only | 1 | 17.67 |
| LOOCV on signals from dominant arm of male subjects only | 1 | 34.94 |
| LOOCV on signals from non-dominant arm of male subjects only | 1 | 31.62 |
| LOOCV on signals from a single subject only | 2 | 100 |
| LOOCV on signals from dominant arm of a single subject only | 2 | 54.89 |
| LOOCV on signals from non-dominant arm of a single subject only | 2 | 45.11 |
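The leave-one-out cross-validation modes in Table 3 hold out one group of signals (e.g. one dataset or subject) at a time and train on the rest. A minimal sketch using scikit-learn's `LeaveOneGroupOut` is shown below; the feature matrix, group assignment, and classifier settings are placeholders, not the study's exact configuration.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

# Placeholder data: 120 A-scan feature vectors from 6 subjects (20 each).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 32))
y = rng.integers(0, 2, size=120)      # normal (0) vs. fatigue (1)
groups = np.repeat(np.arange(6), 20)  # subject ID for each signal

# Leave-one-subject-out: each fold trains on 5 subjects and tests on the 6th,
# so signals from the held-out subject never leak into training.
logo = LeaveOneGroupOut()
scores = cross_val_score(SVC(kernel="rbf"), X, y, groups=groups, cv=logo)
```

Restricting `X`, `y`, and `groups` to subsets (dominant arm only, female subjects only, etc.) yields the other evaluation modes listed in Table 3.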
Table 4. Muscle contraction signals classification results (summary).

| Model | Data Type | Signals Truncated | Avg. F1-Score for All Data Types [%] | Std. Dev. for All Data Types | Time for Training and Evaluation [h] | Average F1-Score [%] |
|---|---|---|---|---|---|---|
| SVM | Hilbert transformed A-Scans | no | 85 | 1.95 | 0.17 | 88 |
| MLP | Hilbert transformed A-Scans | no | 84 | 1.97 | 6.66 | 88 |
| SVM | Fourier transformed A-Scans | no | 85 | 1.95 | 0.09 | 87 |
| MLP | Fourier transformed A-Scans | no | 84 | 1.97 | 6.12 | 87 |
| SVM | Wavelet transformed A-Scans | no | 85 | 1.95 | 0.12 | 86 |
Table 5. Muscle fatigue signals classification results (summary).

| Evaluation Mode | ML Model | Data Type | F1-Score [%] | Time for Evaluation and Training [min] |
|---|---|---|---|---|
| LOOCV | SVM | Wavelet transformed A-scans | 82 | 334 |
| LOOCV (dominant arm) | SVM | Wavelet transformed A-scans | 84 | 20 |
| LOOCV (non-dominant arm) | SVM | Combination of all possible features | 77 | <5 |
| LOOCV (female) | Logistic Regression | Combination of all possible features | 77 | <5 |
| LOOCV (female) [dominant arm] | Logistic Regression | Spectral features | 86 | <5 |
| LOOCV (female) [non-dominant arm] | Logistic Regression | Wavelet transformed A-scans | 75 | <5 |
| LOOCV (male) | SVM | Wavelet transformed A-scans | 84 | 50 |
| LOOCV (male) [dominant arm] | SVM | Wavelet transformed A-scans | 86 | 5 |
| LOOCV (male) [non-dominant arm] | SVM | Combination of all possible features (of truncated signals) | 79 | <5 |
| LOOCV (single subject 09) | Logistic Regression | Statistical features | 70 | <5 |
| LOOCV (single subject 09) [dominant arm] | SVM | Wavelet transformed A-scans | 78 | 7 |
| LOOCV (single subject 09) [non-dominant arm] | SVM | Temporal features | 72 | <5 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Brausch, L.; Hewener, H.; Lukowicz, P. Classifying Muscle States with One-Dimensional Radio-Frequency Signals from Single Element Ultrasound Transducers. Sensors 2022, 22, 2789. https://doi.org/10.3390/s22072789
