Article

Synergy-Space Recurrent Neural Network for Transferable Forearm Motion Prediction from Residual Limb Motion

by Muhammad Hannan Ahmed 1,*, Jiazheng Chai 1, Shingo Shimoda 2 and Mitsuhiro Hayashibe 1,*

1 Department of Robotics, Graduate School of Engineering, Tohoku University, Sendai 980-8577, Japan
2 Graduate School of Medicine, Nagoya University, Nagoya 464-0813, Japan
* Authors to whom correspondence should be addressed.
Sensors 2023, 23(9), 4188; https://doi.org/10.3390/s23094188
Submission received: 9 March 2023 / Revised: 17 April 2023 / Accepted: 19 April 2023 / Published: 22 April 2023
(This article belongs to the Special Issue Sensor Technology for Improving Human Movements and Postures: Part II)

Abstract

Transhumeral amputees experience considerable difficulties with controlling a multifunctional prosthesis (powered hand, wrist, and elbow) due to the lack of available muscles to provide electromyographic (EMG) signals. The residual limb motion strategy has become a popular alternative for transhumeral prosthesis control. It provides an intuitive way to estimate the motion of the prosthesis based on the residual shoulder motion, especially for target reaching tasks. Conventionally, a predictive model, typically an artificial neural network (ANN), is directly trained and relied upon to map the shoulder–elbow kinematics using the data from able-bodied subjects without extracting any prior synergistic information. However, it is essential to explicitly identify effective synergies and make them transferable across amputee users for higher accuracy and robustness. To overcome this limitation of the conventional ANN learning approach, this study explicitly combines the kinematic synergies with a recurrent neural network (RNN) to propose a synergy-space neural network for estimating forearm motions (i.e., elbow joint flexion–extension and pronation–supination angles) based on residual shoulder motions. We tested 36 training strategies for each of the 14 subjects, comparing the proposed synergy-space and conventional neural network learning approaches, and we statistically evaluated the results using Pearson’s correlation method and the analysis of variance (ANOVA) test. The offline cross-subject analysis indicates that the synergy-space neural network exhibits superior robustness to inter-individual variability, demonstrating the potential of this approach as a transferable and generalized control strategy for transhumeral prosthesis control.

1. Introduction

Amputation of the upper limb at any level can considerably affect an individual’s ability to perform the activities of daily living (ADLs). The proficiency for such activities decreases with higher amputation levels [1]. The general requirements of prosthetic users can be primarily summarized as intuitive control, ease of use, and sensory feedback [2,3]. With the advancements in robotics and sensor technologies, very sophisticated and state-of-the-art upper extremity prostheses such as the DEKA arm [4] and modular prosthetic limb [5] are currently available. However, one of the persistent drawbacks is the interface between the prosthetic device and the user, attributable to the growing gap between the control methods and hardware improvements for prosthesis development.
In the myoelectric control domain, which is the most widely used control approach for prosthetic arms [6], the number of input signals a user can provide is always less than the degree of control (DOC). The DOC refers to the number of functions of a prosthetic device controllable by the user. This issue is even more critical for transhumeral amputees, as they can only provide electromyography (EMG) signals from the upper arm, yet they must control a prosthesis with numerous active degrees of freedom (DOFs), such as a powered elbow, wrist, and hand. As feasible solutions, surgical innovations such as targeted muscle reinnervation (TMR) [6] and advanced signal processing techniques, such as pattern recognition [7], have been employed to classify greater numbers of distinct commands from residual muscle activities.
Although such measures allowed transhumeral amputees better control over their multifunctional prosthetic arms, the control was slow, sequential, and unnatural as physiologically appropriate muscles were unavailable. Such counter-intuitive control strategies and a lack of functionality are often the reasons for the high rejection rates of these devices [8,9]. Furthermore, the calibration requirements and signal sensitivity issues associated with EMG-based control also challenge its continuous daily use.
Numerous investigations have been conducted to explore alternative solutions, such as those based on myokinetic signals [10] using the residual kinetic activity of the limb, ultrasound signals [11], mechanomyography [12] using vibrations caused by muscle contractions as the control signals, and the residual limb motion strategy [13,14]. Bio-inspired learning approaches based on studying the central nervous system (CNS) and human motor control abilities have also been employed to develop more natural and intuitive arm control strategies [15].
Concerning the target reaching task or the control of the prosthetic elbow joint motion in transhumeral amputees, the ideal case would involve the prosthetic device acting as a natural extension of the human body. A promising scheme to achieve this is automatically controlling the prosthetic elbow joint based on the natural relationships between the arm joints. Analyses of such joint coordination approaches have shown evidence that recurrent patterns exist in the joint kinematics of upper limb movements while performing reaching or grasping tasks, for example, between the elbow flexion and humeral inclination during reaching [16], between the hand azimuth and movement direction during grasping [17], and across a variety of other arm movements in ADLs [18]. These patterns are referred to as synergies.
In the present study, we have also focused on a synergistic method for intuitive control of the prosthetic elbow joint for transhumeral amputees. The majority of previous motion-based approaches using residual limb motion rely on ANNs [19,20] to identify and model the shoulder–elbow kinematic relationship, such as the inductive learning (IL) applied in [21] and the radial basis function networks (RBFNs) used in [22,23]. The ANN is typically trained to map the shoulder kinematics (provided as input) to the elbow or forearm kinematics (provided as output) through supervised learning. To the best of our knowledge, in such previous approaches, no prior synergistic information is extracted from the motion data used for training the ANNs. This type of approach, in simplistic terms, can be represented as in Figure 1a and is hereafter referred to as “direct estimation”. However, the direct estimation method is sensitive to inter-individual variability; as a result, the performance of the model drops significantly when applied across multiple users, making it less suitable for generalization and transferability.

Transfer Learning Framework for Transhumeral Amputees

This paper presents the concept of the synergy-space neural network, whereby we explicitly combine the kinematic synergies with the learning system to address the limitation of the direct estimation method, aiming for a more robust and transferable control strategy. Synergies have been observed to be repeatable and shared across similar tasks and subjects. Therefore, we extract the synergistic information from the movement data and incorporate only the most significant synergy components in the learning process, enabling more precise and efficient training of the ANN while taking advantage of the shared nature of these synergies to enhance the transferability of the model. The idea is to train the ANN to predict the corresponding activation signals (see Equation (1)), which estimate the forearm motion when combined with the extracted synergy matrix. Because the ANN is trained using the most significant synergy components, the synergy-space neural network can learn particular features common to the arm movement tasks, allowing for better cross-subject transferability. In addition, using a smaller number of synergy components aids the ANN in learning good policies by reducing the dimensionality of the state space. Lastly, being a synergistic approach, it allows for the kinetically natural and energy-efficient motion estimation of the arm movements.
Figure 1b illustrates the overall workflow of the proposed synergy-space neural network approach. We first extract the spatial synergy components and their corresponding time-varying activation signals from the source data (see Equation (1)) using principal component analysis (PCA). Long short-term memory (LSTM), a particular type of recurrent neural network (RNN), is then trained to predict the extracted time-varying activation signals based on the input shoulder kinematics. Finally, the forearm motions are estimated using Equation (2), where the extracted activation signals C are replaced with the LSTM-predicted signals Cp. LSTM, introduced by Hochreiter and Schmidhuber [24], was chosen for its ability to learn long-term dependencies. LSTMs have internal mechanisms called gates that regulate the flow of information to handle the vanishing gradient problem in RNNs, making them very suitable for time-series prediction such as motion data or, in our case, time-varying activation signals. This paper proposes and evaluates the synergy-space neural network for forearm motion estimation, comparing its performance with the direct estimation approach. The contributions made in this paper are as follows:
  • Synergy-Space Neural Network Implementation: implementation of the proposed synergy-space neural network as a transferable model for forearm motion estimation using residual shoulder kinematics during horizontal reaching movements.
  • Personalized LSTM Models Evaluation: validation of the proposed methodology and its learning capability through a detailed comparison between the performance of the synergy-space neural network and the direct estimation approach.
  • Cross-Subject Evaluation: demonstration of the strength of the synergy-space neural network as a transferable decoder, indicating its ability to handle inter-individual variabilities.
The remainder of this paper consists of the following sections. Section 2 gives the background into synergies and their characteristics. Section 3 explains the materials and methods employed to achieve our objectives, describing the experimental protocol, synergy extraction, LSTM training and analysis, and evaluation strategy. Section 4 demonstrates the validity of the proposed method through the experimental results. Section 5 discusses the findings and limitations. Lastly, Section 6 concludes this paper and gives future direction.

2. Motor Synergies

Humans can generate coordinated, efficient, and sophisticated movements by fully utilizing the dynamics of their complex musculoskeletal systems. The need to control and perpetually adjust a large number of DOFs is expected to be computationally formidable. To manipulate the inherently redundant musculoskeletal system for voluntary movement generation, the CNS must simultaneously activate and coordinate many muscles, each comprising thousands of motor units. Neuroscience research has provided evidence that motor synergies exist within the CNS [25,26,27], simplifying control by combining several DOFs into synergies and significantly reducing the burden on the CNS.
There are numerous interpretations of this concept; most commonly, the CNS uses a considerably small set of instructions to control a large group of muscles for movement generation. Owing to the co-activation of a set of muscles using fewer neural commands, i.e., motor synergies, the EMG activities of these muscles tend to be spatially and temporally correlated, which is referred to as muscle synergies. At the same time, this covariation of the muscle activation imparts a certain level of coordination among closely related joints, coupling the angular movements in various joints, referred to as kinematic synergies [28]. Having excess DOFs allows the CNS the flexibility to use only those DOFs that align well with the task demands [29], enabling sophisticated and synergistic motion generation. This may explain the ability of humans to naturally perform complex movements in an energy-efficient manner without too much effort. Moreover, it also suggests that using a synergistic approach for motion estimation may allow more natural movement generation.
In various investigations, a wide range of motor behaviors has been explained and are suggested to be produced by synergies. The comprehensive analysis in [30] by d’Avella characterized the muscle synergies of healthy arm movements for a variety of upper limb reaching tasks spanning various directions, such as horizontal, up, and frontal. The original EMG envelopes for these highly variable upper limb movements were reconstructed using a reduced set of muscle synergies, which were observed to be repeatable and shared across directions and subjects. In the case of prosthesis control, this may be useful for developing a generalized and transferable control model using data from able-bodied subjects. The authors in [31] analyzed the kinematic synergies of various arm postures during unrestrained and natural catching movements, where the subjects were tasked to catch a ball thrown toward them along 16 different trajectories. The results show that 3 synergies were sufficient to represent approximately 90% of the variance in the recorded data for 10 joint angles, corresponding to 7 DOFs of the arm and 3 DOFs of the shoulder girdle. The analysis in [32] also featured similar results using joint angular velocities during reaching motions. These results justify that fewer kinematic synergies, with certain losses in accuracy depending on the number of synergies, can be used to represent and reconstruct the natural reaching movements of the human arm.
Many studies have identified synergies (i.e., motor primitives) at the electromyographic, kinetic, and kinematic levels during the last few decades. The calculations for extracting various types of synergies have been formalized by the authors in [33,34,35,36]. In the present study, we used spatial synergies extracted from the joint angular movement data of the arm. The idea of inter-joint coordination during reaching movements of the human arm reflects that a set of DOFs potentially shows instantaneous covariations. Such types of movement primitives, implying the assumption that the ratios of the signals characterizing different DOFs remain constant over time [33], are referred to as spatial synergies. The spatial synergy decomposition is performed as in Equation (1), where the source signals are represented by x_l(t) at a time point t in trial number l, and N is the number of extracted spatial synergy components. w_n indicates the spatial pattern of the kinematic synergies that are assumed to be invariant over the trials, whereas c_n^l(t) are the corresponding activation signals that vary for each trial. We can rewrite Equation (1) in a simplified matrix form as Equation (2) after ignoring the residuals term, where X represents the source signals, W defines the synergy components, and C is the corresponding activation signal matrix.
x_l(t) = \sum_{n=1}^{N} w_n \cdot c_n^l(t) + \mathrm{residuals} \qquad (1)
X = W \cdot C \qquad (2)
For the analysis, we used the principal component analysis (PCA) method, which is one of the most widely used algorithms for solving Equation (2); its core idea is to minimize the reconstruction error E in Equation (3) with respect to W and C, where \| \cdot \|_F indicates the Frobenius norm.
E^2 = \| X - W \cdot C \|_F^2 \qquad (3)
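For concreteness, the decomposition of Equations (2) and (3) can be carried out with an off-the-shelf PCA routine. The sketch below is illustrative only, assuming the source signals are arranged with joints as rows and time samples as columns; it is not the authors' exact implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def extract_spatial_synergies(X, n_synergies):
    """Minimal sketch of Eq. (2), X ≈ W · C, solved via PCA.

    X: source signals, shape (n_joints, n_samples); here 5 joint angles over time.
    """
    pca = PCA(n_components=n_synergies)
    # scikit-learn expects samples as rows, so the data are transposed
    C = pca.fit_transform(X.T).T       # activation signals, (n_synergies, n_samples)
    W = pca.components_.T              # spatial synergy components, (n_joints, n_synergies)
    # PCA removes the per-joint mean, so it is added back for reconstruction
    X_hat = W @ C + pca.mean_[:, None]
    E = np.linalg.norm(X - X_hat, ord="fro")   # reconstruction error of Eq. (3)
    return W, C, E
```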

3. Materials and Methods

3.1. Data Acquisition

Fourteen healthy right-handed subjects (thirteen males and one female) volunteered for this investigation. The subjects were 20–28 years old, with no known upper limb neuromuscular disorders. All subjects had given informed consent prior to participation in this experiment.
To record the motion data of a subject's arm movements during reaching tasks, we used Perception Neuron Pro, an inertial measurement unit (IMU)-based full-body motion capture system. Although the accuracy of this system is inferior to that of optical cameras, it can capture motion without spatial constraints from anywhere within the communicable range of the device. The device uses individual sensors called neurons, each housing an IMU, attached to different body parts (Figure 2a). We used a total of eight sensors. Figure 2b shows the placement of each neuron: three on each arm (placed on the forearm, upper arm, and shoulder) and one each on the chest and lower back. The subjects can quickly put on the sensors using straps, so no additional preparations, such as special clothes, are necessary.
The sensor system communicates with the Axis Neuron Pro software, which processes the raw motion data to build a 3D skeletal model in real time. Motion information from this skeletal model, such as the position and angle of each joint, can be obtained at a sampling frequency of 120 Hz. In our experiments, the data under consideration were the three shoulder orientation angles along the X, Y, and Z axes, i.e., internal–external rotation, flexion–extension, and abduction–adduction of the shoulder joint, and the two forearm orientation angles, i.e., pronation–supination and flexion–extension of the elbow joint; hereafter, these are referred to as SHθx, SHθy, and SHθz, and FAθx and FAθy, respectively.

3.2. Experimental Protocol

Bearing in mind that the purpose of a prosthetic device is to assist the user with their ADLs, we designed the workspace based on a routine activity by considering the user’s everyday environment instead of constrained movements in a laboratory or restricted environment. Reaching for objects placed on a table is a common scenario in daily life, which also targets arm movements in the horizontal plane only, specifically on the top surface of a table. A 40 cm × 40 cm target grid (Figure 3a) with a start/rest point and 8 numbered points was placed in a horizontal position on the surface of a table. The target numbered point to be reached was projected on the screen in front of the subject. The timing and color of the displayed numbers were controlled automatically to produce more consistent and regular movements, based on which the subjects had to perform the required reaching tasks. The subjects were provided instructions at the beginning of the experiments and allowed time to familiarize themselves with the environment. Therefore, no verbal commands or communications were required during the experiments, thus making the procedures easier to follow. Eight healthy subjects participated in these behavioral experiments. They were tasked with performing reaching movements to explore the top surface of a table placed before them while sitting straight on a chair, as shown in Figure 3b. All the subjects performed two sets of tasks, and kinematic data were acquired for the reaching movements of only the right arm as follows:
  • Dataset 1: with multiple (15 times) repetitive reaching movements toward each target point.
  • Dataset 2: with reaching movements toward random target points (35 movements in total).
Dataset 1 with multiple repetitive movements was first used to extract the kinematic synergies and later as the training data for the neural network. Afterward, dataset 2, with reaching movements toward random target points, was used to cross-validate the trained neural network.

3.3. Kinematic Synergy Extraction

PCA has been used in numerous studies to investigate natural movements, such as catching [31] or reaching [32] tasks, where it was successful in representing the observed physiological complexities using fewer numbers of principal components (PCs) or synergies. One reason for this is that when capturing variances, the PCA uses a more intuitive method to exploit the coupling of the DOFs. Another important reason is that it considers linear correlations among the DOFs, which can be regarded as the minimal model of inter-joint coupling, i.e., a linear approach to explaining complex behaviors.
In this study, we performed a synergistic analysis using PCA as we are working with kinematic synergies. Dataset 1 was first segmented to acquire the data of interest, i.e., one individual reaching movement from the onset of the reaching motion until returning to the start/rest point (as in Figure 4). We then averaged the segmented data over the 15 trials for the same reaching movement, e.g., for target point 7. The averaged data were then low-pass filtered using a sixth-order Butterworth filter with a cutoff frequency of 10 Hz to remove motion artifacts and finally normalized to translate the angular values within the range of −1 to +1. For each subject, this averaged, filtered, and normalized dataset X (Equation (4)) was obtained, comprising submatrices x_j^m(t_max) with joint angular values during reaching motions. Here, m = 8 represents the total number of target points for the reaching tasks, j = 5 is the number of DOFs or joint angles under consideration, and t_max refers to a particular sample time for which the joints' angular values were obtained. The matrix X, having size j × (m · t_max), is fed to the PCA algorithm, which then provides the PCs. Each PC is a synergy representing the covariation of the joint angular configurations. The total number of PCs or synergy components N (Equation (1)) is equal to the number of DOFs or joint angles under consideration, which is equal to five here. Figure 5a represents the extracted kinematic synergy component matrix (graphical representation) for one of the subjects.
X = \begin{bmatrix} x_1^1(t_{max}) & \cdots & x_1^4(t_{max}) & \cdots & x_1^8(t_{max}) \\ x_2^1(t_{max}) & \cdots & x_2^4(t_{max}) & \cdots & x_2^8(t_{max}) \\ \vdots & & \vdots & & \vdots \\ x_5^1(t_{max}) & \cdots & x_5^4(t_{max}) & \cdots & x_5^8(t_{max}) \end{bmatrix} \qquad (4)
We retained the minimum number of the most significant PCs that together explain at least 85% of the total variance. Figure 5b presents the explained variance of the PCs for subject 1. We can observe that the first synergy accounts for more than 75% of the variance, and the sum of the first two synergies accounts for more than 90% of the overall variance in the source data. Following our set criterion, this suggests that we can adequately approximate the original data using only the first two synergies, which capture a large portion of the variance. Thus, the subject's original movements can be reconstructed with an acceptable loss in accuracy. Figure 4 shows the difference in the reconstructed data for one of the subjects when represented using all five synergies (shown in red) and when using only two synergies (shown in green). It can be seen that the red curve is an exact match for the original movement data (shown in black). In contrast, the green curve still represents the original data reasonably accurately. The last two rows show the activation signals C (in blue) in the case of two synergies, i.e., C1 and C2. Based on these observations, it is plausible for the synergy-space neural network approach to use only the first two synergies to estimate the forearm motions.
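The preprocessing and synergy-selection pipeline described above can be summarized as follows. This is a hedged sketch: the filter order, cutoff frequency, normalization range, and the 85% variance criterion come from the text, while the array shapes and function names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import PCA

FS = 120.0   # sampling frequency of the motion capture system (Hz)

def preprocess(trials):
    """trials: (n_trials, n_joints, n_samples) segmented repetitions of one reaching movement."""
    averaged = trials.mean(axis=0)                    # average over the repeated trials
    b, a = butter(6, 10.0, btype="low", fs=FS)        # sixth-order, 10 Hz low-pass Butterworth
    filtered = filtfilt(b, a, averaged, axis=-1)      # zero-phase filtering
    max_abs = np.abs(filtered).max(axis=-1, keepdims=True)
    return filtered / max_abs                         # normalize each joint angle to [-1, +1]

def n_significant_synergies(X, threshold=0.85):
    """Smallest number of PCs whose cumulative explained variance reaches `threshold`."""
    cum_var = np.cumsum(PCA().fit(X.T).explained_variance_ratio_)
    return int(np.searchsorted(cum_var, threshold) + 1)
```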

3.4. LSTM Training

In the present work, we trained a neural network to predict the extracted activation signals based on shoulder kinematics. As both sets of data are time-series signals, it is necessary to use an ANN that suitably processes the time-series or sequential data. Different ANN architectures have been employed in various studies to determine inter-joint coordination during human arm movements. The authors in [37] used a radial basis function network (RBFN)-based neural network, whereas [38] used a time-delayed adaptive neural network (TDANN) to estimate the distal joint angles. In this study, we used LSTM, a particular type of RNN capable of handling long-term dependencies. LSTMs have internal mechanisms called gates that regulate the flow of information to handle the vanishing gradient problem in RNNs, thus making them very suitable for multivariate time-series forecasting.
We used Python's deep learning library, Keras, to implement the LSTM model. There are various parameters in the neural network, and the estimation accuracy may change depending on their settings. Therefore, we first tested the learning efficiency and estimation accuracy of the LSTM model by varying parameters such as the batch input size, number of LSTM hidden layers, number of nodes in each layer, and number of training epochs. These parameters affect the learning and estimation efficiencies of the model. The batch size controls the number of samples shown to the network before the weight updates are applied. If the input batch size is large, the model can quickly process the entire training dataset; however, it may overlook certain features during training that might be crucial to learning.
On the other hand, we can increase the complexity and expressiveness of the model by optimizing the number of LSTM hidden layers and the number of neurons in each layer. Even with only a single hidden layer, the LSTM model can learn the characteristics of the time-series data, and this ability can be improved by stacking multiple layers. However, if the model is made more complex than necessary, the training may not be effective. Similarly, if we increase the number of nodes, the model requires more time for learning with no significant change in accuracy. The same is also true for the number of training epochs. Estimating the time-varying activation signals is a regression problem, so we use the mean squared error (MSE) as the loss function and Adam as the optimizer, as is widely done in similar studies.
Lastly, to avoid overfitting, a dropout rate of 10% is used in each layer, whereby 10% of the neurons are dropped randomly. Supervised learning is then carried out, where the neural network develops the regression model based on the input–output pairs. As for the training data, at a single time step, we can apply a total of six inputs to the model: the SHθx, SHθy, and SHθz angles of the shoulder joint and their respective derivatives SHθ̇x, SHθ̇y, and SHθ̇z (i.e., the shoulder joint angular velocities). It is also possible to input multiple time steps to the LSTM model at once, using the last few time steps to predict the output for the current time step. This can improve the estimation accuracy at the cost of additional computation. We used the ten previous time steps as inputs to the model (Figure 6). The model outputs were the activation signals, which in the case of two synergies are C1p and C2p.
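A minimal sketch of such a model is given below, assuming a two-hidden-layer configuration (as in model M2), six inputs, a window of ten previous time steps, and two output activation signals; the layer widths, batch size, and number of epochs are illustrative assumptions rather than the exact values used in the study.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

WINDOW, N_FEATURES, N_OUTPUTS = 10, 6, 2   # 10 past time steps, 6 shoulder signals, outputs C1p and C2p

model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(WINDOW, N_FEATURES)),
    Dropout(0.1),                          # 10% dropout to reduce overfitting
    LSTM(64),
    Dropout(0.1),
    Dense(N_OUTPUTS),                      # predicted activation signals C1p and C2p
])
model.compile(loss="mse", optimizer="adam")

# X_train: (n_windows, WINDOW, N_FEATURES) sliding windows of shoulder angles and velocities
# y_train: (n_windows, N_OUTPUTS) PCA-extracted activation signals aligned to each window
# model.fit(X_train, y_train, batch_size=32, epochs=100, validation_split=0.1)
```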

3.5. Analysis Strategy

In the present work, we compared the performances of the synergy-space neural network approach, where the shoulder kinematics are mapped to synergistic activation signals, with the direct estimation approach, wherein the neural network is used to map the shoulder kinematics to the forearm kinematics. To thoroughly investigate this comparison, we devised a comprehensive strategy to train and test 36 different LSTM models for each subject and analyze the performances. The devised strategy was based on the following criteria.

3.5.1. Learning Methodology

First, we devised a scenario based on our proposed learning methodology for the LSTM model. This defines the approach chosen for training the network, that is, either synergy-space or direct estimation. Three strategies were devised based on the number of synergistic components used and the learning approach.
  • Synergy-space estimation using 2 synergy components:
    In this case, we first extract the synergies W and their corresponding activation signals C from the reaching motion data considering five DOFs of the arm, i.e., SHθx, SHθy, SHθz, FAθx, and FAθy. Subsequently, we train the LSTM model to predict the two activation signals C1 and C2 based on the shoulder kinematics provided as inputs. The predicted activation signals C1p and C2p are then used along with the synergy matrix W to estimate the required forearm motions (a minimal sketch of this reconstruction step is given after this list).
  • Synergy-space estimation using 1 synergy component:
    In this case, we extracted the synergies W and their corresponding activation signals C from the reaching motion data considering only two DOFs of the forearm, i.e., FAθx and FAθy. We then trained the LSTM model to predict only one of the activation signals, C1, providing the shoulder kinematics as the input. Finally, the predicted activation signal C1p and the synergy matrix W are used to estimate the required forearm motions.
  • Direct estimation:
    For direct estimation, no prior information or synergistic components are extracted from the reaching motion data recorded during arm movements. The LSTM model is directly trained to predict the FAθx and FAθy angles of the elbow joint based on the input shoulder kinematics.
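The final reconstruction step shared by the two synergy-space strategies can be sketched as follows, assuming the joint rows are ordered as in Equation (4); the row indices and the mean term are illustrative assumptions about how the data were arranged, not the authors' exact code.

```python
import numpy as np

def estimate_forearm(W, Cp, joint_mean, forearm_rows=(3, 4)):
    """Combine predicted activations with the synergy matrix (Eq. (2)) to recover forearm angles.

    W: (n_joints, n_synergies), Cp: (n_synergies, n_samples), joint_mean: (n_joints,).
    For the 1-synergy strategy, W spans only the two forearm DOFs, so forearm_rows=(0, 1).
    """
    X_hat = W @ Cp + joint_mean[:, None]     # reconstruct all modeled joint angles
    return X_hat[list(forearm_rows), :]      # FAθx and FAθy only
```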

3.5.2. Number of LSTM Hidden Layers

As noted previously, a single layer of LSTM can learn the necessary features of the time-series data. By stacking multiple layers of LSTMs, this ability can be enhanced. Thus, to verify the appropriate number of hidden layers for our task, we tested three scenarios with varying numbers of hidden LSTM layers in our model. For each learning methodology mentioned earlier, we constructed and trained three different models, namely M1, M2, and M3, having one, two, and three hidden LSTM layers, respectively.

3.5.3. Number of Inputs

As previously mentioned, we can apply a total of six inputs to our model, represented by SHθx, SHθy, SHθz, SHθ̇x, SHθ̇y, and SHθ̇z. However, various combinations of these signals can also be used as inputs. This consideration was based on the fact that the level of residual limb movement control varies among users depending on the severity of the amputation. In addition, wearing a prosthetic socket can limit the range and types of movements the user can perform; in many cases, the shoulder internal rotation motion is the most difficult for amputees to perform. Another reason was to test whether there is any advantage to using the shoulder joint angular velocities as inputs. We therefore created four different training scenarios with different numbers of inputs to the LSTM models. The combinations and total numbers of inputs for each scenario are shown in Table 1.

3.6. Evaluation

3.6.1. RMSE

For the evaluation of the trained LSTM model, the estimated forearm orientation angles (i.e., the pronation–supination FAθx and flexion–extension FAθy angles of the elbow joint) were compared to the actual forearm orientation angles captured using the Neuron Pro system. Figure 7 presents a sample of the continuous signal plot comparing the actual vs. estimated forearm motions during one of the scenarios tested for subject 5. To assess the performance of the joint angle estimations, the standard metric used is the root mean squared error (RMSE) [38], as given in Equation (5), where x̂_t is the predicted joint angle, x_t is the actual joint angle at data point t, and N is the total number of data points.
To calculate the RMSE value, we used the “mean_squared_error” metric from the scikit-learn library for Python and applied the square root. In the case of multiple outputs, this metric gives the average RMSE over the outputs.
\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{t=1}^{N} (\hat{x}_t - x_t)^2} \qquad (5)
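A minimal sketch of this computation, assuming the actual and estimated angle traces are arrays of matching shape (one column per forearm angle):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

def rmse(actual, estimated):
    """RMSE of Eq. (5); with multiple output columns, scikit-learn averages over the outputs."""
    return np.sqrt(mean_squared_error(actual, estimated))
```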

3.6.2. Pearson Correlation Coefficient

Pearson’s correlation method analyzes the linear relationship between two variables and provides a coefficient value as a measure of the correlation strength. The Pearson correlation coefficient is denoted by r and can take a value between −1 and +1. Table 2 presents the detailed interpretation of the Pearson correlation coefficient.
We used the “corrcoef” function from Python's NumPy library, which takes the actual and estimated values of the forearm orientation angles (as in Figure 7) and computes the Pearson correlation coefficient.
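A minimal sketch, assuming the actual and estimated angle traces are one-dimensional arrays of equal length:

```python
import numpy as np

def pearson_r(actual, estimated):
    """Pearson's r: the off-diagonal entry of the 2x2 correlation matrix from np.corrcoef."""
    return np.corrcoef(actual, estimated)[0, 1]
```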

3.6.3. Analysis of Variance (ANOVA) Test

To statistically verify the differences between the results obtained using the synergy-space neural network and direct estimation methods, we performed the analysis of variance (ANOVA) test. The ANOVA compares three or more populations to ascertain whether the variability between group means is larger than the variability of the observations within the groups. A significance level or threshold is chosen, and a p-value less than the threshold is interpreted as evidence of a difference between the population means. In this study, a p-value below the threshold indicates a significant difference between the results of the learning strategies.
To show the pertinence of each learning methodology, we perform the ANOVA test using the RMSE values obtained by comparing the estimated and actual forearm motions for the various LSTM models trained using the synergy-space and direct estimation approaches. We chose a significance level of 0.05, one of the standard choices. If the calculated p-value is less than this threshold, the statistically significant ANOVA is followed up with the Tukey HSD (honest significant difference) post hoc test, which pinpoints which learning methodology exhibits a statistically significant difference.
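The statistical workflow described above can be sketched as follows. The SciPy and statsmodels routines shown here are an assumption (the text does not name the statistical software), and the group labels and array names are illustrative.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_methodologies(rmse_two_syn, rmse_one_syn, rmse_direct, alpha=0.05):
    """One-way ANOVA over the RMSE values of the three methodologies; Tukey HSD if significant."""
    f_stat, p_value = f_oneway(rmse_two_syn, rmse_one_syn, rmse_direct)
    tukey = None
    if p_value < alpha:
        values = np.concatenate([rmse_two_syn, rmse_one_syn, rmse_direct])
        groups = (["2 synergies"] * len(rmse_two_syn)
                  + ["1 synergy"] * len(rmse_one_syn)
                  + ["direct"] * len(rmse_direct))
        tukey = pairwise_tukeyhsd(values, groups, alpha=alpha)
    return f_stat, p_value, tukey
```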

3.6.4. Cross-Subject Analysis

As discussed previously, one of the characteristics of the synergies is that they are shared among similar tasks to some extent. This suggests that a generalized or transferable control model can be developed based on the synergy-space neural network approach using the data recorded from healthy subjects. Therefore, cross-subject testing was performed to test this assumption for the robustness of the learning methodologies.
For the synergy-space method, the cross-subject analysis fed, for example, subject A's input data to the LSTM models trained using the other subjects' data and then employed subject A's synergy matrix for the forearm motion prediction. For the direct estimation method, subject A's input data were fed to the LSTM models trained using the other subjects' data to predict subject A's forearm motion directly.
The evaluations were performed by calculating the RMSE values to compare the estimated motions with the actual measured values. For the cross-subject analysis, we used model M2 with six inputs (i.e., best-case scenario) and model M2 with two inputs (i.e., worst-case scenario). The ANOVA was then performed to statistically verify the difference between the cross-subject results based on the synergy-space and direct estimation approaches.
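A sketch of this cross-subject protocol for the synergy-space method is given below; the function and variable names, the forearm row indices, and the mean term are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def cross_subject_rmse(model_other, inputs_A, W_A, mean_A, fa_measured_A, forearm_rows=(3, 4)):
    """Feed subject A's shoulder inputs to an LSTM trained on another subject,
    then map the predicted activations back through subject A's synergy matrix."""
    Cp = model_other.predict(inputs_A).T                  # (n_synergies, n_samples)
    X_hat = W_A @ Cp + mean_A[:, None]                    # reconstructed joint angles
    fa_estimated = X_hat[list(forearm_rows), :].T         # (n_samples, 2) forearm angles
    return np.sqrt(np.mean((fa_measured_A - fa_estimated) ** 2))
```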

4. Results

In this study, we trained and evaluated 36 different LSTM models for each of the 14 subjects. The training strategy was devised based on three different learning methodologies; for each methodology, we constructed 3 separate LSTM models and trained each model using 4 different combinations of the input signals (i.e., 3 × 3 × 4 = 36).
The acquired RMSE values of the estimated forearm motions (FAθx and FAθy) for some subjects are shown as bar plots in Figure 8. Overall, there were no significant differences in the results based on the learning methodology, suggesting that even with the reduced state-space representation, the synergy-space neural network keeps the performance similar to that of the direct estimation method. However, we see notable differences between the combinations of two and three inputs, implying that including the shoulder internal–external rotation (SHθz) as an input significantly increases the estimation accuracy, which is the case in all the training scenarios. On the other hand, the joint angular velocities as inputs provided only a minor increase in accuracy, as is visible when comparing the RMSE values for 2 vs. 4 inputs and for 3 vs. 6 inputs. No marked differences were observed in the performance based on the number of hidden LSTM layers among the models M1, M2, and M3. These observations are valid for all the subjects included in the study.
Regarding the minor differences among all the trained scenarios for each subject, we obtained three general result types. First, similar to subject 1 (Figure 8), the synergy-space method (using 2 synergies) produced the lowest RMSE values, with an overall RMSE and standard deviation of 10.88° and 2.09°, respectively, across all the tested scenarios. Second, similar to subject 2 (Figure 8), we obtained similar RMSE values for all the learning methodologies, with some variation around an overall RMSE and standard deviation of 7.55° and 1.16°, respectively. Lastly, similar to subject 8 (Figure 8), the direct estimation method had the lowest RMSE values, with an overall RMSE and standard deviation of 3.99° and 1.47°, respectively, across all the tested scenarios. We selected only the scenarios trained using model M2 (i.e., with two hidden LSTM layers) for further analyses, as it produced the best overall results.

4.1. Personalized LSTM Models Evaluation

To evaluate the personalized LSTM models and compare the different learning methodologies, we used the LSTM model M2, which has two hidden LSTM layers and six inputs (Table 1). The average RMSE values obtained for the estimated FAθx and FAθy were 4.24° and 9.75°, 6.08° and 9.86°, and 5.84° and 7.40° for the two-synergy, one-synergy, and direct estimation methods, respectively.
We first employed Pearson's correlation coefficient to quantify the estimation performance of the different learning methodologies. Table 3 shows a mostly strong positive linear correlation (+0.7 ≤ r ≤ +1.0) for the personalized LSTM models. Furthermore, even for the few cases of a distinct or weak linear relation (highlighted cells in Table 3), the trend is similar for all the learning methodologies, suggesting a similar overall performance.
To show the pertinence of each learning methodology, we calculated the p-value using the ANOVA between the results of the training strategies, using the RMSE values of the estimated forearm motions. The statistical ANOVA test associated with the RMSE values obtained for the three learning methodologies is reported in Table 4. The ANOVA provided evidence that there was no statistically significant difference, F(2, 39) = 1.705, p = 0.195. The summary of the ANOVA test results is shown in Table 5. As the p-value corresponding to the F statistic is greater than the threshold value of 0.05, this is interpreted as no significant difference between the population means, eliminating the need for any post hoc or multiple comparison correction tests, such as the Tukey HSD. Thus, the three training strategies tend to produce similar RMSE values, and there is no significant difference between their performances for subject-specific or personalized LSTM models. This further validates that the proposed synergy-space approach for mapping the inter-joint coordination in each subject's personalized ANN model performs on par with the direct estimation method, even with reduced dimensionality.

4.2. Cross-Subject Evaluation

With transhumeral amputees, the target application of this study, it is impossible to measure the elbow joint or forearm motion information. Therefore, testing the transferability of the trained predictive models is crucial.
Figure 9 shows a box plot summarizing the RMSE values for each subject's cross-subject evaluations using the LSTM model M2 with six inputs. The box reflects the interquartile range of the sample values, with a smaller box indicating less variation in the estimation performance. The results demonstrate that the synergy-space neural network, particularly when using two synergies, exhibits stronger robustness to inter-individual variability than the direct estimation method. This may be attributed to the shared nature of the synergies across similar tasks and subjects, which enables the network to learn features common to the human arm's reaching task. The descriptive statistics associated with the RMSE values obtained for the cross-subject testing are reported in Table 6, where the proposed synergy-space RNN exhibited an average reduction of 50% in the variation of the RMSE compared to the direct estimation method (highlighted cells in Table 6). These results demonstrate the effectiveness of the proposed synergy-space RNN in achieving better transferability during the cross-subject evaluation.
To statistically verify whether the difference in learning methodologies affected the performance during the cross-subject evaluations, we again calculated the p-values using the ANOVA between the results using the RMSE values of the estimated forearm motions. The ANOVA yielded a statistically significant effect, F ( 2 , 585 ) = 3.227 , p = 0.040 . Table 7 shows the summary of the ANOVA test results. As the p-value corresponding to the F Statistic is lower than the threshold value of 0.05, this suggests that the performance of one or more learning methodologies is significantly different. We further evaluated the nature of the differences between the three population means, i.e., to check which learning methodology tends to perform differently from the others.
The statistically significant ANOVA was followed up with the Tukey HSD (honest significant difference) post hoc test, which pinpoints which learning methodology exhibits a statistically significant difference. The post hoc Tukey HSD test results are reported in Table 8. The p-values corresponding to the Q statistic are lower than 0.05 when comparing the direct estimation method with the synergy-space neural network approach (when using two synergies), suggesting a significant difference in the performance of the two approaches.
Similar results were also obtained for the cross-subject evaluations using the LSTM model M2 with two inputs only, presented in Table 9, Table 10 and Table 11. Even when using only two inputs (considering limited shoulder internal rotation motion; Table 1), the proposed synergy-space method had about 40% less variation in the RMSE compared to the direct estimation method (highlighted cells in Table 9). The evaluation of the cross-subject analysis suggests that the synergy-space approach is more robust and may provide the possibility of developing a transferable model for prosthesis control.

5. Discussion

We have proposed and evaluated the synergy-space neural network for transhumeral prosthesis control. By explicitly incorporating kinematic synergies into the model, our approach addresses the limitations of traditional ANNs and provides more robust performance and superior transferability across different subjects. Our rigorous evaluation of the model has shown promising results, demonstrating its potential.
We evaluated the performance of the proposed synergy-space approach for personalized LSTM models and compared it to the direct estimation method. The results of Pearson's correlation method and the ANOVA analysis indicate that the proposed method performs comparably to the direct estimation method, with no significant difference in performance. Notably, the proposed approach maintains this performance despite the reduction in dimensionality, suggesting efficient learning capabilities for personalized LSTM models.
In the various tested scenarios, we observed that using the shoulder internal–external rotation (SHθx) as an input significantly increases the estimation accuracy of the LSTM models, which was typical for all the learning methodologies. This may be because the shoulder rotation is coupled to the forearm rotation; however, we had already extracted the kinematic synergies from the subjects' arm motion data corresponding to the five DOFs of the arm, including the shoulder and forearm rotations (i.e., SHθx and FAθx). The effect therefore seems less related to joint coordination and more related to the training of the LSTM model: the additional input (i.e., SHθx) provides an additional parameter during supervised training, and probably a more distinctive feature than the joint angular velocities (i.e., SHθ̇y and SHθ̇z), thereby improving the model's accuracy.
The synergy-space approach demonstrated its superiority during the cross-subject evaluation as a more robust and transferable learning methodology. It showed smaller variations in the estimation accuracy when one subject's motion data and extracted synergy matrix were used for forearm motion estimation with the personalized LSTM models of the other thirteen subjects. However, one limitation of this study with respect to actual implementation on amputee users will be obtaining the subject-specific synergy matrix because, as mentioned earlier, amputee users cannot provide the necessary motion information.
Based on the properties of synergies being repeatable and shared across similar tasks and subjects, one practical solution could be to create a generalized synergy matrix based on the data of all the able-bodied participants and use it for the forearm motion prediction of amputee users. Another possibility would be to use motion data from the user's healthy arm to generate a synergy matrix for the amputated arm. As the synergy matrix represents the inter-joint coordination, the LSTM model only needs to learn a simplified relation between the shoulder kinematics and the activation signals; this means that how effectively the synergies are extracted can significantly affect performance.
This first investigation of the proposed synergy-space neural network demonstrates its potential as a robust and transferable predictive model, which was successfully confirmed through the cross-subject evaluation results. This finding can contribute toward creating a synergistic and generalized control strategy for transhumeral prostheses and other rehabilitation applications.

6. Conclusions

The primary aim of this study was to improve the control of transhumeral prostheses, focusing on their transferability across users. A highly accurate transferable predictive model is necessary for transhumeral amputees because individual calibration or personalized learning methods cannot be used effectively as they cannot provide the required data.
In this research, we proposed the synergy-space neural network, as a transferable model, to predict the joint angles of the forearm motion based on the residual shoulder motion. We presented the implementation and evaluation of the proposed method, discussing its learnability and robustness for transferability to amputee users. The study was conducted with able-bodied subjects, focusing on reaching movements of the arm in the horizontal plane only. We compared the synergy-space neural network approach with the direct estimation method, using the actual and estimated joint angular values for the performance evaluation. In the best-case scenario, average RMSEs of 9.75° and 4.24° were achieved using the synergy-space method (with 2 synergies) for the flexion–extension (FAθy) and pronation–supination (FAθx) angles of the forearm motion, respectively. Consequently, we verified that for personalized predictive models, even with a reduced state space, the proposed synergy-space neural network approach produced results similar to the direct estimation method.
We investigated the transfer learning ability of the proposed model through cross-subject analysis. The results indicate that the synergy-space neural network exhibited superior learning capabilities compared to the traditional direct estimation method during cross-subject evaluations. This highlights the strength of our approach as a transferable decoder, demonstrating its ability to handle inter-individual variabilities and providing a more generalized model for transhumeral prosthesis control.
In the future, the predicted joint angles output by the proposed model can be sent to a transhumeral prosthesis for real-time control. As a next step, this approach will be extended to incorporate reaching motions in three-dimensional space. The ultimate goal is to develop a framework for the real-time estimation of forearm motions to further test and improve the proposed approach, such that it can be employed on actual transhumeral prostheses and allow users to control the device intuitively.

Author Contributions

Conceptualization, M.H.A. and M.H.; investigation, M.H.A.; project administration, M.H.; software, M.H.A. and J.C.; supervision, S.S. and M.H.; validation, M.H.A., J.C., S.S. and M.H.; writing—original draft, M.H.A.; writing—review and editing, M.H.A., J.C., S.S. and M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the JSPS Grant-in-Aid for Scientific Research on Innovative Areas “Hyper-Adaptability” project (22H04764).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the ethical committee of Tohoku University under 22A-3.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ADL: Activities of Daily Living
ANN: Artificial Neural Network
ANOVA: Analysis of Variance
CNS: Central Nervous System
DOC: Degree of Control
DOF: Degree of Freedom
EMG: Electromyography
HSD: Honest Significant Difference
IMU: Inertial Measurement Unit
LSTM: Long Short-Term Memory
PCA: Principal Component Analysis
RMSE: Root Mean Squared Error
RNN: Recurrent Neural Network
TMR: Targeted Muscle Reinnervation

References

  1. Francesca, C.; Lisa, C.A.; Rinaldo, S.; Angelo, D.; Giovanni, C.A.; Eugenio, G.; Loredana, Z. Literature Review on Needs of Upper Limb Prosthesis Users. Front. Neurosci. 2016, 10, 209. [Google Scholar]
  2. Kyberd, P.J.; Hill, W. Survey of upper limb prosthesis users in Sweden, the United Kingdom and Canada. Prosthet. Orthot. Int. 2011, 35, 234–241. [Google Scholar] [CrossRef] [PubMed]
  3. Atkins, D.J.; Heard, D.C.Y.; Donovan, W.H. Epidemiologic Overview of Individuals with Upper-Limb Loss and Their Reported Research Priorities. J. Prosthet. Orthot. 1996, 8, 2–11. [Google Scholar] [CrossRef]
  4. Resnik, L.; Klinger, S.L.; Etter, K. The DEKA Arm: Its features, functionality, and evolution during the Veterans Affairs Study to optimize the DEKA Arm. Prosthet. Orthot. Int. 2014, 38, 492–504. [Google Scholar] [CrossRef]
  5. Perry, B.N.; Moran, C.W.; Armiger, R.S.; Pasquina, P.F.; Vandersea, J.W.; Tsao, J.W. Initial Clinical Evaluation of the Modular Prosthetic Limb. Front. Neurol. 2018, 9, 153. [Google Scholar] [CrossRef]
  6. Jiang, N.; Dosen, S.; Muller, K.; Farina, D. Myoelectric Control of Artificial Limbs—Is There a Need to Change Focus? [In the Spotlight]. IEEE Signal Process. Mag. 2012, 29, 150–152. [Google Scholar] [CrossRef]
  7. Englehart, K.; Hudgins, B. A robust, real-time control scheme for multifunction myoelectric control. IEEE Trans. Biomed. Eng. 2003, 50, 848–854. [Google Scholar] [CrossRef] [PubMed]
  8. Biddiss, E.; Chau, T. Upper-Limb Prosthetics: Critical Factors in Device Abandonment. Am. J. Phys. Med. Rehabil. 2007, 86, 977–987. [Google Scholar] [CrossRef]
  9. Biddiss, E.; Chau, T. Upper limb prosthesis use and abandonment: A survey of the last 25 years. Prosthet. Orthot. Int. 2007, 31, 236–257. [Google Scholar] [CrossRef]
  10. Kuttuva, M.; Burdea, G.; Flint, J.; Craelius, W. Manipulation Practice for Upper-Limb Amputees Using Virtual Reality. Presence 2005, 14, 175–182. [Google Scholar] [CrossRef]
  11. Gonzalez, D.S.; Castellini, C. A realistic implementation of ultrasound imaging as a human-machine interface for upper-limb amputees. Front. Neurorobot. 2013, 7, 17. [Google Scholar]
  12. Silva, J.; Heim, W.; Chau, T. A self-contained, mechanomyography-driven externally powered prosthesis. Arch. Phys. Med. Rehabil. 2005, 86, 2066–2070. [Google Scholar] [CrossRef]
  13. Losier, Y.; Englehart, K.; Hudgins, B. Evaluation of shoulder complex motion-based input strategies for endpoint prosthetic-limb control using dual-task paradigm. J. Rehabil. Res. Dev. 2011, 48, 669–678. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Illustration of the idea of this study: (a) a simplistic representation of the direct estimation method; (b) the proposed synergy-space neural network method. Here, SH_θx, SH_θy, and SH_θz are the shoulder kinematics, and FA_θx and FA_θy represent the forearm orientations. The PCA blocks symbolize the process of synergy extraction, where W represents the synergistic components and C is the corresponding activation signal matrix. The predictive model is the trained RNN that outputs the predicted activation signals C_p, with the cross operator representing the matrix multiplication of the incoming values.
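For readers who want to reproduce the pipeline sketched in Figure 1b, the snippet below illustrates the synergy-extraction step: PCA is applied to the joint-angle matrix, the leading components form the synergy matrix W, the projections form the activation signals C, and multiplying them back reconstructs the kinematics. The array names, placeholder data, and the choice of scikit-learn are illustrative assumptions, not the authors' released code.

```python
import numpy as np
from sklearn.decomposition import PCA

# Joint-angle matrix: T time samples x 5 DoFs
# columns = [SH_thx, SH_thy, SH_thz, FA_thx, FA_thy], each normalized to [-1, +1]
angles = np.random.uniform(-1.0, 1.0, size=(1000, 5))  # placeholder data

n_synergies = 2                        # two components kept, as in Figure 4
pca = PCA(n_components=n_synergies)
C = pca.fit_transform(angles)          # activation signals, shape (T, 2)
W = pca.components_                    # synergy matrix W, shape (2, 5)

# Reconstruct the joint angles from the reduced synergy space
angles_hat = C @ W + pca.mean_         # shape (T, 5)

# Variance explained by each synergy (cf. the bar plot in Figure 5b)
print(pca.explained_variance_ratio_)
```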
Figure 2. Neuron Pro sensor placements for capturing the required motion data: (a) subjects wearing the Neuron sensors; (b) green spots marking the sensor placements for the upper-body mode of Neuron Pro, since only arm movement data are required.
Figure 3. Experimental setup for the target reaching tasks: (a) target grid with a subject’s right hand at the start point; (b) complete experimental setup for the target reaching tasks.
Figure 4. The first 5 rows represent the shoulder and elbow joints' angular values normalized between −1 and +1. The figure comprises the source data (solid black curves) of subject 1 and its reconstructions using all five synergies (red dotted curves) and only two synergies (green dotted curves). The last two rows represent the corresponding activation signals C (C1 and C2 in the case of two synergies).
Figure 5. Example of the kinematic synergies of subject 1. (a) The spatial synergies extracted from the training data. The notations SH_θx, SH_θy, SH_θz, FA_θx, and FA_θy indicate the axis of each degree of freedom. (b) Bar plot showing the importance of each principal component in explaining the variance in the source data.
Figure 6. Input dataset creation: data from the 10 previous time steps are combined and provided as the input x(t) to the LSTM model to predict the output C_p(t) at the current time step t.
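A minimal sketch of the windowing scheme described in Figure 6 is given below, assuming the shoulder signals and activation signals are stored as NumPy arrays; the 10-step window follows the caption, while the array names and shapes are illustrative assumptions.

```python
import numpy as np

def make_windows(inputs: np.ndarray, targets: np.ndarray, window: int = 10):
    """Stack the previous `window` samples of the shoulder signals as x(t)
    and pair them with the activation signals C_p(t) at the current step t."""
    X, Y = [], []
    for t in range(window, len(inputs)):
        X.append(inputs[t - window:t])   # shape (window, n_inputs)
        Y.append(targets[t])             # shape (n_synergies,)
    return np.stack(X), np.stack(Y)

# Example: 6 shoulder inputs (3 angles + 3 angular velocities), 2 activation signals
shoulder = np.random.randn(1000, 6)      # placeholder shoulder kinematics
activations = np.random.randn(1000, 2)   # placeholder activation signals C
X, Y = make_windows(shoulder, activations)
print(X.shape, Y.shape)                  # (990, 10, 6) (990, 2)
```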
Figure 7. Sample plots of the actual vs. estimated forearm joint angles obtained with the two-synergy, one-synergy, and direct estimation learning methodologies for one of the tested scenarios of subject 5.
Figure 8. Results of the forearm motion estimations (FA_θx and FA_θy) for all tested scenarios for subjects 1, 2, and 8, where each bar represents the calculated RMSE. The bars are first divided into M1, M2, and M3 based on the number of LSTM hidden layers in the model. The golden, blue, and gray bars represent the estimation errors when using the two-synergy, one-synergy, and direct estimation learning approaches, respectively. The number inside each bar indicates the number of inputs to the LSTM model (Table 1), whereas the error bar represents the standard deviation of the estimation error values.
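The bars in Figure 8 report root-mean-square errors between the measured and estimated forearm angles. For reference, a minimal RMSE helper with hypothetical placeholder data could look like the following; it is an illustration of the metric, not the authors' evaluation script.

```python
import numpy as np

def rmse(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Root-mean-square error, in the same units as the joint angles (degrees)."""
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

# Placeholder example: measured vs. estimated elbow flexion-extension angles
fa_x_true = np.random.uniform(0.0, 130.0, size=500)
fa_x_pred = fa_x_true + np.random.normal(0.0, 5.0, size=500)
print(rmse(fa_x_true, fa_x_pred))
```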
Figure 9. The box plot shows the RMSE values of each subject's cross-subject evaluation. The box size represents the range of 75% of the values, and the solid vertical golden line inside the box represents the median, with the black diamond marker indicating the mean value. Outliers are represented by circular markers, and the whiskers show the maximum and minimum values. A smaller box indicates less variation in the results and better transferability.
Table 1. Training scenarios based on the number of inputs.

No. of Inputs   Signal Combination
                SH_θx   SH_θy   SH_θz   SH_θ̇x   SH_θ̇y   SH_θ̇z
2
3
4
6

Note: A ✔ in a cell means the signal is used as an input.
Table 2. Interpretation of the Pearson correlation coefficient.

Range of r            Degree of Relationship
−1.0 ≤ r ≤ −0.7       A strong negative linear relationship
−0.7 ≤ r ≤ −0.3       A distinct negative linear relationship
−0.3 ≤ r ≤ −0.1       A weak negative linear relationship
−0.1 ≤ r ≤ +0.1       Not a linear relationship
+0.1 ≤ r ≤ +0.3       A weak positive linear relationship
+0.3 ≤ r ≤ +0.7       A distinct positive linear relationship
+0.7 ≤ r ≤ +1.0       A strong positive linear relationship
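The correlation coefficients reported in Table 3 were interpreted using these categories. The sketch below shows one way to compute such a coefficient with SciPy and map it onto the labels of Table 2; the describe_r helper and the placeholder trajectories are our own illustration, not part of the authors' analysis.

```python
import numpy as np
from scipy.stats import pearsonr

def describe_r(r: float) -> str:
    """Map a correlation coefficient onto the qualitative labels of Table 2."""
    if abs(r) >= 0.7:
        strength = "strong"
    elif abs(r) >= 0.3:
        strength = "distinct"
    elif abs(r) >= 0.1:
        strength = "weak"
    else:
        return "not a linear relationship"
    sign = "positive" if r > 0 else "negative"
    return f"{strength} {sign} linear relationship"

# Placeholder trajectories: actual vs. estimated joint angles
actual = np.random.randn(500)
estimated = 0.9 * actual + 0.1 * np.random.randn(500)
r, _ = pearsonr(actual, estimated)
print(f"r = {r:.2f}: {describe_r(r)}")
```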
Table 3. Pearson's correlation coefficients "r" for model M2 with 6 inputs.

                Direct Estimation     1 Synergy          2 Synergies
Subject No.     FE      PS            FE      PS         FE      PS
1               0.91    0.75          0.91    0.78       0.94    0.75
2               0.95    0.89          0.92    0.89       0.96    0.90
3               0.96    0.60          0.89    0.86       0.92    0.70
4               0.96    0.71          0.91    0.70       0.97    0.71
5               0.98    0.90          0.98    0.89       0.98    0.92
6               0.97    0.83          0.93    0.76       0.95    0.84
7               0.90    0.86          0.89    0.86       0.92    0.89
8               0.97    0.38          0.93    0.67       0.96    0.51
9               0.94    0.63          0.96    0.51       0.94    0.57
10              0.91    0.31          0.92    0.29       0.88    0.34
11              0.93    0.56          0.96    0.28       0.94    0.45
12              0.98    0.80          0.98    0.83       0.97    0.81
13              0.96    0.35          0.92    0.30       0.94    0.42
14              0.96    0.70          0.92    0.57       0.95    0.65

Note: FE = flexion–extension, PS = pronation–supination. Highlighted cells mark weak correlation.
Table 4. Descriptive statistics of the RMSE values obtained for the different learning methodologies using model M2 scenarios only.

Learning Methodology    Count   Sum       Average   Variance
Direct Estimation       14      86.756    6.197     4.672
1 Synergy               14      107.930   7.709     7.335
2 Synergies             14      108.972   7.784     7.748
Table 5. ANOVA summary table for the results using model M2 scenarios only.

Source of Variation      SS        df    MS       F Statistic   p-Value   F Critical
Between Methodologies    22.452    2     11.226   1.705         0.195     3.238
Within Methodology       256.826   39    6.585
Total                    279.278   41

Note: SS = Sum of Squares, df = Degrees of Freedom, MS = Mean Square.
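Tables 4 and 5 summarize a one-way ANOVA over the per-subject RMSE values of the three learning methodologies. As a rough illustration of how such a test can be run, the snippet below applies SciPy's f_oneway to placeholder RMSE arrays; the group means and spreads only loosely mirror Table 4, so the printed numbers will not reproduce Table 5 exactly.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Per-subject RMSE values (14 subjects) for each methodology; placeholder numbers
rmse_direct  = rng.normal(6.2, 2.2, size=14)
rmse_one_syn = rng.normal(7.7, 2.7, size=14)
rmse_two_syn = rng.normal(7.8, 2.8, size=14)

f_stat, p_value = f_oneway(rmse_direct, rmse_one_syn, rmse_two_syn)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
# A p-value above 0.05 indicates no significant difference between methodologies,
# which is the within-subject outcome reported in Table 5 (p = 0.195).
```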
Table 6. Descriptive statistics of the RMSE values obtained for the different learning methodologies during cross-subject evaluation using model M2 with 6 inputs.

Learning Methodology    Count   Sum        Average   Variance
Direct Estimation       196     2157.122   11.006    42.696
1 Synergy               196     2025.910   10.336    19.789
2 Synergies             196     1895.985   9.673     18.365

Note: Highlighted cells mark the least values of average and variance.
Table 7. ANOVA summary table for the cross-subject evaluation using model M2 with 6 inputs.

Source of Variation      SS           df    MS       F Statistic   p-Value   F Critical
Between Methodologies    173.962      2     86.981   3.227         0.040     3.011
Within Methodology       15,765.779   585   26.950
Total                    15,939.741   587

Note: SS = Sum of Squares, df = Degrees of Freedom, MS = Mean Square.
Table 8. Post hoc Tukey HSD test results for the cross-subject evaluation using model M2 with 6 inputs.

Group Pair                            Q Statistic   p-Value   Q Critical
Direct Estimation vs. 1 Synergy       1.805         0.411
Direct Estimation vs. 2 Synergies     3.593         0.030     3.323
1 Synergy vs. 2 Synergies             1.788         0.418
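Because the cross-subject ANOVA in Table 7 is significant, Table 8 reports a post hoc Tukey HSD comparison between the methodology pairs. The sketch below shows one common way to run such a comparison with statsmodels' pairwise_tukeyhsd on placeholder data; the specific tooling and the generated values are assumptions for illustration only.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# 196 cross-subject RMSE values per methodology; placeholder numbers only
rmse = np.concatenate([
    rng.normal(11.0, 6.5, 196),   # direct estimation
    rng.normal(10.3, 4.4, 196),   # 1 synergy
    rng.normal(9.7, 4.3, 196),    # 2 synergies
])
groups = ["direct"] * 196 + ["1 synergy"] * 196 + ["2 synergies"] * 196

result = pairwise_tukeyhsd(endog=rmse, groups=groups, alpha=0.05)
print(result.summary())   # pairwise mean differences, adjusted p-values, reject flags
```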
Table 9. Descriptive statistics of the RMSE values obtained for the different learning methodologies during cross-subject evaluation using model M2 with 2 inputs.

Learning Methodology    Count   Sum        Average   Variance
Direct Estimation       196     2792.539   14.248    41.462
1 Synergy               196     2646.628   13.503    24.561
2 Synergies             196     2440.454   12.451    23.557

Note: Highlighted cells mark the least values of average and variance.
Table 10. ANOVA summary table for the cross-subject evaluation using model M2 with 2 inputs.

Source of Variation      SS           df    MS        F Statistic   p-Value   F Critical
Between Methodologies    319.323      2     159.661   5.347         0.004     3.011
Within Methodology       17,468.049   585   29.860
Total                    17,787.372   587

Note: SS = Sum of Squares, df = Degrees of Freedom, MS = Mean Square.
Table 11. Post hoc Tukey HSD test results for the cross-subject evaluation using model M2 with 2 inputs.

Group Pair                            Q Statistic   p-Value   Q Critical
Direct Estimation vs. 1 Synergy       1.907         0.370
Direct Estimation vs. 2 Synergies     4.602         0.003     3.323
1 Synergy vs. 2 Synergies             2.695         0.138
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
