Review

A Review of Online Classification Performance in Motor Imagery-Based Brain–Computer Interfaces for Stroke Neurorehabilitation

by Athanasios Vavoulis, Patricia Figueiredo and Athanasios Vourvopoulos *

Institute for Systems and Robotics—Lisboa, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisbon, Portugal

* Author to whom correspondence should be addressed.
Signals 2023, 4(1), 73-86; https://doi.org/10.3390/signals4010004
Submission received: 25 November 2022 / Revised: 5 January 2023 / Accepted: 17 January 2023 / Published: 20 January 2023
(This article belongs to the Special Issue Advancing Signal Processing and Analytics of EEG Signals)

Abstract: Motor imagery (MI)-based brain–computer interfaces (BCIs) have shown increased potential for the rehabilitation of stroke patients; nonetheless, their implementation in clinical practice has been restricted due to their low classification accuracy. To date, although a lot of research has been carried out in benchmarking and highlighting the most valuable classification algorithms in BCI configurations, most of it relies on offline data rather than on real BCI performance during closed-loop (online) sessions. Since rehabilitation training relies on the availability of an accurate feedback system, we surveyed articles on current and past EEG-based BCI frameworks that report the online classification of the movement of the two upper limbs in both healthy volunteers and stroke patients. We found that recently developed deep-learning methods do not outperform traditional machine-learning algorithms. In addition, patients and healthy subjects exhibit similar classification accuracy in current BCI configurations. Lastly, in terms of neurofeedback modality, functional electrical stimulation (FES) yielded the best performance compared to non-FES systems.

1. Introduction

Worldwide, stroke is a leading cause of adult long-term disability [1]. Growing evidence supports that chronic stroke patients maintain brain plasticity, meaning that there is still potential for additional recovery of impaired limbs [2]. Consequently, various motor rehabilitation techniques have been developed, including motor training [3], mirror therapy [4], motor imagery (MI) [5] and action observation (AO) [6].
Presently, there is increasing evidence that motor imagery practice combined with brain–computer interfaces (MI-BCIs) in a closed loop could promote long-lasting improvements in motor function in chronic stroke patients [7,8,9]. Specifically, BCIs can act as an alternative non-muscular communication channel between the user’s brain and a computer system for motor rehabilitation. MI is the cognitive process of imagining the movement of a body part without actually moving it [10]. Through MI, stroke rehabilitation can effectively promote structural and functional reorganization [11]. This is achieved through the repeated recruitment of motor-neuron circuits, which repair connections between damaged neurons through neural plasticity and eventually improve motor dysfunction [12].
The neurophysiological mechanisms underlying MI practice are reflected in sensorimotor rhythms (SMR), recorded through electroencephalography (EEG) [13]. These rhythmic oscillations reflect organized neural activity that is modulated by MI and recorded over the sensorimotor cortex as decreases in the Alpha (8–12 Hz, also known as the Mu rhythm) and Beta (13–26 Hz) frequency bands. When activity in a frequency band increases in response to stimuli, it is called event-related synchronization (ERS), while a decrease is called event-related desynchronization (ERD) [14].
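For illustration, ERD/ERS is often quantified as the percentage change in band power during the MI task relative to a pre-task baseline. The following is a minimal sketch of this computation in Python (NumPy/SciPy); the sampling rate, window lengths and variable names are illustrative assumptions and are not taken from any particular study.

```python
import numpy as np
from scipy.signal import welch

fs = 250  # sampling rate in Hz (illustrative assumption)

def band_power(segment, fs, band):
    """Average power of a 1-D EEG segment within a frequency band (Welch PSD)."""
    freqs, psd = welch(segment, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def erd_percent(baseline, task, fs, band=(8, 12)):
    """ERD/ERS as the percent change of band power relative to baseline.

    Negative values indicate desynchronization (ERD), positive values
    indicate synchronization (ERS).
    """
    p_ref = band_power(baseline, fs, band)
    p_task = band_power(task, fs, band)
    return 100.0 * (p_task - p_ref) / p_ref

# Example with synthetic data: 2 s of baseline and 2 s of task EEG from one channel.
rng = np.random.default_rng(0)
baseline = rng.standard_normal(2 * fs)
task = rng.standard_normal(2 * fs)
print(f"Mu-band ERD/ERS: {erd_percent(baseline, task, fs):.1f}%")
```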
The translation of the brain signals into the output of the BCI, provided as neurofeedback, is accomplished through the distinct features elicited by the MI of different limbs. Traditionally, the classification involves a machine-learning algorithm [15]. However, EEG signals pose processing challenges, since they exhibit a low signal-to-noise ratio (SNR) and are prone to signal artifacts and external noise. Therefore, pre-processing is mandatory in order to remove those signals that are unrelated to the brain (e.g., heartbeats, eye blinking, tongue and muscle movements, electronic equipment and environmental noise). This, combined with the high dimensionality of EEG signals, makes the interpretation and classification of brain signals a difficult task, with many of the approaches utilized suffering from poor classification accuracy. In addition, the training of the subject and the BCI system is usually prolonged, as most of the classification methods require a significant amount of data to predict the MI label accurately [16]. Moreover, motivation and attention have an important influence on the emergence of distinguishable features for various MI tasks and on the stability of EEG patterns. A monotonous MI-based BCI practice frequently affects the engagement and concentration levels of the subjects, leading to the poor translation of the brain signals by the computer interface and a decline in therapy effectiveness [17]. Consistent with this viewpoint, several studies have reported that the feedback modalities of BCI systems create a more immersive and motivating environment, which increases the embodiment of the user and, thus, provides more robust EEG features [18]. Therefore, machine-learning algorithms should adapt to non-stationary brain signals, while human learning approaches should assist the user in producing more consistent EEG patterns.
The feedback modalities used for BCI motor rehabilitation include: non-embodied simple two-dimensional graphics on a screen [19], embodied avatar representation of the patient on a screen or in virtual reality (VR) [20], functional electrical stimulation (FES) [21], or robotic exoskeletal movement [22]. In VR, the subject can perceive the imagined motor action, which could potentially activate mirror neurons that are also engaged by mirror therapy for stroke rehabilitation [23]. The decoded brain oscillations are used to control a VR avatar which reproduces the imagined actions, most commonly left- and right-hand movements. This “closed-loop” feedback is presented in real time; therefore, the classification algorithm used should be fast, as well as accurate. Moreover, sensory feedback has been suggested not only to improve performance [24], but also to induce neuroplastic changes in post-stroke patients [9]. This is achieved mainly by involving a greater part of the sensorimotor system (e.g., visual, auditory, haptic and tactile feedback) and producing more distinguishable features [25].
In the process of identifying classification methods better suited to the nature of EEG signals, many researchers have addressed the difficulties involved in classifying MI-EEG signals by employing deep-learning methods. Deep learning was selected due to its successful development in different fields, such as computer vision and speech analysis. Unlike conventional machine-learning approaches, deep learning can automatically identify individualized features from raw MI-EEG data using deep architectures, while eliminating the need for pre-processing and time-consuming feature extraction, since it can perform feature engineering by itself [26]. However, its disadvantages are also evident, since a large amount of data is required; in addition, the large number of parameters which must be learned during training can increase the training time compared to other methods [27].
Another vital limitation of EEG-based BCIs is variability across subjects. It has been found that the discriminative information of EEG signals varies with the basic characteristics (e.g., demographics, prior BCI experience) and psychological states (e.g., motivation, confidence and frustration) of the BCI users [28]. This phenomenon, combined with the need to minimize the training and calibration time of the BCI system, has led to the use of transfer learning [29,30]. Transfer learning describes the procedure of using knowledge gained from solving one problem and applying it to a different but related problem. Transfer functions are employed so that the classification models can be adapted from the source domain to different target domains. However, the effectiveness of transfer learning strongly depends on how closely related the two subject domains are [31].
Nonetheless, we cannot neglect the fact that stroke patients suffer from individualized lesions, which might evoke different EEG patterns. Therefore, current efforts might be directed towards adaptive models that tailor the system to individual use [32]. In other words, the difficulty in reducing individual differences lies in considering individual attributes comprehensively and then selecting those attributes that are effective for the system.
Although a lot of research has been devoted to identifying and comparing the best algorithms in offline BCI systems with healthy subjects, detailed information on which classifiers lead to the most accurate prediction of motor imagery is still missing in online BCI systems with post-stroke patients. It is still unclear whether the cortical lesions of post-stroke patients evoke discriminable EEG patterns for different body parts. Our perspective is that if both traditional and novel classification algorithms yield insufficient accuracy, then the features generated by the EEG MI signals of post-stroke patients might not be distinguishable. Overall, it is widely agreed that recovery may be promoted through contingent activation of efferent and afferent pathways. A good level of BCI accuracy is, thus, a prerequisite (otherwise, there is no adequate contingency between the BCI command “efference” and the feedback). Thus, the main goal of this survey is to identify the current MI classification methods and highlight their limitations in BCIs for neurorehabilitation.

2. Motor Imagery Classification Pipelines

In this section, we provide a brief background on all the steps necessary to extract typical MI-related EEG features, as well as on the different classification algorithms.

2.1. Pre-Processing

Initially, the task of filtering is to prepare the recorded signals for processing by enhancing the SNR. In most studies, simultaneous electro-oculography (EOG), electro-myography (EMG) and electro-cardiography (ECG) recordings are used to exclude irrelevant signals. In addition, band-pass filtering within specific frequency bands is employed. However, this processing is challenging, since each subject expresses motor imagery ERDs and ERSs in varying frequency patterns [33]. Other filtering methods involve principal component analysis (PCA) and independent component analysis (ICA), which separate artifacts from the EEG signals either by excluding correlated activity or by transforming EEG signals into temporally and spatially independent components, respectively [34]. A simpler approach is to subtract the common activity from the position of interest with a common average reference (CAR) [35]. Lastly, spatial filtering with Laplacian filters is robust against artifacts generated at regions not covered by the electrode cap, and it solves the “electrode reference problem” [36].
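As a minimal sketch of two of these steps, the snippet below applies a zero-phase band-pass filter followed by common average referencing (CAR) to a multichannel recording; the cut-off frequencies, sampling rate and array shapes are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs, low=8.0, high=30.0, order=4):
    """Zero-phase band-pass filter applied along the time axis.

    eeg: array of shape (n_channels, n_samples).
    """
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, eeg, axis=-1)

def common_average_reference(eeg):
    """Subtract the instantaneous mean across channels from every channel."""
    return eeg - eeg.mean(axis=0, keepdims=True)

# Example: 16 channels, 10 s of synthetic data at 250 Hz.
fs = 250
eeg = np.random.default_rng(1).standard_normal((16, 10 * fs))
eeg_clean = common_average_reference(bandpass(eeg, fs))
```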

2.2. Feature Extraction and Selection

EEG-based BCIs typically collect data from multiple electrodes placed on the scalp, and each electrode produces a separate time series of measurements. This means that an EEG data set can have hundreds or thousands of dimensions, depending on the number of electrodes used. This gives rise to a common challenge referred to as “the curse of dimensionality” [15]. However, there are techniques that can mitigate its impact, such as dimensionality reduction through feature extraction. The extracted EEG features capture signal characteristics which can be used as a basis for differentiating between task-specific brain states.
During feature extraction, EEG characteristics are extracted from the signals in the time, frequency and/or spatial domains. The most common type of EEG feature used in MI is the band power. Band-power (or rhythm) features represent the power of EEG signals within a given frequency band, averaged over a time window and over specific scalp locations [37]. These bands are usually divided into five main frequency ranges: Delta (0.5–4 Hz), Theta (4–8 Hz), Alpha (8–12 Hz), Beta (12–30 Hz), and Gamma (30–100 Hz) [38]. The simplest frequency-domain feature-extraction method is the fast Fourier transform (FFT). However, FFT does not take time information into account. An alternative approach is the short-time Fourier transform (STFT), which divides the signal into multiple overlapping frames [36]. Nonetheless, the spectra obtained from FFT over short epochs still have poor resolution when compared to an autoregressive (AR) model [36]. However, the validity of power-spectra estimates depends on the selection of a proper model order. The adaptive autoregressive (AAR) model addresses this by estimating time-varying model parameters [39].
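The snippet below sketches how the band-power features described above might be assembled into a feature vector, computing the average log power in each of the five canonical bands for every channel of a single trial; the band limits follow the values quoted above, while the sampling rate and array shapes are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

BANDS = {  # canonical band limits quoted above (Hz)
    "delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
    "beta": (12, 30), "gamma": (30, 100),
}

def band_power_features(trial, fs):
    """Return a (n_channels * n_bands,) vector of log band powers.

    trial: array of shape (n_channels, n_samples) for a single MI trial.
    """
    freqs, psd = welch(trial, fs=fs, nperseg=fs, axis=-1)
    features = []
    for low, high in BANDS.values():
        mask = (freqs >= low) & (freqs <= high)
        features.append(np.log(psd[:, mask].mean(axis=-1)))  # one value per channel
    return np.concatenate(features)

fs = 250
trial = np.random.default_rng(2).standard_normal((16, 4 * fs))  # 4 s trial, 16 channels
x = band_power_features(trial, fs)  # shape: (16 channels * 5 bands,) = (80,)
```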
Nevertheless, FFT and AR provide spectral characteristics of the EEG and are not very robust to the non-stationarity of EEG signals. In contrast, the wavelet transform (WT) uses varying window sizes, such that high frequencies are evaluated over shorter windows and low frequencies over longer windows [40]. Thus, WT can achieve better time resolution at high frequencies compared to STFT [37].
However, we cannot ignore that, in MI BCI, multichannel EEG recordings are used to discriminate motor imagery patterns. Therefore, feature-extraction and selection methods are important for multichannel EEG in order to further reduce the feature space, improve accuracy [41], and avoid the loss of crucial information in MI tasks [42].
For example, the coefficient of determination [43], or r², can provide a measure of the extent to which a particular EEG feature (i.e., power at a specific frequency and electrode location) is influenced by the subject’s mental task (e.g., rest vs. hand movement) [44]. Similarly, the Fisher score is commonly used to determine how strongly a feature (e.g., band power) is correlated with the labels (i.e., motor vs. rest), and in which channel [45]. Moreover, spatial filters have been widely used to extract spatial information from features. The most common method is the common spatial pattern (CSP) filter. CSP generates spatial filters that simultaneously minimize the variance in one class and maximize the variance in the other class [46]. CSP performance depends on the operational frequency band of the EEG [47]. Therefore, several approaches have been proposed to fine-tune the subject-specific frequency range for the CSP algorithm, such as the filter bank common spatial pattern (FBCSP) [33]. Further, within the last few years, several papers in the literature have investigated the use of automatic channel-selection methods, including Pearson correlation [48], regularized common spatial pattern with aggregation (RCSPA) [49], and the use of the standard deviation of wavelet coefficients (stdWC) across channels [50].
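To make the CSP idea concrete, the following sketch implements a basic two-class CSP with NumPy: it solves a generalized eigenvalue problem on the class-wise covariance matrices and keeps the spatial filters at both ends of the eigenvalue spectrum. It is a simplified illustration (no regularization or frequency-band optimization such as FBCSP), and the data shapes are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def fit_csp(epochs, labels, n_filters=4):
    """Two-class CSP; epochs has shape (n_trials, n_channels, n_samples)."""
    covs = []
    for cls in np.unique(labels):  # expects exactly two classes
        x = epochs[labels == cls]
        c = np.mean([t @ t.T / np.trace(t @ t.T) for t in x], axis=0)
        covs.append(c)
    # Generalized eigenvalue problem: C1 w = lambda (C1 + C2) w
    eigvals, eigvecs = eigh(covs[0], covs[0] + covs[1])
    order = np.argsort(eigvals)  # small eigenvalues favor one class, large the other
    picks = np.concatenate([order[: n_filters // 2], order[-(n_filters // 2):]])
    return eigvecs[:, picks].T  # spatial filters, shape (n_filters, n_channels)

def csp_features(epochs, filters):
    """Log of normalized variance of the spatially filtered signals (standard CSP feature)."""
    projected = np.einsum("fc,ncs->nfs", filters, epochs)
    var = projected.var(axis=-1)
    return np.log(var / var.sum(axis=-1, keepdims=True))

# Example with synthetic data: 40 trials, 16 channels, 2 s at 250 Hz.
rng = np.random.default_rng(3)
epochs = rng.standard_normal((40, 16, 500))
labels = np.repeat([0, 1], 20)
W = fit_csp(epochs, labels)
X = csp_features(epochs, W)  # shape (40, 4), ready for a classifier such as LDA
```

The resulting log-variance features are exactly the kind of input typically fed to the classifiers described in the next subsection.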

2.3. Classification

After feature extraction, the signals are classified into various classes of movement imagination (e.g., left or right MI) through the use of classifiers. A classifier is a model used to predict the class to which a feature vector belongs. There are two main types of classifiers: supervised and unsupervised [51]. Supervised classifiers are trained on labeled data, where the correct class for each feature vector is known. Examples of supervised classifiers include logistic regression, decision trees, and support vector machines (SVM). Unsupervised classifiers, on the other hand, are trained on data that are not labeled with the correct class. Examples of unsupervised classifiers include k-means clustering and Gaussian mixture models. In general, supervised classifiers tend to perform better than unsupervised classifiers because they are trained on more informative data [52]; hence, they are the most used classifier type in BCIs.
Most of the reviews in the literature group the classification methods into linear classifiers, non-linear classifiers, neural networks and deep neural networks [13,15,37]. The two main types of linear classifiers are linear discriminant analysis (LDA) [53,54,55,56,57,58] and the support vector machine (SVM) [21,59,60]. LDA has low computational requirements; however, it might provide poor results on complex non-linear EEG data [37]. SVM overcomes this obstacle by using non-linear kernel functions to map the data onto a higher-dimensional space [36]. Non-linear classifiers, on the other hand, are not as widespread and popular as linear classifiers and neural networks [61,62]. Although k-nearest neighbor (k-NN) [63] and Bayesian classifiers [64] are simple and easy to implement, they are sensitive to irrelevant and redundant features [36]. Finally, fuzzy classification is another approach used for EEG classification, since EEG classification is a decision-making problem suited to fuzzy logic [40].
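As a concrete illustration of the two linear classifiers mentioned above, the sketch below compares LDA and an SVM on placeholder feature vectors (e.g., CSP log-variances) using scikit-learn and cross-validation; the data are synthetic and serve only to show the workflow.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix (e.g., CSP log-variance features) and labels.
rng = np.random.default_rng(4)
X = rng.standard_normal((80, 4))      # 80 trials, 4 features
y = np.repeat([0, 1], 40)             # left- vs right-hand MI labels

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM (RBF kernel)", SVC(kernel="rbf", C=1.0))]:
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```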
An important upside of artificial neural networks (ANN) is that they take into account the dynamic nature of the EEG signal. They are assemblies of artificial neurons, arranged in layers, which can be used to approximate any nonlinear decision boundary. The multi-layer perceptron (MLP) and the Gaussian classifier are the neural-network architectures most used in BCI research [65,66]. In contrast to traditional neural networks, where weights have to be chosen carefully, deep-learning approaches, such as convolutional neural networks (CNN) [67,68] and recurrent neural networks (RNN) [69,70], have high descriptive power.
Further, deep-learning algorithms, particularly CNNs, have been successful at performing feature extraction on EEG data. This allows the network to automatically learn features from the raw data that are relevant for a given task, such as classification [71]. However, it should be noted that CNNs were adopted in EEG signal processing after first being established as a tool in image processing. Therefore, when using CNNs for the classification of MI EEG, pre-processing of the input data might be needed. Either raw data is fed into the CNN, and the first layers of the network are devoted to extracting spatial and temporal information, or a time-frequency domain image is obtained from the data using STFT or WT [72,73].
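The sketch below illustrates the first of these two routes in PyTorch: raw epochs are fed to a small network whose first layer convolves along time and whose second layer convolves across electrodes, mirroring the temporal/spatial extraction described above. The layer sizes and input dimensions are illustrative assumptions rather than a reproduction of any surveyed model.

```python
import torch
import torch.nn as nn

class ShallowMICNN(nn.Module):
    """Minimal temporal-then-spatial CNN for two-class MI from raw EEG epochs."""

    def __init__(self, n_channels=22, n_samples=500, n_classes=2):
        super().__init__()
        self.temporal = nn.Conv2d(1, 20, kernel_size=(1, 25))           # filters along time
        self.spatial = nn.Conv2d(20, 20, kernel_size=(n_channels, 1))   # filters across electrodes
        self.pool = nn.AvgPool2d(kernel_size=(1, 50), stride=(1, 10))
        with torch.no_grad():                                            # infer flattened feature size
            n_features = self._features(torch.zeros(1, 1, n_channels, n_samples)).shape[1]
        self.classifier = nn.Linear(n_features, n_classes)

    def _features(self, x):
        x = torch.relu(self.spatial(self.temporal(x)))
        return self.pool(x).flatten(start_dim=1)

    def forward(self, x):  # x: (batch, 1, n_channels, n_samples)
        return self.classifier(self._features(x))

# Example: a batch of 8 raw epochs (22 channels, 2 s at 250 Hz).
model = ShallowMICNN()
logits = model(torch.randn(8, 1, 22, 500))  # shape (8, 2), one score per class
```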
Another way to use deep learning for feature extraction on EEG data is through RNNs, such as long short-term memory (LSTM) networks [74]. LSTMs are able to capture temporal dependencies in the data by processing the data sequentially in time and using a hidden state to “remember” information from previous time points.
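A comparable sketch for the recurrent route feeds the EEG time series into an LSTM, one channel vector per time step, and classifies from the final hidden state; again, all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MILSTM(nn.Module):
    """Minimal LSTM classifier: each time step is the vector of channel values."""

    def __init__(self, n_channels=22, hidden_size=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, n_classes)

    def forward(self, x):  # x: (batch, n_samples, n_channels)
        _, (h_n, _) = self.lstm(x)       # h_n: (1, batch, hidden_size)
        return self.classifier(h_n[-1])  # classify from the last hidden state

logits = MILSTM()(torch.randn(8, 500, 22))  # batch of 8 epochs -> shape (8, 2)
```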
Finally, additional deep-learning approaches include the use of restricted Boltzmann machines (RBMs), deep belief network (DBN), generative adversarial networks (GAN), and the variational autoencoder (VAE) [26].

3. Methods

In this section, we describe our search criteria for this review, including the statistical tests used for the comparisons. The scientific database used for this review was the Scopus database (https://www.scopus.com, accessed on 25 November 2022) (Elsevier, Amsterdam, The Netherlands). Scopus covers over 23,000 titles from more than 5000 international publishers, making it one of the largest abstract and citation databases in the world. Further, Scopus has a rigorous selection process for inclusion in the database, which helps ensure the quality and relevance of the content.

3.1. Searching Criteria

The search string used in Scopus had the following format: TITLE-ABS-KEY (“brain computer interface” AND “motor imagery” AND stroke AND classification). The initial search yielded 138 results; however, we applied a set of inclusion criteria to further narrow it down. Specifically, we selected a sub-set of papers which reported online accuracies (with new data from actual participants), originated from EEG-based MI-BCIs, and included two-class classification of the upper limbs (left- and right-hand MI); this further reduced the sample to 18 papers (Table 1). The exclusion criteria were: offline BCI studies; data not recorded by EEG (e.g., fNIRS); multi-class classification (three and above); two-class studies using data coming from the MI of limbs other than the two hands (e.g., feet, tongue, etc.); two-class studies classifying hand-movement imagination versus resting state; and studies using data from other BCI types, such as steady-state visual-evoked potentials or P300.
The features gathered from each paper were: (1) author name; (2) date of publication; (3) classifier; (4) type of classifier (e.g., traditional/deep learning); (5) classification accuracy; (6) number of electrodes used; (7) number of subjects; (8) healthy subjects or patients; (9) BCI protocol (number of sessions and trials); (10) feedback modality (e.g., screen, robot, etc.); and (11) average age of participants (Table 1).

3.2. Statistical Tests

Firstly, for assessing the normality of the data, we performed a Kolmogorov–Smirnov test. Since the data distributions were not normal, and also due to the small sample size, non-parametric tests were used. Specifically, for comparing the classification performance of patients and healthy subjects, or of traditional machine-learning methods and deep-learning techniques, we used the Wilcoxon rank sum test. Next, for identifying the influence of the various types of neurofeedback, a Kruskal–Wallis test followed by Dunn’s post-hoc test was employed. Finally, for evaluating the correlation of a parameter with the classification performance of the BCI configuration, we used Pearson’s correlation coefficient (r). For all statistical comparisons, the significance level was set to 5% (p < 0.05) and results were computed using MATLAB R2016b.
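For reference, the non-parametric tests listed above are also available in SciPy; the sketch below shows the corresponding calls on placeholder accuracy values (the data are purely illustrative, and Dunn’s post-hoc test, which is not part of SciPy, is omitted).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
acc_patients = rng.uniform(0.6, 0.9, size=10)   # placeholder accuracies
acc_healthy = rng.uniform(0.6, 0.9, size=8)
acc_by_feedback = [rng.uniform(0.6, 0.9, size=n) for n in (5, 6, 7)]
ages = rng.uniform(25, 70, size=18)
accs = rng.uniform(0.6, 0.9, size=18)

# Normality check: Kolmogorov-Smirnov test against a fitted normal distribution.
ks_stat, ks_p = stats.kstest(acc_patients, "norm",
                             args=(acc_patients.mean(), acc_patients.std()))

# Two-group comparison: Wilcoxon rank sum test.
z_stat, rs_p = stats.ranksums(acc_patients, acc_healthy)

# More than two groups (e.g., feedback modalities): Kruskal-Wallis test.
h_stat, kw_p = stats.kruskal(*acc_by_feedback)

# Correlation of a continuous parameter (e.g., age) with accuracy.
r, corr_p = stats.pearsonr(ages, accs)
```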

4. Results

In this section, we present the classification accuracy of different ML algorithms and of different feature-extraction methods. Moreover, we investigate the influence on BCI performance of non-ML factors, such as user demographics (e.g., age), the user type (e.g., patient vs. healthy), and the experimental setup (e.g., number of trials, electrodes, and subjects).

4.1. Comparison of the Algorithms Used for Classifying Motor Imagery

Here, we focus on identifying the features and classification algorithms that decode the imagination of upper-limb movements with the best performance.
According to the various studies surveyed, the classification performance ranges from 62% to 95%, with an average of 77% (Table 1). The majority of papers use CSP for extracting the MI features that are afterwards classified, in most cases by LDA [16,22,77,78,80,83]. The rest of the papers used either traditional machine-learning methods, such as SVM [21,79], logistic regression (LR) [85], naïve Bayes (NB) [86,87], quadratic discriminant analysis (QDA) [76], and fuzzy logic systems (T2FLS) [19,75], or deep-learning techniques, such as convolutional neural networks (CNN) [81,82,84] and autoencoders [19]. Further, Irimia et al., 2017 [78] and Achanccaray et al., 2021 [21] reported the best classification accuracies, at 95% and 93%, utilizing LDA and SVM, respectively. In both cases, the feature-extraction method used was CSP. As illustrated in Figure 1a, there is no statistically significant correlation between classification performance and time in years (r = 0.3, p = 0.23).
Apart from the various classification methods utilized in the BCIs, it is important to investigate the features that are extracted from the EEG signal and the neurofeedback that is provided to the subjects. In an attempt to discover whether there is a particular BCI configuration that gives rise to the best performance, we present the different feature-extraction methods and types of neurofeedback with respect to the classification accuracy (Figure 1b). Solid blue lines indicate the BCI approaches that exceed the average classification accuracy (77%), while dashed lines indicate the BCI configurations whose classification accuracy was below the average.
In the literature, we found various feature-extraction techniques; among them, the spatial filter CSP was the most used and provided the most distinguishable features. The other methods select features either from the frequency domain, using the FT and power spectral density (PSD), or from the time-frequency domain, using the WT and AR models. Lastly, Benzy et al., 2020 [86] used the phase locking value (PLV) to discriminate the two upper-limb MI classes, while three other papers employed a combination of spatial and time-frequency domain feature-extraction methods [16,76,77]. Although the results are not robust across feature-extraction and classification techniques, the neurofeedback modality provided to the subjects seems to have an important impact on the classification performance (Figure 1b,c). We show that the quality of the neurofeedback during the BCI experiment has a vital role, since the BCI approaches using VR, a robotic arm and/or FES have higher classification accuracy than BCI systems that employ only feedback via a computer screen. This result agrees with previous surveys which argue that neurofeedback evokes more distinguishable features in the EEG signal [88,89]. Nonetheless, no statistically significant differences were found between the groups (χ²(4) = 9.76, p = 0.044; Figure 1c). Finally, no statistically significant differences were found between deep-learning and traditional methods (Z = 0.2127, p = 0.8315; Figure 1d). Overall, we can observe that LDA performed satisfactorily on average, whenever it was employed.

4.2. Influence of User’s Characteristics to the BCI Performance

BCI systems can also be affected by human factors; thus, it is hard to draw solid conclusions based only upon limitations at the machine level.
Thus far, we have examined the influence of different types of classifiers and feature-extraction methods, and the impact of providing different feedback modalities. Nonetheless, since our target is the applicability of BCIs in neurorehabilitation, we also accounted for the user type and, specifically, compared the performance between healthy subjects and patients. Previous research has revealed that the EEG of patients is considerably different from signals recorded in healthy subjects [90]. In fact, it is not yet clear whether the imagination of upper-limb movements in post-stroke patients yields distinguishable features. Hence, our survey included studies examining the classification accuracy in both patients and healthy participants. Contrary to our expectations, patients and healthy subjects performed with similar accuracy (Figure 2a; Z = 0.0890, p = 0.9291).
In terms of the user’s characteristics, we attempted to determine the effect of the subject’s age on BCI performance. Interestingly, the correlation between the user’s age and BCI performance was almost zero (Figure 2b; r = 0.03, p = 0.91). These results are in line with the prior work of Blanco-Mora et al., 2022, where no statistically significant correlation was found between age and classifier performance [91]. However, we cannot neglect the negative trend between the classification performance and the age of the subjects in each experiment.

4.3. Correlation of Classification Accuracy with Various Parameters of BCI Framework

Up to this point, our study has been missing an evaluation of the BCI experimental protocol employed in each study. Therefore, we collected various parameters, such as the number of electrodes, the total number of trials (number of sessions × number of trials per session) and the number of subjects that participated in each study (Table 1). At this stage, it is important to provide some information about the presentation of these parameters. The number of electrodes reported in this review refers to the electrodes used in the feature-extraction methods and not in the recording process. Moreover, the total number of trials includes both training and testing sessions.
Concerning the number of electrodes, no significant relationship was found with classification accuracy (Figure 3a; r = 0.05, p = 0.84). According to the literature, there is contradictory evidence about the impact of the number of channels used on the classification performance. Although Meng et al., 2018 [92] found that subjects’ average online BCI performance using a large EEG montage is not significantly better than with a smaller montage, Farquhar et al., 2013 [93] reported an effect when varying the number of electrodes used as features for the analysis.
The number of trials, and by extension the trials used for training the classifier, showed the highest correlation with BCI performance (Figure 3b). Although the correlation between these two parameters is not statistically significant (r = 0.38, p = 0.16), we have to acknowledge the positive trend between the number of trials and the classification accuracy. In addition, it is important to mention that studies including post-stroke patients had more sessions, due to the longitudinal neurorehabilitation protocol.

5. Discussion

Many of the classification methods surveyed in reviews are evaluated using offline BCI data [13,15,27,30,31,32,37,94]. However, an actual BCI application is fundamentally online. Based on the papers surveyed in this manuscript, we attempted to identify some guidelines on the use of various types of classification and feature-extraction methods, neurofeedback modalities and BCI experimental parameters.
One of the major issues in MI-BCI research is to define the direction of future studies regarding the classification methods. Since the BCI pipeline is evolving rapidly and novel approaches such as deep learning and transfer learning are increasingly used for discriminating EEG signals, one of our main concerns was whether traditional and new approaches exhibit significant differences. According to our research, machine-learning methods are not inferior to deep-learning techniques. If we consider deep learning as an approach which is able to identify individualized separation rules for each subject, it would be logical to focus future work on CNNs and recurrent neural networks (RNNs). Nonetheless, the application of transfer learning in future studies seems more appealing, since traditional and novel classifiers perform with similar accuracy. In addition, patients and healthy subjects did not show significant differences with respect to the classification performance; therefore, the training of robust and powerful traditional classification methods in the healthy domain and their adaptation to each patient’s domain might be the next direction to take in online BCI systems.
Contrary to our results, Tayeb et al., 2019 [81] achieved better classification performance with deep-learning models compared to traditional machine-learning techniques, suggesting a route ahead for developing new robust techniques for EEG signal decoding. Tayeb et al., 2019 [81] was one of the few papers that evaluated several classification methods with online data; however, in our research, only the algorithm with the best performance from each paper is reported. Another important detail of the classification accuracy reported in our manuscript is that, in many cases where the BCI performance was presented along with the neurorehabilitation period, we decided to report the accuracy of the final week’s sessions, since this was the value that the authors presented.
Apart from the significant variety of factors influencing the classification performance mentioned in Section 4, we should not neglect the diverse methods of computing the classification performance in different studies. The usual way of calculating the classification performance in BCI systems is by training the algorithm with data from several initial sessions and evaluating the performance on the testing sessions. However, Irimia et al., 2017 [78], for instance, trained the classifier on data recorded during the first four runs of each session and tested the accuracy of the model on the last two runs of each session.
Future surveys should include the clinical improvement of the patients and translate the impact of the classification accuracy into the clinical outcome. Unfortunately, this was impossible in the current version, since only three papers in our survey reported the clinical evaluation of the patients [19,78,80].

6. Limitations

Although our search initially identified 138 studies, the final sample analyzed was limited (N = 18 papers). Our findings, therefore, have limited statistical power and should be interpreted with caution. In addition, the statistical outcomes of our measurements and the comparisons presented here are exploratory and not confirmatory.
Moreover, an important aspect of BCI configurations is the evaluation of their clinical impact on the patients. However, not all studies report a clinical scale. In addition, even when a clinical scale is reported, different scale measures are used, which makes it difficult to relate the clinical outcome to the classification performance. Nonetheless, we tried to group the clinical outcomes into improved and non-improved patients, which yielded ten patients with reported improvement and two with stable conditions. Due to the unbalanced grouping across these studies, we decided not to further investigate possible relationships through correlations. Finally, we were not able to find studies with negative results or reporting a lack of improvement in patients after the intervention.

7. Conclusions

In this survey, we identified the EEG classification approaches that have been developed and evaluated in MI-based BCI systems using EEG recordings for the classification of upper-limb movement. From existing data, we can see no significant differences in terms of classification accuracy between patients and healthy volunteers. This suggests that current BCI configurations used in rehabilitation, although not optimal, provide patients with modest benefits.
Regarding the parameters and demographics used in BCI configurations, we found that, although there is a positive trend towards better classification accuracy over the years, no significant correlations are detectable. Moreover, with respect to neurofeedback modalities, FES yielded the best performance, in both screen and VR modalities, compared to non-FES systems. Finally, in terms of classifier performance, we found that traditional methods (e.g., LDA, SVM, etc.) are still not surpassed by current deep-learning methods.

Author Contributions

Conceptualization, A.V. (Athanasios Vourvopoulos) and A.V. (Athanasios Vavoulis); methodology, A.V. (Athanasios Vavoulis); validation, A.V. (Athanasios Vavoulis), A.V. (Athanasios Vourvopoulos) and P.F.; formal analysis, A.V. (Athanasios Vavoulis); investigation, A.V. (Athanasios Vavoulis); data curation, A.V. (Athanasios Vavoulis); writing—original draft preparation, A.V. (Athanasios Vavoulis); writing—review and editing, A.V. (Athanasios Vourvopoulos) and P.F.; visualization, A.V. (Athanasios Vavoulis); supervision, A.V. (Athanasios Vourvopoulos). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fundação para a Ciência e Tecnologia (FCT) through CEECIND/01073/2018, the LARSyS—FCT Project UIDB/50009/2020 and the NOISyS project 2022.02283.PTDC.

Data Availability Statement

Data available on request from the authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mozaffarian, D.; Benjamin, E.J.; Go, A.S.; Arnett, D.K.; Blaha, M.J.; Cushman, M.; De Ferranti, S.; Després, J.P.; Fullerton, H.J.; Howard, V.J.; et al. Heart disease and stroke statistics—2015 update: A report from the American Heart Association. Circulation 2015, 131, e29–e322. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Butler, A.J.; Page, S.J. Mental practice with motor imagery: Evidence for motor recovery and cortical reorganization after stroke. Arch. Phys. Med. Rehabil. 2006, 87, 2–11. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Thomas, L.H.; French, B.; Coupe, J.; McMahon, N.; Connell, L.; Harrison, J.; Sutton, C.J.; Tishkovskaya, S.; Watkins, C.L. Repetitive task training for improving functional ability after stroke: A major update of a Cochrane review. Stroke 2017, 48, e102–e103. [Google Scholar] [CrossRef] [Green Version]
  4. Pérez-Cruzado, D.; Merchán-Baeza, J.A.; González-Sánchez, M.; Cuesta-Vargas, A.I. Systematic review of mirror therapy compared with conventional rehabilitation in upper extremity function in stroke survivors. Aust. Occup. Ther. J. 2017, 64, 91–112. [Google Scholar] [CrossRef] [PubMed]
  5. Kho, A.Y.; Liu, K.P.; Chung, R.C. Meta-analysis on the effect of mental imagery on motor recovery of the hemiplegic upper extremity function. Aust. Occup. Ther. J. 2014, 61, 38–48. [Google Scholar] [CrossRef] [PubMed]
  6. Celnik, P.; Webster, B.; Glasser, D.M.; Cohen, L.G. Effects of action observation on physical training after stroke. Stroke 2008, 39, 1814–1820. [Google Scholar] [CrossRef] [Green Version]
  7. Hong, X.; Lu, Z.K.; Teh, I.; Nasrallah, F.A.; Teo, W.P.; Ang, K.K.; Phua, K.S.; Guan, C.; Chew, E.; Chuang, K.H. Brain plasticity following MI-BCI training combined with tDCS in a randomized trial in chronic subcortical stroke subjects: A preliminary study. Sci. Rep. 2017, 7, 9222. [Google Scholar] [CrossRef] [Green Version]
  8. Ramos-Murguialday, A.; Curado, M.R.; Broetz, D.; Yilmaz, Ö.; Brasil, F.L.; Liberati, G.; Garcia-Cossio, E.; Cho, W.; Caria, A.; Cohen, L.G.; et al. Brain-machine interface in chronic stroke: Randomized trial long-term follow-up. Neurorehabilit. Neural Repair 2019, 33, 188–198. [Google Scholar] [CrossRef]
  9. Cervera, M.A.; Soekadar, S.R.; Ushiba, J.; Millán, J.d.R.; Liu, M.; Birbaumer, N.; Garipelli, G. Brain-computer interfaces for post-stroke motor rehabilitation: A meta-analysis. Ann. Clin. Transl. Neurol. 2018, 5, 651–663. [Google Scholar] [CrossRef]
  10. Jeannerod, M.; Decety, J. Mental motor imagery: A window into the representational stages of action. Curr. Opin. Neurobiol. 1995, 5, 727–732. [Google Scholar] [CrossRef]
  11. Di Rienzo, F.; Collet, C.; Hoyek, N.; Guillot, A. Impact of neurologic deficits on motor imagery: A systematic review of clinical evaluations. Neuropsychol. Rev. 2014, 24, 116–147. [Google Scholar] [CrossRef] [PubMed]
  12. Cramer, S.C.; Sur, M.; Dobkin, B.H.; O’Brien, C.; Sanger, T.D.; Trojanowski, J.Q.; Rumsey, J.M.; Hicks, R.; Cameron, J.; Chen, D.; et al. Harnessing neuroplasticity for clinical applications. Brain 2011, 134, 1591–1609. [Google Scholar] [CrossRef]
  13. Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain computer interfaces, a review. Sensors 2012, 12, 1211–1279. [Google Scholar] [CrossRef] [PubMed]
  14. Pfurtscheller, G. EEG event-related desynchronization (ERD) and synchronization (ERS). Electroencephalogr. Clin. Neurophysiol. 1997, 1, 26. [Google Scholar] [CrossRef]
  15. Lotte, F.; Congedo, M.; Lécuyer, A.; Lamarche, F.; Arnaldi, B. A review of classification algorithms for EEG-based brain-computer interfaces. J. Neural Eng. 2007, 4, R1–R13. [Google Scholar] [CrossRef] [Green Version]
  16. Zhang, X.; Hou, W.; Wu, X.; Feng, S.; Chen, L. A Novel Online Action Observation-Based Brain-Computer Interface That Enhances Event-Related Desynchronization. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 2605–2614. [Google Scholar] [CrossRef]
  17. Myrden, A.; Chau, T. Effects of user mental state on EEG-BCI performance. Front. Hum. Neurosci. 2015, 9, 308. [Google Scholar] [CrossRef] [Green Version]
  18. Juliano, J.M.; Spicer, R.P.; Vourvopoulos, A.; Lefebvre, S.; Jann, K.; Ard, T.; Santarnecchi, E.; Krum, D.M.; Liew, S.L. Embodiment Is Related to Better Performance on a Brain–Computer Interface in Immersive Virtual Reality: A Pilot Study. Sensors 2020, 20, 1204. [Google Scholar] [CrossRef] [Green Version]
  19. Prasad, G.; Herman, P.; Coyle, D.; McDonough, S.; Crosbie, J. Applying a brain-computer interface to support motor imagery practice in people with stroke for upper limb recovery: A feasibility study. J. NeuroEng. Rehabil. 2010, 7, 60. [Google Scholar] [CrossRef] [Green Version]
  20. Vourvopoulos, A.; Pardo, O.M.; Lefebvre, S.; Neureither, M.; Saldana, D.; Jahng, E.; Liew, S.L. Effects of a brain-computer interface with virtual reality (VR) neurofeedback: A pilot study in chronic stroke patients. Front. Hum. Neurosci. 2019, 13, 210. [Google Scholar] [CrossRef]
  21. Achanccaray, D.; Izumi, S.I.; Hayashibe, M. Visual-Electrotactile Stimulation Feedback to Improve Immersive Brain-Computer Interface Based on Hand Motor Imagery. Comput. Intell. Neurosci. 2021, 2021, 8832686. [Google Scholar] [CrossRef]
  22. Gaur, P.; Gupta, H.; Chowdhury, A.; McCreadie, K.; Pachori, R.B.; Wang, H. A Sliding Window Common Spatial Pattern for Enhancing Motor Imagery Classification in EEG-BCI. IEEE Trans. Instrum. Meas. 2021, 70, 4002709. [Google Scholar] [CrossRef]
  23. Vourvopoulos, A.; Jorge, C.; Abreu, R.; Figueiredo, P.; Fernandes, J.C.; Bermúdez i Badia, S. Efficacy and Brain Imaging Correlates of an Immersive Motor Imagery BCI-Driven VR System for Upper Limb Motor Rehabilitation: A Clinical Case Report. Front. Hum. Neurosci. 2019, 13, 244. [Google Scholar] [CrossRef] [Green Version]
  24. Vourvopoulos, A.; Blanco-Mora, D.A.; Aldridge, A.; Jorge, C.; Figueiredo, P.; Badia, S.B.i. Enhancing Motor-Imagery Brain-Computer Interface Training with Embodied Virtual Reality: A Pilot Study with Older Adults. In Proceedings of the 2022 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), Rome, Italy, 26–28 October 2022; pp. 157–162. [Google Scholar] [CrossRef]
  25. Fleury, M.; Lioi, G.; Barillot, C.; Lécuyer, A. A Survey on the Use of Haptic Feedback for Brain-Computer Interfaces and Neurofeedback. Front. Neurosci. 2020, 14, 528. [Google Scholar] [CrossRef] [PubMed]
  26. Altaheri, H.; Muhammad, G.; Alsulaiman, M.; Amin, S.U.; Altuwaijri, G.A.; Abdul, W.; Bencherif, M.A.; Faisal, M. Deep learning techniques for classification of electroencephalogram (EEG) motor imagery (MI) signals: A review. Neural Comput. Appl. 2021, 1–42. [Google Scholar] [CrossRef]
  27. Padfield, N.; Zabalza, J.; Zhao, H.; Masero, V.; Ren, J. EEG-Based Brain-Computer Interfaces Using Motor-Imagery: Techniques and Challenges. Sensors 2019, 19, 1423. [Google Scholar] [CrossRef] [Green Version]
  28. Ahn, M.; Jun, S.C. Performance variation in motor imagery brain-computer interface: A brief review. J. Neurosci. Methods 2015, 243, 103–110. [Google Scholar] [CrossRef]
  29. Jayaram, V.; Alamgir, M.; Altun, Y.; Scholkopf, B.; Grosse-Wentrup, M. Transfer learning in brain-computer interfaces. IEEE Comput. Intell. Mag. 2016, 11, 20–31. [Google Scholar] [CrossRef] [Green Version]
  30. Wu, D.; Xu, Y.; Lu, B.L. Transfer Learning for EEG-Based Brain–Computer Interfaces: A Review of Progress Made Since 2016. IEEE Trans. Cogn. Dev. Syst. 2020, 14, 4–19. [Google Scholar] [CrossRef]
  31. Lotte, F.; Bougrain, L.; Cichocki, A.; Clerc, M.; Congedo, M.; Rakotomamonjy, A.; Yger, F. A review of classification algorithms for EEG-based brain-computer interfaces: A 10 year update. J. Neural Eng. 2018, 15, 031005. [Google Scholar] [CrossRef] [Green Version]
  32. Mladenović, J. A generic framework for adaptive EEG-based BCI training and operation. arXiv 2017, arXiv:1707.07935. [Google Scholar]
  33. Ang, K.K.; Chin, Z.Y.; Zhang, H.; Guan, C. Filter Bank Common Spatial Pattern (FBCSP) in brain-computer interface. In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–8 June 2008; pp. 2390–2397. [Google Scholar] [CrossRef]
  34. Delorme, A.; Sejnowski, T.; Makeig, S. Enhanced detection of artifacts in EEG data using higher-order statistics and independent component analysis. NeuroImage 2007, 34, 1443–1449. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. Syam, S.H.F.; Lakany, H.; Ahmad, R.B.; Conway, B.A. Comparing Common Average Referencing to Laplacian Referencing in Detecting Imagination and Intention of Movement for Brain Computer Interface. MATEC Web Conf. 2017, 140, 01028. [Google Scholar] [CrossRef]
  36. Lakshmi, M.R.; Prasad, T.; Prakash, D.V.C. Survey on EEG signal processing methods. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 2014, 4, 84–91. [Google Scholar]
  37. Aggarwal, S.; Chugh, N. Signal processing techniques for motor imagery brain computer interface: A review. Array 2019, 1–2, 100003. [Google Scholar] [CrossRef]
  38. Da Silva, F.L. EEG: Origin and measurement. In EEg-fMRI; Springer: Berlin/Heidelberg, Germany, 2022; pp. 23–48. [Google Scholar]
  39. Schlögl, A.; Lugger, K.; Pfurtscheller, G. Using adaptive autoregressive parameters for a brain-computer-interface experiment. In Proceedings of the 19th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. ‘Magnificent Milestones and Emerging Opportunities in Medical Engineering’ (Cat. No. 97CH36136), Chicago, IL, USA, 30 October–2 November 1997; Volume 4, pp. 1533–1535.
  40. Darvishi, S.; Al-Ani, A. Brain-computer interface analysis using continuous wavelet transform and adaptive neuro-fuzzy classifier. In Proceedings of the 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 22–26 August 2007; Volume 2007, pp. 3220–3223. [Google Scholar] [CrossRef] [Green Version]
  41. McFarland, D.J.; Wolpaw, J.R. Sensorimotor rhythm-based brain-computer interface (BCI): Feature selection by regression improves performance. IEEE Trans. Neural Syst. Rehabil. Eng. 2005, 13, 372–379. [Google Scholar] [CrossRef]
  42. Molla, M.K.I.; Al Shiam, A.; Islam, M.R.; Tanaka, T. Discriminative feature selection-based motor imagery classification using EEG signal. IEEE Access 2020, 8, 98255–98265. [Google Scholar] [CrossRef]
  43. Ozer, D.J. Correlation and the coefficient of determination. Psychol. Bull. 1985, 97, 307. [Google Scholar] [CrossRef]
  44. Wolpaw, J.R.; McFarland, D.J.; Vaughan, T.M.; Schalk, G. The Wadsworth Center brain-computer interface (BCI) research and development program. IEEE Trans. Neural Syst. Rehabil. Eng. 2003, 11, 1–4. [Google Scholar] [CrossRef]
  45. Perdikis, S.; Tonin, L.; Saeedi, S.; Schneider, C.; Millán, J.d.R. The Cybathlon BCI race: Successful longitudinal mutual learning with two tetraplegic users. PLoS Biol. 2018, 16, e2003787. [Google Scholar] [CrossRef]
  46. Müller-Gerking, J.; Pfurtscheller, G.; Flyvbjerg, H. Designing optimal spatial filters for single-trial EEG classification in a movement task. Clin. Neurophysiol. Off. J. Int. Fed. Clin. Neurophysiol. 1999, 110, 787–798. [Google Scholar] [CrossRef] [PubMed]
  47. Blankertz, B.; Tomioka, R.; Lemm, S.; Kawanabe, M.; Muller, K.R. Optimizing spatial filters for robust EEG single-trial analysis. IEEE Signal Process. Mag. 2007, 25, 41–56. [Google Scholar] [CrossRef]
  48. Gaur, P.; McCreadie, K.; Pachori, R.B.; Wang, H.; Prasad, G. An automatic subject specific channel selection method for enhancing motor imagery classification in EEG-BCI using correlation. Biomed. Signal Process. Control 2021, 68, 102574. [Google Scholar] [CrossRef]
  49. Tiwari, A.; Chaturvedi, A. Automatic EEG channel selection for multiclass brain-computer interface classification using multiobjective improved firefly algorithm. Multimed. Tools Appl. 2022, 1–29. [Google Scholar] [CrossRef]
  50. Mahamune, R.; Laskar, S.H. An automatic channel selection method based on the standard deviation of wavelet coefficients for motor imagery based brain–computer interfacing. Int. J. Imaging Syst. Technol. 2022, 1–15. [Google Scholar] [CrossRef]
  51. James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning; Springer: Berlin/Heidelberg, Germany, 2013; Volume 112. [Google Scholar]
  52. Bishop, C.M.; Nasrabadi, N.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4. [Google Scholar]
  53. Chen, M.; Liu, Y.; Zhang, L. Classification of stroke patients’ motor imagery EEG with autoencoders in BCI-FES rehabilitation training system. In Proceedings of the 21st International Conference, ICONIP 2014, Kuching, Malaysia, 3–6 November 2014; Volume 8836, pp. 202–209. [Google Scholar] [CrossRef]
  54. Rodŕiguez-Beŕmudez, G.; Gárcia-Laencina, P.J. Automatic and adaptive classification of electroencephalographic signals for brain computer interfaces. J. Med. Syst. 2012, 36, S51–S63. [Google Scholar] [CrossRef]
  55. Ortner, R.; Irimia, D.C.; Scharinger, J.; Guger, C. A motor imagery based brain-computer interface for stroke rehabilitation. Annu. Rev. CyberTherapy Telemed. 2012, 10, 319–323. [Google Scholar]
  56. Sebastián-Romagosa, M.; Cho, W.; Ortner, R.; Murovec, N.; Von Oertzen, T.; Kamada, K.; Allison, B.Z.; Guger, C. Brain Computer Interface Treatment for Motor Rehabilitation of Upper Extremity of Stroke Patients—A Feasibility Study. Front. Neurosci. 2020, 14, 591435. [Google Scholar] [CrossRef]
  57. Vourvopoulos, A.; Badia, S.B.I. Usability and cost-effectiveness in brain-computer interaction: Is it user throughput or technology related? In Proceedings of the 7th Augmented Human International Conference, Geneva, Switzerland, 25–27 February 2016. [Google Scholar] [CrossRef]
  58. Vourvopoulos, A.; Niforatos, E.; Bermudez i Badia, S.; Liarokapis, F. Brain–Computer Interfacing with Interactive Systems—Case Study 2. In Intelligent Computing for Interactive System Design; ACM: New York, NY, USA, 2021; pp. 237–272. [Google Scholar] [CrossRef]
  59. Shenoy, H.V.; Vinod, A.P. An iterative optimization technique for robust channel selection in motor imagery based brain computer interface. In Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Diego, CA, USA, 5–8 October 2014; Volume 2014, pp. 1858–1863. [Google Scholar] [CrossRef]
  60. Garcia, G.N.; Ebrahimi, T.; Vesin, J.M. Correlative exploration of EGG signals for direct brain-computer communication. In Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP ’03), Hong Kong, China, 6–10 April 2003; Volume 5, pp. 816–819. [Google Scholar] [CrossRef] [Green Version]
  61. Hamedi, M.; Salleh, S.H.; Noor, A.M.; Mohammad-Rezazadeh, I. Neural network-based three-class motor imagery classification using time-domain features for BCI applications. In Proceedings of the 2014 IEEE REGION 10 SYMPOSIUM, Kuala Lumpur, Malaysia, 14–16 April 2014; pp. 204–207. [Google Scholar] [CrossRef]
  62. Millán, J.D.R.; Renkens, F.; Mouriño, J.; Gerstner, W. Noninvasive brain-actuated control of a mobile robot by human EEG. IEEE Trans. Biomed. Eng. 2004, 51, 1026–1033. [Google Scholar] [CrossRef] [Green Version]
  63. Wang, H.; Zhang, Y. Detection of motor imagery EEG signals employing Naïve Bayes based learning process. Measurement 2016, 86, 148–158. [Google Scholar]
  64. Bhaduri, S.; Khasnobish, A.; Bose, R.; Tibarewala, D.N. Classification of lower limb motor imagery using K Nearest Neighbor and Naïve-Bayesian classifier. In Proceedings of the 2016 3rd International Conference on Recent Advances in Information Technology (RAIT), Dhanbad, India, 3–5 March 2016; pp. 499–504. [Google Scholar] [CrossRef]
  65. Agarwal, S.K.; Shah, S.; Kumar, R. Classification of mental tasks from EEG data using backtracking search optimization based neural classifier. Neurocomputing 2015, 166, 397–403. [Google Scholar] [CrossRef]
  66. Sagee, G.S.; Hema, S. EEG feature extraction and classification in multiclass multiuser motor imagery brain computer interface u sing Bayesian Network and ANN. In Proceedings of the 2017 International Conference on Intelligent Computing, Instrumentation and Control Technologies, ICICICT 2017, Kerala, India, 6–7 July 2018; Volume 2018, pp. 938–943. [Google Scholar] [CrossRef]
  67. Sakhavi, S.; Guan, C.; Yan, S. Learning Temporal Information for Brain-Computer Interface Using Convolutional Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 5619–5629. [Google Scholar] [CrossRef] [PubMed]
  68. Lee, H.K.; Choi, Y.S. Application of continuous wavelet transform and convolutional neural network in decoding motor imagery brain-computer Interface. Entropy 2019, 21, 1199. [Google Scholar] [CrossRef]
  69. Safitri, A.; Djamal, E.C.; Nugraha, F. Brain-Computer Interface of Motor Imagery Using ICA and Recurrent Neural Networks. In Proceedings of the 2020 3rd International Conference on Computer and Informatics Engineering (IC2IE), Yogyakarta, Indonesia, 15–16 September 2020; pp. 118–122.
  70. Luo, T.J.; Zhou, C.L.; Chao, F. Exploring spatial-frequency-sequential relationships for motor imagery classification with recurrent neural network. BMC Bioinform. 2018, 19, 344.
  71. Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A compact convolutional neural network for EEG-based brain–computer interfaces. J. Neural Eng. 2018, 15, 056013.
  72. Lu, N.; Li, T.; Ren, X.; Miao, H. A Deep Learning Scheme for Motor Imagery Classification based on Restricted Boltzmann Machines. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 566–576.
  73. Zhang, J.; Yan, C.; Gong, X. Deep convolutional neural network for decoding motor imagery based brain computer interface. In Proceedings of the 2017 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), Xiamen, China, 22–25 October 2017; pp. 1–5.
  74. Wang, P.; Jiang, A.; Liu, X.; Shang, J.; Zhang, L. LSTM-based EEG classification in motor imagery tasks. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 2086–2095.
  75. Herman, P.; Prasad, G.; McGinnity, T.M. Design and on-line evaluation of type-2 fuzzy logic system-based framework for handling uncertainties in BCI classification. In Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 4242–4245.
  76. Pan, Y.; Goh, Q.Z.; Ge, S.S.; Tee, K.P.; Hong, K.S. Mind robotic rehabilitation based on motor imagery brain computer interface. In Proceedings of the Second International Conference on Social Robotics (ICSR 2010), Singapore, 23–24 November 2010; pp. 161–171.
  77. Xu, B.; Song, A.; Zhao, G.; Xu, G.; Pan, L.; Yang, R.; Li, H.; Cui, J.; Zeng, H. Robotic neurorehabilitation system design for stroke patients. Adv. Mech. Eng. 2015, 7, 1–12.
  78. Irimia, D.C.; Cho, W.; Ortner, R.; Allison, B.Z.; Ignat, B.E.; Edlinger, G.; Guger, C. Brain-Computer Interfaces With Multi-Sensory Feedback for Stroke Rehabilitation: A Case Study. Artif. Organs 2017, 41, E178–E184.
  79. Zhao, L.; Li, X.; Bian, Y. Real time system design of motor imagery brain-computer interface based on multi band CSP and SVM. AIP Conf. Proc. 2018, 1955, 040053.
  80. Irimia, D.C.; Ortner, R.; Poboroniuc, M.S.; Ignat, B.E.; Guger, C. High classification accuracy of a motor imagery based brain-computer interface for stroke rehabilitation training. Front. Robot. AI 2018, 5, 130.
  81. Tayeb, Z.; Fedjaev, J.; Ghaboosi, N.; Richter, C.; Everding, L.; Qu, X.; Wu, Y.; Cheng, G.; Conradt, J. Validating deep neural networks for online decoding of motor imagery movements from EEG signals. Sensors 2019, 19, 210.
  82. Karácsony, T.; Hansen, J.P.; Iversen, H.K.; Puthusserypady, S. Brain computer interface for neuro-rehabilitation with deep learning classification and virtual reality feedback. In Proceedings of the 10th Augmented Human International Conference 2019, Reims, France, 11–12 March 2019.
  83. Vidaurre, C.; Ramos Murguialday, A.; Haufe, S.; Gómez, M.; Müller, K.R.; Nikulin, V.V. Enhancing sensorimotor BCI performance with assistive afferent activity: An online evaluation. NeuroImage 2019, 199, 375–386.
  84. Raza, H.; Chowdhury, A.; Bhattacharyya, S. Deep Learning based Prediction of EEG Motor Imagery of Stroke Patients for Neuro-Rehabilitation Application. In Proceedings of the International Joint Conference on Neural Networks, Glasgow, UK, 19–24 July 2020.
  85. Mousavi, M.; Krol, L.R.; De Sa, V.R. Hybrid brain-computer interface with motor imagery and error-related brain activity. J. Neural Eng. 2020, 17, 056041.
  86. Benzy, V.K.; Vinod, A.P.; Subasree, R.; Alladi, S.; Raghavendra, K. Motor Imagery Hand Movement Direction Decoding Using Brain Computer Interface to Aid Stroke Recovery and Rehabilitation. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 3051–3062.
  87. Vasilyev, A.N.; Nuzhdin, Y.O.; Kaplan, A.Y. Does real-time feedback affect sensorimotor EEG patterns in routine motor imagery practice? Brain Sci. 2021, 11, 1234.
  88. Ron-Angevin, R.; Díaz-Estrella, A. Brain-computer interface: Changes in performance using virtual reality techniques. Neurosci. Lett. 2009, 449, 123–127.
  89. Bhattacharyya, S.; Clerc, M.; Hayashibe, M. A Study on the Effect of Electrical Stimulation as a User Stimuli for Motor Imagery Classification in Brain-Machine Interface. Eur. J. Transl. Myol. 2016, 26, 165–168.
  90. Höller, Y.; Thomschewski, A.; Uhl, A.; Bathke, A.C.; Nardone, R.; Leis, S.; Trinka, E.; Höller, P. HD-EEG Based Classification of Motor-Imagery Related Activity in Patients With Spinal Cord Injury. Front. Neurol. 2018, 9, 955.
  91. Blanco-Mora, D.A.; Aldridge, A.; Jorge, C.; Vourvopoulos, A.; Figueiredo, P.; Bermúdez i Badia, S. Impact of age, VR, immersion, and spatial resolution on classifier performance for a MI-based BCI. Brain-Comput. Interfaces 2022, 9, 169–178.
  92. Meng, J.; Edelman, B.J.; Olsoe, J.; Jacobs, G.; Zhang, S.; Beyko, A.; He, B. A study of the effects of electrode number and decoding algorithm on online EEG-based BCI behavioral performance. Front. Neurosci. 2018, 12, 227.
  93. Farquhar, J.; Hill, N.J. Interactions between pre-processing and classification methods for event-related-potential classification: Best-practice guidelines for brain-computer interfacing. Neuroinformatics 2013, 11, 175–192.
  94. Graimann, B.; Allison, B.; Pfurtscheller, G. Brain–Computer Interfaces: A Gentle Introduction. In Brain-Computer Interfaces; The Frontiers Collection; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1–27.
Figure 1. Influence of feature extraction, classifier and neurofeedback modality on classification accuracy. (a) Evolution of classification accuracy over time. Each dot represents the average classification accuracy of a study; deep-learning classifiers are marked with an asterisk, and different colors designate the algorithm used. (b) Feature extraction, classification algorithm and neurofeedback modality in relation to classification performance. Each line denotes the modalities used in each study. Solid lines represent studies with classification accuracy above the average of all papers included in the analysis (0.77), while dashed lines indicate studies below the average. (c) Comparison of classification performance for the different neurofeedback modalities used in stroke rehabilitation. (d) No statistically significant difference is identified between traditional machine-learning algorithms and deep-learning methods.
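As an illustration of the comparison in panel (d), the per-study accuracies listed in Table 1 can be split into traditional and deep-learning classifiers and contrasted with a nonparametric test. The sketch below is not the analysis code behind the figure; the choice of a Mann–Whitney U test is an assumption, and the accuracy values are simply transcribed from Table 1.

```python
# Minimal sketch of the kind of comparison shown in Figure 1d: traditional
# machine-learning vs. deep-learning classifier accuracies. The specific test
# used in the review is not stated here; a Mann-Whitney U test is assumed.
from scipy.stats import mannwhitneyu

# Online classification accuracies transcribed from Table 1 (one value per study).
traditional = [0.69, 0.69, 0.67, 0.86, 0.95, 0.74, 0.87, 0.82,
               0.62, 0.68, 0.93, 0.80, 0.80, 0.75]
deep_learning = [0.74, 0.84, 0.72, 0.70]  # autoencoder and CNN studies

stat, p = mannwhitneyu(traditional, deep_learning, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")  # p > 0.05 would indicate no significant difference
```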
Figure 2. Impact of user characteristics on BCI performance. (a) No statistically significant difference is found in classification performance between the healthy group and the patient group. (b) Relation between the average age of the subjects and the classification performance.
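The age–accuracy relation in panel (b) can likewise be illustrated by pairing each study's mean participant age with its reported online accuracy from Table 1. The use of a Spearman rank correlation below is an assumption introduced here for illustration; the panel itself only plots the relation.

```python
# Minimal sketch of the age-vs-accuracy relation in Figure 2b, assuming a
# Spearman rank correlation over the studies that report a mean age.
from scipy.stats import spearmanr

# (mean age, online accuracy) pairs transcribed from Table 1, omitting studies
# that do not report participant age.
age      = [59, 62, 27, 50, 60, 31, 25, 41, 20, 50, 26, 41, 26, 60]
accuracy = [0.69, 0.74, 0.86, 0.95, 0.87, 0.84, 0.72,
            0.70, 0.62, 0.68, 0.93, 0.80, 0.80, 0.75]

rho, p = spearmanr(age, accuracy)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```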
Figure 3. Effect of BCI system parameters on classification accuracy: relation between accuracy and (a) the number of electrodes and (b) the total number of trials used for BCI training. The color of each point indicates whether the data originate from healthy subjects (blue) or patients (red).
Table 1. Summary of papers included in the review.
| Author | Classifier | Performance | Feature Extraction | No. of Electrodes | No. of Subjects | No. of Sessions | No. of Trials | Feedback Modality | Participants | Age (Years) |
|---|---|---|---|---|---|---|---|---|---|---|
| Herman [75] | T2FLS | 69% | PSD | 2 | 6 | 7 | 160 | Screen | Healthy | - |
| Prasad [19] | T2FLS | 69% | PSD | 2 | 5 | 12 | 160 | Screen | Patients | 59 |
| Pan [76] | QDA | 67% | CSP+AR | 3 | 3 | 1 | 230 | Screen | Healthy | - |
| Chen [53] | Autoencoder | 74% | CSP | 16 | 4 | 14 | 415 | Screen+FES | Patients | 62 |
| Xu [77] | LDA | 86% | WT+AR | 2 | 8 | 3 | 40 | Robotic | Healthy | 27 |
| Irimia [78] | LDA | 95% | CSP | 45 | 2 | 10 | 240 | Screen+FES | Patients | 50 |
| Zhao [79] | SVM | 74% | CSP | 4 | 15 | 1 | 40 | Screen | Healthy | - |
| Irimia [80] | LDA | 87% | CSP | 64 | 5 | 18 | 160 | Screen+FES | Patients | 60 |
| Tayeb [81] | CNN | 84% | FT | 3 | 20 | 2 | 90 | Robotic | Healthy | 31 |
| Karacsony [82] | CNN | 72% | - | 16 | 10 | - | - | VR | Healthy | 25 |
| Vidaurre [83] | LDA | 82% | CSP | 64 | 15 | 1 | 300 | Robotic | Healthy | - |
| Raza [84] | CNN | 70% | CSP | 12 | 10 | 1 | 120 | Robotic | Patients | 41 |
| Mousavi [85] | LR | 62% | CSP | 64 | 12 | 1 | 180 | Screen | Healthy | 20 |
| Benzy [86] | NB | 68% | PLV | 64 | 16 | 2 | 50 | Screen | Patients | 50 |
| Achanccaray [21] | SVM | 93% | CSP | 16 | 20 | - | - | VR+FES | Healthy | 26 |
| Gaur [22] | LDA | 80% | CSP | 12 | 10 | 3 | 40 | Robotic | Patients | 41 |
| Vasilyev [87] | NB | 80% | CSP | 30 | 11 | 6 | - | Screen | Healthy | 26 |
| Zhang [16] | LDA | 75% | WT+AR | 16 | 7 | 3 | 200 | Screen | Patients | 60 |
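To illustrate how the tabulated studies relate to the neurofeedback comparison in Figure 1c, the sketch below groups the Table 1 entries by feedback modality and contrasts FES-assisted with non-FES systems. This is not the authors' analysis code: the shorthand study labels (e.g., Irimia17, Irimia18) are introduced here for readability, and only the accuracies and modalities reported in the table are used.

```python
# Minimal sketch of aggregating the Table 1 accuracies by neurofeedback
# modality, in the spirit of Figure 1c. Values are transcribed from the table.
import pandas as pd

rows = [
    ("Herman",      "Screen",     0.69), ("Prasad",    "Screen",     0.69),
    ("Pan",         "Screen",     0.67), ("Chen",      "Screen+FES", 0.74),
    ("Xu",          "Robotic",    0.86), ("Irimia17",  "Screen+FES", 0.95),
    ("Zhao",        "Screen",     0.74), ("Irimia18",  "Screen+FES", 0.87),
    ("Tayeb",       "Robotic",    0.84), ("Karacsony", "VR",         0.72),
    ("Vidaurre",    "Robotic",    0.82), ("Raza",      "Robotic",    0.70),
    ("Mousavi",     "Screen",     0.62), ("Benzy",     "Screen",     0.68),
    ("Achanccaray", "VR+FES",     0.93), ("Gaur",      "Robotic",    0.80),
    ("Vasilyev",    "Screen",     0.80), ("Zhang",     "Screen",     0.75),
]
df = pd.DataFrame(rows, columns=["study", "feedback", "accuracy"])

# Average online accuracy per feedback modality, and FES vs. non-FES contrast.
print(df.groupby("feedback")["accuracy"].agg(["mean", "count"]).round(2))
df["fes"] = df["feedback"].str.contains("FES")
print(df.groupby("fes")["accuracy"].mean().round(2))
```

With these transcribed values, the FES-assisted systems average roughly 0.87 versus about 0.74 for the remaining modalities, and the overall mean of 0.77 matches the threshold quoted in the caption of Figure 1b.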