
EEG-Based BCIs on Motor Imagery Paradigm Using Wearable Technologies: A Systematic Review

Department of Informatics, Systems and Communication, University of Milano-Bicocca, Viale Sarca 336, 20126 Milano, Italy
NeuroMI, Milan Center for Neuroscience, Piazza dell’Ateneo Nuovo 1, 20126 Milano, Italy
Department of Theoretical and Applied Sciences, University of Insubria, Via J. H. Dunant 3, 21100 Varese, Italy
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2023, 23(5), 2798;
Submission received: 1 February 2023 / Revised: 21 February 2023 / Accepted: 28 February 2023 / Published: 3 March 2023
(This article belongs to the Special Issue Real-Life Wearable EEG-Based BCI: Open Challenges)


In recent decades, the automatic recognition and interpretation of brain waves acquired by electroencephalographic (EEG) technologies have undergone remarkable growth, leading to a consequent rapid development of brain–computer interfaces (BCIs). EEG-based BCIs are non-invasive systems that allow communication between a human being and an external device by interpreting brain activity directly. Thanks to advances in neurotechnologies, and especially in the field of wearable devices, BCIs are now also employed outside medical and clinical applications. Within this context, this paper proposes a systematic review of EEG-based BCIs, focusing on one of the most promising paradigms, based on motor imagery (MI), and limiting the analysis to applications that adopt wearable devices. This review aims to evaluate the maturity level of these systems from both the technological and computational points of view. The selection of papers was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, leading to 84 publications from the last ten years (2012 to 2022). Besides technological and computational aspects, this review also systematically lists experimental paradigms and available datasets in order to identify benchmarks and guidelines for the development of new applications and computational models.

1. Introduction

From the pioneering work of Hans Berger, who recorded the first human electroencephalographic (EEG) signal in 1924 [1], research devoted to detecting and analyzing brain waves has increased exponentially over the years, especially in medical contexts for both diagnostic and health care applications.
The automatic recognition and interpretation of brain waves permit the development of systems, called brain–computer interfaces (BCIs), that allow subjects to interact with and control devices through brain signals, thus providing new forms of human–machine interaction.
Several applications have been developed, especially for assistive and rehabilitative purposes [2]. However, in recent decades, the rapid development of neurotechnologies, particularly wearable devices, has opened new perspectives and applications outside the medical field, including education, entertainment, and the civil, industrial, and military fields [3]. Among the different BCI paradigms, motor imagery (MI) deserves particular attention, given that it can be used for a variety of applications and that the research community has achieved promising results in terms of performance [4].
Starting from these premises, this paper systematically reviews EEG-based MI-BCIs by following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) suggestions [5].
The main research question of this review paper is as follows:
RQ: Are wearable technologies mature for EEG-based MI-BCI applications in uncontrolled environments?
To properly answer this question, four different sub-questions are considered as follows:
  • RQ1: Is there a significant amount of EEG-based MI-BCI studies using wearable technologies in the literature that implies a promising future development of this research field, especially in uncontrolled environments and outside the medical and clinical settings?
  • RQ2: Are there common pipelines of processing that can be adopted from signal acquisition to feedback generation?
  • RQ3: Are there consolidated experimental paradigms for wearable EEG-based MI-BCI applications?
  • RQ4: Are there datasets available for the research community to properly compare classification models and data analysis?
To address these questions, the work is structured as follows. Section 2 describes how the 84 papers for the proposed systematic review were selected, following the PRISMA suggestions. Section 3 reports basic concepts of electroencephalographic signals, brain–computer interfaces, motor imagery, and wearable technologies to provide a proper background for the comprehension of the next sections. To properly identify the contribution of this work, an overview of other survey articles present in the literature and concerning BCI systems is reported in Section 4. Section 5 is the core of this review paper, systematically reporting the motor imagery brain–computer interface wearable systems found in the state of the art, in terms of applications and employed technologies. A detailed description of signal processing, feature engineering, classification, and data analysis is also reported, together with a description of all the datasets and experimental paradigms adopted. Particular attention has been given in some sections to those papers that are reproducible, either because the analyzed dataset is available or because the adopted computational models are described with proper technical detail. All information gathered from the 84 publications considered will be made available as supplementary material, organized in a detailed table (Table S1).
Finally, the answers to the presented research questions have been provided in Section 6, taking into consideration the detailed analyses of the different aspects discussed in the previous sections. Conclusions and future perspectives are presented in Section 7.

2. Systematic Review Search Method

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [5] was employed to conduct this systematic review.

2.1. Eligibility Criteria

The papers included in the systematic review needed to present studies on EEG-based BCIs considering motor imagery paradigms and the usage of wearable technologies for signal acquisition. Moreover, the following inclusion criteria were applied:
  • Studies published in the last 10 years (from 1 January 2012 to 22 June 2022);
  • Studies published as journal articles, conference proceedings, and dataset reports.
Papers falling in the following criteria were excluded:
  • Non-English articles;
  • Studies published as meeting abstracts, book chapters, posters, reviews, and Master’s and PhD dissertations.
Following these criteria, studies were initially identified by the search engines defined in Section 2.2 by applying their filters, when present. Afterward, a manual scanning was conducted on titles, keywords, abstracts, and data description sections of the publications by dividing the studies extracted from the search engines equally among the review authors.

2.2. Information Sources

Seven search engines have been used to collect works on EEG-based BCIs concerning motor imagery paradigms and wearable technologies: IEEE Xplore, Mendeley, PubMed, ScienceOPEN, Semantic Scholar, Scopus, and Web of Science.
Google Scholar was also consulted to retrieve more information on some of the publications outputted from the other search engines and papers that were already stored in private repositories of the different research groups. For the final set of reviewed papers, the last consultation date will be reported for completeness. Table 1 reports information for each search engine specifying the authors (with initials) that used it and its last consultation date.

2.3. Search Strategy

The works included for manual screening were the outcome of the search conducted by querying the information sources described in Section 2.2 with the following keywords: (EEG OR eeg OR electroencephalographic OR electroencephalography) AND (BCI OR brain–computer interface OR bci) AND (wearable OR wireless) AND (motor imagery OR motor-imagery OR MI).
However, before settling on this final group of keywords, and to gauge the research community's interest in the specific topic of EEG-based MI-BCIs with wearable technologies, we also ran queries with subsets of the provided keywords, obtaining the results reported in Table 2. Notice that, besides the first field identifying the search engines, the other fields report the main keywords of the considered subsets.
Notice that Google Scholar is not reported, being used only to retrieve information on papers present in private repositories or additional details on the scraped ones.
Comparing the refined searches in the table with the more general one on EEG- and BCI-related papers, it appears that
  • A mean (std) of 4.76% (4.98%) works of the EEG and BCI search are present in the EEG and BCI and wearable field;
  • A mean (std) of 26.01% (9.38%) works of the EEG and BCI search are present in the EEG and BCI and MI field;
  • A mean (std) of 0.71% (0.65%) works of the EEG and BCI search are present in the EEG and BCI and wearable and MI field, the target of the present survey.
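As a minimal sketch of how such shares are obtained, the snippet below computes the per-engine percentage of refined-query hits over general-query hits and reports mean and standard deviation. The counts are hypothetical placeholders for illustration only, not the actual values from Table 2.

```python
from statistics import mean, stdev

# Hypothetical per-engine result counts (NOT the actual Table 2 values).
general = [5000, 3200, 4100, 2800]   # "EEG AND BCI" query
refined = [240, 150, 210, 130]       # refined query, e.g. adding "wearable"

# Percentage of the general results that also match the refined query.
shares = [100 * r / g for r, g in zip(refined, general)]
print(f"mean (std): {mean(shares):.2f}% ({stdev(shares):.2f}%)")
```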
Therefore, the interest in EEG-based MI-BCIs with wearable technologies has yet to reach broader dissemination in the research community: wearable technologies are only moderately discussed, while the field of MI is extremely prolific.
All the queries followed the eligibility criteria described in Section 2.1, but their results were not checked for duplicates or manually screened; these procedures were applied only to the final target query.

2.4. Search Outcome

Following the previously described search strategies and filtering the results according to the established inclusion and exclusion criteria, 84 papers were included in the present review.
Figure 1 depicts the flow diagram obtained by following the PRISMA guidelines. Notice that the diagram is divided into two main sections. The first concerns the works retrieved through the search engines listed in Section 2.2, while the second pertains to the studies in the research group’s personal repositories.
Duplicates were removed from the papers returned by the search engines before the screening. Afterward, the abstracts were manually checked by the authors to ensure that they referenced the keywords reported in Section 2.3; the number of abstracts to check was equally distributed among the authors. Reports were not retrieved if they were (i) already present in the personal repositories, (ii) not available online, or (iii) review papers, posters, or theses. Finally, papers were excluded after a thorough reading if they were unrelated to the BCI field and/or did not present analyses on wearable devices.
Notice that each paper was read by at least two authors.
For completeness, Figure 2 depicts the number of papers remaining after the screening process for each considered year. According to the plot, most of the 84 works were published between 2019 and 2020, highlighting the rising interest of the EEG community in MI-BCI systems employing wearable technologies and thus further justifying our interest in the reviewed topic.
Notice that the small number of papers under the 2022 label may not be representative of the hypothesized publication trend, since the scraping of 2022 publications stopped on 22 June.
Detailed notes on the reviewed papers are included in Table S1 as supplementary material to the present review. The notes are organized to provide a clear reference to the papers and to follow the core section of this review (Section 5), analyzing (i) field of applications, (ii) employed technologies, (iii) signal processing and analysis methodologies, (iv) BCI feedback, and (v) dataset information.

3. Background Information

This section is devoted to the explanation of some basic knowledge related to the target of this survey. Therefore, an overview of EEG signals, BCIs, MI, and wearable technologies is provided.

3.1. Electroencephalogram

In this review, the considered applications are based on the use of the electroencephalogram (EEG) as a non-invasive technology to acquire brain signals. In fact, the EEG is able to record the brain’s electric potentials deriving from the activation of the cerebral cortex in response to neural processes [6], which could be spontaneous or evoked by external stimuli [7]. The resulting signal is a time series characterized by time, frequency (Section 3.1.1), and spatial information, and acquired with non-invasive sensors (called electrodes or channels) placed on the scalp of a subject [8,9,10,11]. Moreover, the electrode positioning usually follows standard placement systems, which define the distance between adjacent electrodes taking into consideration specific anatomical landmarks, such as the distance from the nasion to the inion [12]. The most commonly used systems are the 10-20 and 10-10 International ones [12,13], which are depicted in Figure 3 and Figure 4. Notice that the electrodes are identified by letters, which refer to the brain area pertaining to a specific electrode, and a number (odd for the left hemisphere, even for the right one). The midline placement is identified by a z.
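The hemisphere rule of this naming convention can be expressed as a small helper function; the sketch below is purely illustrative and not part of any EEG toolkit.

```python
# 10-20 / 10-10 electrode labels: letters identify the cortical area,
# the digit identifies the hemisphere (odd = left, even = right),
# and a trailing "z" marks the midline.
def hemisphere(label: str) -> str:
    suffix = label[-1].lower()
    if suffix == "z":
        return "midline"
    if suffix.isdigit():
        return "left" if int(suffix) % 2 == 1 else "right"
    raise ValueError(f"unrecognized electrode label: {label}")

for ch in ["C3", "C4", "Cz", "O1", "Fp2"]:
    print(ch, hemisphere(ch))
```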
Therefore, the sensors are usually placed following standard locations that respect the brain’s anatomical structures. However, it could happen that the neural activations are not recorded uniformly or have irregular samples [14], and that the volume conduction effect may provide indirect and imprecise recordings [15].
Moreover, the EEG signals are easily affected by noise [16], and thus usually have a low Signal-to-Noise Ratio (SNR) [7,17].
EEG data are also heterogeneous,
  • being non-stationary signals varying across time [18];
  • being subject-specific, due to the natural physiological differences between subjects [16];
  • varying in the same subject depending on their physiological and psychological conditions, and changing from trial to trial [19];
  • being influenced by experimental protocols and environmental conditions [7,16].
Therefore, coupling the EEG signals with specific brain activities is a difficult and ambiguous task [20].
In the following, details on the frequency information characterizing the EEG signals and the noise affecting them will be given to provide a better overview of these peculiar data.

3.1.1. EEG Rhythms

As previously introduced, the EEG signal is characterized by frequency information, which is provided by different frequency bands, called rhythms, associated with specific brain activities and functions [21]. Table 3 provides a brief overview of the EEG rhythms and their frequency ranges.
Starting from the lowest frequency band, the delta ( δ ) rhythm presents slow waves typical of infants or predominant during deep sleep [12,22]. The theta ( θ ) rhythm is instead elicited by emotional stress and is present in sleepy adults [12,21,22]. During a relaxed but awake state, the alpha ( α ) rhythm is elicited [21]. Notice that α and mu ( μ ) rhythms share similar frequency components, but μ is related to the motor cortex functionalities [22] and is usually non-sinusoidal [23]. The influence of the μ band on motor imagery tasks will be discussed in detail in Section 3.2.
Concerning the beta ( β ) rhythm, it is present in alert states due to active thinking or attention and may also be an effect of anxiety [12,21]. Finally, the gamma ( γ ) rhythm is intertwined with more complex cognitive functions, such as sensory stimulus recognition and two senses combination [12,22].
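The rhythm definitions above amount to a lookup from frequency to band; the sketch below uses commonly cited boundaries, which may differ slightly from the exact ranges adopted in Table 3 (band edges vary across sources).

```python
# Commonly cited EEG rhythm boundaries in Hz (upper bound exclusive);
# these are illustrative and not necessarily the exact Table 3 values.
BANDS = {
    "delta": (0.5, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 13.0),
    "beta":  (13.0, 30.0),
    "gamma": (30.0, 100.0),
}

def rhythm(freq_hz: float) -> str:
    for name, (lo, hi) in BANDS.items():
        if lo <= freq_hz < hi:
            return name
    return "out of range"

print(rhythm(10.0))  # alpha: the relaxed-but-awake rhythm discussed above
```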
Knowing the EEG intrinsic frequency characteristics, it is possible to exploit them to design specific experimental paradigms and make assumptions on EEG data analyses.

3.2. Motor Imagery

Motor imagery (MI) is the imagined rehearsal of an actual movement [24,25]. During motor imagination, a person seems to consciously access motor representation, i.e., the intention of making a movement and a preparation for it, which is usually an operation performed unconsciously [26,27].
Moreover, the imagination can be performed using a first (internal) or third (external) person perspective [28]. In the first case, the person should feel like they are performing the imagined movement; in the external perspective, the person should feel like watching themselves while performing the movement [28].
Furthermore, numerous research studies have found that motor imagery and representation present the same functional relationships and mechanisms [23,26,29,30], which results in the activation of the same brain areas when performing the actual movement and imagining it [27,31,32].
In fact, analyzing the EEG signals recorded from the primary sensorimotor cortex during the executed or imagined movement of specific body parts, variations in amplitude, mainly of the μ and β rhythms, can be observed [29,30]. These variations, which are not phase-locked to the event, are called Event-Related Desynchronization and Synchronization (ERD/ERS), corresponding to a decrease or increase in the rhythm's amplitude, respectively [33].
Before performing a movement, the μ and β rhythms are subject to an ERD [34,35]. Instead, the deactivation of the motor cortex due to movement stopping elicits an ERS on the β frequency band [36].
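A classical way to quantify ERD/ERS (following the band-power method of Pfurtscheller and colleagues) is the percentage change of band power in an activity interval relative to a pre-event reference interval, with negative values indicating ERD and positive values ERS. The sketch below uses toy power values, not real EEG data.

```python
def erd_ers_percent(reference_power, activity_power):
    """Percentage band-power change: negative -> ERD, positive -> ERS."""
    r = sum(reference_power) / len(reference_power)  # mean reference power
    a = sum(activity_power) / len(activity_power)    # mean activity power
    return 100.0 * (a - r) / r

# Toy mu-band power values: power drops during motor imagery -> ERD.
rest = [4.0, 5.0, 6.0]       # reference interval, mean power 5.0
imagery = [2.0, 3.0, 4.0]    # activity interval, mean power 3.0
print(erd_ers_percent(rest, imagery))  # -40.0, i.e., a 40% desynchronization
```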
These pieces of evidence justify the supporting role of MI training in motor execution improvement [27] and generally in the enhancement of neuroplasticity [24,37], i.e., the brain’s ability to change in response to new conditions [38]. Moreover, the MI-related phenomena can be easily exploited to provide an MI-based control of a BCI system [34].
However, MI ability is subjective due to individual differences, and thus needs to be assessed, or trained, before being exploited in experiments and applications [28,31]. Particularly in the field of MI-based BCIs, proper motor imagery task completion may require a long time, and Kaiser et al. [39] affirm that good BCI control is usually achieved when the subject is able to perform at least 70% of the required tasks accurately.

3.3. Brain–Computer Interfaces

In recent years, neurotechnologies have undergone great developments and brought important solutions for the collection and analysis of physiological data in several fields. In particular, in the medical sector they have enabled advances in the identification and treatment of neurological diseases [40].
Starting from these premises, Brain–Computer Interfaces (BCIs) were born and have progressed to provide online brain–machine communication systems [41] that allow the control of devices or applications by recording and analyzing brain waves.
The life cycle of current BCI systems is based on a standard architecture, defined by three main modules (Figure 5): the signal acquisition, processing, and application modules [42].
The signal acquisition module deals with the input of the BCI system and thus is responsible for recording the physiological signals, which are amplified and digitized. Afterward, these data become the inputs of the data processing module, which processes the signal in order to convert it into commands for an external device or application. Moreover, this module mainly performs (i) signal preprocessing, with the aim of increasing the signal-to-noise ratio [43] by removing artifacts; (ii) feature engineering to extract characterizing and significant information from the signal; and (iii) classification to translate the signals and their features into machine-readable commands.
Finally, the application module provides feedback to the BCI user.
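The three-module life cycle above can be sketched as a chain of functions; everything in this snippet is an illustrative placeholder (toy artifact threshold, toy feature, toy two-class rule), not an actual BCI implementation.

```python
def acquire(raw_samples):
    """Signal acquisition module: record, amplify, digitize (here: identity)."""
    return list(raw_samples)

def process(samples):
    """Processing module: preprocess, extract features, classify."""
    # (i) crude artifact rejection as a stand-in for preprocessing
    denoised = [s for s in samples if abs(s) < 100.0]
    # (ii) a single toy feature as a stand-in for feature engineering
    mean_amp = sum(denoised) / len(denoised)
    # (iii) a toy two-class rule as a stand-in for classification
    return "command_A" if mean_amp > 0 else "command_B"

def apply_feedback(command):
    """Application module: translate the command into user feedback."""
    return f"executing {command}"

samples = [1.5, -0.5, 2.0, 250.0]   # the last value mimics an artifact
print(apply_feedback(process(acquire(samples))))
```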
Considering all the EEG-based BCI applications, distinctions can be made among motor imagery (described in Section 3.2), external stimulation paradigm, Error-Related Potentials, inner speech recognition, and hybrid paradigms [44,45].
The external stimulation paradigm refers to the fact that brain waves can be modified by external auditory, visual, or somatosensory stimuli. Examples of this paradigm are the Event-Related Potential (ERP) and the steady-state visual evoked potential (SSVEP). ERPs are characterized by components that identify a certain shape of the signal wave. One of the most studied ERPs in the BCI field is the P300, a component corresponding to a positive signal deflection that peaks around 300 ms after an unexpected stimulus appears [46].
The SSVEP is instead a periodic brain response, at the same frequency as a light source flickering at a constant rate, that occurs when a subject looks at that source. It occurs for frequencies in the range of 5–20 Hz and is mainly detectable in the occipital and temporal lobes [47]. The SSVEP paradigm has a great advantage: since the relative stimuli are exogenous, it is a no-training paradigm and thus should not require long training times. However, SSVEP may lead to fatigue or trigger epileptic seizures [44], and thus enhancements to BCI systems based on this paradigm should be developed [48].
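The detection principle behind SSVEP-based BCIs can be illustrated with a toy example: the attended flicker frequency dominates the spectrum, so picking the candidate frequency with the largest DFT power identifies the stimulus. The signal below is a synthetic sine, not real occipital EEG.

```python
import math

FS = 250                 # sampling rate in Hz (a typical value, assumed here)
N = 500                  # 2 s of data
flicker_hz = 12.0        # attended stimulus frequency, within the 5-20 Hz range

# Synthetic stand-in for occipital EEG: a pure sine at the flicker frequency.
signal = [math.sin(2 * math.pi * flicker_hz * n / FS) for n in range(N)]

def dft_power(x, k):
    """Power of the k-th DFT bin, computed naively."""
    re = sum(x[n] * math.cos(2 * math.pi * k * n / len(x)) for n in range(len(x)))
    im = sum(x[n] * math.sin(2 * math.pi * k * n / len(x)) for n in range(len(x)))
    return re * re + im * im

# Candidate stimulus frequencies; each maps to DFT bin k = f * N / FS.
candidates = [8.0, 10.0, 12.0, 15.0]
detected = max(candidates, key=lambda f: dft_power(signal, int(f * N / FS)))
print(detected)  # 12.0
```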
When an error or a mismatch arises between the user performing a task and the response provided by the BCI system, an Error-Related Potential occurs (ErrP). ErrPs are potentials that represent the user’s perception of an occurrence of an error in the BCI system. ErrPs are reliably detected on a single-trial basis, and thus have immense potential in real-time BCI applications [49].
Moreover, new paradigms are progressively appearing in the literature concerning inner speech recognition, which is devoted to the identification of an internalized process where a person thinks in pure meanings, generally associated with auditory imagery of their own inner voice [45].
Even though systems based on these paradigms seem to have a great number of advantages, BCI systems usually present some limitations, and thus hybrid BCIs could be considered to overcome them [50]. Hybrid paradigms usually combine two or more of the described paradigms. In fact, as demonstrated by several studies [51,52,53], the combination of two or more paradigms can lead to a significant improvement in the performance of the BCI system. For example, in [51], the authors combine P300 and SSVEP to create a high-speed speller BCI system with more than 100 command codes.
As previously stated, EEG-based BCI systems are born for medical purposes, from neurorehabilitation to prevention, up to the identification of pathologies and diagnoses. For example, EEG signals are used for the identification and prevention of epileptic seizures, and systems have been developed allowing their high detection and prediction accuracy, as well as a better localization of epileptic foci [40]. BCI systems are also used in the neurorehabilitation field, for example, for the treatment of patients who suffer from motor disabilities after a stroke [54,55,56] or who are affected by Parkinson’s disease [57,58]. In particular, motor imagery-based BCIs have proven to be an effective tool for post-stroke rehabilitation therapy through the use of different MI-BCI strategies, such as functional electric stimulation, robotics assistance, and hybrid virtual reality-based models [55]. BCIs are also used to detect health issues such as tumors, seizure disorders, sleep disorders, dyslexia, and brain swelling such as encephalitis [59].
In addition to these medical and neurological rehabilitation uses, there are many other fields of application of BCIs [59,60,61,62].
For example, the marketing sector uses EEG data to evaluate advertisements in terms of consumer attention and memory [59].
Moreover, the study of brain signals in the field of education has provided more insight into how well information is assimilated and how the studying experience could be tailored to a single student. This could improve students' skills in real-life scenarios, instead of focusing only on information memorization. Students could also better develop competencies such as adaptive thinking, sense making, design mindset, transdisciplinary approaches, and computational skills [63].
Further development of BCI systems is also represented by their integration into the world of the Internet of Things. In fact, BCIs could be assistants able to analyze data such as mental fatigue, levels of stress, frustration, and attention [64], while being embedded in smart devices.
Therefore, the study areas that make use of BCI systems are diverse and increasingly numerous. Outside the medical sector, EEG-based BCI technologies have been receiving more attention due to the development of non-medical wearable EEG devices. In fact, these systems guarantee access to a wide range of users, given their high portability and low cost.
Generally, BCI technologies need to undergo the scrutiny of experts to provide appropriate user-centered systems and allow proper insight into their clinical usefulness and practicality [65]. These requirements can be detrimental in terms of efficient BCI system management costs, even though the initial technologies may be sufficiently low-cost [65].

3.4. Wearable Technologies

As introduced in the previous Section 3.3, considering the new demands from the general public and the need to move from laboratories and research centers to in-home and real-life environments, EEG-based BCI technologies are moving toward low-cost, portable solutions that are easy to use for non-experts.
Traditional laboratory EEG devices usually have helmet structures with holes in which the sensors (electrodes) are installed. The helmet is placed on the patient's scalp, and each electrode is connected to the recording tool by cables. This may translate into long test preparation times: as described by Fiedler et al. [66], a 256-channel device usually requires considerable time to connect each electrode to the recording system and to prepare the patient. This also makes it impossible for the patient to wear the device at any time of the day at home or anywhere else. Fiedler et al. [66] also propose a 256-channel dry-electrode cap that counteracts these issues.
Starting from this awareness and to overcome the problem of the discomfort of wired instruments, EEG wearable models were born. Wearable devices represent an evolution of the classic EEG tools. They are smaller devices that can be used potentially in any place and at any time. In fact, Casson et al. [67] defined wearable devices as follows:
the evolution of ambulatory EEG units from the bulky, limited lifetime devices available today to small devices present only on the head that can record EEG for days, weeks, or months at a time.
By removing the cumbersome recording units and connecting wires and replacing them with microchip electrodes with embedded amplifiers, quantizers, and wireless transmitters, it was possible to achieve the goal of creating devices that recorded and sent data wirelessly.
Although wired EEGs are more stable and can transmit more data in less time [68], wearable devices are more comfortable and, thanks to their portability, can be used anywhere. Moreover, wearable EEG devices avoid the artifacts caused by the movement of the cables and electrodes of wired EEGs.
Notice that the electrodes of a wearable device need an adequate electrical connection with the subject's scalp. Currently available devices are equipped with three types of electrodes: dry, wet (gel-based or saline solution-based), and semi-dry. Dry electrodes require neither gel nor saline solution to be connected to the scalp. This implies greater simplicity of positioning and a reduced setup time, at the expense of signal conduction quality. Wet sensors can be divided into two categories: gel-based and saline solution-based electrodes. Gel-based electrodes make use of an electroconductive gel applied on each electrode. Gels generally reduce movement and skin-surface artifacts by creating a more stable conductive connection between the electrode and the scalp compared to saline solutions [69]. However, subject preparation time could increase, and the subject may be uncomfortable due to gel residues on the hair and scalp. The cap and electrodes should also be carefully cleaned after usage.
On the other hand, some devices have electrodes imbued with a conductive saline solution in order to help make low-impedance electrical contact between the skin and the sensors [68].
It is also important to consider that wet-electrode devices have some limitations [69]: the skin must be prepared to reduce scalp impedance, and the procedure can be annoying or sometimes painful. Moreover, if too much gel is used, it could affect signal transmission by causing a short circuit between two adjacent electrodes.
By contrast, semi-dry electrodes require only a small amount of electrolyte fluid, combining the advantages of both wet and dry electrodes while addressing their respective drawbacks. Their setup is as fast and convenient as that of their dry counterparts [70].
Even though the evolution of wearable technologies seems to be rapidly escalating, numerous issues remain to be addressed, especially if the final goal is for these EEG wearable devices to be usable in any context. Firstly, a non-expert user may find it difficult to wear the electrodes correctly; as a result, the electrodes may be damaged or may not correctly record the signal. A good device should guide the novice user to a fast and effective positioning on the scalp, indicating whether the contact of the electrodes is stable. It is also essential to make devices smaller and less bulky, while making them more resistant to artifacts [71]. Another sore point concerns the battery life, which is still too short for the instrument to be comfortably used outside research centers. Finally, most wearable devices have been realized for general functions and applications and lack support for signal processing and feedback generation [72], resulting in low-quality signal processing.
In light of these limitations, new designs are emerging for new generations of wearable EEG devices. Thanks to several studies showing that the EEG signal can be recorded on the forehead [73] and behind the ears [74,75], tattoo-like devices are emerging. They offer better wearability and versatility [76]. In fact, they can be connected directly to the skin through adhesive materials without the use of gels or external supports. With the advantage of having features such as ultra-thin thickness, ultra-softness, and high adherence to the skin, the electrodes can comply with skin deformation, providing a stable signal quality [77]. The downside of using such devices is that they may not be able to cover all areas of interest, such as hair-covered areas, making them unsuitable for some uses of EEG monitoring.
To access more details on the evolution of hardware components, we refer the readers to [71,78].

4. Overview of Survey Articles on EEG-Based BCIs

In this section, an overview of survey articles present in the literature and concerning BCI systems is reported to provide a general assessment of the topics related but not superimposed to the target of the present paper.
In fact, most of the analyzed surveys address different application research areas and experimental paradigms. None of them focus exclusively on MI tasks, except the work by Palumbo et al. (2021) [79], which considers MI tasks in the sole field of wheelchair control, and the review by Al-Saegh et al. (2021) [4], which concerns the use of deep neural networks in the context of MI EEG-based BCIs.
In particular, Palumbo et al. [79] provide a systematic survey of EEG-based BCIs for wheelchair control through motor imagination, including 16 papers published since 2010. The authors focused on (i) the MI paradigms and the type of commands provided to move the wheelchair, (ii) the employed EEG system (presenting sensors for other biomedical signal recording and the number of positioned electrodes) and the wheelchair components, and (iii) the EEG signal management procedures. The authors especially aim to provide a clear assessment of the limitations of current biomedical devices when the end user is affected by any kind of motor disability. Moreover, they point out the main challenges arising when developing an efficient and reliable BCI to control a wheelchair. Firstly, multiple commands are required to allow correct control of the wheelchair, and thus a multi-objective problem characterizes the system; however, adding more commands may affect the performance of the BCI both in terms of accuracy and time consumption. Secondly, the BCI performance ultimately depends on the user, who may fail to perform the MI tasks. Finally, wheelchair control requires constant concentration on the task and thus increases the users’ mental workload.
Considering instead the work of Al-Saegh et al. [4], the use of deep neural networks in the context of EEG-based MI-BCIs is surveyed. The authors retrieved 40 papers published between 1 January 2015 and 31 March 2020. An analysis of the employed datasets was performed, and the authors found information on 15 datasets, of which 7 are publicly available. They note that the datasets vary significantly in terms of electrodes, subjects, number of MI tasks, and trials; however, most of them rely on experimental paradigms concerning the MI of the right/left hand, feet, and tongue. Moreover, the authors provide an assessment of the most used frequency ranges, extracted features, deep network architectures, and input formulations, highlighting the variety of deep learning (DL) model designs.
In what follows, the review articles are reported according to their topics (experimental paradigms and applications, technological aspects, signal processing, and analyses) and in chronological order of publication.
Considering experimental paradigms and applications, a comprehensive survey of different BCI experimental paradigms can be found in [44], published in 2019.
A general survey on EEG technologies and their application is instead presented by Soufineyestani et al. [68] (2020).
Moving to the technological aspects, a brief review of wearable technologies for smart environments is presented in [80], where the authors dedicate two sections to devices and applications for BCI systems. Considering these last topics, the technologies available at the time of the review (2016) are precisely listed, and advancements in EEG devices are easily detectable, especially considering the use of dry sensors and the presence of products for the general public. This information provides a good starting point to compare wearable technologies.
Instead, a detailed overview of the hardware components of wearable EEG devices was provided by [71] in 2019.
The survey by TajDini et al. [81] (2020) focuses on wireless sensors and, in particular, on the assessment of consumer-grade EEG devices for non-medical research. The authors compare 18 products in terms of sensor type (dry, wet, semi-dry), number of channels, sampling rate, accessibility to raw data, operation time, and price. The analysis of the literature is explored based on the different application domains: cognition (emotion recognition and classification, attention, mental workload, memory), BCI (ERP, SSVEP, MI, and other), educational research, and gaming.
The review by Portillo-Lara et al. [82] (2021) starts with an overview of the neurophysiological mechanisms that underlie the generation of EEG signals and then focuses on the state-of-the-art technologies and applications of EEG-based BCIs. Different electrode interfaces and EEG platforms are analyzed and compared in terms of electrode type and density, functionality, portability, and device performance. The advantages and disadvantages of different electrode designs are enumerated. The technical specifications of 18 commercially available EEG platforms are also compared in terms of electrodes, channel count, sampling rate, weight, battery life, resolution, and price. Both medical and non-medical uses are reviewed in the article.
Instead, the review by Jamil et al. [83] (2021) aims to identify the main application areas that use EEG-based BCI devices and the most common types of EEG-based equipment, considering both wired and wireless devices. They present a systematic review using four search engines (PubMed, IEEE, Scopus, and ScienceDirect). The search strings used were (BCI OR Brain–computer interface OR BMI OR brain–machine interface) AND (EEG OR electroencephalogram) AND (rehab* OR assist* OR adapt*). The inclusion criteria were limited to publication years 2016–2020. After the screening, 238 articles were selected and classified according to the following four research areas: education, engineering, entertainment, and medicine. The medical area was found to be the most represented (80%). Wired devices were used in 121 of the 238 articles, while the remaining 117 employed wireless technologies.
Concerning signal processing and analyses, the feature extraction techniques widely used in the literature were reviewed in 2019 by Aggarwal et al. [43]. Moreover, a survey on EEG data collection and management (processing, feature extraction, and classification) is provided by Reaves et al. [40] (2021), considering 48 papers from high-impact journals. The authors also include lists of devices and publicly available datasets.
A comprehensive review was presented by Gu et al. [84] in 2021. The authors provide a broad overview of BCI systems and their application areas. Moreover, they make a general presentation of invasive, partially, and non-invasive brain imaging techniques and subsequently focus on EEG-based BCIs. Their review is organized to provide a consistent survey on (i) advances in sensors/sensing technologies, (ii) signal enhancement and real-time processing, (iii) machine learning (especially transfer learning and fuzzy models) and deep learning algorithms for BCI applications, and (iv) evolution of healthcare systems and applications in BCIs (e.g., concerning epilepsy, Parkinson’s/Alzheimer’s disease, and neurorehabilitation). Their analysis was performed on about 200 papers, considering publication years between 2015 and 2019.

5. EEG-Based MI-BCIs through Wearable Systems

In this section, we discuss and analyze the main issues and characteristics of the works considered for the present review. A detailed table that summarizes the collected information is made available as supplementary material (Table S1).

5.1. BCI Application and Feedback

As a first assessment of the reviewed papers, an overview of the applications and feedback types of these EEG-based MI-BCI systems is provided in this section.
The main aim is to better understand the wide application of BCIs and the shift of experimental paradigms due to the need for consumer-grade applications in real-life environments.
By analyzing the reviewed papers (Table S1), it is clear that the field of application and the feedback are extremely interconnected.
Moreover, motor imagery seems to be particularly used to provide control for external devices, especially for rehabilitation purposes. Therefore, the feedback usually consists of movements of robots, wheelchairs, prostheses, drones, and exoskeletons.
There are cases in which the feedback is visually provided on a monitor or exploiting virtual reality systems [85].
Moreover, feedback can help in modulating the MI ability of users. For instance, Jiang et al. [86] deal with the creation of a BCI system that uses discrete and continuous feedback in order to improve practicability and training efficiency. The results show that continuous feedback successfully improves imagery ability and decreases the control time.
Starting an in-depth analysis of the application fields, 30 of 84 research studies have medical purposes [87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115].
Considering the stroke rehabilitation field, Mattia et al. [87] built a BCI system to enhance post-stroke hand functional motor recovery by using ecological feedback, i.e., the visual representation of the patient’s hands, as a feedback tool for the user. A BCI system for the recovery of people suffering from post-stroke hand disability has also been proposed by [116]. Firstly, an offline analysis was performed. Afterward, the online system was initially tested on healthy subjects and then on 10 stroke patients affected by a hand disability. The subjects received virtual feedback by controlling a virtual exoskeleton.
Another post-stroke rehabilitation BCI system is presented by Barria et al. [88], who propose a paradigm consisting of motor imagination with visual stimulation and motor imagination with visual-haptic inducement to control an ankle exoskeleton.
Moving to other motor disorders, [89] assumes that people with motor disorders need the help of a caregiver to start a BCI; therefore, they aim to identify movement intention to initiate BCI systems. Instead, [90] examined neurofeedback manipulation to enable self-regulation of brain activity as a potential treatment for post-traumatic stress disorder, considering both offline and online analyses. The feedback was shown through a video game, and the participants were asked to manipulate their brain activity to control the game.
Finally, Looned et al. [91] demonstrated the feasibility of a system that assists individuals with neurological disorders to, for example, drink a glass of water independently. The application of this system is implemented through the movement of a robotic arm that assists the movement of the human arm.
Li et al. [92] created a hybrid BCI system to control a lower extremity exoskeleton. By combining EEG data with electromyographic data, they developed an exoskeleton able to help subjects while climbing stairs.
BCI applications are also developed in the entertainment, game, and device (vehicle, robots) control areas [117,118,119,120,121,122,123,124,125,126,127,128].
In particular, [117] focused on the BCI performance in a competitive multi-user condition, in which users had to control a humanoid robot in a race against an AI. The authors believed that the game conditions could help the users maintain high motivation and thus increase the effectiveness of the BCI system. However, they found no significant difference between competitive multi-user and single-user conditions.
Device control is also proposed in [118]. A robotic quadcopter was intended to be controlled in three-dimensional physical space by a BCI user. After a training phase with a virtual drone, subjects modulated their sensorimotor rhythms and controlled a physical drone. Visual feedback was provided via a forward-facing camera on the hull of the drone.
Instead, Alanis et al. [119] created an immersive BCI system. In fact, the users could control the movement of a humanoid robot in a first-person perspective, as if the movement of the robot was their own.
Another application is the one presented by Xu et al. [120], who proposed a motor imagery EEG-based continuous teleoperation robot control system with tactile feedback. The imagination of the user’s hand movements was translated into a continuous two-dimensional control signal and transmitted to the remote robotic arm (using the TCP/IP protocol), allowing it to move remotely through a wireless connection. The user received feedback through a vibrotactile stimulus based on the tactile information of the robotic arm. The authors demonstrated that vibrotactile stimulation can improve operator telepresence and task performance.
For completeness, Figure 6 reports the reviewed paper distribution according to different fields of application and aims.
With respect to the previously discussed solutions, the reported papers mainly focus on rehabilitation systems (17.9%), assistive BCIs (17.9%), and entertainment and device control solutions (14.3%).
The remaining works address other application areas and aims.

5.2. Employed Technologies

This section is devoted to the revision of the technologies employed by the reviewed papers. Particular attention is given to EEG-related devices, which are one of the main focuses of this review and of which some information is reported in Table 4.
Notice that the first (Device) column of Table 4 provides the product names and presents an asterisk (*) when the devices are amplifiers (BrainMaster Discovery 24E, g.USBamp, g.Nautilus Multi-Purpose, NuAmps, Synamps 2/RT) or data acquisition (Cyton Biosensing Board) tools. The link to each product is also reported as a citation. The producers are presented in the second column, where a good number of devices from Emotiv can be observed. Three g.Nautilus entries are reported due to the absence of a clear indication of the devices used by the related works [89,90,105,125,130,145,146,151,153,155].
The Electrodes field first presents the number of sensors or recorded channels and whether they are wet (W), dry (D), or semi-dry (S) electrodes. Notice that for the Starstim device, the electrode type is marked as tES-EEG, since its sensors allow both transcranial electrical stimulation and EEG monitoring. Observe that 9 of 21 devices can be used with both wet and dry electrodes, and six products do not provide clear information on the sensor types.
Additional information on device pricing (updated to 3 October 2022) is reported in column 4. It is immediately evident that the price of most products must be requested from the producers. Finally, the Papers field provides the list of reviewed works employing a specific device. A total of 13 of 84 papers [86,123,128,134,138,143,154,158,159,160,161,162,163] do not appear in this column, since the authors designed and used custom EEG headsets. In addition, Khan et al. [135] and Vourvopoulos et al. [162] compare their custom devices with Emotiv EPOC+ and Enobio 8, respectively.
After having given a brief overview of the synthetic information regarding EEG-related devices, it is possible to move to a deeper analysis of how these technologies have been effectively used by the authors according to the table present in the Supplementary Materials (Table S1).
A total of 14 of 84 works reduce the number of electrodes provided by the device producers [88,92,104,106,110,126,142,162], mainly selecting channels placed over the central cortical area. In particular, some works consider only the C{3,4} [139,147], Cz [92,143,148], and C{1,2} [102] electrodes.
Moreover, some of the authors clearly specify the sampling rate used for signal acquisition, which ranges from 125 Hz [113], through 128 Hz [91,110,122,133,135,139], 250 Hz [98,99,112,145,149], 256 Hz [135], and 500 Hz [88,92,93,97,103,117,137,142,147,148,152,171,172], to 512 Hz [124].
Finally, considering the other technologies presented in the supplementary Table S1, the massive use of Bluetooth technology for wireless communication between wearable devices and acquisition/control tools is immediately observable. Moreover, other sensors have been used together with EEG devices, such as electromyographs [87,92,113,175,177], functional electrical stimulation devices [102], near-infrared spectroscopy tools [104], and magnetic resonance imaging devices [150].
Notice that feedback was also provided by considering different tools, such as exoskeletons [88,92,100,103], wheelchairs and vehicles [98,99,106,108,110,122], rehabilitation and assistive robotic tools [91,97,105,107], eye tracking devices [171], movement tracking devices [171], virtual reality devices [93,101,119], robots [114,117,125,126], and simulators [127].

5.3. Signal Processing and Analysis

This section is devoted to the second step of the BCI life-cycle and, in particular, to signal preprocessing, feature engineering and channel selection, and data classification and analyses.

5.3.1. EEG Data Preprocessing

EEG signals contain artifacts from internal sources due to physiological activities, e.g., ocular and muscle movements, cardiac activity, respiration, and perspiration. Moreover, artifacts can be generated by external sources related to environmental and experimental noise, e.g., power line and mobile phone interference, electrode movements, and electromagnetic components [7]. Each of these types of noise has its own frequency band and can affect brain rhythms differently.
Due to the non-stationarity and non-linearity of EEG signals, it is difficult to remove these artifacts without the loss of neural information [192].
In particular, EOG artifacts (related to eye movements) and EMG artifacts (related to muscle movements) are considered among the most common sources of physiological artifacts in BCI systems [193].
In the case of wearable devices, the possibility of introducing other artifacts (e.g., data transmission fault, electrical interference), with respect to traditional wired devices, increases [194]. Therefore, the preprocessing module of a typical BCI life-cycle becomes fundamental to provide better data as inputs to the subsequent modules.
From the analysis of the literature summarized in Table S1, we observe that 60 out of 84 articles indicate some details on the preprocessing step of the BCI life cycle. From this subset, we list the main techniques adopted, enumerating only those papers that explicitly reported them:
  • Nine works assumed that source signals are statistically independent of each other and instantaneously mixed, and apply Independent Component Analysis (ICA) to remove noise, mainly due to eye movements and eye blinks [93,95,103,141,147,150,152,158,171]. EEGLAB toolbox [195] is frequently employed by the authors to implement ICA;
  • A total of 21 papers explicitly indicate that notch filtering is applied to eliminate the power line interference at 50/60 Hz [89,92,97,98,107,111,113,119,131,132,136,140,142,143,144,151,153,155,157,158,174];
  • A total of 11 works applied temporal filtering approaches such as Butterworth filters of different orders and cutoffs: third order in 0.5–30 Hz [148] or in 4–33 Hz [123], fourth order in 16–24 Hz [88], fifth order in 8–30 Hz [96,114,132,142,143] or in 1–400 Hz [113], biquad tweaked Butterworth in 8–13 Hz [138], and sixth order in 8–30 Hz [153];
  • A total of 13 papers applied spatial filtering approaches such as the Common Average Reference (CAR) filter [99,120,132,138,139,148] and Laplacian ones [88,97,101,103,140,172,177].
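For illustration, the filtering steps listed above can be sketched with SciPy as follows. This is a minimal sketch on synthetic data: the 250 Hz sampling rate, 50 Hz notch, fifth-order 8–30 Hz Butterworth band-pass, and CAR re-referencing mirror common choices among the reviewed works, but the specific parameter values and array shapes are assumptions, not taken from any single study.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

fs = 250.0  # sampling rate in Hz (a common choice in the reviewed works)
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, int(10 * fs)))  # 8 channels, 10 s of synthetic data

# 1) Notch filter to suppress 50 Hz power-line interference
b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=fs)
eeg = filtfilt(b_notch, a_notch, eeg, axis=1)

# 2) Fifth-order Butterworth band-pass in 8-30 Hz (mu and beta bands)
b_bp, a_bp = butter(5, [8.0, 30.0], btype="bandpass", fs=fs)
eeg = filtfilt(b_bp, a_bp, eeg, axis=1)

# 3) Common Average Reference: subtract the across-channel mean at each sample
eeg_car = eeg - eeg.mean(axis=0, keepdims=True)
```

Zero-phase filtering (`filtfilt`) is used here to avoid phase distortion in offline processing; online BCIs would instead use causal filters.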

5.3.2. Feature Engineering

The next step after the signal preprocessing is the feature engineering stage, where the relevant information about the EEG signals is extracted and analyzed.
As detailed in Section 3.2, during motor movement or imagination, the μ and β rhythms undergo an Event-Related Desynchronization (ERD). Conversely, the deactivation of the motor cortex corresponds to an Event-Related Synchronization (ERS) of the β rhythm.
Therefore, the power changes due to ERD/ERS encode relevant information on MI. In general, ERD (ERS) is observed in the contralateral (ipsilateral) sensorimotor area. Taking these phenomena into account, Common Spatial Pattern (CSP) and band power features (which represent the power of EEG signals in a given frequency band averaged over a time window) are natural choices for feature extraction and are applied by most of the works within the field of EEG analysis. The lateralization index between hemispheres is also used to describe the asymmetry of neural activation intensity [93].
Following the analysis of Table S1 provided as supplementary material, and taking into account the articles that explicitly indicate the feature extraction steps performed, we observe that:
  • In the frequency domain, the authors of 25 papers compute the Power Spectral Density (PSD) of the signal, usually through Fast Fourier Transform (FFT) or Welch’s method [86,88,89,92,93,94,97,104,109,111,114,123,124,136,138,139,140,144,145,154,160,161,162,171,177];
  • In the time–frequency domain, wavelet transform-based methods are employed in 8 studies [89,122,129,141,147,154,158,163];
  • In the spatial domain, CSP-based approaches are applied in 17 articles [90,92,93,96,99,100,110,111,134,135,137,146,150,151,153,168,175]. Variations of the CSP are found in [132], which considers local mean decomposition CSP, and [146], which exploits filter bank CSP, among others;
  • In the temporal domain, statistical features such as the standard deviation, skewness, kurtosis, entropy, and energy are considered [89,141,183].
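As an example of the frequency-domain features discussed above, the following sketch computes per-channel μ- and β-band power via Welch's method in SciPy. The trial data, channel count, band limits, and the `band_power` helper are illustrative assumptions, not the implementation of any reviewed work.

```python
import numpy as np
from scipy.signal import welch

fs = 250.0
rng = np.random.default_rng(1)
trial = rng.standard_normal((3, int(4 * fs)))  # 3 channels (e.g., C3, Cz, C4), 4 s trial

def band_power(data, fs, band):
    """Average PSD power in a frequency band, per channel (Welch's method)."""
    freqs, psd = welch(data, fs=fs, nperseg=int(fs))  # 1 Hz frequency resolution
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean(axis=1)

mu_power = band_power(trial, fs, (8.0, 13.0))      # mu band
beta_power = band_power(trial, fs, (13.0, 30.0))   # beta band
features = np.concatenate([mu_power, beta_power])  # 6-dimensional feature vector
```

In an actual MI-BCI, such band powers would be computed on each trial (often relative to a baseline window to quantify ERD/ERS) and concatenated with features from other domains, as noted below.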
Notice that most of the authors combine features from multiple domains in order to obtain a final, more robust feature vector able to improve the classification accuracy.
Nearly all the reviewed articles work with different combinations of hand-crafted features, but recently deep features extracted by convolutional neural networks (CNNs) have also been considered [110,115,126,131,143].
With respect to feature selection, different methods are applied, such as ICA [146], joint mutual information [90,133], generalized sparse discriminant analysis (that is employed to perform feature reduction and classification contemporaneously) [113,168], and sequential backward floating selection techniques [135].
To reduce the data dimensionality, the traditional channel set C{3,4,z}, located over the central cortical area, is often considered [92,147,148,158,160,175]. However, in several cases, the choice of the electrodes mainly depends on the used device [86,109,112,141,144]. Electrodes positioned over the fronto-central and central-parietal areas are also frequently taken into account. Electrode subsets are also added to the C{3,4,z} group: C{1,2} [102,142], FC{3,4} [154], and Fpz and Pz [162]. Daeglau et al. [117] selected the electrode set constituted by Cz and CP{1,z,2}.
Moreover, different groups of eight electrodes are selected by some of the reported studies: FC{1,2} + C{3,z,4} + CP{1,2} + Pz [98], F{3,4} + C{3,z,4} + T{7,8} + Pz [97], C{1,3,z} + CP{1,5} + FC{1,5} + P3 [171], FC{5,6} + C{1,2,3,4} + CP{5,6} [93], and Fp{1,2} + Fz + C{z,3,4} + O{1,2} [157]. Instead, groups of 9 electrodes have been considered by [103,140] (C{z,1,2,3,4} + CP{1,2} + FC{1,2}), while 10 electrodes have been selected by [101] (C{1,2,3,4,5,6} + FC{3,4} + CP{3,4}).

5.3.3. Classification and Data Analysis

As a final processing step, classification and data analysis are performed to provide a specific assessment of the EEG data or to allow the correct feedback execution.
Concerning the reviewed papers, notice that the main strategies employed can be divided into (i) traditional Machine Learning (ML) models, (ii) DL architectures, (iii) other supervised learning techniques considering ensemble and transfer learning approaches with the possible additional application of evolutionary algorithms, and (iv) statistical analysis, quality assessment, and functional connectivity.
The strategies are distributed as depicted in Figure 7. Notice that details on the proposed ML and DL models have been reported, while there are no details for the other two categories, due to the great variety of approaches.
Besides some papers that employ different strategies at the same time [89,101,115,119,121,122,134] or multiple ML techniques [89,111,133,135,144,146,152,157,183], the authors usually prefer to concentrate on a specific technique.
The traditional ML models seem to be the most used, and among them, the Linear Discriminant Analysis (LDA) classifier appears to be the most frequent. It is employed in its basic version [89,90,93,101,104,111,117,119,134,150,151,152,153,158], in combination with Common Spatial Pattern (CSP) [96], as multiple LDA models combined through fuzzy integration, through generalized sparse LDA performing optimal feature selection and classification [168], or with confidence levels assigned via particle swarm optimization [128]. Likewise, the Support Vector Machine (SVM) classifier is widely employed [86,89,99,111,121,122,123,129,135,140,146,152,154,155,157,183].
The K-Nearest Neighbor (KNN) model is used by a good number of works [89,102,111,122,133,135,146,183], while the other techniques, i.e., Naive Bayes (NB) [89,111], Parzen Window [135], Multi-Layer Perceptron (MLP) [109,145,183], decision tree [89,133,183], Random Forest (RF) [157,183], neural network (NN) [111,161], Logistic Regression (LR) [100,144], and Quadratic Discriminant Analysis (QDA) [144], have been employed by a restricted number of works, usually together with other techniques.
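To make the recurring CSP + LDA pipeline concrete, the following is a minimal, self-contained sketch on synthetic two-class MI-like data. The variance manipulation, filter count, and helper functions are illustrative assumptions; the reviewed studies typically rely on validated implementations (e.g., MNE-Python) and proper cross-validation rather than this hand-rolled variant.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n_trials, n_ch, n_samp = 40, 8, 500

# Synthetic two-class data: class 1 has inflated variance on channel 0 and
# class 0 on channel 7, loosely mimicking contralateral ERD/ERS patterns.
X = rng.standard_normal((n_trials, n_ch, n_samp))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, 0] *= 3.0
X[y == 0, 7] *= 3.0

def csp_filters(X, y, n_comp=4):
    """Spatial filters maximizing the variance ratio between the two classes."""
    covs = [np.mean([x @ x.T / np.trace(x @ x.T) for x in X[y == c]], axis=0)
            for c in (0, 1)]
    # Generalized eigenvalue problem: covs[0] v = w (covs[0] + covs[1]) v;
    # eigenvalues are returned in ascending order.
    evals, evecs = eigh(covs[0], covs[0] + covs[1])
    picks = np.r_[np.arange(n_comp // 2),               # most discriminative for one class
                  np.arange(n_ch - n_comp // 2, n_ch)]  # ... and for the other
    return evecs[:, picks].T

def log_var_features(X, W):
    """Log of the normalized variance of each CSP component, per trial."""
    Z = np.einsum("cd,ndt->nct", W, X)  # project trials onto the CSP filters
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

W = csp_filters(X, y)
F = log_var_features(X, W)
clf = LinearDiscriminantAnalysis().fit(F, y)
train_acc = clf.score(F, y)  # training accuracy on the synthetic data
```

The log-variance of CSP-filtered signals is the standard feature pairing for LDA in MI classification, since CSP components are designed so that their variance differs maximally between the two imagined movements.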
Considering the DL approaches, the most used architectures are based on convolutional neural networks (CNNs) [112,121,123,126,131,142,147,163], sometimes combined with Long Short-Term Memory (LSTM) layers [112,126,131]. Ref. [110] provided analyses by employing 1DCNN, 2DCNN, and a one-dimensional multi-scale convolutional neural network (1DMSCNN). Instead, [143] specified the use of pre-trained CNNs, i.e., AlexNet, ResNet50, and InceptionV3. Finally, two works used a back-propagation NN [92] and an autoregressive model [120].
The Other AI (artificial intelligence) techniques depicted in Figure 7 refer to miscellaneous approaches that could use ensemble techniques [121], transfer learning [143] models, or other supervised learning approaches such as adaptive Riemannian classifier [130], eXtreme Gradient Boosting (XGBoost) [115], echo state network with genetic algorithm for parameter optimization and Gaussian readouts to create direction preferences [136], fuzzy integral with particle swarm optimization [134], multi-objective grey wolf optimization twin support vector machine [132], and Markov switching model [137].
Finally, a great number of statistical analysis, quality assessment, and functional connectivity studies, or other MI detection techniques, falls under the Other analysis label.
The applied strategies may be summarized as follows:
  • A t-test analysis has been applied to provide alpha wave testing for comparison between systems [163], to compare average ERDs derived from different devices [162], and to compare different types of experiment [97] and experimental conditions [171];
  • Questionnaire analyses have also been performed for quality assessment, by considering the opinions given by the subjects concerning a specific device [97], or employing the Quebec User Evaluation of Satisfaction with Assistive Technology test to evaluate patients’ satisfaction [88] and for subject MI ability assessment [113];
  • Correlation analyses have been used to compare different electrode types [159] and to quantify subjects’ intent [115];
  • MI detection through ERD only [138] or multivariate distribution [160];
  • Analysis of the change in functional connectivity [103,119].
Other analyses performed by the reviewed works are the BCI-use success rate assessment based on the beta power rebound threshold [88], learning vector quantization to predict character control [124], a two-command certainty evaluation algorithm proposed by [122], the use of the SPSS tool for statistical analysis [89], and the application of a transfer rate metric to assess an asynchronous real-world BCI [118].
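For illustration, a paired t-test of the kind used to compare average ERDs across devices [162] could be sketched with SciPy as follows. The per-subject ERD values here are synthetic placeholders; only the procedure is representative.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
# Hypothetical per-subject average ERD values (% power decrease) recorded
# with two different devices on the same 12 subjects.
erd_device_a = rng.normal(loc=-25.0, scale=5.0, size=12)
erd_device_b = erd_device_a + rng.normal(loc=2.0, scale=3.0, size=12)

# Paired (related-samples) t-test: same subjects measured under both conditions
t_stat, p_value = ttest_rel(erd_device_a, erd_device_b)
significant = p_value < 0.05  # reject equality of means at the 5% level if True
```

A paired test is appropriate here because each subject contributes one measurement per device, so between-subject variability cancels out of the comparison.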
Table 5 summarizes the information related to the classification and other analyses presented by works reporting comparisons with benchmark datasets (Section 5.4) and/or having their proprietary datasets available upon request or present in public repositories.
For each table entry, the (i) reference work, (ii) the benchmark datasets (which are detailed in Section 5.4) and their own dataset with a minimal description of the considered experimental paradigm, (iii) the classification tasks and models (if any) and other analyses (if any), (iv) the performance of the best reported model comprising the evaluation measures, the validation strategies, and the results on each of the employed datasets, and (v) if the system was tested online and/or offline, are reported.
Notice that the analyses are performed by considering the subjects one at a time, if not otherwise stated. Moreover, an asterisk (*) has been applied to all the datasets that will be detailed in the following Section 5.4, at their first appearance.
The fields of application of the reported works are very diverse; however, the classification tasks usually involve the binary classification related to the MI of hands grasping, opening/closing, or moving.
Moreover, most of the works report accuracy (usually higher than 70% for both binary and multi-class tasks) as the only performance measure to evaluate the proposed classification models, and the validation strategy is not always specified.
Finally, most of the entries present offline analyses, and just a few try to work in an online modality.

5.4. Dataset and Experimental Paradigms

Of the 84 cited papers, 7 report that their proprietary datasets are available upon request [93,111,119,123,133,146,174], and only [113], which provides the MI-OpenBCI dataset, offers a publicly available resource.
Some articles present links to their data, which, however, do not seem to be available [126,134].
Instead, the publicly available third parties’ BCI Competition III dataset IIIa [133], BCI Competition III dataset IVa [168], BCI Competition IV dataset 2a [130,133,143], 2b [110], EEG Motor Movement/Imagery Dataset [115], and the EEG BCI dataset [126] have been employed by the reviewed papers as benchmarks before proprietary data testing or to directly test the proposed approaches.
Table 6 presents a summary of the publicly available datasets, which will be described in detail in the following subsections. Notice that the reported citations in the Dataset field refer only to the publications presenting the dataset description or required by the dataset authors. Links to the repositories, brief information on the used technologies, and the experimental paradigms are summarized in the remaining fields.
Considering the other papers, notice that the experimental paradigms usually concern the motor imagination of left/right hand/fist [86,90,93,94,95,96,99,101,110,112,115,119,120,123,128,129,130,131,134,135,137,138,141,142,143,146,147,148,149,150,151,153,154,158,161,163,172,183], both hands [99,119,120,130,142,172], dominant or single hand movements [87,102,113,160,168], finger tapping [175], shoulder flexion, extension, and abduction [132,133], the motion of upper/lower limbs [91,97,103,104,145,177], foot/feet movement [88,100,114,119,123,129,130,142,148,172], pedaling [98,140,152], tongue movement [129], game character/robot/machinery movement control [105,106,107,108,109,117,118,119,121,122,124,125,126,127,156,174], and generic motor intention [89,155].
Peculiar experimental conditions are presented by [144,157]. Tiwari et al. [144] propose the imagination of eight cognitive tasks, i.e., forward, backward, left, right, hungry, food, water, and sleep, with the perspective of developing an efficient assistive tool for disabled people. Angrisani et al. [157] design a complex experimental paradigm of performed and imagined soft ball squeeze, dorsiflexion of the ankle, flex-extension of the forearm, finger mobilization by clenching a clothespin, and flex-extension of the leg, to validate their BCI instrumentation.
Besides the experimental paradigms considered by the reviewed works, it is interesting to have an overview of the subjects involved in the experiments.
Excluding [115,134], which employ third-party datasets, framework proposals [87,106], and simulated environments [105], the analysis of the subjects involved in the remaining 79 works can be summarized as follows:
  • A total of 7/79 papers do not provide any information regarding the involved subjects;
  • A total of 36/79 papers specify the biological gender of the subjects and, in most cases, report the number of male and female participants;
  • A total of 50/79 papers recruited healthy subjects, while only 5 considered patients affected by specific pathologies;
  • A total of 21/79 papers present information regarding the previous experience of the subjects with EEG, BCI, or MI-based experiments;
  • A total of 35/79 papers report no information on the participants’ age, while the other works consider subjects aged around 20–30 years. Only [88,93,112] recruit adults over 30 and up to 60 years of age;
  • Almost 50% of the works reporting information on the subjects perform their experiment on a maximum of 5 participants; 27% recruit a maximum of 10 subjects, and very few works consider more than 20 participants. A detailed infographic is depicted in Figure 8.
Finally, notice that of the 84 papers, 39 present an ethical statement regarding the approval of the proposed experiment, and 33 confirm that written or oral informed consent was given by the subjects.

5.4.1. BCI Competition III Dataset IIIa

The BCI Competition III dataset IIIa [196] collects data recorded from three subjects performing a cue-based experiment of MI tasks, i.e., left/right hand, foot, or tongue movement, randomly presented in six runs of 40 trials each.
The EEG signals were acquired from 60 electrodes (the montage is depicted in the official dataset description, accessed on 31 January 2023) through a wired 64-channel Neuroscan device (250 Hz sampling rate). The reference and ground electrodes were placed on the left and right mastoids, respectively.
The output signal was bandpass filtered (1–50 Hz), and a notch filter was enabled.
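As an illustration only (not part of the dataset tooling), such a preprocessing chain can be sketched with SciPy, assuming a 250 Hz sampling rate and a 50 Hz mains notch:

```python
import numpy as np
from scipy.signal import butter, iirnotch, sosfiltfilt, filtfilt

FS = 250  # sampling rate (Hz), as in the dataset description

def preprocess(eeg, fs=FS):
    """Bandpass (1-50 Hz) plus 50 Hz notch, applied per channel.
    eeg: array of shape (n_channels, n_samples)."""
    # Zero-phase 4th-order Butterworth bandpass in second-order sections
    sos = butter(4, [1, 50], btype="bandpass", fs=fs, output="sos")
    out = sosfiltfilt(sos, eeg, axis=-1)
    # Narrow IIR notch centered on the mains frequency
    bn, an = iirnotch(w0=50, Q=30, fs=fs)
    return filtfilt(bn, an, out, axis=-1)

# Toy check: a 10 Hz mu-band-like tone contaminated by a 50 Hz component
t = np.arange(0, 2, 1 / FS)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
clean = preprocess(x[np.newaxis, :])
```

The filter orders and notch quality factor here are illustrative choices, not values reported by the dataset authors.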

5.4.2. BCI Competition III Dataset IVa

The BCI Competition III dataset IVa [196] presents the recordings of five subjects, who were asked to perform left/right hand and right foot MI according to two types of visual stimulation. Each subject had to respond to 280 cues.
BrainAmp amplifiers and a 128-channel Ag/AgCl electrode cap from ECI were employed for EEG signal collection. Notice that of the 128 channels, 118 were measured at positions compliant with the extended international 10-20 system (more details in the official dataset description, accessed on 31 January 2023).
The acquired signals were bandpass (0.05–200 Hz) filtered and digitized at 1000 Hz with 16-bit (0.1 μV) accuracy. A version of the data with signals downsampled to 100 Hz was also provided.
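For illustration only, a 1000 Hz to 100 Hz reduction of this kind can be sketched with SciPy's `decimate`, which applies an anti-aliasing low-pass filter before subsampling (the actual procedure used by the dataset authors is not specified):

```python
import numpy as np
from scipy.signal import decimate

fs_orig = 1000  # original digitization rate (Hz)
t = np.arange(0, 1, 1 / fs_orig)
eeg = np.sin(2 * np.pi * 12 * t)  # hypothetical 12 Hz beta-band-like component

# 1000 Hz -> 100 Hz: low-pass filter, then keep every 10th sample;
# zero_phase avoids introducing a phase shift
eeg_100 = decimate(eeg, q=10, zero_phase=True)
```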

5.4.3. BCI Competition IV Dataset 2a

The widely known BCI Competition IV dataset 2a [197] contains continuous multi-class motor imagery data acquired from nine subjects. The participants were asked to perform a cue-based MI-BCI considering the imagination of the left/right hand, both feet, and tongue movements. The subjects had to participate in two experimental sessions (six runs of 48 trials each) on different days.
Notice that the signals were recorded from 22 Ag/AgCl wired electrodes (please consult the original publication for the montage details). The ground and reference electrodes were placed on the left and right mastoids, respectively, while two EOG channels were positioned to allow artifact removal. The signal was sampled at 250 Hz and was bandpass (0.5–100 Hz) and notch (50 Hz) filtered. Moreover, experts performed a manual screening of the signals and marked the trials containing artifacts.

5.4.4. BCI Competition IV Dataset 2b

The Session-to-Session Transfer of a Motor Imagery BCI under Presence of Eye Artifacts dataset, widely known as the BCI Competition IV dataset 2b [197], was intended to support the classification of EEG signals in the presence of ocular artifacts. Therefore, it collects EEG (on C{3,4,z} electrodes) and electrooculogram signals previously acquired by [200]. Nine right-handed healthy subjects performed an experiment during which they were guided to produce specific ocular artifacts; the experiment also presented a cue-based MI-BCI paradigm consisting of the motor imagination of left- and right-hand movements. Two sessions (each of six runs with 10 trials per run) without feedback were recorded separately for each subject. Afterward, three sessions with online feedback were performed, each consisting of four runs of 40 trials. The feedback was a smiley changing expression and color depending on the outcome of the MI task.

5.4.5. EEG Motor Movement/Imagery Dataset

The EEG Motor Movement/Imagery Dataset, available on the PhysioNet repository (accessed on 31 January 2023) [42,198], presents data acquired using a BCI2000 system and considering 64 EEG channels positioned according to the 10-10 International System (excluding electrodes Nz, F9, F10, FT9, FT10, A1, A2, TP9, TP10, P9, and P10). The recording was performed with a sampling rate of 160 Hz.
Each of the 109 subjects undertook a cue-based experiment of 14 runs consisting of two baseline recordings and three recordings per experimental task, i.e., imagined and executed opening/closing of the left/right fist, and imagined and executed opening/closing of both fists/feet.

5.4.6. MI-OpenBCI

The MI-OpenBCI dataset [113] presents the recordings acquired through a consumer-grade MI-BCI system based on OpenBCI Cyton and Daisy Module. The EEG signal was recorded by using the Electrocap System II. Moreover, an electromyographic signal was acquired through the OpenBCI Ganglion board connected to the Myoware sensors. OpenViBE and OpenBCI GUI were used for EEG and electromyographic data recording, respectively.
The experiment was approved by the Comité Asesor de Ética y Seguridad en el Trabajo Experimental and performed by 12 healthy right-handed subjects (four female; mean age ± SD = 25.9 ± 3.7 years) who did not have any previous experience with BCIs. The subjects gave their informed consent. Regarding the sole EEG wireless data recording (125 Hz sampling rate), the F{z,3,4,7,8}, C{z,3,4}, T{3,4,5,6}, and P{z,3,4} electrodes were employed. The reference and ground electrodes were placed at the left and right ear lobes.
The experimental protocol consisted of a cue-based motor imagination of the dominant hand grasping and a resting condition. The tasks were randomly presented 20 times (4 s each) per run, for four runs. A 20 s baseline was acquired before the protocol started.

5.4.7. EEG BCI Dataset

With the aim of providing a large and uniform dataset to design and evaluate processing strategies, [199] provides the EEG BCI dataset. The data were acquired after the approval of the Ethics Committees of Toros University and Mersin University in the city of Mersin (Turkey) and after the subjects had signed the informed consent form.
The data were acquired through a standard medical EEG station (EEG-1200 JE-921A EEG system, Nihon Kohden, Japan) considering 19 electrodes of the Electrocap System II, with varying sampling rates and an in-built filtering application.
The 13 healthy participants (five females and eight males, aged 20–35) were asked to perform different MI paradigms consisting of left/right hand, left/right leg, tongue, and finger movements.

6. Discussion

In this systematic review, 84 papers published in the last ten years have been deeply analyzed with the aim of answering the following main research question:
Are wearable technologies mature for EEG-based MI-BCI applications in uncontrolled environments?
However, several aspects should be considered to properly address this point, and thus, four sub-questions have been defined, as introduced in Section 1.
Important conclusions can be drawn to answer the first research sub-question,
RQ1: Is there a significant amount of EEG-based MI-BCI studies using wearable technologies in the literature that implies a promising future development of this research field, especially in uncontrolled environments and outside the medical and clinical settings?
by analyzing the results obtained through the extensive search initially performed considering different EEG, MI, and BCI related keyword combinations (Section 2.3) detailed in Table 2 and the final paper pool identified through the PRISMA flow (Figure 1).
In fact, according to the results reported in Table 2, the MI paradigm is widely used in the EEG domain. About 26% of the works retrieved by considering only the EEG-based BCI keywords present MI paradigms, while only 0.71% present the use of wearable technologies for MI experiments.
Considering the timeline of the final filtered publications (Figure 2), most of the reviewed works have been published between 2019 and 2020, denoting the relatively new interest in wearable devices and a recent increase in the availability of these technologies to the EEG community.
Notice that around 20 different devices (Section 5.2) have been adopted in the applications reported by the 84 papers analyzed here, with different spatial resolutions (from 1 to 64 electrodes) and characteristics. This large number of tools and variety of technical properties denote the increasing interest in this technology but make a qualitative comparison difficult.
Research directions have been clearly paved to provide new EEG-based MI-BCI wearable solutions with the aim of being employed for applications in heterogeneous and real-life environments.
One-third of the applications found in the reviewed literature are related to rehabilitation and assistive purposes, where the feedback of the systems plays a significant role in controlling external devices. Nearly 25% of the reviewed papers focus on methodological testing, presenting either new frameworks or particular signal processing and analysis techniques. Several works (15%) describe BCI applications developed in the entertainment field, while nearly 17% of contributions address the evaluation of new technical solutions and paradigm proposals.
Therefore, researchers have been exploring uncontrolled environments to propose new wearable EEG-based MI-BCI solutions.
Another interesting aspect of the research production of the last ten years regards the development and study of BCI life-cycle pipelines, which concerns the second sub-question:
RQ2: Are there common pipelines of processing that can be adopted from signal acquisition to feedback generation?
Data acquisition, signal preprocessing, feature engineering and channel selection, data classification, and analyses, as well as feedback modalities of the 84 papers considered here were extensively analyzed in this review and synthesized in Section 5, and in particular in Table 5 and Figure 7. To summarize this analysis and answer RQ2, we observe that the first crucial point, especially when using wireless technologies and wearable devices, is noise removal. To address this point, considering both internal and external noise sources, preprocessing algorithms can benefit from knowledge of the frequencies of both the artifacts to be removed and the rhythms to be preserved. However, noise and signal frequencies often overlap.
Three main approaches can be identified, namely the use of blind source separation techniques, filters in the frequency domain, and spatial filters. The first approach usually involves the application of ICA, which separates a mixed signal into different components, assuming the presence of different signal sources. The second relies on filters in the frequency domain, especially Butterworth filters, to select the brain rhythms of interest and, at the same time, remove noise. The third applies spatial filtering, such as CAR filtering, taking into account the spatial correlation of the brain waves.
Even if a unique strategy is not adopted by all the applications, the noise removal procedures are quite similar among the considered publications.
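Among these approaches, CAR is the simplest to sketch: at every sample, the mean over all channels is subtracted from each channel. A minimal illustration (array shapes are invented for the example):

```python
import numpy as np

def car_filter(eeg):
    """Common Average Reference: subtract the instantaneous
    across-channel mean from every channel.
    eeg: array of shape (n_channels, n_samples)."""
    return eeg - eeg.mean(axis=0, keepdims=True)

# Toy example: a common-mode disturbance shared by all channels is removed
rng = np.random.default_rng(0)
signals = rng.standard_normal((8, 500))       # 8 hypothetical channels
common_noise = rng.standard_normal(500)       # e.g., a reference drift
referenced = car_filter(signals + common_noise)
```

By construction, the channel-wise mean of the re-referenced data is zero at every sample, which is what removes any component common to all electrodes.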
Regarding the feature engineering step (which usually follows preprocessing), the variability among different papers is relatively low. In general, the ERD/ERS phenomenon is widely studied considering μ and β rhythms, exploiting handcrafted time, frequency, and time–frequency features. Only in recent years have deep learning methods begun to be used to automatically extract features from the raw signals.
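A sketch of such handcrafted frequency features, estimating μ (8–13 Hz) and β (13–30 Hz) band power per channel via Welch's PSD (the band limits and the 250 Hz sampling rate are illustrative assumptions, not values prescribed by the reviewed works):

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate (Hz)

def band_power(eeg, band, fs=FS):
    """Mean Welch PSD power in a frequency band, per channel.
    eeg: (n_channels, n_samples); band: (low, high) in Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean(axis=-1)

def mu_beta_features(trial):
    """Concatenated mu- and beta-band power: the kind of handcrafted
    feature vector commonly fed to an MI classifier."""
    return np.concatenate([band_power(trial, (8, 13)),
                           band_power(trial, (13, 30))])

# Example: 4 s trial; channel 0 oscillates at 10 Hz (mu), channel 1 at 20 Hz (beta)
t = np.arange(0, 4, 1 / FS)
trial = np.vstack([np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 20 * t)])
feats = mu_beta_features(trial)  # [mu_ch0, mu_ch1, beta_ch0, beta_ch1]
```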
Moreover, working with wearable devices and potentially in uncontrolled environments with low computational power, the reduction in data dimensionality is particularly important, especially considering the need for a low number of input data to be considered in a classification task. To this end, besides traditional feature reduction and feature selection strategies, a good number of works focus only on specific channels that are usually chosen among the central cortical area, which is coherent with the neuroscientific literature on MI.
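In its simplest form, this channel-based reduction amounts to retaining only the central (sensorimotor) electrodes before any further processing. A sketch, with a hypothetical montage and the C3/Cz/C4 sites often kept in MI studies:

```python
import numpy as np

# Hypothetical montage; names follow the 10-20 convention
channels = ["Fp1", "Fp2", "F3", "F4", "C3", "Cz", "C4", "P3", "P4", "O1", "O2"]
selected = ["C3", "Cz", "C4"]  # central channels relevant to MI (ERD/ERS)

# Synthetic recording: one row per channel
eeg = np.random.default_rng(0).standard_normal((len(channels), 1000))

# Keep only the selected rows: a much smaller input for the classifier
idx = [channels.index(ch) for ch in selected]
eeg_reduced = eeg[idx]
```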
The last processing step, data analysis and classification, appears to be more heterogeneous than the previous ones, as depicted in Figure 7. In particular, regarding the models used to perform different classification tasks, most of the works (about 54%) rely on traditional machine learning techniques, especially LDA and SVM; about 10% adopt ensemble techniques, transfer learning models, or other supervised learning approaches, while only 15% adopt deep learning strategies. It is also worth noting that 21% of the works do not face classification problems but present statistical analyses, quality assessments, and functional connectivity studies.
From these considerations, we can conclude that there is low variability in the initial steps of the whole BCI life cycle, while higher variability can be identified in data analysis and classification. In particular, the adoption of deep learning models is at an early stage and does not clearly outperform traditional machine learning strategies.
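To make the dominant approach concrete, a toy scikit-learn sketch: LDA, the most frequently adopted classifier in the reviewed works, trained on synthetic two-dimensional "band-power" features standing in for left- vs. right-hand trials (all values are fabricated for illustration):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic two-class problem: each trial is a small feature vector,
# e.g., mu-band power over two central channels (hypothetical values)
rng = np.random.default_rng(42)
n_trials = 100
left = rng.normal(loc=[1.0, 0.4], scale=0.2, size=(n_trials, 2))
right = rng.normal(loc=[0.4, 1.0], scale=0.2, size=(n_trials, 2))
X = np.vstack([left, right])
y = np.array([0] * n_trials + [1] * n_trials)

# 5-fold cross-validated accuracy of a plain LDA classifier
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
```

An SVM (`sklearn.svm.SVC`) could be swapped in with the same interface; on real MI data, accuracies are of course far lower than on this cleanly separable toy set.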
A clear comparison and assessment of the efficacy of different classification models would benefit from the application of these strategies on data acquired using a similar experimental paradigm or on benchmark datasets.
This observation is strictly related to the answers to sub-questions RQ3 and RQ4. Starting from the third sub-question,
RQ3: Are there consolidated experimental paradigms for wearable EEG-based MI-BCI applications?
notice that the experimental paradigm adopted by most of the considered works (39 out of 84) concerns MI of left/right hand/fist movement. However, many other types of MI paradigms are considered: single hand/both hands, foot/feet, or tongue movement; shoulder flexion, extension, and abduction; the motion of upper/lower limbs; pedaling; game character/robot/machinery movement control; generic motor intention; and even the imagination of cognitive tasks. Moreover, single task duration, task order, administration modality, and experimental settings are also very heterogeneous.
From this variety of MI paradigms, several datasets have been collected or employed by the authors of the reviewed papers, allowing them to answer the last research sub-question:
RQ4: Are there datasets available for the research community to properly compare classification models and data analysis?
Considering data acquisition, 79 out of 84 works collect their own dataset, involving, in most of the cases (76%), less than 10 subjects. In particular, about 49% of these 79 works consider less than five participants. Notice that only seven proprietary datasets are available upon request. Moreover, among all the publicly available datasets reported in Table 6, which can be considered benchmarks, only one is acquired using wearable devices [113].
Among the 84 papers considered, only 9 adopted these benchmark datasets to evaluate the proposed models, of which 8 employed the third-party datasets acquired using wired systems.
Furthermore, notice that even if the same benchmark dataset is adopted, the classification tasks may vary from different types of binary classification: one-vs.-one (for instance, left versus right hand) or one-vs.-rest (for example, right-hand imagined movement versus resting state), and multiclass classification, with a range between three and five classes. Classification models and their performance, as well as other types of analysis, are reported in Table 5 only for those works (13 out of 84) that present results either on benchmark datasets or on available proprietary ones, making the proposed analysis reproducible. As a final important note, among these 13 publications, only 5 declare having performed online analysis.
Considering the answers to the provided sub-questions, the main research question concerning the maturity of EEG-based MI-BCI applications in uncontrolled environments can be addressed.
Having a closer look at the applications reported by the reviewed papers, most of them pertain to the medical and rehabilitation fields and are mostly employed in controlled environments. However, EEG-based MI-BCI systems using wearable technologies in real-life scenarios seem to provide reliable assistance to their users and to be well received in assistive employment. They also seem promising for entertainment, gaming, and other applications. The scenario of wearable devices available on the market is wide, offering great variability in terms of electrodes, features, and costs. Even though several computational models have been presented in the analyzed literature with promising results, the lack of reference experimental paradigms and of public, validated benchmark datasets acquired using wearable devices makes model performance and the feasibility of real-time applications difficult to assess. It is unclear whether the proposed strategies, often tested offline on wired benchmark datasets, can be effectively translated into online real-life wearable contexts.
Many concerns remain regarding the ethical aspects that permeate the use of these systems in environments managed by experts and in consumer-grade platforms. Concerning this point, note that among the 84 considered works, only 39 provide an ethical statement on the approval of the performed experiments.

7. Conclusions and Future Perspectives

The interest in EEG-based MI-BCI systems using wearable technologies has been rising in the last few years. Moreover, very different devices have been used in the analyzed studies for very diverse applications.
The experimental paradigms concerning MI tasks usually involve the motor imagination of left- and right-hand movements, even though new paradigms have been presented to address the specific needs of patients and researchers. Consequently, numerous datasets have been collected to meet these demands. However, most of them are not publicly available, and testing is usually performed on recordings acquired through wired devices.
The typical steps of the BCI life cycle appear to be maintained by most of the analyzed works. However, it is not clear whether strategies applied to offline wired benchmark datasets can be translated into an online wireless environment.
An example that may clarify this point regards the pervasive use of ICA for signal preprocessing, which is usually performed in offline analyses. In fact, due to its methodological aspects, ICA requires an attentive evaluation of the resulting components and the identification of the artifactual ones. New ICA-based strategies have recently been proposed [201,202] to enable real-time usage of such methodologies. Therefore, future works should focus on the assessment of techniques developed specifically for online analyses, concerning all the data processing steps.
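A schematic offline example of this workflow, using scikit-learn's FastICA on a synthetic two-channel mixture (sources and mixing matrix are invented for illustration; real EEG pipelines typically rely on dedicated toolboxes such as EEGLAB or MNE):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two toy sources: a neural-like oscillation and a blink-like artifact
rng = np.random.default_rng(1)
t = np.linspace(0, 4, 1000)
neural = np.sin(2 * np.pi * 10 * t)              # mu-band-like oscillation
artifact = np.sign(np.sin(2 * np.pi * 0.5 * t))  # slow, blink-like square wave
S = np.c_[neural, artifact]                      # (n_samples, n_sources)

A = np.array([[1.0, 0.5],                        # hypothetical mixing
              [0.7, 1.0]])                       # at two electrodes
X = S @ A.T                                      # observed "EEG" mixture

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(X)  # estimated sources, up to order/sign/scale
# In practice, the artifactual component must still be identified (often
# manually) before reconstructing the cleaned signal -- the very step that
# complicates real-time use.
```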
Other concerns pertain to the beginning and end of the BCI life-cycle, i.e., how the non-stationarity of the EEG signal is handled and the responsiveness of the system.
In fact, the performance of EEG-based BCIs is heavily influenced by variations due to signal non-stationarity, especially during trial-to-trial and session-to-session transfers [203] and when transitioning from the training to the feedback phase [116,204]. However, most of the available studies provide insufficient information regarding the time between the system training phase and its real-time application. The reliability of a BCI in a real-world scenario should increase if the test phase shows good performance even when performed long after the training phase. Therefore, future works should consider these aspects to guarantee the applicability of BCIs in real-life contexts.
The reliability of these systems also depends on their responsiveness, which becomes particularly important in self-paced BCIs [205]. Feedback should be provided almost instantly to the users, who are usually trained to perform specific mental tasks [206].
The responsiveness concerning the classification and feedback time, as well as the users’ proficiency, should be documented in works concerning real-time BCIs.
Regarding other future research directions, two main fields can be identified. On the one hand, edge computing is fast evolving to improve data processing speed in real-time applications. For example, [207] overviews adaptive edge computing in wearable biomedical devices (in general, not only EEG ones), highlighting the pathway from wearable sensors to their application through intelligent learning. The authors state the following:
The ultimate goal toward smart wearable sensing with edge computing capabilities relies on a bespoke platform embedding sensors, front-end circuit interface, neuromorphic processor and memristive devices.
Furthermore, [208] investigates the possibility of addressing the drawbacks of wearable devices with edge computing.
The other frontier research field regards the application of quantum computing to BCI. Although efforts are only at the initial stage, some hybrid applications of quantum computing and BCI have been found, as reviewed by [209]. Recently, the authors of [210,211] discussed Quantum Brain Networks, a new interdisciplinary field integrating knowledge and methods from neurotechnology, artificial intelligence, and quantum computing. In [211], brain signals are detected using electrodes placed on the scalp of a person who learns how to produce the mental activity required to issue instructions to rotate and measure a qubit, proposing an approach to interface the brain with quantum computers.

Supplementary Materials

The following supporting information is available for download. Table S1: table reporting detailed notes on the 84 reviewed papers. The notes are organized to provide a clear reference to the papers and to follow the core section of the review (Section 5), analyzing (i) field of applications, (ii) employed technologies, (iii) signal processing and analysis methodologies, (iv) BCI feedback, and (v) dataset information.

Author Contributions

Conceptualization, A.S., M.C., S.C. and F.G.; methodology, A.S., M.C., S.C. and F.G.; validation, A.S., M.C., S.C. and F.G.; formal analysis, A.S., M.C., S.C. and F.G.; investigation, A.S., M.C., S.C. and F.G.; resources, A.S., M.C., S.C. and F.G.; data curation, A.S., M.C., S.C. and F.G.; writing—original draft preparation, A.S., M.C., S.C. and F.G.; writing—review and editing, A.S., M.C., S.C. and F.G.; visualization, A.S., M.C., S.C. and F.G.; supervision, A.S., M.C., S.C. and F.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
1DMSCNN: One-Dimensional Multi-Scale Convolutional Neural Network
AI: Artificial intelligence
ANN: Artificial neural network
AUC: Area Under the Curve
BCI: Brain–computer interface
CAR: Common Average Reference
CNN: Convolutional neural network
CSP: Common Spatial Pattern
DBN: Deep Belief Network
DL: Deep learning
ERD: Event-Related Desynchronization
ERP: Event-Related Potentials
ErrP: Error-Related Potential
ERS: Event-Related Synchronization
FFT: Fast Fourier Transform
FGMDRM: Framework with filter geodesic minimum distance to Riemannian mean
FMRI: Functional Magnetic Resonance Imaging
ICA: Independent Component Analysis
KNN: K-Nearest Neighbor
LDA: Linear Discriminant Analysis
LPA: Left pre-auricular point
LR: Logistic Regression
LSTM: Long Short-Term Memory
MI: Motor imagery
ML: Machine Learning
MLP: Multi-Layer Perceptron
MSCNN: Multi-Scale Convolutional Neural Network
NB: Naive Bayes
NN: Neural network
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PSD: Power Spectral Density
PTFBCSP: Penalized Time–Frequency Band Common Spatial Pattern
QDA: Quadratic Discriminant Analysis
RF: Random forest
RNN: Recurrent neural network
RPA: Right pre-auricular point
RQ: Research question
SJGDA: Semisupervised Joint mutual information with General Discriminate Analysis
SNR: Signal-to-Noise Ratio
SSDT: Subject-specific decision tree
SSVEP: Steady-state visual evoked potential
SVM: Support vector machine
TES: Transcranial electrical stimulation
VR: Virtual reality
XGBoost: Extreme Gradient Boosting

References

  1. Millett, D. Hans Berger: From psychic energy to the EEG. Perspect. Biol. Med. 2001, 44, 522–542. [Google Scholar] [CrossRef] [PubMed]
  2. Shih, J.J.; Krusienski, D.J.; Wolpaw, J.R. Brain-computer interfaces in medicine. Mayo Clin. Proc. 2012, 87, 268–279. [Google Scholar] [CrossRef] [Green Version]
  3. Kögel, J.; Schmid, J.R.; Jox, R.J.; Friedrich, O. Using brain–computer interfaces: A scoping review of studies employing social research methods. BMC Med. Ethics 2019, 20, 1–17. [Google Scholar] [CrossRef] [PubMed]
  4. Al-Saegh, A.; Dawwd, S.A.; Abdul-Jabbar, J.M. Deep learning for motor imagery EEG-based classification: A review. Biomed. Signal Process. Control 2021, 63, 102172. [Google Scholar] [CrossRef]
  5. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Syst. Rev. 2021, 10, 1–11. [Google Scholar] [CrossRef] [PubMed]
  6. Srinivasan, R.; Nunez, P. Electroencephalography. In Encyclopedia of Human Behavior, 2nd ed.; Ramachandran, V., Ed.; Academic Press: San Diego, CA, USA, 2012; pp. 15–23. [Google Scholar] [CrossRef] [Green Version]
  7. Zhang, J.; Yin, Z.; Chen, P.; Nichele, S. Emotion recognition using multi-modal data and machine learning techniques: A tutorial and review. Inf. Fusion 2020, 59, 103–126. [Google Scholar] [CrossRef]
  8. Rojas, G.M.; Alvarez, C.; Montoya, C.E.; de la Iglesia-Vayá, M.; Cisternas, J.E.; Gálvez, M. Study of resting-state functional connectivity networks using EEG electrodes position as seed. Front. Neurosci. 2018, 12, 235. [Google Scholar] [CrossRef] [Green Version]
  9. Craik, A.; He, Y.; Contreras-Vidal, J.L. Deep learning for electroencephalogram (EEG) classification tasks: A review. J. Neural Eng. 2019, 16, 031001. [Google Scholar] [CrossRef]
  10. Hosseini, M.P.; Hosseini, A.; Ahi, K. A Review on Machine Learning for EEG Signal Processing in Bioengineering. IEEE Rev. Biomed. Eng. 2020, 14, 204–218. [Google Scholar] [CrossRef]
  11. LaRocco, J.; Le, M.D.; Paeng, D.G. A systemic review of available low-cost EEG headsets used for drowsiness detection. Front. Neuroinform. 2020, 14, 553352. [Google Scholar] [CrossRef]
  12. Wan, X.; Zhang, K.; Ramkumar, S.; Deny, J.; Emayavaramban, G.; Ramkumar, M.S.; Hussein, A.F. A review on electroencephalogram based brain computer interface for elderly disabled. IEEE Access 2019, 7, 36380–36387. [Google Scholar] [CrossRef]
  13. Oostenveld, R.; Praamstra, P. The five percent electrode system for high-resolution EEG and ERP measurements. Clin. Neurophysiol. 2001, 112, 713–719. [Google Scholar] [CrossRef] [PubMed]
  14. Paranjape, R.; Mahovsky, J.; Benedicenti, L.; Koles, Z. The electroencephalogram as a biometric. In Proceedings of the Canadian Conference on Electrical and Computer Engineering 2001. Conference Proceedings (Cat. No. 01TH8555), Toronto, ON, Canada, 13–16 May 2001; IEEE: Piscataway, NJ, USA, 2001; Volume 2, pp. 1363–1366. [Google Scholar]
  15. Xygonakis, I.; Athanasiou, A.; Pandria, N.; Kugiumtzis, D.; Bamidis, P.D. Decoding motor imagery through common spatial pattern filters at the EEG source space. Comput. Intell. Neurosci. 2018, 2018, 7957408. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Roy, Y.; Banville, H.; Albuquerque, I.; Gramfort, A.; Falk, T.H.; Faubert, J. Deep learning-based electroencephalography analysis: A systematic review. J. Neural Eng. 2019, 16, 051001. [Google Scholar] [CrossRef] [PubMed]
  17. Bigdely-Shamlo, N.; Mullen, T.; Kothe, C.; Su, K.M.; Robbins, K.A. The PREP pipeline: Standardized preprocessing for large-scale EEG analysis. Front. Neuroinform. 2015, 9, 16. [Google Scholar] [CrossRef]
  18. Gramfort, A.; Strohmeier, D.; Haueisen, J.; Hämäläinen, M.S.; Kowalski, M. Time-frequency mixed-norm estimates: Sparse M/EEG imaging with non-stationary source activations. NeuroImage 2013, 70, 410–422. [Google Scholar] [CrossRef] [Green Version]
  19. Lee, H.; Choi, S. Group nonnegative matrix factorization for EEG classification. In Proceedings of the Artificial Intelligence and Statistics, Virtual, 16–18 April 2009; pp. 320–327. [Google Scholar]
20. Zhang, D.; Yao, L.; Chen, K.; Wang, S.; Chang, X.; Liu, Y. Making sense of spatio-temporal preserving representations for EEG-based human intention recognition. IEEE Trans. Cybern. 2019, 50, 3033–3044.
21. Vaid, S.; Singh, P.; Kaur, C. EEG signal analysis for BCI interface: A review. In Proceedings of the 2015 Fifth International Conference on Advanced Computing & Communication Technologies, Washington, DC, USA, 21–22 February 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 143–147.
22. Blinowska, K.; Durka, P. Electroencephalography (EEG). In Wiley Encyclopedia of Biomedical Engineering; Wiley: Hoboken, NJ, USA, 2006.
23. McFarland, D.J.; Miner, L.A.; Vaughan, T.M.; Wolpaw, J.R. Mu and beta rhythm topographies during motor imagery and actual movements. Brain Topogr. 2000, 12, 177–186.
24. Decety, J. The neurophysiological basis of motor imagery. Behav. Brain Res. 1996, 77, 45–52.
25. Beisteiner, R.; Höllinger, P.; Lindinger, G.; Lang, W.; Berthoz, A. Mental representations of movements. Brain potentials associated with imagination of hand movements. Electroencephalogr. Clin. Neurophysiol. Potentials Sect. 1995, 96, 183–193.
26. Jeannerod, M. Mental imagery in the motor context. Neuropsychologia 1995, 33, 1419–1432.
27. Lotze, M.; Halsband, U. Motor imagery. J. Physiol. Paris 2006, 99, 386–395.
28. McAvinue, L.P.; Robertson, I.H. Measuring motor imagery ability: A review. Eur. J. Cogn. Psychol. 2008, 20, 232–251.
29. Pfurtscheller, G.; Neuper, C. Motor imagery activates primary sensorimotor area in humans. Neurosci. Lett. 1997, 239, 65–68.
30. Jeon, Y.; Nam, C.S.; Kim, Y.J.; Whang, M.C. Event-related (de)synchronization (ERD/ERS) during motor imagery tasks: Implications for brain–computer interfaces. Int. J. Ind. Ergon. 2011, 41, 428–436.
31. Munzert, J.; Lorey, B.; Zentgraf, K. Cognitive motor processes: The role of motor imagery in the study of motor representations. Brain Res. Rev. 2009, 60, 306–326.
32. Dose, H.; Møller, J.S.; Iversen, H.K.; Puthusserypady, S. An end-to-end deep learning approach to MI-EEG signal classification for BCIs. Expert Syst. Appl. 2018, 114, 532–542.
33. Pfurtscheller, G.; Da Silva, F.L. Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clin. Neurophysiol. 1999, 110, 1842–1857.
34. Pfurtscheller, G.; Brunner, C.; Schlögl, A.; Da Silva, F.L. Mu rhythm (de)synchronization and EEG single-trial classification of different motor imagery tasks. NeuroImage 2006, 31, 153–159.
35. Nam, C.S.; Jeon, Y.; Kim, Y.J.; Lee, I.; Park, K. Movement imagery-related lateralization of event-related (de)synchronization (ERD/ERS): Motor-imagery duration effects. Clin. Neurophysiol. 2011, 122, 567–577.
36. Dai, M.; Zheng, D.; Na, R.; Wang, S.; Zhang, S. EEG classification of motor imagery using a novel deep learning framework. Sensors 2019, 19, 551.
37. Wriessnegger, S.C.; Brunner, C.; Müller-Putz, G.R. Frequency specific cortical dynamics during motor imagery are influenced by prior physical activity. Front. Psychol. 2018, 9, 1976.
38. Demarin, V.; Morović, S. Neuroplasticity. Period. Biol. 2014, 116, 209–211.
39. Kaiser, V.; Bauernfeind, G.; Kreilinger, A.; Kaufmann, T.; Kübler, A.; Neuper, C.; Müller-Putz, G.R. Cortical effects of user training in a motor imagery based brain–computer interface measured by fNIRS and EEG. Neuroimage 2014, 85, 432–444.
40. Reaves, J.; Flavin, T.; Mitra, B.; Mahantesh, K.; Nagaraju, V. Assessment and Application of EEG: A Literature Review. Appl. Bioinf. Comput. Biol. 2021, 7, 2.
41. Wolpaw, J.R.; Birbaumer, N.; McFarland, D.J.; Pfurtscheller, G.; Vaughan, T.M. Brain–computer interfaces for communication and control. Clin. Neurophysiol. 2002, 113, 767–791.
42. Schalk, G.; McFarland, D.J.; Hinterberger, T.; Birbaumer, N.; Wolpaw, J.R. BCI2000: A general-purpose brain–computer interface (BCI) system. IEEE Trans. Biomed. Eng. 2004, 51, 1034–1043.
43. Aggarwal, S.; Chugh, N. Signal processing techniques for motor imagery brain computer interface: A review. Array 2019, 1, 100003.
44. Abiri, R.; Borhani, S.; Sellers, E.W.; Jiang, Y.; Zhao, X. A comprehensive review of EEG-based brain–computer interface paradigms. J. Neural Eng. 2019, 16, 011001.
45. Gasparini, F.; Cazzaniga, E.; Saibene, A. Inner speech recognition through electroencephalographic signals. arXiv 2022, arXiv:2210.06472.
46. Ramele, R.; Villar, A.J.; Santos, J.M. EEG Waveform Analysis of P300 ERP with Applications to Brain Computer Interfaces. Brain Sci. 2018, 8, 199.
47. Friman, O.; Volosyak, I.; Graser, A. Multiple Channel Detection of Steady-State Visual Evoked Potentials for Brain-Computer Interfaces. IEEE Trans. Biomed. Eng. 2007, 54, 742–750.
48. Baek, H.J.; Chang, M.H.; Heo, J.; Park, K.S. Enhancing the Usability of Brain-Computer Interface Systems. Comput. Intell. Neurosci. 2019, 2019, 12.
49. Bhattacharyya, S.; Konar, A.; Tibarewala, D.N. Motor imagery and error related potential induced position control of a robotic arm. IEEE/CAA J. Autom. Sin. 2017, 4, 639–650.
50. Kawala-Sterniuk, A.; Browarska, N.; Al-Bakri, A.; Pelc, M.; Zygarlicki, J.; Sidikova, M.; Martinek, R.; Gorzelanczyk, E.J. Summary of over fifty years with brain–computer interfaces—A review. Brain Sci. 2021, 11, 43.
51. Xu, M.; Han, J.; Wang, Y.; Jung, T.P.; Ming, D. Implementing over 100 command codes for a high-speed hybrid brain–computer interface using concurrent P300 and SSVEP features. IEEE Trans. Biomed. Eng. 2020, 67, 3073–3082.
52. Ma, T.; Li, H.; Deng, L.; Yang, H.; Lv, X.; Li, P.; Li, F.; Zhang, R.; Liu, T.; Yao, D.; et al. The hybrid BCI system for movement control by combining motor imagery and moving onset visual evoked potential. J. Neural Eng. 2017, 14, 026015.
53. Duan, F.; Lin, D.; Li, W.; Zhang, Z. Design of a multimodal EEG-based hybrid BCI system with visual servo module. IEEE Trans. Auton. Ment. Dev. 2015, 7, 332–341.
54. Mane, R.; Chouhan, T.; Guan, C. BCI for stroke rehabilitation: Motor and beyond. J. Neural Eng. 2020, 17, 041001.
55. Khan, M.A.; Das, R.; Iversen, H.K.; Puthusserypady, S. Review on motor imagery based BCI systems for upper limb post-stroke neurorehabilitation: From designing to application. Comput. Biol. Med. 2020, 123, 103843.
56. Yang, S.; Li, R.; Li, H.; Xu, K.; Shi, Y.; Wang, Q.; Yang, T.; Sun, X. Exploring the Use of Brain-Computer Interfaces in Stroke Neurorehabilitation. BioMed Res. Int. 2021, 2021, 12.
57. Möller, J.C.; Zutter, D.; Riener, R. Technology-Based Neurorehabilitation in Parkinson’s Disease – A Narrative Review. Clin. Transl. Neurosci. 2021, 5, 23.
58. Miladinović, A.; Ajčević, M.; Busan, P.; Jarmolowska, J.; Silveri, G.; Deodato, M.; Mezzarobba, S.; Battaglini, P.P.; Accardo, A. Evaluation of Motor Imagery-Based BCI methods in neurorehabilitation of Parkinson’s Disease patients. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 3058–3061.
59. Abdulkader, S.N.; Atia, A.; Mostafa, M.S.M. Brain computer interfacing: Applications and challenges. Egypt. Inform. J. 2015, 16, 213–230.
60. Kerous, B.; Skola, F.; Liarokapis, F. EEG-based BCI and video games: A progress report. Virtual Real. 2018, 22, 119–135.
61. Sanchez-Fraire, U.; Parra-Vega, V.; Martinez-Peon, D.; Sepúlveda-Cervantes, G.; Sánchez-Orta, A.; Muñoz-Vázquez, A.J. On the brain computer robot interface (BCRI) to control robots. IFAC-PapersOnLine 2015, 48, 154–159.
62. Perrin, X.; Chavarriaga, R.; Colas, F.; Siegwart, R.; Millán, J.d.R. Brain-coupled interaction for semi-autonomous navigation of an assistive robot. Robot. Auton. Syst. 2010, 58, 1246–1255.
63. Balderas, D.; Ponce, P.; Lopez-Bernal, D.; Molina, A. Education 4.0: Teaching the Basis of Motor Imagery Classification Algorithms for Brain-Computer Interfaces. Future Internet 2021, 13, 202.
64. Myrden, A.; Chau, T. A passive EEG-BCI for single-trial detection of changes in mental state. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 345–356.
65. Wolpaw, J.R. Brain–computer interfaces. In Handbook of Clinical Neurology; Elsevier: Amsterdam, The Netherlands, 2013; Volume 110, pp. 67–74.
66. Fiedler, P.; Fonseca, C.; Supriyanto, E.; Zanow, F.; Haueisen, J. A high-density 256-channel cap for dry electroencephalography. Hum. Brain Mapp. 2022, 43, 1295–1308.
67. Casson, A.J.; Yates, D.C.; Smith, S.J.; Duncan, J.S.; Rodriguez-Villegas, E. Wearable electroencephalography. IEEE Eng. Med. Biol. Mag. 2010, 29, 44–56.
68. Soufineyestani, M.; Dowling, D.; Khan, A. Electroencephalography (EEG) technology applications and available devices. Appl. Sci. 2020, 10, 7453.
69. Hu, L.; Zhang, Z. EEG Signal Processing and Feature Extraction; Springer: Berlin/Heidelberg, Germany, 2019.
70. Li, G.L.; Wu, J.T.; Xia, Y.H.; He, Q.G.; Jin, H.G. Review of semi-dry electrodes for EEG recording. J. Neural Eng. 2020, 17, 051004.
71. Casson, A.J. Wearable EEG and beyond. Biomed. Eng. Lett. 2019, 9, 53–71.
72. Mihajlović, V.; Grundlehner, B.; Vullers, R.; Penders, J. Wearable, wireless EEG solutions in daily life applications: What are we missing? IEEE J. Biomed. Health Inform. 2014, 19, 6–21.
73. Blum, S.; Emkes, R.; Minow, F.; Anlauff, J.; Finke, A.; Debener, S. Flex-printed forehead EEG sensors (fEEGrid) for long-term EEG acquisition. J. Neural Eng. 2020, 17, 034003.
74. You, S.; Cho, B.H.; Yook, S.; Kim, J.Y.; Shon, Y.M.; Seo, D.W.; Kim, I.Y. Unsupervised automatic seizure detection for focal-onset seizures recorded with behind-the-ear EEG using an anomaly-detecting generative adversarial network. Comput. Methods Programs Biomed. 2020, 193, 105472.
75. Do Valle, B.G.; Cash, S.S.; Sodini, C.G. Wireless behind-the-ear EEG recording device with wireless interface to a mobile device (iPhone/iPod touch). In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 5952–5955.
76. Wang, Y.; Yin, L.; Bai, Y.; Liu, S.; Wang, L.; Zhou, Y.; Hou, C.; Yang, Z.; Wu, H.; Ma, J.; et al. Electrically compensated, tattoo-like electrodes for epidermal electrophysiology at scale. Sci. Adv. 2020, 6, eabd0996.
77. Wang, H.; Wang, J.; Chen, D.; Ge, S.; Liu, Y.; Wang, Z.; Zhang, X.; Guo, Q.; Yang, J. Robust tattoo electrode prepared by paper-assisted water transfer printing for wearable health monitoring. IEEE Sens. J. 2022, 22, 3817–3827.
78. Casson, A.J.; Abdulaal, M.; Dulabh, M.; Kohli, S.; Krachunov, S.; Trimble, E. Electroencephalogram. In Seamless Healthcare Monitoring; Springer: Berlin/Heidelberg, Germany, 2018; pp. 45–81.
79. Palumbo, A.; Gramigna, V.; Calabrese, B.; Ielpo, N. Motor-imagery EEG-based BCIs in wheelchair movement and control: A systematic literature review. Sensors 2021, 21, 6285.
80. Udovičić, G.; Topić, A.; Russo, M. Wearable technologies for smart environments: A review with emphasis on BCI. In Proceedings of the 2016 24th International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split, Croatia, 22–24 September 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–9.
81. TajDini, M.; Sokolov, V.; Kuzminykh, I.; Shiaeles, S.; Ghita, B. Wireless sensors for brain activity—A survey. Electronics 2020, 9, 2092.
82. Portillo-Lara, R.; Tahirbegi, B.; Chapman, C.A.; Goding, J.A.; Green, R.A. Mind the gap: State-of-the-art technologies and applications for EEG-based brain–computer interfaces. APL Bioeng. 2021, 5, 031507.
83. Jamil, N.; Belkacem, A.N.; Ouhbi, S.; Lakas, A. Noninvasive Electroencephalography Equipment for Assistive, Adaptive, and Rehabilitative Brain–Computer Interfaces: A Systematic Literature Review. Sensors 2021, 21, 4754.
84. Gu, X.; Cao, Z.; Jolfaei, A.; Xu, P.; Wu, D.; Jung, T.P.; Lin, C.T. EEG-based brain–computer interfaces (BCIs): A survey of recent studies on signal sensing technologies and computational intelligence approaches and their applications. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021, 18, 1645–1666.
85. Neuper, C.; Pfurtscheller, G. Neurofeedback training for BCI control. In Brain-Computer Interfaces; Springer: Berlin/Heidelberg, Germany, 2009; pp. 65–78.
86. Jiang, Y.; Hau, N.T.; Chung, W.Y. Semiasynchronous BCI using wearable two-channel EEG. IEEE Trans. Cogn. Dev. Syst. 2017, 10, 681–686.
87. Mattia, D.; Pichiorri, F.; Colamarino, E.; Masciullo, M.; Morone, G.; Toppi, J.; Pisotta, I.; Tamburella, F.; Lorusso, M.; Paolucci, S.; et al. The Promotoer, a brain–computer interface-assisted intervention to promote upper limb functional motor recovery after stroke: A study protocol for a randomized controlled trial to test early and long-term efficacy and to identify determinants of response. BMC Neurol. 2020, 20, 254.
88. Barria, P.; Pino, A.; Tovar, N.; Gomez-Vargas, D.; Baleta, K.; Díaz, C.A.; Múnera, M.; Cifuentes, C.A. BCI-Based Control for Ankle Exoskeleton T-FLEX: Comparison of Visual and Haptic Stimuli with Stroke Survivors. Sensors 2021, 21, 6431.
89. Karakullukcu, N.; Yilmaz, B. Detection of Movement Intention in EEG-Based Brain-Computer Interfaces Using Fourier-Based Synchrosqueezing Transform. Int. J. Neural Syst. 2022, 32, 2150059.
90. Du Bois, N.; Bigirimana, A.D.; Korik, A.; Kéthina, L.G.; Rutembesa, E.; Mutabaruka, J.; Mutesa, L.; Prasad, G.; Jansen, S.; Coyle, D. Neurofeedback with low-cost, wearable electroencephalography (EEG) reduces symptoms in chronic Post-Traumatic Stress Disorder. J. Affect. Disord. 2021, 295, 1319–1334.
91. Looned, R.; Webb, J.; Xiao, Z.G.; Menon, C. Assisting drinking with an affordable BCI-controlled wearable robot and electrical stimulation: A preliminary investigation. J. Neuroeng. Rehabil. 2014, 11, 51.
92. Li, Z.; Yuan, Y.; Luo, L.; Su, W.; Zhao, K.; Xu, C.; Huang, J.; Pi, M. Hybrid brain/muscle signals powered wearable walking exoskeleton enhancing motor ability in climbing stairs activity. IEEE Trans. Med. Robot. Bionics 2019, 1, 218–227.
93. Vourvopoulos, A.; Jorge, C.; Abreu, R.; Figueiredo, P.; Fernandes, J.C.; Bermudez i Badia, S. Efficacy and brain imaging correlates of an immersive motor imagery BCI-driven VR system for upper limb motor rehabilitation: A clinical case report. Front. Hum. Neurosci. 2019, 13, 244.
94. Kong, W.; Fu, S.; Deng, B.; Zeng, H.; Zhang, J.; Guo, S. Embedded BCI Rehabilitation System for Stroke. J. Beijing Inst. Technol. 2019, 28, 35–41.
95. Athanasiou, A.; Arfaras, G.; Xygonakis, I.; Kartsidis, P.; Pandria, N.; Kavazidi, K.R.; Astaras, A.; Foroglou, N.; Polyzoidis, K.; Bamidis, P.D. Commercial BCI Control and functional brain networks in spinal cord injury: A proof-of-concept. In Proceedings of the 2017 IEEE 30th International Symposium on Computer-Based Medical Systems (CBMS), Thessaloniki, Greece, 22–24 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 262–267.
96. Simon, C.; Ruddy, K.L. A wireless, wearable Brain-Computer Interface for neurorehabilitation at home; A feasibility study. In Proceedings of the 2022 10th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Republic of Korea, 21–23 February 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–6.
97. Quiles, E.; Suay, F.; Candela, G.; Chio, N.; Jiménez, M.; Álvarez-Kurogi, L. Low-cost robotic guide based on a motor imagery brain–computer interface for arm assisted rehabilitation. Int. J. Environ. Res. Public Health 2020, 17, 699.
98. Cardoso, V.F.; Delisle-Rodriguez, D.; Romero-Laiseca, M.A.; Loterio, F.A.; Gurve, D.; Floriano, A.; Valadão, C.; Silva, L.; Krishnan, S.; Frizera-Neto, A.; et al. Effect of a Brain–Computer Interface Based on Pedaling Motor Imagery on Cortical Excitability and Connectivity. Sensors 2021, 21, 2020.
99. Wang, H.; Bezerianos, A. Brain-controlled wheelchair controlled by sustained and brief motor imagery BCIs. Electron. Lett. 2017, 53, 1178–1180.
100. Lisi, G.; Hamaya, M.; Noda, T.; Morimoto, J. Dry-wireless EEG and asynchronous adaptive feature extraction towards a plug-and-play co-adaptive brain robot interface. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 959–966.
101. Carrino, F.; Dumoulin, J.; Mugellini, E.; Abou Khaled, O.; Ingold, R. A self-paced BCI system to control an electric wheelchair: Evaluation of a commercial, low-cost EEG device. In Proceedings of the 2012 ISSNIP Biosignals and Biorobotics Conference: Biosignals and Robotics for Better and Safer Living (BRC), Manaus, Brazil, 9–11 January 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1–6.
102. Gant, K.; Guerra, S.; Zimmerman, L.; Parks, B.A.; Prins, N.W.; Prasad, A. EEG-controlled functional electrical stimulation for hand opening and closing in chronic complete cervical spinal cord injury. Biomed. Phys. Eng. Express 2018, 4, 065005.
103. Gaxiola-Tirado, J.A.; Iáñez, E.; Ortíz, M.; Gutiérrez, D.; Azorín, J.M. Effects of an exoskeleton-assisted gait motor imagery training in functional brain connectivity. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 429–432.
104. Khan, M.J.; Hong, K.S.; Naseer, N.; Bhutta, M.R. Motor imagery performance evaluation using hybrid EEG-NIRS for BCI. In Proceedings of the 2015 54th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), Hangzhou, China, 28–30 July 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1150–1155.
105. Freer, D.; Yang, G.Z. MIndGrasp: A New Training and Testing Framework for Motor Imagery Based 3-Dimensional Assistive Robotic Control. arXiv 2020, arXiv:2003.00369.
106. Jameel, H.F.; Mohammed, S.L.; Gharghan, S.K. Electroencephalograph-based wheelchair controlling system for the people with motor disability using advanced brainwear. In Proceedings of the 2019 12th International Conference on Developments in eSystems Engineering (DeSE), Kazan, Russia, 7–10 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 843–848.
107. Ketola, E.; Lloyd, C.; Shuhart, D.; Schmidt, J.; Morenz, R.; Khondker, A.; Imtiaz, M. Lessons Learned from the Initial Development of a Brain Controlled Assistive Device. In Proceedings of the 2022 IEEE 12th Annual Computing and Communication Workshop and Conference (CCWC), Virtual, 26–29 January 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 0580–0585.
108. Permana, K.; Wijaya, S.; Prajitno, P. Controlled wheelchair based on brain computer interface using Neurosky Mindwave Mobile 2. AIP Conf. Proc. 2019, 2168, 020022.
109. Priyatno, S.B.; Prakoso, T.; Riyadi, M.A. Classification of motor imagery brain wave for bionic hand movement using multilayer perceptron. Sinergi 2022, 26, 57–64.
110. Tang, X.; Li, W.; Li, X.; Ma, W.; Dang, X. Motor imagery EEG recognition based on conditional optimization empirical mode decomposition and multi-scale convolutional neural network. Expert Syst. Appl. 2020, 149, 113285.
111. Apicella, A.; Arpaia, P.; Frosolone, M.; Moccaldi, N. High-wearable EEG-based distraction detection in motor rehabilitation. Sci. Rep. 2021, 11, 5297.
112. Garcia-Moreno, F.M.; Bermudez-Edo, M.; Rodríguez-Fórtiz, M.J.; Garrido, J.L. A CNN-LSTM deep learning classifier for motor imagery EEG detection using a low-invasive and low-cost BCI headband. In Proceedings of the 2020 16th International Conference on Intelligent Environments (IE), Madrid, Spain, 20–23 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 84–91.
113. Peterson, V.; Galván, C.; Hernández, H.; Spies, R. A feasibility study of a complete low-cost consumer-grade brain–computer interface system. Heliyon 2020, 6, e03425.
114. Tariq, M.; Trivailo, P.M.; Simic, M. Motor imagery based EEG features visualization for BCI applications. Procedia Comput. Sci. 2018, 126, 1936–1944.
115. Zhang, X.; Yao, L.; Sheng, Q.Z.; Kanhere, S.S.; Gu, T.; Zhang, D. Converting your thoughts to texts: Enabling brain typing via deep feature learning of EEG signals. In Proceedings of the 2018 IEEE International Conference on Pervasive Computing and Communications (PerCom), Athens, Greece, 19–23 March 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–10.
116. Chowdhury, A.; Raza, H.; Meena, Y.K.; Dutta, A.; Prasad, G. Online covariate shift detection-based adaptive brain–computer interface to trigger hand exoskeleton feedback for neuro-rehabilitation. IEEE Trans. Cogn. Dev. Syst. 2017, 10, 1070–1080.
117. Daeglau, M.; Wallhoff, F.; Debener, S.; Condro, I.S.; Kranczioch, C.; Zich, C. Challenge accepted? Individual performance gains for motor imagery practice with humanoid robotic EEG neurofeedback. Sensors 2020, 20, 1620.
118. LaFleur, K.; Cassady, K.; Doud, A.; Shades, K.; Rogin, E.; He, B. Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain–computer interface. J. Neural Eng. 2013, 10, 046003.
119. Alanis-Espinosa, M.; Gutiérrez, D. On the assessment of functional connectivity in an immersive brain–computer interface during motor imagery. Front. Psychol. 2020, 11, 1301.
120. Xu, B.; Li, W.; He, X.; Wei, Z.; Zhang, D.; Wu, C.; Song, A. Motor imagery based continuous teleoperation robot control with tactile feedback. Electronics 2020, 9, 174.
121. Zhuang, J.; Geng, K.; Yin, G. Ensemble learning based brain–computer interface system for ground vehicle control. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 5392–5404.
122. Liu, Y.; Habibnezhad, M.; Jebelli, H. Brain-computer interface for hands-free teleoperation of construction robots. Autom. Constr. 2021, 123, 103523.
123. Mahmood, M.; Kwon, S.; Kim, H.; Kim, Y.S.; Siriaraya, P.; Choi, J.; Otkhmezuri, B.; Kang, K.; Yu, K.J.; Jang, Y.C.; et al. Wireless Soft Scalp Electronics and Virtual Reality System for Motor Imagery-Based Brain–Machine Interfaces. Adv. Sci. 2021, 8, 2101129.
124. Djamal, E.C.; Abdullah, M.Y.; Renaldi, F. Brain computer interface game controlling using fast Fourier transform and learning vector quantization. J. Telecommun. Electron. Comput. Eng. JTEC 2017, 9, 71–74.
125. Mitocaru, A.; Poboroniuc, M.S.; Irimia, D.; Baciu, A. Comparison Between Two Brain Computer Interface Systems Aiming to Control a Mobile Robot. In Proceedings of the 2021 International Conference on Electromechanical and Energy Systems (SIELMEN), Chisinau, Moldova, 7–8 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–5.
126. Mwata-Velu, T.; Ruiz-Pinales, J.; Rostro-Gonzalez, H.; Ibarra-Manzano, M.A.; Cruz-Duarte, J.M.; Avina-Cervantes, J.G. Motor imagery classification based on a recurrent-convolutional architecture to control a hexapod robot. Mathematics 2021, 9, 606.
127. Parikh, D.; George, K. Quadcopter Control in Three-Dimensional Space Using SSVEP and Motor Imagery-Based Brain-Computer Interface. In Proceedings of the 2020 11th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 4–7 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 0782–0785.
128. Wu, S.L.; Liu, Y.T.; Chou, K.P.; Lin, Y.Y.; Lu, J.; Zhang, G.; Chuang, C.H.; Lin, W.C.; Lin, C.T. A motor imagery based brain–computer interface system via swarm-optimized fuzzy integral and its application. In Proceedings of the 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Vancouver, BC, Canada, 24–29 July 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 2495–2500.
129. Abdulwahab, S.S.; Khleaf, H.K.; Jassim, M.H. EEG Motor-Imagery BCI System Based on Maximum Overlap Discrete Wavelet Transform (MODWT) and cubic SVM. J. Phys. Conf. Ser. 2021, 1973, 012056.
130. Freer, D.; Deligianni, F.; Yang, G.Z. Adaptive Riemannian BCI for enhanced motor imagery training protocols. In Proceedings of the 2019 IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Chicago, IL, USA, 19–22 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–4.
131. Garcia-Moreno, F.M.; Bermudez-Edo, M.; Garrido, J.L.; Rodríguez-Fórtiz, M.J. Reducing response time in motor imagery using a headband and deep learning. Sensors 2020, 20, 6730.
132. Guan, S.; Li, J.; Wang, F.; Yuan, Z.; Kang, X.; Lu, B. Discriminating three motor imagery states of the same joint for brain–computer interface. PeerJ 2021, 9, e12027.
133. Guan, S.; Zhao, K.; Yang, S. Motor imagery EEG classification based on decision tree framework and Riemannian geometry. Comput. Intell. Neurosci. 2019, 2019, 5627156.
134. Jawanjalkar, A.R.; Padole, D.V. Development of soft computing technique for classification of EEG signal. In Proceedings of the 2017 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), Coimbatore, India, 17–18 March 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6.
135. Khan, J.; Bhatti, M.H.; Khan, U.G.; Iqbal, R. Multiclass EEG motor-imagery classification with sub-band common spatial patterns. Eurasip J. Wirel. Commun. Netw. 2019, 2019, 174.
136. Kim, H.H.; Jeong, J. Decoding electroencephalographic signals for direction in brain–computer interface using echo state network and Gaussian readouts. Comput. Biol. Med. 2019, 110, 254–264.
137. Lisi, G.; Rivela, D.; Takai, A.; Morimoto, J. Markov switching model for quick detection of event related desynchronization in EEG. Front. Neurosci. 2018, 12, 24.
138. Lo, C.C.; Chien, T.Y.; Chen, Y.C.; Tsai, S.H.; Fang, W.C.; Lin, B.S. A wearable channel selection-based brain–computer interface for motor imagery detection. Sensors 2016, 16, 213.
139. Mladenov, T.; Kim, K.; Nooshabadi, S. Accurate motor imagery based dry electrode brain–computer interface system for consumer applications. In Proceedings of the 2012 IEEE 16th International Symposium on Consumer Electronics, Harrisburg, PA, USA, 4–6 June 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1–4.
  140. Rodriguez-Ugarte, M.D.l.S.; Iáñez, E.; Ortiz-Garcia, M.; Azorín, J.M. Effects of tDCS on real-time BCI detection of pedaling motor imagery. Sensors 2018, 18, 1136. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  141. Riyadi, M.A.; Setiawan, I.; Amir, A. EEG Multiclass Signal Classification Based on Subtractive Clustering-ANFIS and Wavelet Packet Decomposition. In Proceedings of the 2021 International Conference on Electrical and Information Technology (IEIT), Malang, Indonesia, 14–15 September 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 81–86. [Google Scholar]
  142. Shajil, N.; Mohan, S.; Srinivasan, P.; Arivudaiyanambi, J.; Arasappan Murrugesan, A. Multiclass classification of spatially filtered motor imagery EEG signals using convolutional neural network for BCI based applications. J. Med. Biol. Eng. 2020, 40, 663–672. [Google Scholar] [CrossRef]
  143. Shajil, N.; Sasikala, M.; Arunnagiri, A. Deep learning classification of two-class motor imagery EEG signals using transfer learning. In Proceedings of the 2020 International Conference on e-Health and Bioengineering (EHB), Iasi, Romania, 29–30 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–4. [Google Scholar]
  144. Tiwari, S.; Goel, S.; Bhardwaj, A. Machine learning approach for the classification of EEG signals of multiple imagery tasks. In Proceedings of the 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kharagpur, India, 1–3 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–7. [Google Scholar]
  145. Triana Guzmán, N.; Orjuela-Cañón, Á.D.; Jutinico Alarcon, A.L. Incremental training of neural network for motor tasks recognition based on brain–computer interface. In Proceedings of the Iberoamerican Congress on Pattern Recognition, Havana, Cuba, 28–31 October 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 610–619. [Google Scholar]
  146. Yang, B.; Tang, J.; Guan, C.; Li, B. Motor imagery EEG recognition based on FBCSP and PCA. In Proceedings of the International Conference on Brain Inspired Cognitive Systems, Xi’an, China, 7–8 July 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 195–205. [Google Scholar]
  147. Yang, D.; Nguyen, T.H.; Chung, W.Y. A Synchronized Hybrid Brain-Computer Interface System for Simultaneous Detection and Classification of Fusion EEG Signals. Complexity 2020, 2020, 4137283. [Google Scholar] [CrossRef]
  148. Yusoff, M.Z.; Mahmoud, D.; Malik, A.S.; Bahloul, M.R. Discrimination of four class simple limb motor imagery movements for brain–computer interface. Biomed. Signal Process. Control 2018, 44, 181–190. [Google Scholar]
  149. Zhou, B.; Wu, X.; Lv, Z.; Zhang, L.; Zhang, C. Independent component analysis combined with compressed sensing for EEG compression in BCI. In Proceedings of the 2015 10th International Conference on Information, Communications and Signal Processing (ICICS), Singapore, 2–4 December 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–4. [Google Scholar]
  150. Zich, C.; Schweinitz, C.; Debener, S.; Kranczioch, C. Multimodal evaluation of motor imagery training supported by mobile EEG at home: A case report. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Hong Kong, China, 9–12 October 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 3181–3186. [Google Scholar]
  151. Verma, P.; Heilinger, A.; Reitner, P.; Grünwald, J.; Guger, C.; Franklin, D. Performance investigation of brain–computer interfaces that combine EEG and fNIRS for motor imagery tasks. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 259–263. [Google Scholar]
  152. Rodríguez-Ugarte, M.; Angulo-Sherman, I.; Iáñez, E.; Ortiz, M.; Azorín, J. Preliminary study of pedaling motor imagery classification based on EEG signals. In Proceedings of the 2017 International Symposium on Wearable Robotics and Rehabilitation (WeRob), Houston, TX, USA, 5–8 November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–2. [Google Scholar]
  153. Hirsch, G.; Dirodi, M.; Xu, R.; Reitner, P.; Guger, C. Online classification of motor imagery using EEG and fNIRS: A hybrid approach with real time human–computer interaction. In Proceedings of the International Conference on Human–Computer Interaction, Sibiu, Romania, 22–23 October 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 231–238. [Google Scholar]
  154. Dehzangi, O.; Zou, Y.; Jafari, R. Simultaneous classification of motor imagery and SSVEP EEG signals. In Proceedings of the 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER), San Diego, CA, USA, 6–8 November 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1303–1306. [Google Scholar]
  155. Cha, K.; Lee, J.; Kim, H.; Kim, C.; Lee, S. Steady-State Somatosensory Evoked Potential based Brain-Computer Interface for Sit-to-Stand Movement Intention. In Proceedings of the 2019 7th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Republic of Korea, 18–20 February 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–3. [Google Scholar]
  156. Arfaras, G.; Athanasiou, A.; Pandria, N.; Kavazidi, K.R.; Kartsidis, P.; Astaras, A.; Bamidis, P.D. Visual versus kinesthetic motor imagery for BCI control of robotic arms (Mercury 2.0). In Proceedings of the 2017 IEEE 30th International Symposium on Computer-Based Medical Systems (CBMS), Thessaloniki, Greece, 22–24 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 440–445. [Google Scholar]
  157. Angrisani, L.; Arpaia, P.; Donnarumma, F.; Esposito, A.; Frosolone, M.; Improta, G.; Moccaldi, N.; Natalizio, A.; Parvis, M. Instrumentation for Motor Imagery-based Brain Computer Interfaces relying on dry electrodes: A functional analysis. In Proceedings of the 2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Dubrovnik, Croatia, 25–28 May 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar]
158. Bista, S.; Bikram Adhikari, N. Performance Analysis of Tri-channel Active Electrode EEG Device Designed for Classification of Motor Imagery Brainwaves for Brain Computer Interface. In Proceedings of the 2018 International Conference on Advances in Computing, Communication Control and Networking (ICACCCN), Greater Noida, India, 12–13 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 662–667. [Google Scholar]
  159. Liao, L.D.; Wu, S.L.; Liou, C.H.; Lu, S.W.; Chen, S.A.; Chen, S.F.; Ko, L.W.; Lin, C.T. A novel 16-channel wireless system for electroencephalography measurements with dry spring-loaded sensors. IEEE Trans. Instrum. Meas. 2014, 63, 1545–1555. [Google Scholar] [CrossRef]
  160. Lin, C.L.; Chu, T.Y.; Wu, P.J.; Wang, C.A.; Lin, B.S. Design of wearable brain computer interface based on motor imagery. In Proceedings of the 2014 Tenth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Kitakyushu, Japan, 27–29 August 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 33–36. [Google Scholar]
  161. Lin, B.S.; Pan, J.S.; Chu, T.Y.; Lin, B.S. Development of a wearable motor-imagery-based brain–computer interface. J. Med. Syst. 2016, 40, 1–8. [Google Scholar] [CrossRef]
  162. Vourvopoulos, A.; Niforatos, E.; Giannakos, M. EEGlass: An EEG-eyeware prototype for ubiquitous brain–computer interaction. In Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, London, UK, 9–13 September 2019; pp. 647–652. [Google Scholar]
  163. Zhang, Y.; Zhang, X.; Sun, H.; Fan, Z.; Zhong, X. Portable brain–computer interface based on novel convolutional neural network. Comput. Biol. Med. 2019, 107, 248–256. [Google Scholar] [CrossRef]
164. Advanced Brain Monitoring, Inc. B-Alert X-Series Wireless & Mobile EEG System. Available online: (accessed on 3 October 2022).
165. bio-medical. BrainMaster Discovery 24E—24 Channel qEEG. Available online: (accessed on 3 October 2022).
166. OpenBCI Online Store. Cyton Biosensing Board (8-Channels)—OpenBCI Online Store. Available online: (accessed on 3 October 2022).
167. ANT Neuro. eego™ rt | ANT Neuro. Available online: (accessed on 3 October 2022).
  168. Peterson, V.; Wyser, D.; Lambercy, O.; Spies, R.; Gassert, R. A penalized time-frequency band feature selection and classification procedure for improved motor intention decoding in multichannel EEG. J. Neural Eng. 2019, 16, 016019. [Google Scholar] [CrossRef] [Green Version]
  169. Neuroelectrics. Enobio 20|Solutions|Neuroelectrics. Available online: (accessed on 3 October 2022).
  170. Neuroelectrics. Enobio 8|Solutions|Neuroelectrics. Available online: (accessed on 3 October 2022).
  171. Venot, T.; Corsi, M.C.; Saint-Bauzel, L.; de Vico Fallani, F. Towards multimodal BCIs: The impact of peripheral control on motor cortex activity and sense of agency. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Virtual, 1–5 November 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 5876–5879. [Google Scholar]
  172. Abdalsalam, E.; Yusoff, M.Z.; Malik, A.; Kamel, N.S.; Mahmoud, D. Modulation of sensorimotor rhythms for brain–computer interface using motor imagery with online feedback. Signal Image Video Process. 2018, 12, 557–564. [Google Scholar] [CrossRef]
  173. EMOTIV. EMOTIV EPOC+ 14-Channel Wireless EEG Headset-EMOTIV. Available online: (accessed on 3 October 2022).
  174. Zhang, S.; Yuan, S.; Huang, L.; Zheng, X.; Wu, Z.; Xu, K.; Pan, G. Human mind control of rat cyborg’s continuous locomotion with wireless brain-to-brain interface. Sci. Rep. 2019, 9, 1321. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  175. Zich, C.; De Vos, M.; Kranczioch, C.; Debener, S. Wireless EEG with individualized channel layout enables efficient motor imagery training. Clin. Neurophysiol. 2015, 126, 698–710. [Google Scholar] [CrossRef] [PubMed]
  176. EMOTIV. EPOC Flex-32-Channel Wireless EEG Device-EMOTIV. Available online: (accessed on 3 October 2022).
  177. Paszkiel, S.; Dobrakowski, P. Brain–computer technology-based training system in the field of motor imagery. IET Sci. Meas. Technol. 2020, 14, 1014–1018. [Google Scholar] [CrossRef]
  178. EMOTIV. EMOTIV EPOC X-14 Channel Wireless EEG Headset—EMOTIV. Available online: (accessed on 3 October 2022).
  179. g.tec medical engineering GmbH Austria. g.USBAMP RESEARCH|EEG/Biosignal Amplifier | g.tec Medical Engineering GmbH Medical Engineering. Available online: (accessed on 3 October 2022).
180. ab medica s.p.a. Innovative Medical Devices—Robotics—Telemedicine—ab medica. Available online: (accessed on 3 October 2022).
  181. EMOTIV. Insight Brainwear® 5 Channel Wireless EEG Headset-EMOTIV. Available online: (accessed on 3 October 2022).
  182. NeuroSky. MindWave. Available online: (accessed on 3 October 2022).
  183. Kevric, J.; Subasi, A. The impact of Mspca signal de-noising in real-time wireless brain computer interface system. Southeast Eur. J. Soft Comput. 2015, 4, 43–47. [Google Scholar] [CrossRef] [Green Version]
  184. Muse. Muse 2: Brain Sensing Headband-Technology Enhanced Meditation. Available online: (accessed on 3 October 2022).
  185. g.tec medical engineering GmbH Austria. g.Nautilus Multiple Biosignal Recording|g.tec Medical Engineering GmbH. Available online: (accessed on 3 October 2022).
  186. g.tec medical engineering GmbH Austria. g.Nautilus PRO Wearable EEG|g.tec Medical Engineering GmbH. Available online: (accessed on 3 October 2022).
  187. g.tec medical engineering GmbH Austria. g.NAUTILUS RESEARCH|Wearable EEG Headset|g.tec Medical Engineering GmbH. Available online: (accessed on 3 October 2022).
188. Compumedics Neuroscan. NuAmps—Compumedics Neuroscan. Available online: (accessed on 3 October 2022).
189. Cognionics, Inc. Quick-20 Dry EEG Headset. Available online: (accessed on 3 October 2022).
  190. Neuroelectrics. Starstim® tES-EEG Systems|Neuroelectrics. Available online: (accessed on 3 October 2022).
191. NEUROSPEC AG. SynAmps 2/RT | NEUROSPEC AG Research Neurosciences. Available online: (accessed on 3 October 2022).
  192. Rashmi, C.; Shantala, C. EEG artifacts detection and removal techniques for brain computer interface applications: A systematic review. Int. J. Adv. Technol. Eng. Explor. 2022, 9, 354. [Google Scholar]
  193. Fatourechi, M.; Bashashati, A.; Ward, R.K.; Birch, G.E. EMG and EOG artifacts in brain computer interface systems: A survey. Clin. Neurophysiol. 2007, 118, 480–494. [Google Scholar] [CrossRef] [PubMed]
  194. Seok, D.; Lee, S.; Kim, M.; Cho, J.; Kim, C. Motion artifact removal techniques for wearable EEG and PPG sensor systems. Front. Electron. 2021, 2, 685513. [Google Scholar] [CrossRef]
  195. Delorme, A.; Makeig, S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 2004, 134, 9–21. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  196. Blankertz, B.; Muller, K.R.; Krusienski, D.J.; Schalk, G.; Wolpaw, J.R.; Schlogl, A.; Pfurtscheller, G.; Millan, J.R.; Schroder, M.; Birbaumer, N. The BCI competition III: Validating alternative approaches to actual BCI problems. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 153–159. [Google Scholar] [CrossRef]
  197. Tangermann, M.; Müller, K.R.; Aertsen, A.; Birbaumer, N.; Braun, C.; Brunner, C.; Leeb, R.; Mehring, C.; Miller, K.J.; Mueller-Putz, G.; et al. Review of the BCI competition IV. Front. Neurosci. 2012, 6, 55. [Google Scholar]
  198. Goldberger, A.L.; Amaral, L.A.; Glass, L.; Hausdorff, J.M.; Ivanov, P.C.; Mark, R.G.; Mietus, J.E.; Moody, G.B.; Peng, C.K.; Stanley, H.E. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation 2000, 101, e215–e220. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  199. Kaya, M.; Binli, M.K.; Ozbay, E.; Yanar, H.; Mishchenko, Y. A large electroencephalographic motor imagery dataset for electroencephalographic brain computer interfaces. Sci. Data 2018, 5, 180211. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  200. Leeb, R.; Lee, F.; Keinrath, C.; Scherer, R.; Bischof, H.; Pfurtscheller, G. Brain–computer communication: Motivation, aim, and impact of exploring a virtual apartment. IEEE Trans. Neural Syst. Rehabil. Eng. 2007, 15, 473–482. [Google Scholar] [CrossRef] [PubMed]
  201. Lin, X.; Wang, L.; Ohtsuki, T. Online Recursive ICA Algorithm Used for Motor Imagery EEG Signal. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 502–505. [Google Scholar] [CrossRef]
  202. Tuţă, L.; Roşu, G.; Popovici, C.; Nicolaescu, I. Real-Time EEG Data Processing Using Independent Component Analysis (ICA). In Proceedings of the 2022 14th International Conference on Communications (COMM), Bucharest, Romania, 16–18 June 2022; pp. 1–4. [Google Scholar] [CrossRef]
  203. Raza, H.; Rathee, D.; Zhou, S.M.; Cecotti, H.; Prasad, G. Covariate shift estimation based adaptive ensemble learning for handling non-stationarity in motor imagery related EEG-based brain–computer interface. Neurocomputing 2019, 343, 154–166. [Google Scholar] [CrossRef]
  204. Miladinović, A.; Ajčević, M.; Jarmolowska, J.; Marusic, U.; Colussi, M.; Silveri, G.; Battaglini, P.P.; Accardo, A. Effect of power feature covariance shift on BCI spatial-filtering techniques: A comparative study. Comput. Methods Programs Biomed. 2021, 198, 105808. [Google Scholar] [CrossRef] [PubMed]
  205. Millan, J.; Mourino, J. Asynchronous BCI and local neural classifiers: An overview of the adaptive brain interface project. IEEE Trans. Neural Syst. Rehabil. Eng. 2003, 11, 159–161. [Google Scholar] [CrossRef] [Green Version]
  206. Roc, A.; Pillette, L.; Mladenović, J.; Benaroch, C.; N’Kaoua, B.; Jeunet, C.; Lotte, F. A review of user training methods in brain computer interfaces based on mental tasks. J. Neural Eng. 2020, 18, 011002. [Google Scholar] [CrossRef]
  207. Covi, E.; Donati, E.; Liang, X.; Kappel, D.; Heidari, H.; Payvand, M.; Wang, W. Adaptive extreme edge computing for wearable devices. Front. Neurosci. 2021, 15, 611300. [Google Scholar] [CrossRef]
  208. Jin, X.; Li, L.; Dang, F.; Chen, X.; Liu, Y. A survey on edge computing for wearable technology. Digit. Signal Process. 2022, 125, 103146. [Google Scholar] [CrossRef]
  209. Huang, D.; Wang, M.; Wang, J.; Yan, J. A Survey of Quantum Computing Hybrid Applications with Brain-Computer Interface. Cogn. Robot. 2022, 2, 164–176. [Google Scholar] [CrossRef]
  210. Miranda, E.R.; Martín-Guerrero, J.D.; Venkatesh, S.; Hernani-Morales, C.; Lamata, L.; Solano, E. Quantum Brain Networks: A Perspective. Electronics 2022, 11, 1528. [Google Scholar] [CrossRef]
  211. Miranda, E.R.; Venkatesh, S.; Martın-Guerrero, J.D.; Hernani-Morales, C.; Lamata, L.; Solano, E. An approach to interfacing the brain with quantum computers: Practical steps and caveats. arXiv 2022, arXiv:2201.00817. [Google Scholar]
Figure 1. Flow diagram obtained by following the PRISMA guidelines.
Figure 2. Number of papers remaining after the scraping process for each considered year (1 January 2012–22 June 2022).
Figure 3. 10-20 International system (adapted from accessed on 31 January 2023). The letter correspondences are as follows: frontopolar (Fp), frontal (F), central (C), parietal (P), occipital (O), temporal (T). Auricular (A) electrodes are also included.
Figure 4. 10-10 International system (adapted from accessed on 31 January 2023). The letter correspondences are as follows: frontopolar (Fp), AF between Fp and F, frontal (F), FC between F and C, central (C), CP between C and P, parietal (P), PO between P and O, occipital (O), temporal (T), FT between F and T, TP between T and P. The system also presents the nasion (Nz), inion (Iz), left and right pre-auricular point (LPA and RPA).
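The 10-20 and 10-10 systems described in Figures 3 and 4 place electrodes at fixed fractions (10% and 20% steps) of the nasion–inion arc. As a minimal sketch of how the midline positions follow from those percentages (the 36 cm arc length below is an illustrative assumption, not a measurement):

```python
# Sketch: midline electrode positions in the 10-20 system, expressed as
# fractions of the nasion-inion arc (standard 10%/20% placement steps).
MIDLINE_FRACTIONS = {
    "Nz": 0.0,    # nasion
    "Fpz": 0.10,  # frontopolar midline
    "Fz": 0.30,   # frontal midline
    "Cz": 0.50,   # vertex
    "Pz": 0.70,   # parietal midline
    "Oz": 0.90,   # occipital midline
    "Iz": 1.00,   # inion
}

def midline_positions(nasion_inion_cm: float) -> dict:
    """Distance of each midline electrode from the nasion, in cm."""
    return {name: round(frac * nasion_inion_cm, 1)
            for name, frac in MIDLINE_FRACTIONS.items()}

positions = midline_positions(36.0)  # assumed 36 cm nasion-inion arc
print(positions["Cz"])  # vertex at half the arc -> 18.0
```

The 10-10 system of Figure 4 refines this grid by adding intermediate rows (AF, FC, CP, PO) at 10% subdivisions.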
Figure 5. BCI system standard life cycle. The three main modules are reported, i.e., the signal acquisition, data processing, and application modules.
Figure 6. Reviewed paper distribution according to different fields of applications and aims.
Figure 7. Radial graphic depicting the distribution of the classification and data analysis techniques employed by the reviewed papers.
Figure 8. Number of subjects recruited by the remaining 79 works presenting information on the matter. The first number (bold black) represents the subject number range, while the second number (bold gray) is the percentage of the total number of works. For example, 48% of the works presenting subject information recruited a maximum of five participants.
Table 1. Search engine information summary. The authors' names are reported with their initials.
Search Engine | Author | Last Consultation Date
IEEE Xplore | F.G. | 22 June 2022
Mendeley | F.G. | 22 June 2022
PubMed | S.C. | 22 June 2022
ScienceOPEN | A.S. | 21 June 2022
Semantic Scholar | A.S. | 21 June 2022
Scopus | A.S. | 22 June 2022
Web of Science | S.C. | 22 June 2022
Google Scholar | M.C. and A.S. | 20 June 2022
Table 2. Summary of the searches considering keyword subsets. The fields present the search engines and the main keywords related to the considered subsets.
Search Engine | EEG and BCI | EEG, BCI, and Wearable | EEG, BCI, and MI | EEG, BCI, Wearable, and MI
IEEE Xplore | 3584 | 121 | 546 | 13
Semantic Scholar | 12,200 | 1879 | 3630 | 259
Web of Science | 6564 | 180 | 2491 | 40
Table 3. EEG rhythms overview. The frequency ranges and the occurrence of the EEG rhythms are reported.
Rhythm | Frequency Range (Hz) | Occurrence
δ | ≤4 | infants, deep sleep
θ | 4–8 | emotional stress, drowsiness
α | 8–13 | relaxed awake state
μ | 8–13 | motor cortex functionalities
β | 13–30 | alert state, active thinking/attention, anxiety
γ | ≥31 | intensive brain activity
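Before feature extraction, the rhythms of Table 3 are typically isolated with band-pass filters. A minimal SciPy sketch (the sampling rate, filter order, and the 45 Hz gamma upper edge are illustrative assumptions; the band edges come from the table):

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Band edges (Hz) from Table 3; mu shares the alpha range over motor areas.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (31, 45)}

def band_power(signal, fs, band, order=4):
    """Mean power of `signal` after zero-phase band-pass filtering."""
    low, high = BANDS[band]
    b, a = butter(order, [low, high], btype="band", fs=fs)
    filtered = filtfilt(b, a, signal)  # forward-backward: no phase shift
    return np.mean(filtered ** 2)

fs = 250  # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)  # synthetic 10 Hz "alpha" oscillation
# the alpha band should capture far more power than the beta band here
print(band_power(eeg, fs, "alpha") > band_power(eeg, fs, "beta"))  # True
```

In MI paradigms the mu and beta bands are the usual targets, since event-related desynchronization over the motor cortex appears there.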
Table 4. Synthetic information on the wireless devices employed by the reviewed papers. For the Nautilus entries, it was unclear which device the authors used; thus, all the versions are reported. An asterisk (*) is present when the devices are amplifiers or data acquisition tools. Last information update, 3 October 2022.
Device | Producer | Channels/Electrode Type | Cost | Used in
B-Alert X-Series [164] | Advanced Brain Monitoring | up to 20/W (gel) | ask producer | [102]
BrainMaster Discovery 24E * [165] | bio-medical | 24/W, D (compatible) | USD 5800.00 | [114]
Cyton Biosensing Board * [166] | OpenBCI | 8/W, D (compatible) | USD 999.00 | [98,113,162]
eego rt [167] | ANT Neuro | 8–64/W, D | ask producer | [168]
Enobio 20 [169] | Neuroelectrics | 20/W, D | ask producer | [88,142]
Enobio 8 [170] | Neuroelectrics | 8/W, D | ask producer | [93,97,148,162,171,172]
EPOC+ [173] | Emotiv | 14/W (saline) | discontinued | [95,110,115,121,126,127,129,132,133,135,136,139,156,174,175]
EPOC Flex [176] | Emotiv | up to 32/W (gel, saline) | USD 1699.00–2099.00 | [122,177]
EPOC X [178] | Emotiv | 14/W (saline) | USD 849.00 | [107]
g.USBamp * with(out) g.MOBIlab [179] | g.tec medical engineering | 16/W, D | starting from EUR 11,900.00 | [87,104,120,125]
Helmate [180] | ab medica | NA/NA | ask producer | [111,157]
Insight [181] | Emotiv | 5/S | USD 499.00 | [106]
MindWave Mobile 2 [182] | NeuroSky | 1/NA | USD 109.99 | [108,183]
Muse headband 2 [184] | InteraXon | 4/NA | EUR 269.99 | [109,112,131,141,144]
g.Nautilus Multi-Purpose * [185] | g.tec medical engineering | 8–64/W, D (compatible) | starting from EUR 4990.00, depending on configuration | [89,90,105,125,130,145,146,151,153,155]
g.Nautilus PRO [186] | g.tec medical engineering | 8–32/W, D | starting from EUR 5500.00, depending on configuration | [89,90,105,125,130,145,146,151,153,155]
g.Nautilus RESEARCH [187] | g.tec medical engineering | 8–64/W, D | starting from EUR 4990.00, depending on configuration | [89,90,105,125,130,145,146,151,153,155]
NuAmps * [188] | NeuroScan | 32/NA | ask producer | [92]
Quick-20 Dry EEG Headset [189] | Cognionics | 19/D | ask producer | [100,137]
Starstim [190] | Neuroelectrics | 8–32/tES-EEG | ask producer | [103,140]
Synamps 2/RT * [191] | Neuroscan | 64/NA | ask producer | [118]
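Channel count is the main differentiator among the devices in Table 4, and it directly determines the raw data rate the wireless link must sustain. A back-of-the-envelope sketch (the sampling rates and 24-bit ADC resolution below are illustrative assumptions, not datasheet values; consult each producer for actual figures):

```python
# Sketch: uncompressed EEG payload rate for a wireless headset, to compare
# Table 4 configurations. All parameters here are illustrative assumptions.

def eeg_data_rate_kbps(channels: int, sample_rate_hz: int,
                       bits_per_sample: int) -> float:
    """Uncompressed payload rate in kilobits per second."""
    return channels * sample_rate_hz * bits_per_sample / 1000

# e.g., an 8-channel board at 250 Hz with 24-bit samples (Cyton-like assumption)
print(eeg_data_rate_kbps(8, 250, 24))   # 48.0 kbps
# a 64-channel research amplifier at 500 Hz, 24-bit
print(eeg_data_rate_kbps(64, 500, 24))  # 768.0 kbps
```

Even the largest configuration stays well within Bluetooth or Wi-Fi bandwidth, which is why compression (e.g., reference [149]) matters mainly for power budget rather than raw throughput.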
Table 5. Synthetic information on works presenting classification and other analyses, while comparing their results with benchmark datasets or their own available upon request or published datasets. An asterisk (*) has been applied to all the datasets that will be detailed in Section 5.4, at their first appearance.
ReferenceDataset and Experimental ParadigmClassification and/or Other AnalysesPerformance of the Best MethodOnline and/or Offline
Tang et al. [110]Benchmark dataset: BCI Competition IV dataset 2b *.
Own dataset: not available. Five subjects performed an experiment consisting of 90 repetitions of each of the MI tasks (left/right hand).
Classification task: binary (left vs. right hand).
DBN, DWT-LSTM, 1DCNN, 2DCNN and one-dimensional multi-scale convolutional neural network (1DMSCNN).
Measures: average accuracy with 1DMSCNN.
Validation strategy: dataset divided into training and test sets according to a 4:1 ratio.
BCI Competition IV dataset 2b (offline analysis): 82.61%.
Own dataset (online analysis): accuracy for each subject 76.78%, 91.78%, 70.00%.
Guan, Zhao, and Yang [133]Benchmark dataset: BCI Competition IV dataset 2a *, BCI Competition III dataset IIIa *.
Own dataset: available upon request. Seven subjects performed imagination of shoulder flexion, extension, and abduction. The acquisitions were repeated for 20 trials, each lasting 5 s of activity plus 5–7 s of rest.
Classification tasks: one-vs.-one, one-vs.-rest.
1. Subject-specific decision tree (SSDT) framework with filter geodesic minimum distance to Riemannian mean (FGMDRM).
2. Feature extraction algorithm combining semisupervised joint mutual information with general discriminate analysis (SJGDA) to reduce the dimension of vectors in the Riemannian tangent plane and classification with KNN.
Measures: average accuracy and mean kappa value.
Validation strategy: k-fold cross validation.
BCI Competition IV dataset 2a:
- SSDT-FGMDRM 10-fold cross-validation average accuracy left vs. rest 82.00%, right vs. rest 81.28%, foot vs. rest 81.51%, tongue vs. rest 83.95%. SSDT-FGMDRM mean kappa value 0.589.
- SJGDA 10-fold cross-validation mean accuracy left vs. rest 84.3%, right vs. rest 83.54%, foot vs. rest 82.11%, tongue vs. rest 85.23%. SJGDA mean kappa value 0.607.
- SJGDA 10-fold cross-validation mean accuracy on left vs. right 79.41%, left vs. feet 87.14%, left vs. tongue 86.51%, right vs. feet 86.75%, right vs. tongue 87.00%, feet vs. tongue 82.04%.
BCI Competition III dataset IIIa: 5-fold cross-validation mean accuracy of 82.78%.
Own dataset: 5-fold cross-validation mean accuracy (rounded values taken from the provided bar graphics) flexion vs. rest 90.00%, extension vs. rest 80.00%, abduction vs. rest a bit more than 90.00% and flexion vs. extension 90.00%, flexion vs. abduction 95.00%, extension vs. abduction 90.00%.
Peterson et al. [168]Benchmark dataset: BCI competition III dataset IVa * and BCI competition IV dataset 2b.
Own dataset: 11 subjects performed imagination of dominant hand grasping and a resting condition in four runs constituted by 20 trials per class.
Classification task: binary (rest vs. dominant hand grasping).
Optimal feature selection and classification contemporaneously performed through generalized sparse LDA.
Measures: average accuracy (reported best).
Validation strategy: 10 × 10 fold cross validation.
BCI competition III: 90.94 (±1.06)%.
BCI competition IV: 81.23 (±2.46)%.
Own dataset: 82.26 (±2.98)%.
Yang, Nguyen, and Chung [147]Benchmark dataset: None
Own dataset: available upon request. Six subjects, 10 trials of right hand grasping imagination for 5 s. The experiment was repeated for 10 runs. Notice that SSVEP tasks were also included.
Classification task: multi-class both MI and SSVEP.
Measures: best average accuracy (MI task).
Own dataset: 91.73 (±1.55)%.
Freer, Deligianni, and Yang [130]Benchmark dataset: BCI competition IV dataset 2a.
Own dataset: three subjects performed a paradigm without and with feedback. A total of 20 trials for each of the four conditions (left/right hand, both hands/feet) are performed per run with MI of 2–3 s.
Classification task: multi-class (4 classes).
Adaptive Riemannian classifier.
Measures: accuracy.
BCI Competition IV dataset 2a: lower than 50%.
Own dataset: lower than 50%.
Barria et al. [88]Benchmark dataset: None
Own dataset: available ( accessed on 31 January 2023). Five subjects, five phases: calibration, real movement, stationary therapy, MI detection with visual stimulation, and MI detection with visual and haptic stimulation. Apart from the first phases, the protocol consisted of 10 s alternations of a knee flexion task and rest.
Classification task: None.
Other analyses:
- Control of ankle exoskeleton through knee flexion.
- Analysis of the success rate in using the BCI, based on the beta power rebound threshold.
- Quebec User Evaluation of Satisfaction with Assistive Technology test to evaluate patients’ satisfaction.
Peterson et al. [113]Benchmark dataset: None
Own dataset *: (accessed on 31 January 2023).
Classification task: binary (rest vs. dominant hand grasping).
Generalized sparse discriminant analysis is used for both feature selection and classification.
Other analyses:
- The motor imagery ability of a single subject has been assessed through the KVIQ-10 questionnaire.
- Analyses of temporal and frequency information.
Measures: average accuracy (extracted from bar plot).
Own dataset: around 85% with Penalized Time–Frequency Band Common Spatial Pattern (PTFBCSP).
Shajil, Sasikala, and Arunnagiri [143]Benchmark dataset: BCI competition IV dataset 2a.
Own dataset: nine subjects performed 80 trials per MI conditions: left and right hand.
Classification task: binary. AlexNet, ResNet50, and InceptionV3 (pre-trained CNN models) plus transfer learning. Measures: best average accuracy.
BCI competition IV dataset 2a: InceptionV3 82.78 (±4.87)%.
Own dataset: InceptionV3 83.79 (±3.49)%.
Zhang et al. [115]Benchmark dataset: used 10 subjects of Physionet EEG Motor Movement/Imagery Dataset *.
Own dataset: seven subjects. Five conditions: eyes closed, left/right hand, both hands/feet paradigm (as for the benchmark dataset).
Classification task: multi-class on five conditions. RNN, CNN, RNN + CNN. Measures: average accuracy. (Precision, recall, F1, AUC, and confusion matrices for both Physionet and own dataset are also provided.)
- Benchmark dataset divided into training (21,000 samples) and test sets (7000 samples).
- Own dataset divided into training (25,920 samples) and test sets (8640 samples) for each subject.
Benchmark dataset: best model RNN+CNN 95.53% average accuracy.
Own dataset: best model RNN+CNN 94.27% average accuracy.
Mwata et al. [126]Benchmark dataset: EEG BCI dataset *.
Own dataset: four subjects. Experimental conditions: right and left hand, and the neutral action.
Classification task: multi-class on three conditions (neutral, left/right with corresponding robot command forward, backward, and neutral). Hybrid CNN-LSTM model.
Other analyses: analyses based on different subject combinations are reported.
Measures: average accuracy.
Validation strategy: 10-fold cross validation.
Benchmark dataset: 79.2%.
Own dataset: 84.69%.
Apicella et al. [111]Benchmark dataset: None.
Own dataset: 17 subjects. Motor task consists of maintaining attention focused only on (i) the squeeze movement (attentive-subject trial), or (ii) a concurrent distractor task (distracted-subject trial); in both trials, the participant must perform the squeeze-ball movement (three sessions, 30 trials per session). Total epochs: 4590. Half of the epochs were collected during the attentive-subject trials and were labeled as belonging to the first class. The remaining part was acquired during the distracted-subject trials and was labeled as belonging to the second class.
Classification task: binary (MI during attention vs. MI during distraction).
Measures: average accuracy (also provide precision, recall and F1 measure).
Validation strategy: 10-fold cross validation.
Own dataset: k-NN 92.8 (±1.6)%.
Alanis and Gutiérrez [119]Benchmark dataset: None.
Own dataset: available upon request, two subjects, four conditions: left or right hand, both hands, move up and down both feet. Five runs of forty trials.
Classification tasks: binary one-vs.-rest. LDA classifier using features extracted by BCI2000.
Other analyses: graph theory metrics to understand the differences in functional brain connectivity.
Best binary classification for both subjects: right hand open/close vs. rest; no numerical classification results are reported.
Mahmood et al. [123]Benchmark dataset: None.
Own dataset: available upon request, four subjects. Experimental conditions: eyes closed, left/right hand, pedal pressing.
Classification tasks: multiclass (4 classes). Population-based approach. SVM and CNN classifiers. Measures: average accuracy.
Validation strategy: 5-fold cross validation. CNN real-time accuracy: 89.65% and 93.22% for Ag/AgCl and FMNE electrodes, respectively.
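Several entries in Table 5 report k-fold cross-validated accuracy and Cohen's kappa for spatial-filtering pipelines with an LDA classifier. The following self-contained sketch on synthetic data illustrates that generic CSP + log-variance + LDA baseline and both metrics; it is not the method of any reviewed paper, and all data are simulated:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def csp_filters(X1, X2, n_pairs=1):
    """Common spatial patterns from two classes of trials (trials, ch, time)."""
    C1 = sum(x @ x.T for x in X1) / len(X1)
    C2 = sum(x @ x.T for x in X2) / len(X2)
    _, vecs = eigh(C1, C1 + C2)              # generalized eigendecomposition
    idx = np.concatenate([np.arange(n_pairs), np.arange(-n_pairs, 0)])
    return vecs[:, idx].T                     # most discriminative filters

def features(W, X):
    Z = np.array([W @ x for x in X])
    return np.log(Z.var(axis=2))              # log-variance per CSP component

def lda_fit(F, y):
    m0, m1 = F[y == 0].mean(0), F[y == 1].mean(0)
    Sw = np.cov(F[y == 0].T) + np.cov(F[y == 1].T)
    w = np.linalg.solve(Sw, m1 - m0)
    return w, -w @ (m0 + m1) / 2

def cohen_kappa(y_true, y_pred):
    po = np.mean(y_true == y_pred)
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c)
             for c in np.unique(y_true))
    return (po - pe) / (1 - pe)

# synthetic 2-class MI-like data: class 1 has extra variance on channel 0
def make_trials(n, boost):
    X = rng.standard_normal((n, 4, 200))
    X[:, 0] *= boost
    return X

X = np.concatenate([make_trials(40, 1.0), make_trials(40, 3.0)])
y = np.repeat([0, 1], 40)

# 5-fold cross-validation, class-balanced folds
folds = np.tile(np.arange(5), 16)
accs, preds = [], np.empty_like(y)
for k in range(5):
    tr, te = folds != k, folds == k
    W = csp_filters(X[tr][y[tr] == 0], X[tr][y[tr] == 1])
    w, b = lda_fit(features(W, X[tr]), y[tr])
    preds[te] = (features(W, X[te]) @ w + b > 0).astype(int)
    accs.append(np.mean(preds[te] == y[te]))
print(f"mean accuracy {np.mean(accs):.2f}, kappa {cohen_kappa(y, preds):.2f}")
```

Kappa corrects accuracy for chance agreement, which is why several Table 5 entries report it alongside accuracy for multi-class paradigms.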
Table 6. Summary of the publicly available datasets employed by some of the reviewed papers. In the Dataset field are reported only the citations directly related to the dataset publication.
Dataset | Device | Experimental Paradigm
BCI Competition III dataset IIIa [196] | 64-channel EEG amplifier (wired) | cue-based left/right hand, foot, tongue MI
BCI Competition III dataset IVa [196] | 128-channel ECI cap (wired) | cue-based left/right hand, right foot MI
BCI Competition IV dataset 2a [197] | 22 electrodes (wired) | cue-based MI-BCI left/right hand, both feet, tongue MI
BCI Competition IV dataset 2b [197] | 3 electrodes (wired) | cue-based MI-BCI left/right hand MI
EEG Motor Movement/Imagery Dataset [42,198] | 64 electrodes (wired) | cue-based motor execution/imagination left/right fist and both feet/fists opening/closing
MI-OpenBCI [113] | OpenBCI Cyton and Daisy Module, Electrocap System II, 15 electrodes (wearable) | cue-based dominant hand grasping MI
EEG BCI dataset [199] | JE-921A EEG system, 19 electrodes (wired) | left/right hand, left/right leg, tongue, and finger MI
All links accessed on 31 January 2023.
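Several datasets in Table 6 (e.g., the EEG Motor Movement/Imagery Dataset on PhysioNet) are distributed as EDF files. As an illustration, the fixed 256-byte EDF header can be decoded with the standard library alone; the field offsets follow the EDF specification, while the sample header built below is synthetic, not taken from an actual recording:

```python
# Sketch: decoding the fixed-width, ASCII, 256-byte EDF header.

def parse_edf_header(raw: bytes) -> dict:
    """Decode the main fields of a 256-byte fixed EDF header."""
    field = lambda start, length: raw[start:start + length].decode("ascii").strip()
    return {
        "version": field(0, 8),
        "patient_id": field(8, 80),
        "recording_id": field(88, 80),
        "start_date": field(168, 8),        # dd.mm.yy
        "start_time": field(176, 8),        # hh.mm.ss
        "n_data_records": int(field(236, 8)),
        "record_duration_s": float(field(244, 8)),
        "n_signals": int(field(252, 4)),
    }

# build a synthetic, spec-shaped header for demonstration (64 signals,
# 125 one-second data records); real files append 256 bytes per signal
fields = [("0", 8), ("S001", 80), ("MI run 3", 80), ("12.08.09", 8),
          ("16.15.00", 8), ("16640", 8), ("", 44), ("125", 8), ("1", 8),
          ("64", 4)]
raw = b"".join(text.ljust(width).encode("ascii") for text, width in fields)
hdr = parse_edf_header(raw)
print(hdr["n_signals"], hdr["record_duration_s"])  # 64 1.0
```

In practice, libraries such as MNE-Python or pyEDFlib handle the full format (per-signal headers, sample decoding); the sketch only shows why EDF remains a convenient, self-describing interchange format for the benchmark datasets above.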
Share and Cite

MDPI and ACS Style

Saibene, A.; Caglioni, M.; Corchs, S.; Gasparini, F. EEG-Based BCIs on Motor Imagery Paradigm Using Wearable Technologies: A Systematic Review. Sensors 2023, 23, 2798.