Article

Brain–Computer-Interface-Based Smart-Home Interface by Leveraging Motor Imagery Signals

Simona Cariello, Dario Sanalitro, Alessandro Micali, Arturo Buscarino and Maide Bucolo

1 Department of Electrical, Electronic and Computer Engineering (DIEEI), University of Catania, 95121 Catania, Italy
2 National Institute of Geophysics and Volcanology—INGV, 95121 Catania, Italy
3 EMMEVI S.r.l., 95121 Catania, Italy
* Author to whom correspondence should be addressed.
Inventions 2023, 8(4), 91; https://doi.org/10.3390/inventions8040091
Submission received: 14 June 2023 / Revised: 10 July 2023 / Accepted: 14 July 2023 / Published: 18 July 2023
(This article belongs to the Special Issue Recent Advances and New Trends in Signal Processing)

Abstract

In this work, we propose a brain–computer-interface (BCI)-based smart-home interface which leverages motor imagery (MI) signals to operate home devices in real time. The idea behind MI-BCI is that different types of MI activity activate different brain regions. Therefore, after recording the user's electroencephalogram (EEG) data, two approaches, i.e., Regularized Common Spatial Pattern (RCSP) and Linear Discriminant Analysis (LDA), analyze these data to classify the user's imagined tasks. In such a way, the user can perform the intended action. In the proposed framework, EEG signals were recorded using the EMOTIV helmet, while OpenVibe, a free and open-source platform, was utilized for EEG signal feature extraction and classification. After being classified, such signals are converted into control commands, and the open communication protocol for building automation KNX ("Konnex") is adopted for the tasks' execution, i.e., the regulation of two switching devices. The experimental results from the training and testing stages provide evidence of the effectiveness of the classification of users' intentions, which has subsequently been used to operate the proposed home automation system, allowing users to operate two light bulbs.

1. Introduction

The human brain has been studied for decades as a dynamic and complex structure, and brain–computer interface (BCI) systems have evolved to explore new ways of harnessing its activity to improve human lives. In particular, the goal of BCI systems is to establish a direct communication pathway between the brain and an external device, bypassing the body's typical pathways of nerves and muscles [1]. A large number of studies have explored the potential of BCI systems in various applications, including rehabilitation [2], navigation and robotic control [3], environmental control [4], and gaming and entertainment [5].
Depending on the selected experimental approach and the expected neurophysiological activation pattern, many forms of task-related information can be retrieved from brain waves. Significant examples include evoked potentials (EPs), steady-state evoked potentials (SSEPs) [6], event-related potentials (ERPs) [7], and sensorimotor rhythms such as motor imagery [8].
Motor-imagery-based BCI, in which users imagine performing a specific movement without actually executing it, has emerged as a promising approach for facilitating communication and control both for individuals with motor disabilities [9] and for general-purpose applications. In stroke rehabilitation, robotic arms controlled by MI have been used to direct patients' arm motions during recovery [10], while virtual reality has been employed for upper limb rehabilitation [11]. Continuous game control via MI-BCI has been developed in [12], while immersive virtual-reality-based embodiable feedback has been implemented in [13] to improve MI-BCI control. In [14], a motor-imagery-based, adaptive BCI speller has been designed.
Moreover, as the number of smart devices in our homes continues to grow, so does the need for efficient and convenient control systems. In this direction, several approaches have been presented in the literature in recent years. A prototype of an SSVEP-based BCI for home appliance control is presented in [15]. Surface electromyography (sEMG) readings from the occipitalis region have been used to drive a home automation system [16]. The "Neuro-Phone" [17] uses hidden Markov models trained to recognize mental instructions via the gamma feature band. Powered by a P300 control interface, the "BackHome" system assembles a suite of services, including smart home control, cognitive stimulation, online browsing, remote telemonitoring, and home support tools, to promote autonomy at home for users and carers without specialized training [18]. Among the available approaches, to the authors' knowledge, the potential of MI-BCIs to provide intuitive means of controlling home devices, particularly for individuals with physical disabilities, has not been fully explored, and this paper intends to fill this gap.
MI-BCI-based systems require that the user's intended movements be interpreted. Such a task is accomplished by first classifying the EEG signals and then converting them into commands. In recent years, several classifiers have emerged as popular techniques for addressing this challenge. These techniques include Bayesian approaches [19], pattern matching [20], neural networks [21], support vector machines [22], whitening techniques based on Gram–Schmidt orthogonalization [23], and Linear Discriminant Analysis [24]. LDA is a supervised classification algorithm that has been widely and successfully applied to BCI problems thanks to its simplicity and its high accuracy in separating different categories.
Furthermore, if the feature space dimension is large, a spatial filter can be employed to reduce the number of features, thus keeping the classifier from overfitting. The most straightforward approach is to manually select features from an a priori inspection of the data. This selection can, however, be automated through statistical methodologies. A well-known method for extracting brain activity in MI-BCIs is common spatial patterns (CSP) [25], a feature extraction technique that optimizes the separation of different signal classes. To improve the performance of the CSP algorithm in high-dimensional data settings, adding a regularization term to the CSP objective is a valid solution that improves robustness against noisy or incomplete data [26]. The combination of these two well-known methods, i.e., LDA and regularized CSP, leads to a more accurate and efficient classification of EEG signals.
For these reasons, in this work, we propose a novel use of an MI-based BCI system that employs LDA and RCSP to drive a home automation system. The entire software architecture is based on Konnex (KNX), a standardized protocol for home automation that facilitates communication between hardware devices to control various home appliances in real time. In addition, the proposed framework enables the control of two different devices at once by providing real-time information on the devices' states.
Our findings demonstrate that the proposed BCI system achieves good classification accuracy and good response times, indicating its potential for use in home automation systems.
The paper is organized as follows: Section 2 introduces the participants, the experiments, the data acquisition, the pre-processing, and the classification phases. The experimental sections, comprising the software and hardware architectures as well as the experimental results, are detailed in Section 3. Finally, Section 4 presents conclusions and future works.

2. Materials and Methods

2.1. Participants

The study was conducted on four subjects. Participants were asked to remain focused and attentive throughout the MI training session to obtain accurate and reliable results. The participants' composure and focus during the training session contributed to the positive outcomes of the experiment, as they help to reduce anxiety and stress levels, which can affect the performance of motor skills.

2.2. Experiments

In the standard paradigm for the discrimination of two mental states, the experimental task is to imagine either a right-hand or a left-hand movement depending on a visually presented stimulus. During the resting phase, the participants were introduced to the BCI system and instructed on what to do during the training and test sessions. Therefore, it was crucial to provide them with thorough clarifications and instructions.
Specifically, participants were asked to stare intently at a screen located around 150 [cm] in front of them (see Figure 1).
Figure 2 shows the timing of each trial for both the training and the test phases, beginning at time t_0 = 0 [s]. The sequence begins with an initial resting phase (t_r = 40 [s]) during which the participant relaxes and concentrates on the session to be performed. After the initial resting phase, as shown in Figure 2, each trial starts with a fixation cross that appears on the screen (see Figure 3b). Afterward, it is overlaid with an arrow at the center of the monitor for t_arr = 1.25 [s], pointing either to the left or to the right (as shown in Figure 3a,c). In this phase, depending on the direction of the arrow, the subject is instructed to imagine a left- or right-hand movement for t_MI = 3.75 [s]. Each trial lasts t_trial = 8 [s] in total. One entire training run includes twenty trials per class (forty in total), while one entire test run includes ten trials. Overall, in order to evaluate the performance of the training and test phases, subjects were required to stay focused for t_train = 7 [min] during the training phase and t_test = 2 [min] during the testing phase.

2.3. Data Acquisition

In order to acquire EEG signals, an Emotiv EPOC X headset was used. Recordings were made using a total of 14 electrodes. The selected electrodes were AF3, F3, F7, FC5, T7, P7, and O1 (left side); and AF4, F4, F8, FC6, T8, P8, and O2 (right side). Figure 4 shows the EEG topographical distribution for subject A during one MI task execution in the training phase. All electrodes are arranged according to the international 10–20 system, as shown in Figure 5.
The wireless headset acquires EEG signals at a sampling rate of 128 [Hz].

2.4. Pre-Processing and Classification

With the final aim of developing a complete framework to command home appliances, we selected some of the most reliable approaches to pre-process and classify EEG data. In particular, band-pass filtering, the application of an RCSP spatial filter, and a final classification performed by an LDA classifier are the three main stages that comprise the pre-processing and classification pipeline, summarized in the following text. A graphical block representation of these three main phases is shown in Figure 6.
In the filtering phase, each trial was band-pass filtered in the range of 8–30 [Hz] using a 5th-order Butterworth filter. The objective of this filtering procedure was to eliminate any noise or unwanted artifacts from the EEG data that could potentially interfere with the subsequent analysis. The 8–30 [Hz] frequency range used in this study was chosen to retain the frequencies most pertinent to the analysis of sensorimotor rhythms, which are known to be associated with movement and motor imagery.
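As an illustration, the following Python sketch reproduces this filtering step with SciPy. The 128 [Hz] sampling rate, the 8–30 [Hz] band, and the 5th-order Butterworth design follow the description above, while the function name, the array shapes, and the use of zero-phase filtering are assumptions made for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128.0          # Emotiv EPOC X sampling rate [Hz]
BAND = (8.0, 30.0)  # sensorimotor-rhythm band [Hz]
ORDER = 5           # Butterworth filter order

def bandpass_trial(trial, fs=FS, band=BAND, order=ORDER):
    """Band-pass filter one EEG trial.

    trial: array of shape (n_channels, n_samples), e.g., (14, n_samples).
    Returns the filtered trial with the same shape.
    """
    b, a = butter(order, band, btype="bandpass", fs=fs)
    # Zero-phase filtering avoids introducing additional phase distortion
    return filtfilt(b, a, trial, axis=-1)

# Example: filter one simulated 8 s trial recorded from 14 electrodes
trial = np.random.randn(14, int(8 * FS))
filtered = bandpass_trial(trial)
```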
Subsequently, after the band-pass filtering was performed, a regularized common spatial pattern (RCSP) filter [26] was applied to the data, aiming at improving the signal-to-noise ratio of the EEG data, with a particular focus on reducing inter-subject variability and improving spatial resolution. In general, supposing that there are two classes, the CSP's goal is to train spatial filters w_CSP that minimize the variance of the filtered EEG signals for one class while maximizing it for the other, according to
J(\mathbf{w}_{\mathrm{CSP}}) = \frac{\mathbf{w}_{\mathrm{CSP}}^{\top} \mathbf{X}_1^{\top} \mathbf{X}_1 \mathbf{w}_{\mathrm{CSP}}}{\mathbf{w}_{\mathrm{CSP}}^{\top} \mathbf{X}_2^{\top} \mathbf{X}_2 \mathbf{w}_{\mathrm{CSP}}} = \frac{\mathbf{w}_{\mathrm{CSP}}^{\top} \mathbf{C}_1 \mathbf{w}_{\mathrm{CSP}}}{\mathbf{w}_{\mathrm{CSP}}^{\top} \mathbf{C}_2 \mathbf{w}_{\mathrm{CSP}}}
where the superscript ⊤ denotes the transpose, X_i is the data matrix of class i, and C_i is the corresponding spatial covariance matrix. Regularization is obtained by adding a penalty function P(w_CSP), measuring how much the spatial filter w_CSP satisfies a given prior, according to
J_P(\mathbf{w}_{\mathrm{CSP}}) = \frac{\mathbf{w}_{\mathrm{CSP}}^{\top} \mathbf{C}_1 \mathbf{w}_{\mathrm{CSP}}}{\mathbf{w}_{\mathrm{CSP}}^{\top} \mathbf{C}_2 \mathbf{w}_{\mathrm{CSP}} + \alpha P(\mathbf{w}_{\mathrm{CSP}})}
where α ≥ 0 is an a priori defined regularization parameter. The more the spatial filter w_CSP satisfies the prior, the lower P(w_CSP) is. Hence, to maximize J_P(w_CSP), P(w_CSP) should be minimized, thus ensuring that the spatial filters satisfy the prior. This approach has been selected since it retains the spatial filters with the highest discriminative power, with the aim of avoiding the CSP's sensitivity to noise and its tendency to overfit. We refer the reader to Figure 7 for an illustration of the set of CSP scalp projections generated with the RCSP algorithm for one participant in the user study.
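To make the idea concrete, the sketch below derives CSP filters from band-passed trials using a simple Tikhonov-style regularization of the class covariance matrices, one of the regularized variants discussed in [26]. It is not necessarily the exact penalty P(w_CSP) adopted in this work, and all function and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def rcsp_filters(trials_1, trials_2, alpha=0.1, n_filters=6):
    """Compute regularized CSP spatial filters for two classes.

    trials_k: array of shape (n_trials, n_channels, n_samples) for class k.
    alpha: regularization weight added to the class covariances
           (a simple Tikhonov-style regularizer, not the paper's exact P(w)).
    Returns a (n_filters, n_channels) matrix of spatial filters.
    """
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))   # trace-normalized covariance
        return np.mean(covs, axis=0)

    n_ch = trials_1.shape[1]
    C1 = mean_cov(trials_1) + alpha * np.eye(n_ch)
    C2 = mean_cov(trials_2) + alpha * np.eye(n_ch)

    # Generalized eigenvalue problem: C1 w = lambda (C1 + C2) w
    eigvals, eigvecs = eigh(C1, C1 + C2)
    order = np.argsort(eigvals)
    # Keep filters from both ends of the spectrum (most discriminative)
    picks = np.concatenate([order[: n_filters // 2], order[-(n_filters // 2):]])
    return eigvecs[:, picks].T

def csp_features(trial, W):
    """Log-variance features of a spatially filtered trial."""
    z = W @ trial
    var = np.var(z, axis=1)
    return np.log(var / var.sum())
```

In practice, a few filters from both ends of the eigenvalue spectrum are retained, and the log-variance of the spatially filtered trial is used as the feature vector passed to the classifier.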
Spatial filtering was followed by classification using Linear Discriminant Analysis, a common approach for distinguishing between the characteristics of different classes, in this case two. The approach is fast and improves the classification performance by finding a rotation that maximizes the (normalized) distance between the centers of the two sets of data. In particular, when two groups A and B are assumed to have independent Gaussian distributions, the methodology calculates a projection vector that minimizes the variances of the projected populations while maximizing the mean distance between them. Thus, the mean difference and the pooled covariance matrix can be written, respectively, as
\Delta\boldsymbol{\mu} = \boldsymbol{\mu}_B - \boldsymbol{\mu}_A
\boldsymbol{\Sigma} = \tfrac{1}{2}\left(\boldsymbol{\Sigma}_B + \boldsymbol{\Sigma}_A\right).
The LDA projection vector can be defined as
\mathbf{w}_{\mathrm{LDA}} = \Delta\boldsymbol{\mu}\,\boldsymbol{\Sigma}^{-1}
Therefore, given an arbitrary input x, the symmetric LDA score is given by
p = \mathbf{w}_{\mathrm{LDA}}\,\mathbf{x} - d
where the offset term is
d = \tfrac{1}{2}\,\mathbf{w}_{\mathrm{LDA}}\left(\boldsymbol{\mu}_B + \boldsymbol{\mu}_A\right)
Using this method, the classification between two groups A and B can be performed.
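For completeness, a minimal Python transcription of the training and scoring equations above could look as follows; the input features are assumed to be the log-variance CSP features, and all names are illustrative.

```python
import numpy as np

def train_lda(X_a, X_b):
    """Fit the two-class LDA projection described above.

    X_a, X_b: arrays of shape (n_trials, n_features) for classes A and B.
    Returns (w, d): projection vector and offset.
    """
    mu_a, mu_b = X_a.mean(axis=0), X_b.mean(axis=0)
    sigma = 0.5 * (np.cov(X_a, rowvar=False) + np.cov(X_b, rowvar=False))
    w = np.linalg.solve(sigma, mu_b - mu_a)   # w = Sigma^{-1} (mu_B - mu_A)
    d = 0.5 * w @ (mu_b + mu_a)
    return w, d

def lda_score(x, w, d):
    """Symmetric LDA score: positive -> class B, negative -> class A."""
    return w @ x - d
```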

3. Experimental Results

3.1. Experimental Setup

In this section, we present the experimental setup and results that show the effectiveness of the presented BCI control framework, which is schematically shown in Figure 8. The integration of the technologies presented here has made it possible to carry out the challenging design of an MI-BCI-based system capable of driving home appliances.
First, EEG data were recorded with the Emotiv EPOC X through the 14 sensors placed on the subjects' scalps. EMOTIVPRO allowed interactions with a fixed workstation. Then, OpenVIBE (http://openvibe.inria.fr/, accessed on 11 July 2023) was integrated to perform the signals' pre-processing, feature extraction, and classification for both the training and the testing phases. Lab Streaming Layer (LSL), an open-source software framework for the real-time acquisition and synchronization of various types of data, facilitated the data exchange between EMOTIVPRO and OpenVIBE. Once the feature extraction and classification phases were complete, Node-RED, a flow-based programming tool for interconnecting hardware devices, APIs, and online services over TCP/IP, used the gathered information both to command the different hardware devices according to the users' intentions and to receive feedback about the status of the devices.
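For readers unfamiliar with LSL, the snippet below sketches how a consumer can pull EEG samples from an LSL stream in Python using pylsl; the stream type "EEG" and the surrounding loop are assumptions made for illustration, since in the actual framework the LSL exchange takes place between EMOTIVPRO and OpenVIBE rather than in user code.

```python
from pylsl import StreamInlet, resolve_stream

# Look for an EEG stream on the network (e.g., one published by EMOTIVPRO)
streams = resolve_stream('type', 'EEG')
inlet = StreamInlet(streams[0])

# Pull a few samples; each sample holds one value per channel (14 here)
for _ in range(128):
    sample, timestamp = inlet.pull_sample(timeout=1.0)
    if sample is not None:
        print(timestamp, sample)
```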
Ultimately, the KNX protocol enables the binding between Node-RED and the hardware interface. In particular, commands and device statuses are sent and received via KNX by configuring the gateway and port. In this way, instructed actions such as switching a light bulb on or off or setting a dimmer can be performed. The advantage of the KNX standard is that it allows devices to be linked to the line in any topological installation configuration; in addition, it allows the system to be controlled using a large range of interfaces, which are pivotal characteristics for our BCI implementation. To demonstrate the results, the implemented KNX architecture was composed of two switches that control two lighting systems, referred to from now on as LS-A and LS-B; these switches were connected through a twisted-pair shared bus (see Figure 1). It should be mentioned that actuator relays and sensors, such as brightness or temperature sensors, could also be integrated into the architecture.

3.2. Software Implementation

The designed software architecture makes use of Node-RED for the software association between the OpenVIBE outputs and the real KNX devices, taking full advantage of its suitability for low-code programming of event-driven applications and for making stable connections between hardware components. A schematic representation is shown in Figure 9. The data transfer is performed over TCP via a custom Node-RED library which sends the classified signals in string form. After initially setting the port number within the Inject block, the TCP Client Node block can be set up in listening mode, which allows the data coming from OpenVIBE to be received over TCP. Afterward, the users' commands are decoded and sent to the selected devices by using the KNX blocks, while the devices' states are continuously fed back to the system.
Using the designed software architecture, two devices were handled by switching their states according to the imagined right- and left-hand motions: the state of the lighting system LS-A changed when users imagined a left-hand motion, and the state of the lighting system LS-B changed when users imagined a right-hand motion. The strength of this strategy is that it enables users to manage not just one but two devices at once through MI.
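A simplified Python sketch of this decision logic is given below: it listens on a TCP socket for the classifier output sent as a string (mirroring what the Node-RED TCP Client Node does), toggles the state of LS-A for a left command and of LS-B for a right command, and delegates the actual KNX write to a placeholder. The port number, the "left"/"right" message format, and the send_knx_switch stub are assumptions; in the implemented system these steps are carried out by the Node-RED flow and its KNX nodes.

```python
import socket

HOST, PORT = "127.0.0.1", 5678      # assumed TCP endpoint exposed to OpenVIBE
light_state = {"LS-A": False, "LS-B": False}

def send_knx_switch(device, on):
    """Placeholder for the KNX write performed by the Node-RED KNX nodes."""
    print(f"KNX: set {device} -> {'ON' if on else 'OFF'}")

def handle_command(cmd):
    """Map a classified MI command to a device toggle."""
    device = {"left": "LS-A", "right": "LS-B"}.get(cmd.strip().lower())
    if device is None:
        return
    light_state[device] = not light_state[device]
    send_knx_switch(device, light_state[device])

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            handle_command(data.decode("utf-8"))
```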

3.3. Results

This section presents the results on the efficacy of the proposed MI-BCI-based home automation system. In particular, trials included a training phase followed by a testing phase in which participants imagined left and right movements in order to control the states of two devices, i.e., the KNX smart light LS-A and the KNX smart light LS-B. The interested reader is referred to the Supplementary Video to see the system in action (http://www.dees.unict.it/mbucolo/index.php/resources, accessed on 11 July 2023).
Four healthy subjects were involved in testing both the software and hardware architectures. The system's effectiveness was first assessed by determining how accurately the system recognized the left and right commands in the recorded EEG signals. The results of this first phase are summarized in Table 1, which displays the training accuracy for each individual across 40 trials. During the training phase, each subject performed the task described in Section 2.2, completing 20 trials for each class, i.e., left or right. The whole training phase (40 trials) lasted 7 min. This duration was considered a good compromise between reliable results, efficient management of computational resources, adequacy of the data, and the risk of the model becoming overly specialized. Moreover, from an experimental perspective, the longer the training phase, the greater the risk of the subjects losing attention, which could affect the experiments' performance. The results show that all the target end users involved in the evaluation achieved satisfactory accuracies (>70%). Moreover, only a small difference is observable between the best performance (subject 1) and the worst one (subject 3), which provides evidence that the approach is reliable under various conditions.
A second analysis was performed to assess how accurately the proposed MI-BCI-based home automation system executed the users' intentions in the test sessions. Each participant completed 10 tasks, each of which resulted in the generation of a command to drive the proposed home automation system by changing the light system states. The outcomes of this analysis are shown in Table 2. The testing phase lasted 2 min and took place exactly as the training phase, except that feedback was provided in the form of a blue bar encoding the intensity of the response to the visualized stimulus. In this case, the results show that all the target end users involved in the evaluation were able to gain control over the MI-BCI system with an average score of ≈72%. Furthermore, the fact that the testing phase results are consistent among participants, with an upper bound of 80% and a lower bound of 70%, is further proof of the validity of the proposed approach. Overall, the solution proposed in this work exhibits satisfactory performance and holds significant potential for further development and implementation.

4. Conclusions

In this work, we addressed the challenging problem of designing an MI-BCI-based smart-home interface to drive smart-home appliances in real time. Our work shows that the problem can be solved using a software and hardware architecture that was designed, implemented, and thoroughly tested with different subjects to show the effectiveness of the proposed solution. In particular, the integration of accessible programming tools, such as Node-RED, and reliable communication protocols for hardware device setup and communication, such as KNX, made it possible to realize a control framework that drives two switching devices with sufficient accuracy.
Further study should be carried out to cope with the daily fluctuations of EEG signals, and more has to be done to ensure more reliable performance. In particular, applying such a system to a larger number of users and focusing on long-term experiments are also crucial next steps. In addition, in light of the versatility of the proposed communication protocol, tests on more devices could be performed to extend the set of devices that users can effectively control.
However, despite the effectiveness of the proposed approach and in light of the complexity of the domain, additional challenges still need to be faced. Additional investigations should be carried out to reduce the duration of the training phase, since the current implementation requires considerable effort from the subjects. In addition, from a control perspective, further improvements should be made in the direction of enhancing the system outputs. In fact, users should be given more degrees of freedom of control so that they can regulate more than two devices at a time. MI multi-class classification can significantly help in this direction. From a technological point of view, further experiments need to be performed to assess the reliability of the approach outside research laboratories, in real-world conditions, and across a wide variety of subjects. These experiments could raise additional challenges not yet considered, such as the impact that real-world environmental disturbances can have on such systems.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/inventions8040091/s1.

Author Contributions

S.C.: Conceptualization; Software; Validation; Formal Analysis; Writing—original draft; Writing—review & editing. D.S.: Methodology; Software; Validation; Writing—original draft; Writing—review & editing. A.M.: Supervision; Funding acquisition. A.B.: Data curation. M.B.: Conceptualization; Investigation; Writing—review & editing; Supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been funded by the project PON 2014-2020 “Ricerca e Innovazione”—Project: “4 FRAILTY–Sensoristica intelligente, infrastrutture e modelli gestionali per la sicurezza di soggetti fragili”—ARS01_00345—CUP: E66C18000200005 and by the project Sicilian MicronanoTech Research And Innovation Center (SAMOTHRACE)—CUP E63C22000900006.

Data Availability Statement

Data will be provided at the following link: http://www.dees.unict.it/mbucolo/index.php/resources, accessed on 11 July 2023.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MI    Motor Imagery
BCI   Brain–Computer Interface
EEG   Electroencephalogram
KNX   Konnex
EP    Evoked Potentials
SSEP  Steady-State Evoked Potentials
ERP   Event-Related Potentials
EMG   Electromyography
LDA   Linear Discriminant Analysis
CSP   Common Spatial Patterns
RCSP  Regularized CSP
TCP   Transmission Control Protocol
IP    Internet Protocol
LS-A  Lighting System A
LS-B  Lighting System B

References

1. Shih, J.J.; Krusienski, D.J.; Wolpaw, J.R. Brain-Computer Interfaces in Medicine. Mayo Clin. Proc. 2012, 87, 268–279.
2. Robinson, N.; Mane, R.; Chouhan, T.; Guan, C. Emerging trends in BCI-robotics for motor control and rehabilitation. Curr. Opin. Biomed. Eng. 2021, 20, 100354.
3. Zhang, J.; Wang, M. A survey on robots controlled by motor imagery brain-computer interfaces. Cogn. Robot. 2021, 1, 12–24.
4. Zhong, S.; Liu, Y.; Yu, Y.; Tang, J.; Zhou, Z.; Hu, D. A dynamic user interface based BCI environmental control system. Int. J. Hum. Comput. Interact. 2020, 36, 55–66.
5. Arora, H.; Agrawal, A.P.; Choudhary, A. Conceptualizing BCI and AI in Video Games. In Proceedings of the 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), Greater Noida, India, 18–19 October 2019; pp. 404–408.
6. Sutter, E.E. The brain response interface: Communication through visually-induced electrical brain responses. J. Microcomput. Appl. 1992, 15, 31–45.
7. Pfurtscheller, G.; Da Silva, F.L. Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clin. Neurophysiol. 1999, 110, 1842–1857.
8. Yuan, H.; He, B. Brain–computer interfaces using sensorimotor rhythms: Current state and future perspectives. IEEE Trans. Biomed. Eng. 2014, 61, 1425–1435.
9. Ortner, R.; Irimia, D.C.; Scharinger, J.; Guger, C. A motor imagery based brain-computer interface for stroke rehabilitation. Annu. Rev. Cyberther. Telemed. 2012, 181, 319–323.
10. Aljalal, M.; Ibrahim, S.; Djemal, R.; Ko, W. Comprehensive review on brain-controlled mobile robots and robotic arms based on electroencephalography signals. Intell. Serv. Robot. 2020, 13, 539–563.
11. Wang, W.; Yang, B.; Guan, C.; Li, B. A VR Combined with MI-BCI Application for Upper Limb Rehabilitation of Stroke. In Proceedings of the 2019 IEEE MTT-S International Microwave Biomedical Conference (IMBioC), Nanjing, China, 6–8 May 2019; Volume 1, pp. 1–4.
12. Prapas, G.; Glavas, K.; Tzallas, A.T.; Tzimourta, K.D.; Giannakeas, N.; Tsipouras, M.G. Motor Imagery Approach for BCI Game Development. In Proceedings of the 2022 7th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM), Ioannina, Greece, 23–25 September 2022; pp. 1–5.
13. Choi, J.W.; Huh, S.; Jo, S. Improving performance in motor imagery BCI-based control applications via virtually embodied feedback. Comput. Biol. Med. 2020, 127, 104079.
14. Perdikis, S.; Leeb, R.; Millán, J.d.R. Context-aware adaptive spelling in motor imagery BCI. J. Neural Eng. 2016, 13, 036018.
15. Anindya, S.F.; Rachmat, H.H.; Sutjiredjeki, E. A Prototype of SSVEP-Based BCI for Home Appliances Control. In Proceedings of the 2016 1st International Conference on Biomedical Engineering (IBIOMED), Bali, Indonesia, 5–6 October 2016; pp. 1–6.
16. Luu, B.; Hansberger, B.; Chiu, M.; Shivappa, V.K.K.; George, K. Scalable Smart Home Interface Using Occipitalis sEMG Detection and Classification. In Proceedings of the 2018 9th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), New York, NY, USA, 8–10 November 2018; pp. 1002–1008.
17. Kumar, P.; Saini, R.; Sahu, P.K.; Roy, P.P.; Dogra, D.P.; Balasubramanian, R. Neuro-Phone: An Assistive Framework to Operate Smartphone Using EEG Signals. In Proceedings of the 2017 IEEE Region 10 Symposium (TENSYMP), Cochin, India, 14–16 July 2017; pp. 1–5.
18. Miralles, F.; Vargiu, E.; Dauwalder, S.; Solà, M.; Müller-Putz, G.; Wriessnegger, S.C.; Pinegger, A.; Kübler, A.; Halder, S.; Käthner, I.; et al. Brain computer interface on track to home. Sci. World J. 2015, 2015, 623896.
19. Chestek, C.A.; Gilja, V.; Blabe, C.H.; Foster, B.L.; Shenoy, K.V.; Parvizi, J.; Henderson, J.M. Hand posture classification using electrocorticography signals in the gamma band over human sensorimotor brain areas. J. Neural Eng. 2013, 10, 026002.
20. Kapeller, C.; Schneider, C.; Kamada, K.; Ogawa, H.; Kunii, N.; Ortner, R.; Prueckl, R.; Guger, C. Single Trial Detection of Hand Poses in Human ECoG Using CSP Based Feature Extraction. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 4599–4602.
21. Solon, A.J.; Lawhern, V.J.; Touryan, J.; McDaniel, J.R.; Ries, A.J.; Gordon, S.M. Decoding P300 variability using convolutional neural networks. Front. Hum. Neurosci. 2019, 13, 201.
22. Yger, F.; Berar, M.; Lotte, F. Riemannian approaches in brain-computer interfaces: A review. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 25, 1753–1762.
23. Choi, H.; Park, J.; Yang, Y.M. Whitening technique based on Gram–Schmidt orthogonalization for motor imagery classification of brain–computer interface applications. Sensors 2022, 22, 6042.
24. Bishop, C.M.; Nasrabadi, N.M. Pattern Recognition and Machine Learning; Springer: Cham, Switzerland, 2006; Volume 4.
25. Dornhege, G.; Millan, J.d.R.; Hinterberger, T.; McFarland, D.J.; Müller, K.R. Toward Brain-Computer Interfacing; MIT Press: Cambridge, MA, USA, 2007.
26. Lotte, F.; Guan, C. Regularizing common spatial patterns to improve BCI designs: Unified theory and new algorithms. IEEE Trans. Biomed. Eng. 2010, 58, 355–362.
Figure 1. Experimental setup: a personal computer connected to two devices through KNX, i.e., two light bulbs.
Figure 2. Experimental paradigm: MI trial’s phases. The three key stages—fixation cross, arrow cue, and MI task—are depicted to highlight the time intervals between them.
Figure 3. Arrows’ sequence during the training phase. (a) Right arrow; (b) Fixation cross; (c) Left arrow.
Figure 4. EEG topographical distribution of subject A during the training phase. (a) Fixation cross 0 [s], (b) Arrow cue at 2.75 [s], (c) MI task starting at 4.25 [s], (d) MI task at 5.25 [s].
Figure 5. Neuroheadset and its spatial configuration. (a) Emotiv EPOC X; (b) Electrode configuration.
Figure 6. Adopted classification methodology.
Figure 7. Common spatial pattern map. The figure illustrates the set of CSP filters of a single participant in the study. The CSPs are optimized for the discrimination of left-hand motor imagery.
Figure 8. System architecture overview: from signal acquisition to the hardware devices.
Figure 9. Node-RED flow chart of the proposed software implementation.
Table 1. Training accuracy.

Subject | Number of Trials | Training Phase Accuracy
1       | 40               | 73.32%
2       | 40               | 71.20%
3       | 40               | 70.01%
4       | 40               | 71.50%

Table 2. Test phase accuracy.

Subject | Number of Tasks | Testing Phase Accuracy
1       | 10              | 80%
2       | 10              | 70%
3       | 10              | 70%
4       | 10              | 70%
