Machine Learning for EEG Signal Processing

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 53848

Special Issue Information

Dear colleagues,

The 1st International Workshop on Machine Learning for EEG Signal Processing (MLESP 2018) will be held in Madrid, Spain, on 3–6 December 2018. The aim of this workshop is to present and discuss recent advances in machine learning for EEG signal analysis and processing. For more information about the workshop, please use this link:

https://mlesp2018.sciencesconf.org/

Authors of selected papers presented at the workshop are invited to submit extended versions to this Special Issue of the journal Computers after the conference. Submitted papers should be extended to the size of regular research or review articles, with at least a 40% extension of new results. All submitted papers will undergo our standard peer-review procedure. Accepted papers will be published in open access format in Computers and collected together on this Special Issue website. There are no page limitations for this journal.

We are also inviting original research work covering novel theories, innovative methods, and meaningful applications that can potentially lead to significant advances in EEG data analytics.

The main topics include, but are not limited to:

  • EEG signal processing and analysis
  • Time-frequency EEG signal analysis
  • Signal processing for EEG Data
  • EEG feature extraction and selection
  • Machine learning for EEG signal processing
  • EEG classification and clustering
  • EEG abnormality detection (e.g., epileptic seizures, Alzheimer's disease)
  • Machine learning in EEG Big Data
  • Deep Learning for EEG Big Data
  • Neural Rehabilitation Engineering
  • Brain-Computer Interface
  • Neurofeedback
  • Biometrics with EEG data
  • Related applications

Dr. Larbi Boubchir
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Electroencephalography (EEG)
  • Biomedical signal processing
  • Machine learning
  • Biomedical engineering

Published Papers (10 papers)

Research

28 pages, 4731 KiB  
Article
Online Multimodal Inference of Mental Workload for Cognitive Human Machine Systems
by Lars J. Planke, Alessandro Gardi, Roberto Sabatini, Trevor Kistan and Neta Ezer
Computers 2021, 10(6), 81; https://doi.org/10.3390/computers10060081 - 16 Jun 2021
Cited by 7 | Viewed by 2938
Abstract
With increasingly higher levels of automation in aerospace decision support systems, it is imperative that the human operator maintains the required level of situational awareness in different operational conditions and a central role in the decision-making process. While current aerospace systems and interfaces are limited in their adaptability, a Cognitive Human Machine System (CHMS) aims to perform dynamic, real-time system adaptation by estimating the cognitive states of the human operator. Nevertheless, to reliably drive system adaptation of current and emerging aerospace systems, there is a need to accurately and repeatably estimate cognitive states, particularly for Mental Workload (MWL), in real-time. As part of this study, two sessions were performed during a Multi-Attribute Task Battery (MATB) scenario, including a session for offline calibration and validation and a session for online validation of eleven multimodal inference models of MWL. The multimodal inference model implemented included an Adaptive Neuro Fuzzy Inference System (ANFIS), which was used in different configurations to fuse data from an Electroencephalogram (EEG) model’s output, four eye activity features and a control input feature. The online validation of the ANFIS models produced good results, while the best performing model (containing all four eye activity features and the control input feature) showed an average Mean Absolute Error (MAE) = 0.67 ± 0.18 and Correlation Coefficient (CC) = 0.71 ± 0.15. The remaining six ANFIS models included data from the EEG model’s output, which had an offset discrepancy. This resulted in an equivalent offset for the online multimodal fusion. Nonetheless, the efficacy of these ANFIS models could be confirmed by the pairwise correlation with the task level, where one model demonstrated a CC = 0.77 ± 0.06, which was the highest among all of the ANFIS models tested. Hence, this study demonstrates the suitability for online multimodal fusion of features extracted from EEG signals, eye activity and control inputs to produce an accurate and repeatable inference of MWL. Full article
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)
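
As a rough illustration of the fusion step described in this abstract, the sketch below concatenates a hypothetical EEG-derived workload estimate, four eye-activity features and a control-input feature and regresses them onto a workload score. A random-forest regressor stands in for the ANFIS models used in the paper, and all feature names and data are simulated placeholders, not the study's measurements.

```python
# Minimal sketch of the fusion idea only, assuming simulated data: EEG-derived
# workload, four eye-activity features and a control-input feature are
# concatenated and regressed onto a workload score. A random-forest regressor
# stands in for the ANFIS models used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical per-epoch features: [eeg_workload, blink_rate, fixation_time,
# pupil_diameter, saccade_rate, control_input_rate]
X_train = rng.normal(size=(200, 6))
y_train = rng.uniform(0, 1, size=200)        # calibrated MWL labels in [0, 1]
X_test = rng.normal(size=(50, 6))
y_test = rng.uniform(0, 1, size=50)

fusion = RandomForestRegressor(n_estimators=200, random_state=0)
fusion.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, fusion.predict(X_test)))
```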

15 pages, 937 KiB  
Article
Machine-Learning-Based Emotion Recognition System Using EEG Signals
by Rania Alhalaseh and Suzan Alasasfeh
Computers 2020, 9(4), 95; https://doi.org/10.3390/computers9040095 - 30 Nov 2020
Cited by 58 | Viewed by 7950
Abstract
Many scientific studies have been concerned with building an automatic system to recognize emotions, and building such systems usually relies on brain signals. These studies have shown that brain signals can be used to classify many emotional states. This process is considered difficult, especially since the brain’s signals are not stable. Human emotions are generated as a result of reactions to different emotional states, which affect brain signals. Thus, the performance of emotion recognition systems by brain signals depends on the efficiency of the algorithms used to extract features, the feature selection algorithm, and the classification process. Recently, the study of electroencephalography (EEG) signaling has received much attention due to the availability of several standard databases, especially since brain signal recording devices have become available in the market, including wireless ones, at reasonable prices. This work aims to present an automated model for identifying emotions based on EEG signals. The proposed model focuses on creating an effective method that combines the basic stages of EEG signal handling and feature extraction. Different from previous studies, the main contribution of this work lies in using empirical mode decomposition/intrinsic mode functions (EMD/IMF) and variational mode decomposition (VMD) for signal processing purposes. Despite the fact that EMD/IMFs and VMD methods are widely used in biomedical and disease-related studies, they are not commonly utilized in emotion recognition. In other words, the methods used in the signal processing stage in this work are different from the methods used in the literature. After the signal processing stage, namely in the feature extraction stage, two well-known techniques were used: entropy and Higuchi's fractal dimension (HFD). Finally, in the classification stage, four classification methods were used—naïve Bayes, k-nearest neighbor (k-NN), convolutional neural network (CNN), and decision tree (DT)—for classifying emotional states. To evaluate the performance of our proposed model, experiments were conducted on the widely used DEAP database and assessed with several evaluation metrics, including accuracy, specificity, and sensitivity. The experiments showed the efficiency of the proposed method; a 95.20% accuracy was achieved using the CNN-based method. Full article
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)
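
The sketch below is a minimal, simplified take on the feature-extraction idea summarized above: empirical mode decomposition of one EEG channel followed by Shannon entropy and Higuchi's fractal dimension per intrinsic mode function. It assumes the PyEMD package (installed as EMD-signal) and is not the authors' code; VMD, the classifiers, and the DEAP preprocessing are omitted.

```python
import numpy as np
from PyEMD import EMD  # assumed available via the EMD-signal package

def shannon_entropy(x, bins=64):
    # Histogram-based Shannon entropy of the signal's amplitude distribution
    p, _ = np.histogram(x, bins=bins, density=True)
    p = p[p > 0]
    p = p / p.sum()
    return -np.sum(p * np.log2(p))

def higuchi_fd(x, k_max=10):
    # Higuchi's fractal dimension estimated from curve lengths at scales 1..k_max
    n = len(x)
    lk = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            lengths.append(np.abs(np.diff(x[idx])).sum() * (n - 1) / (len(idx) * k * k))
        lk.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(lk), 1)
    return slope

rng = np.random.default_rng(0)
eeg_channel = rng.normal(size=1024)          # simulated single-channel EEG

imfs = EMD().emd(eeg_channel)                # rows are intrinsic mode functions
features = [f(imf) for imf in imfs[:4] for f in (shannon_entropy, higuchi_fd)]
print(np.round(features, 3))
```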

14 pages, 438 KiB  
Article
Statistical Model-Based Classification to Detect Patient-Specific Spike-and-Wave in EEG Signals
by Antonio Quintero-Rincón, Valeria Muro, Carlos D’Giano, Jorge Prendes and Hadj Batatia
Computers 2020, 9(4), 85; https://doi.org/10.3390/computers9040085 - 29 Oct 2020
Cited by 2 | Viewed by 5568
Abstract
Spike-and-wave discharge (SWD) pattern detection in electroencephalography (EEG) is a crucial signal processing problem in epilepsy applications. It is particularly important for overcoming time-consuming, difficult, and error-prone manual analysis of long-term EEG recordings. This paper presents a new method to detect SWD, with a low computational complexity making it easily trained with data from standard medical protocols. Precisely, EEG signals are divided into time segments for which the continuous Morlet 1-D wavelet decomposition is computed. The generalized Gaussian distribution (GGD) is fitted to the resulting coefficients and their variance and median are calculated. Next, a k-nearest neighbors (k-NN) classifier is trained to detect the spike-and-wave patterns, using the scale parameter of the GGD in addition to the variance and the median. Experiments were conducted using EEG signals from six human patients. Precisely, 106 spike-and-wave and 106 non-spike-and-wave signals were used for training, and 96 other segments for testing. The proposed SWD classification method achieved 95% sensitivity (True positive rate), 87% specificity (True Negative Rate), and 92% accuracy. These promising results set the path for new research to study the causes underlying the so-called absence epilepsy in long-term EEG recordings. Full article
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)
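
A compact sketch of the pipeline described above (not the authors' implementation): a continuous Morlet wavelet transform per EEG segment, a generalized Gaussian fit to the resulting coefficients, and a k-NN classifier on the GGD scale parameter, variance, and median. PyWavelets and scikit-learn are assumed; the segments and labels are simulated.

```python
import numpy as np
import pywt
from scipy.stats import gennorm
from sklearn.neighbors import KNeighborsClassifier

def swd_features(segment, scales=np.arange(1, 17)):
    # Continuous Morlet wavelet decomposition of the segment
    coeffs, _ = pywt.cwt(segment, scales, "morl")
    flat = coeffs.ravel()
    beta, _, scale = gennorm.fit(flat)       # generalized Gaussian shape and scale
    return [scale, np.var(flat), np.median(flat), beta]

rng = np.random.default_rng(0)
segments = rng.normal(size=(40, 256))        # simulated EEG time segments
labels = rng.integers(0, 2, size=40)         # 1 = spike-and-wave (placeholder)

X = np.array([swd_features(s) for s in segments])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(clf.predict(X[:5]))
```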

9 pages, 636 KiB  
Article
Fusion Convolutional Neural Network for Cross-Subject EEG Motor Imagery Classification
by Karel Roots, Yar Muhammad and Naveed Muhammad
Computers 2020, 9(3), 72; https://doi.org/10.3390/computers9030072 - 05 Sep 2020
Cited by 41 | Viewed by 6637
Abstract
Brain–computer interfaces (BCIs) can help people with limited motor abilities to interact with their environment without external assistance. A major challenge in electroencephalogram (EEG)-based BCI development and research is the cross-subject classification of motor imagery data. Due to the highly individualized nature of EEG signals, it has been difficult to develop a cross-subject classification method that achieves sufficiently high accuracy when predicting the subject’s intention. In this study, we propose a multi-branch 2D convolutional neural network (CNN) that utilizes different hyperparameter values for each branch and is more flexible to data from different subjects. Our model, EEGNet Fusion, achieves 84.1% and 83.8% accuracy when tested on the 103-subject eegmmidb dataset for executed and imagined motor actions, respectively. The model achieved statistically significantly higher results compared with three state-of-the-art CNN classifiers: EEGNet, ShallowConvNet, and DeepConvNet. However, the computational cost of the proposed model is up to four times higher than the model with the lowest computational cost used for comparison. Full article
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)
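
The sketch below illustrates the general multi-branch idea: three parallel convolutional branches with different kernel lengths and filter counts process the same EEG input, and their outputs are concatenated before the classification layer. It is an illustrative stand-in, not the published EEGNet Fusion architecture; shapes and hyperparameters are assumptions.

```python
from tensorflow.keras import layers, models

n_channels, n_samples, n_classes = 64, 640, 2
inp = layers.Input(shape=(n_channels, n_samples, 1))

def branch(x, n_filters, kernel_len):
    # Temporal convolution, per-channel depthwise (spatial) convolution, pooling
    x = layers.Conv2D(n_filters, (1, kernel_len), padding="same", activation="elu")(x)
    x = layers.DepthwiseConv2D((n_channels, 1), activation="elu")(x)
    x = layers.AveragePooling2D((1, 8))(x)
    return layers.Flatten()(x)

# Three branches with different hyperparameters, fused by concatenation
merged = layers.concatenate([branch(inp, 8, 32), branch(inp, 16, 64), branch(inp, 32, 128)])
out = layers.Dense(n_classes, activation="softmax")(layers.Dropout(0.5)(merged))

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```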

21 pages, 4929 KiB  
Article
Experimental Study for Determining the Parameters Required for Detecting ECG and EEG Related Diseases during the Timed-Up and Go Test
by Vasco Ponciano, Ivan Miguel Pires, Fernando Reinaldo Ribeiro, María Vanessa Villasana, Maria Canavarro Teixeira and Eftim Zdravevski
Computers 2020, 9(3), 67; https://doi.org/10.3390/computers9030067 - 27 Aug 2020
Cited by 8 | Viewed by 3652
Abstract
The use of smartphones, coupled with different sensors, makes them an attractive solution for measuring different physical and physiological features, allowing for the monitoring of various parameters and even identifying some diseases. The BITalino device allows the use of different sensors, including Electroencephalography (EEG) and Electrocardiography (ECG) sensors, to study different health parameters. With these devices, the acquisition of signals is straightforward, and it is possible to connect them using a Bluetooth connection. With the acquired data, it is possible to measure parameters such as the QRS complex and its variation in the ECG data to monitor the individual’s heartbeat. Similarly, by using the EEG sensor, one could analyze the individual’s brain activity and frequency. The purpose of this paper is to present a method for recognition of the diseases related to ECG and EEG data, with sensors available in off-the-shelf mobile devices and sensors connected to a BITalino device. The data were collected from elderly participants performing the Timed-Up and Go test, together with the different diseases present in the study sample. The data were analyzed, and the following features were extracted: from the ECG, heart rate, linear heart rate variability, the average QRS interval, the average R-R interval, and the average R-S interval; and from the EEG, frequency and variability. Finally, the diseases are correlated with different parameters, showing that there are relations between the individuals and the different health conditions. Full article
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)
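
As a toy illustration of the ECG-side parameters mentioned above (heart rate and R-R variability), the sketch below detects R-like peaks in a synthetic waveform with a simple threshold. It is not the BITalino acquisition or analysis pipeline; the QRS and R-S measures and the EEG features are omitted, and the sampling rate is an assumption.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100                                     # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63      # crude synthetic ECG-like peak train

# R-peak detection with a simple amplitude threshold and refractory distance
r_peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))
rr = np.diff(r_peaks) / fs                   # R-R intervals in seconds

print("heart rate (bpm):", 60 / rr.mean())
print("R-R variability (s):", rr.std())
```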

14 pages, 3703 KiB  
Article
Machine Learning Techniques with ECG and EEG Data: An Exploratory Study
by Vasco Ponciano, Ivan Miguel Pires, Fernando Reinaldo Ribeiro, Nuno M. Garcia, María Vanessa Villasana, Eftim Zdravevski and Petre Lameski
Computers 2020, 9(3), 55; https://doi.org/10.3390/computers9030055 - 29 Jun 2020
Cited by 7 | Viewed by 5151
Abstract
Electrocardiography (ECG) and electroencephalography (EEG) are powerful tools in medicine for the analysis of various diseases. The emergence of affordable ECG and EEG sensors and ubiquitous mobile devices provides an opportunity to make such analysis accessible to everyone. In this paper, we propose the implementation of a neural network-based method for the automatic identification of the relationship between the previously known conditions of older adults and the different features calculated from the various signals. The data were collected using a smartphone and low-cost ECG and EEG sensors during the performance of the Timed-Up and Go test. Different patterns related to the features extracted, such as heart rate, heart rate variability, average QRS amplitude, average R-R interval, and average R-S interval from ECG data, and the frequency and variability from the EEG data, were identified. A combination of these parameters allowed us to identify the presence of certain diseases accurately. The analysis revealed that mainly the participants' institutions and ages could be identified, while the various diseases and groups of diseases were difficult to recognize because they occurred only rarely in the considered population. Therefore, the test should be performed with more people to achieve better results. Full article
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)
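
A minimal sketch of the classification idea summarized above: a small neural network maps per-participant feature vectors (heart rate, heart rate variability, average QRS amplitude, R-R and R-S intervals, EEG frequency and variability) to a known-condition label. The data here are random placeholders, so the cross-validated score only demonstrates the workflow, not the study's results.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 7))     # 7 placeholder features per participant/trial
y = rng.integers(0, 3, size=120)  # hypothetical condition groups

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=1)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```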

14 pages, 4709 KiB  
Article
Classification of Vowels from Imagined Speech with Convolutional Neural Networks
by Markus-Oliver Tamm, Yar Muhammad and Naveed Muhammad
Computers 2020, 9(2), 46; https://doi.org/10.3390/computers9020046 - 01 Jun 2020
Cited by 30 | Viewed by 5091
Abstract
Imagined speech is a relatively new electroencephalography (EEG) neuro-paradigm, which has seen little use in Brain-Computer Interface (BCI) applications. Imagined speech can be used to allow physically impaired patients to communicate and to use smart devices by imagining desired commands and then detecting and executing those commands in a smart device. The goal of this research is to verify previous classification attempts and then design a new, more efficient neural network that is noticeably less complex (fewer layers) while still achieving a comparable classification accuracy. The classifiers are designed to distinguish between EEG signal patterns corresponding to imagined speech of different vowels and words. This research uses a dataset that consists of 15 subjects imagining saying the five main vowels (a, e, i, o, u) and six different words. Two previous studies on imagined speech classification are verified, as those studies used the same dataset used here. The replicated results are compared. The main goal of this study is to take the proposed convolutional neural network (CNN) model from one of the replicated studies and make it much simpler and less complex, while attempting to retain a similar accuracy. The pre-processing of data is described, and a new CNN classifier with three different transfer learning methods is described and used to classify EEG signals. Classification accuracy is used as the performance metric. The new proposed CNN, which uses half as many layers and less complex pre-processing methods, achieved a considerably lower accuracy, but still managed to outperform the initial model proposed by the authors of the dataset by a considerable margin. It is recommended that further studies investigating imagined speech classification should use more data and more powerful machine learning techniques. Transfer learning proved beneficial and should be used to improve the effectiveness of neural networks. Full article
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)
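
The sketch below shows one way to realize the transfer-learning idea described above: a compact CNN is first fitted on a six-word imagined-speech task, its convolutional base is then frozen, and a new dense head is trained for the five-vowel task. Channel count, window length, layer sizes, and data are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

n_channels, n_samples = 14, 256              # assumed montage and window length
base = models.Sequential([
    layers.Input(shape=(n_channels, n_samples, 1)),
    layers.Conv2D(16, (1, 32), padding="same", activation="elu"),
    layers.AveragePooling2D((1, 8)),
    layers.Conv2D(32, (n_channels, 1), activation="elu"),
    layers.Flatten(),
])

def with_head(n_classes):
    # Reuse the same convolutional base with a task-specific dense head
    return models.Sequential([base, layers.Dense(n_classes, activation="softmax")])

rng = np.random.default_rng(0)
X = rng.normal(size=(60, n_channels, n_samples, 1)).astype("float32")

words = with_head(6)                         # source task: six imagined words
words.compile("adam", "sparse_categorical_crossentropy")
words.fit(X, rng.integers(0, 6, 60), epochs=1, verbose=0)

base.trainable = False                       # transfer learning: freeze the base
vowels = with_head(5)                        # target task: five imagined vowels
vowels.compile("adam", "sparse_categorical_crossentropy")
vowels.fit(X, rng.integers(0, 5, 60), epochs=1, verbose=0)
```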

11 pages, 2449 KiB  
Article
Evaluation of Features in Detection of Dislike Responses to Audio–Visual Stimuli from EEG Signals
by Firgan Feradov, Iosif Mporas and Todor Ganchev
Computers 2020, 9(2), 33; https://doi.org/10.3390/computers9020033 - 20 Apr 2020
Cited by 9 | Viewed by 4360
Abstract
There is a strong correlation between the like/dislike responses to audio–visual stimuli and the emotional arousal and valence reactions of a person. In the present work, our attention is focused on the automated detection of dislike responses based on EEG activity when music videos are used as audio–visual stimuli. Specifically, we investigate the discriminative capacity of the Logarithmic Energy (LogE), Linear Frequency Cepstral Coefficients (LFCC), Power Spectral Density (PSD) and Discrete Wavelet Transform (DWT)-based EEG features, computed with and without segmentation of the EEG signal, on the dislike detection task. We carried out a comparative evaluation with eighteen modifications of the above-mentioned EEG features that cover different frequency bands and use different energy decomposition methods and spectral resolutions. For that purpose, we made use of Naïve Bayes classifier (NB), Classification and regression trees (CART), k-Nearest Neighbors (kNN) classifier, and support vector machines (SVM) classifier with a radial basis function (RBF) kernel trained with the Sequential Minimal Optimization (SMO) method. The experimental evaluation was performed on the well-known and widely used DEAP dataset. A classification accuracy of up to 98.6% was observed for the best performing combination of pre-processing, EEG features and classifier. These results support that the automated detection of like/dislike reactions based on EEG activity is feasible in a personalized setup. This opens opportunities for the incorporation of such functionality in entertainment, healthcare and security applications. Full article
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)
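
To make one of the compared feature/classifier combinations concrete, the sketch below summarizes each EEG epoch by Welch power spectral density per frequency band and feeds the band powers to an RBF-kernel SVM. Band edges, epoch length, and data are illustrative, not the paper's exact settings, and the LogE, LFCC, and DWT variants are omitted.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

fs = 128
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(epoch):
    # Welch PSD, summed within each frequency band
    f, psd = welch(epoch, fs=fs, nperseg=256)
    return [psd[(f >= lo) & (f < hi)].sum() for lo, hi in bands.values()]

rng = np.random.default_rng(0)
epochs = rng.normal(size=(100, fs * 4))      # 100 simulated 4 s EEG epochs
labels = rng.integers(0, 2, size=100)        # 1 = dislike (placeholder)

X = np.array([band_powers(e) for e in epochs])
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```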

13 pages, 527 KiB  
Article
Statistical-Hypothesis-Aided Tests for Epilepsy Classification
by Alaa Alqatawneh, Rania Alhalaseh, Ahmad Hassanat and Mohammad Abbadi
Computers 2019, 8(4), 84; https://doi.org/10.3390/computers8040084 - 20 Nov 2019
Cited by 7 | Viewed by 4560
Abstract
In this paper, an efficient, accurate, and nonparametric epilepsy detection and classification approach based on electroencephalogram (EEG) signals is proposed. The proposed approach mainly depends on a feature extraction process that is conducted using a set of statistical tests. Among the many existing tests, those fitting the processed data and the purpose of the proposed approach were used. From each test, various output scalars were extracted and used as features in the proposed detection and classification task. Experiments that were conducted on the basis of a Bonn University dataset showed that the proposed approach had very accurate results (98.4%) in the detection task and outperformed state-of-the-art methods in a similar task on the same dataset. The proposed approach also had accurate results (94.0%) in the classification task, but it did not outperform state-of-the-art methods in a similar task on the same dataset. However, the proposed approach had lower time complexity in comparison with those methods that achieved better results. Full article
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)
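
The sketch below illustrates the general idea of using statistical-test outputs as features (it is not the authors' specific battery of tests): a few scalar statistics are computed per EEG segment and fed to a simple classifier. Segments and labels are simulated placeholders.

```python
import numpy as np
from scipy import stats
from sklearn.tree import DecisionTreeClassifier

def test_features(segment):
    # Scalar outputs of a few statistical tests and moments per segment
    ks_stat, _ = stats.kstest(stats.zscore(segment), "norm")
    sh_stat, _ = stats.shapiro(segment[:500])
    return [ks_stat, sh_stat, stats.skew(segment), stats.kurtosis(segment)]

rng = np.random.default_rng(0)
segments = rng.normal(size=(60, 1024))       # simulated EEG segments
labels = rng.integers(0, 2, size=60)         # 1 = seizure (placeholder)

X = np.array([test_features(s) for s in segments])
clf = DecisionTreeClassifier(random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```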

Review

15 pages, 8849 KiB  
Review
Neural Net-Based Approach to EEG Signal Acquisition and Classification in BCI Applications
by Kathia Chenane, Youcef Touati, Larbi Boubchir and Boubaker Daachi
Computers 2019, 8(4), 87; https://doi.org/10.3390/computers8040087 - 04 Dec 2019
Cited by 3 | Viewed by 5654
Abstract
The following contribution describes a neural net-based, noninvasive methodology for electroencephalographic (EEG) signal classification. The application concerns a brain–computer interface (BCI) allowing disabled people to interact with their environment using only brain activity. It consists of classifying the user’s thoughts in order to translate them into commands, such as controlling wheelchairs, cursor movement, or spelling. The proposed method follows a functional model, as is the case for any BCI, and can be achieved through three main phases: data acquisition and preprocessing, feature extraction, and classification of brain activities. For this purpose, we propose an interpretation model implementing a quantization method using both fast Fourier transform with root mean square error for feature extraction and a self-organizing-map-based neural network to generate classifiers, allowing better interpretation of brain activities. In order to show the effectiveness of the proposed methodology, an experimental study was conducted by exploiting five mental activities acquired by a G.tec BCI system containing 16 simultaneously sampled biosignal channels at 24-bit resolution, with experiments performed on 10 randomly chosen subjects. Full article
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)
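
As a rough sketch of the quantization-plus-SOM idea described above (not the authors' implementation), the code below computes normalized FFT magnitude features per simulated EEG trial and maps each trial to its best-matching unit on a small self-organizing map; the RMSE-based feature step and the G.tec acquisition are omitted. The MiniSom package is an assumed dependency.

```python
import numpy as np
from minisom import MiniSom                  # assumed: pip install minisom

rng = np.random.default_rng(0)
trials = rng.normal(size=(50, 256))          # 50 simulated single-channel trials

# FFT magnitude features (first 32 bins), normalized per trial
feats = np.abs(np.fft.rfft(trials, axis=1))[:, :32]
feats /= np.linalg.norm(feats, axis=1, keepdims=True)

som = MiniSom(4, 4, feats.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(feats, 1000)

# Each trial is mapped to its best-matching unit, which acts as its activity cluster
units = [som.winner(f) for f in feats]
print(units[:5])
```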
