Article

An Efficient Machine Learning-Based Emotional Valence Recognition Approach Towards Wearable EEG

by
Lamiaa Abdel-Hamid
Department of Electronics & Communication, Faculty of Engineering, Misr International University (MIU), Heliopolis, Cairo P.O. Box 1, Egypt
Sensors 2023, 23(3), 1255; https://doi.org/10.3390/s23031255
Submission received: 3 December 2022 / Revised: 14 January 2023 / Accepted: 17 January 2023 / Published: 21 January 2023
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)

Abstract

Emotion artificial intelligence (AI) is being increasingly adopted in several industries such as healthcare and education. Facial expressions and tone of speech have been previously considered for emotion recognition, yet they have the drawback of being easily manipulated by subjects to mask their true emotions. Electroencephalography (EEG) has emerged as a reliable and cost-effective method to detect true human emotions. Recently, substantial research effort has been devoted to developing efficient wearable EEG devices for consumer use in out-of-the-lab scenarios. In this work, a subject-dependent emotional valence recognition method is implemented that is intended for utilization in emotion AI applications. Time and frequency features were computed from a single time series derived from the Fp1 and Fp2 channels. Several analyses were performed on the strongest valence emotions to determine the most relevant features, frequency bands, and EEG timeslots using the benchmark DEAP dataset. Binary classification experiments resulted in an accuracy of 97.42% using the alpha band, thereby outperforming several approaches from the literature by ~3–22%. Multiclass classification gave an accuracy of 95.0%. Feature computation and classification required less than 0.1 s. The proposed method thus has the advantage of reduced computational complexity as, unlike most methods in the literature, only two EEG channels were considered. In addition, the minimal feature set concluded from the thorough analyses conducted in this study was sufficient to achieve state-of-the-art performance. The implemented EEG emotion recognition method thus has the merits of being reliable and easily reproducible, making it well-suited for wearable EEG devices.

1. Introduction

Emotion artificial intelligence (AI), also known as affective computing, is the study of systems that can recognize, process, and respond to the different human emotions, thereby making people’s lives more convenient [1]. Emotion AI is an interdisciplinary field that combines artificial intelligence, cognitive science, psychology, and neuroscience. In 2019, the emotion AI industry was worth about 21.6 billion dollars, and its value was predicted to reach 56 billion dollars by the year 2024 [2].
Emotions are mental states created in response to events occurring to us or in the world around us. A large body of research since the 1970s has shown that basic emotions, such as happiness, sadness, and anger, are expressed similarly across different cultures [3]. James Russell, a renowned American psychologist, suggested a dimensional approach in which all human emotions can be expressed in terms of valence and arousal [4]. Valence refers to the extent to which an emotion is pleasant (positive/happy) or unpleasant (negative/sad), whereas arousal (intensity) refers to the strength or mildness of a given emotion (Figure 1). Russell’s valence-arousal model is very popular owing to its simplicity and efficacy, both of which have led to its wide adoption in emotion AI systems [5].
Emotions can be detected from a person’s facial expressions and tone of speech. Although these methods were previously considered for automatic emotion recognition [7,8], they both have the limitation of being easily manipulated by a person to hide his/her true emotions [5,9]. Electroencephalography (EEG) is a non-invasive technique that can measure spontaneous human brain activity while providing excellent temporal resolution yet limited spatial resolution [10]. EEG can thus provide a reliable method to detect and monitor true, unmanipulated human emotions. EEG-based emotion recognition has been successfully implemented in various applications including (1) education: to measure student engagement, (2) health: to diagnose psychological disorders, and (3) emotion-based music players: to provide a more engaging experience [11].
The cerebral cortex is the outermost layer of the brain that is associated with the highest mental capabilities. The cerebral cortex is traditionally divided into four main lobes, which are the frontal (F), parietal (P), occipital (O), and temporal (T) (Figure 2). Each brain lobe is typically associated with certain functions, yet many activities require the coordination of multiple lobes [12]. The frontal lobe is responsible for cognitive functions such as emotions, memory, decision making, and problem solving, as well as voluntary movement control. The parietal lobe processes information received from the outside world such as that related to touch, taste, and temperature. The occipital lobe is primarily responsible for vision, while the temporal lobe is responsible for understanding language, perception, and memory. EEG depicts the brain’s neuron activity in the different lobes through measuring the electrical voltage at the scalp. For an adult, this voltage is typically in the range of 10–100 µV. The 10/20 system is an internationally recognized EEG electrode placement method that divides the scalp into 10% and 20% intervals. The main EEG channels in the international 10/20 system are illustrated in Figure 3. Each channel is annotated with a letter and a number to identify the specific brain region and hemisphere location, respectively.
EEG signals are typically decomposed into five basic frequency bands, which are the delta (δ), theta (θ), alpha (α), beta (β), and gamma (γ) bands (Figure 4). Each frequency band is associated with a different type of brain activity [15,16,17]. Delta and theta are the two slowest brain waves, often occurring whilst sleeping and during deep meditation. Specifically, delta waves are more dominant in deep restorative sleep (unconsciousness), whereas theta waves are related to light sleep, daydreaming, praying, and deep relaxation (subconsciousness). Both waves have also been detected in cognitive processing, learning, and memory [17,18]. Alpha, beta, and gamma brain waves are, on the other hand, associated with consciousness. Alpha waves are the dominant brain waves of normal adults, occurring when one is calm and relaxed while still being alert. Beta waves are produced throughout daily activities performed in attentive wakefulness. Gamma waves are the fastest, linked to complex brain activities requiring a high level of thought and focus, for example, problem solving. Table 1 summarizes the five brain wave bands and their associated psychological states. Brain wave frequency bands are typically used to extract meaningful emotion-related features [17].
Historically, EEG equipment has been highly complicated and bulky, restricted to the monitoring of stationary subjects by highly trained technical experts within controlled lab settings [19]. Recently, enormous effort has been exerted to develop wearable EEG headsets that are reliable, affordable, and portable, thereby overcoming the limitations of conventional EEG headsets (Figure 5). Wearable EEG headsets allow for the long-term recording of brain signals while people are unmonitored, out of the lab, and navigating freely. Furthermore, EEG signals collected by the wearable headsets can be easily sent to a computer or mobile device for storage, monitoring, and/or data processing. Wearable EEG devices thus allow for the development of many clinical and non-clinical applications that were never previously possible. For example, wearable EEG has been shown to be effective for the remote monitoring of stroke [20], seizures [21], and sleep [22] by medical experts. EEG signals from wearable headsets can also be used for the development of brain-computer interface (BCI) applications such as car driver assistance [23], as well as wheelchair control for people with disabilities [24]. In addition, individuals can use EEG to improve their productivity and wellness via monitoring their moods and emotions [25]. However, extracting meaningful information using few EEG channels in order to reduce the computational complexity of wearable headsets is still an ongoing challenge [26,27].
In the present study, a subject-dependent emotional valence recognition algorithm is introduced that is intended for wearable EEG devices. The contributions of this work are as follows:
  • Only the difference signal between the frontal Fp1 and Fp2 channels was considered for feature extraction.
  • Simple statistical features were explored (Hjorth parameters, zero-crossings, and power spectral density), all of which share the merit of having low computational complexity.
  • Several analyses were made to determine the frequency band, time slot, and features most suitable for reliable EEG-based valence detection.
  • The presented valence recognition algorithm outperformed several state-of-the-art methods with the added advantages of requiring only two EEG channels, a single frequency band, and just two simple statistical features, thus making it suitable for integration within wearable EEG devices.
Figure 4. Samples from delta, theta, alpha, beta, and gamma brain waves [28].
Figure 5. (a) Conventional lab EEG headset [29] versus (b) wearable headset from NeuroSky [30].

2. Literature Review

Emotion AI systems generally rely on handcrafted and/or automatic extraction of meaningful features for the classification of the different human emotional states (Figure 6). In this section, the different types of EEG-based features commonly used for emotion recognition are introduced, followed by a summary of the most widely used classifiers for emotion recognition. Next, state-of-the-art EEG-based emotion detection methods from the literature are presented, indicating the considered EEG channels, frequency bands, features, and classifiers, as well as the performance results.

2.1. EEG Features

EEG-based emotion recognition features can be categorized into four different types based on the domain from which they are computed, as follows [31]:
(1) Time-domain (spatial) features are handcrafted features that are extracted from the EEG time-series signal. They can be computed directly from the raw EEG signal or from the different frequency bands separated with the aid of bandpass filters. Time-domain features comprise simple statistical features [32,33,34] such as the mean, standard deviation, skewness, and kurtosis. In addition, they include more complex features such as the Hjorth parameters [5,32,35,36,37,38,39,40,41], High Order Crossings (HOC) [5,33,38,40,42], Fractal Dimensions [43,44,45], Recurrence Quantification Analysis (RQA) [46,47], in addition to entropy-based features [5,34,35,45,48].
(2) Frequency-domain features are also handcrafted features, yet they are computed from the EEG signal’s frequency representation. The Fast Fourier Transform (FFT) and Short-Time Fourier Transform (STFT) are typically used to acquire the frequency-domain signal from the EEG waves. Frequency-based features allow for a deeper understanding of the signal by considering its frequency content. Frequency-domain features include the widely used power spectral density (PSD) [33,35,39,49,50,51], as well as rational asymmetry features (RASM) [32,34,39,52,53]. Statistical features such as the mean, median, variance, skewness, and kurtosis are also commonly computed in the EEG’s frequency domain, as well as the relative powers of the various frequency bands [54].
(3) Time-frequency domain features are handcrafted features extracted from sophisticated time-frequency signal representations. The wavelet transform (WT) is a powerful tool that can decompose a signal into different subbands by applying a series of successive high- and low-frequency filters. WT has the advantage of being localized in both time and frequency. It can thus be used to divide the EEG signal into the delta, theta, alpha, beta, and gamma subbands, from which wavelet time-frequency features can be directly computed for emotion classification (a minimal decomposition sketch is given after this list). Wavelet features typically include simple statistical measures such as mean, standard deviation, skewness, kurtosis, energy, and entropy [9,32,39,53,55,56,57].
(4) Deep features refer to those features that are automatically extracted in an end-to-end manner using one or more deep networks. Deep features have been gaining increased popularity and are being used either solely or alongside handcrafted (traditional) features in emotion AI [58]. Inputs to the deep networks can be the raw EEG signal [59,60], traditional features [61], or images that are obtained either from the EEG signal’s Fourier transform (spectrograms) or wavelet transform (scalograms) [62,63,64,65]. In addition, the deep networks used for the feature extraction can be directly utilized or initially pretrained (transfer learning) to enhance performance.
Handcrafted (traditional) features have been widely implemented in the design of reliable EEG-based emotion AI systems. Time-domain features have the merit of being easy to implement while efficiently extracting relevant information from the EEG signals. Specifically, complex time-domain features such as Hjorth parameters and High Order Crossings were shown to give reliable results in EEG emotion recognition [31]. Frequency-domain features have also been widely implemented for EEG emotion recognition due to their efficient performance, yet they have the disadvantage of missing temporal information. Wavelet-domain features have the advantage of being localized in time and frequency, allowing for the extraction of simple yet meaningful features from the signal. A limitation of the wavelet-based features is the selection of a suitable mother wavelet [31]. Most EEG-based emotion recognition approaches thus combine different types of features for consistent performance. Several traditional classifiers were implemented in the literature to classify the handcrafted features, of which some of the most popular are support vector machine (SVM), k-nearest neighbor (kNN), random forests (RF), naïve Bayes (NB), and gradient boosted decision trees (GBDT) [66].
As for deep learning approaches, convolutional neural networks (CNNs), deep belief networks (DBNs), and long short-term memory networks (LSTMs), among others, have been used for feature extraction in emotion AI systems. In addition, readily available pretrained CNNs, such as GoogleNet, have been widely used in the literature as they tend to give reliable performance without requiring enormous data for training. An SVM classifier, or sigmoid/softmax activation functions, is then typically used at the network’s final stage for emotion classification. Deep EEG emotion recognition methods, however, have the limitation of requiring a huge amount of data for their proper training in comparison to traditional methods [54].

2.2. Previous Literature

Several public EEG emotion datasets have been introduced, including DEAP [67], SEED [68,69], MAHNOB-HCI [70], and DREAMER [71]. A few works also report results using their own private self-generated datasets [51]. DEAP is currently considered the benchmark dataset in EEG-based emotion detection, being the most widely used public EEG emotion dataset in the literature, mostly owing to it having the largest number of observations per subject [72].
EEG emotion recognition approaches can be divided into subject-dependent and subject-independent [46,73]. Subject-dependent methods train a separate model for each subject within the dataset. Subject-independent methods train a single model using data from all or some of the subjects within the considered dataset [74]. Recent papers comparing subject-dependent and subject-independent approaches showed that the former consistently gave 5–30% higher performance depending on the implemented approach. Such results are mainly due to the discrepancy between subjects related to how they feel and express their emotions [75]. For example, Nath et al. [73] observed that EEG signals from a specific subject were somewhat similar yet varied significantly across different subjects, even when the same stimulus was considered. In addition, Putra et al. [75] found that different subjects varied in their response to valence stimuli, with some subjects being more responsive than others [75]. Subject-dependent approaches are thus better suited for reliable personalized emotion AI applications with wearable EEG [64].
Table 2 summarizes some of the recent EEG emotion recognition approaches using the benchmark DEAP dataset. For each research paper, the summary indicates the utilized (1) EEG channels, (2) frequency bands, (3) feature types: time, frequency, wavelet, or deep features, (4) classifier, (5) experimental approach: subject-dependent (dep.) or subject-independent (indep.), as well as the (6) accuracies (Acc.) reported for valence (val.) and arousal (arl.) emotion recognition. For the subject-independent emotion recognition methods, reported accuracies are for the experiments performed considering the complete dataset. As for the subject-dependent methods, reported accuracies are the average of the experiments repeated for all the subjects in the dataset. The summarized literature review shows that subject-dependent (personalized) approaches that adopted deep learning methods gave accuracies that were higher than 90% for both valence and arousal. However, subject-dependent approaches relying solely on traditional methods scarcely resulted in accuracies that exceeded 75%. Another limitation observed in previous literature is that most methods consider many or all EEG channel electrodes and/or frequency bands, which can lead to high computational overhead with minimal, if any, performance improvement.
In the present study, a subject-dependent approach is adopted for valence (happy/sad) emotion classification intended for personalized emotion AI applications with wearable EEG. Since several previous studies showed that the frontal channels are the most relevant for EEG-based emotion recognition [33,39,40,53,83], only the Fp1 and Fp2 channels were considered for emotion recognition. The widely used DEAP benchmark dataset was considered for its reliability, as well as to facilitate comparison to previous approaches. Time- and frequency-domain EEG features, namely the Hjorth parameters, zero-crossings, and PSD, were extracted from a single time series derived from the Fp1 and Fp2 channels.
Happiness and sadness emotions (valence) have been reported to dramatically affect the theta, alpha, and beta waves of the frontal channels [84]. Interestingly, the delta [85], alpha [86], and gamma [87,88] waves of the frontal channels were also shown to be individually useful for EEG-based emotion recognition. Several analyses were thus performed in this work to determine the frequency bands most suitable for valence detection considering the different computed features. In addition, performance was observed when the complete EEG signal was considered for feature computation in comparison to when only a short segment was utilized. The aim of the performed analyses was to find the most suitable feature set that would achieve superior performance comparable to state-of-the-art methods, all while requiring minimal computational overhead. Primarily, only the sixteen strongest emotions (eight happiest and eight saddest) were considered in the analyses in order to assure significant discrepancy between the emotions. Then, the complete DEAP dataset was utilized for the final experimentations concerning binary and multiclass valence classifications, as well as for comparison to previous literature.

3. Methods

3.1. Dataset

DEAP is a public audio-visual stimuli-based emotion dataset [67] that was collected from 32 subjects. For emotion recognition, the use of audio-visual stimuli guarantees that a higher valence intensity is experienced compared with visual stimuli (pictures) [89]. The subjects’ ages ranged between 19 and 37, with an average of 26.9 years. Each subject watched 40 one-minute music videos intended to elicit different emotions. These one-minute videos were extracted from long-version music videos to include maximum emotional content. EEG signals from thirty-two electrodes placed according to the international 10/20 system were recorded at a sampling rate of 512 Hz and then downsampled to 128 Hz. Each electrode recorded a 63 s EEG signal, including a 3 s baseline signal before the trial. The 3 s baseline was ignored here as previously done in [58,76,77,90].
After watching each video, participants performed a self-assessment of their emotional states of valence, arousal, liking, and dominance on a continuous scale from 1 to 9. Only valence was considered in the present study, which would be useful for personalized medical applications as well as in emotion-based entertainment content. The valence scale ranges from sad to happy, with ratings closer to one representing low valence (sad) and ratings closer to nine indicating high valence (happy). For the binary classification experiments, a threshold (thresh.) of five was considered to separate the low and high valence classes, as commonly performed in many other works such as Refs. [58,60,61,73,74,76,77,91,92,93,94]. This threshold value is typically chosen to overcome the class imbalance issue in the DEAP dataset [64,67]. As for the three-class classifications, thresholds of three and six were considered to divide the dataset into low valence (sad), mid-range (neutral), and high valence (happy).
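For illustration, the snippet below maps the self-assessment valence ratings to binary and three-class labels using the thresholds stated above (five for the binary split; three and six for the three-class split). It is a minimal Python sketch with placeholder ratings, and the handling of ratings falling exactly on a threshold is an assumption.

```python
import numpy as np

# Placeholder: valence self-ratings (1-9) for one subject's 40 trials.
valence = np.random.uniform(1, 9, size=40)

# Binary split at 5: low valence (sad) vs. high valence (happy).
# Whether a rating of exactly 5 counts as low or high is an assumption here.
binary_labels = (valence > 5).astype(int)        # 0 = low, 1 = high

# Three-class split at 3 and 6: sad / neutral / happy.
three_class_labels = np.digitize(valence, bins=[3, 6])   # 0 = low, 1 = mid, 2 = high
```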

3.2. Channel Selection

The international 10/20 system includes several electrode placement markers applied to detect the brain waves from the different brain lobes. In deep learning approaches, where it is basically the network’s task to extract meaningful features from the data, it is common to input all the EEG channels to the network for emotion recognition [59,60,94]. Nevertheless, several studies have shown that considering all EEG channels can be redundant and that extracting features from a few significant channels can result in reliable performance with the added advantage of reduced computational complexity [35,53,55].
For wearable EEG headsets, requiring only one or two EEG channels can substantially reduce the hardware complexity, thus facilitating usage in non-laboratory settings, as well as reducing the overall cost, all of which would make such headsets more attractive to day-to-day consumers [35,53,95]. Of the different brain lobes, the frontal lobe is the one most associated with emotion recognition using EEG signals [5]. Specifically, several studies have shown that features calculated from the prefrontal brain region (Fp1-Fp2) result in the best performance as compared to other brain areas [35]. Mohammadi et al. [55] more specifically showed that the Fp1-Fp2 channel pair resulted in the highest accuracies in comparison to other frontal channel pairs, and that combining all the frontal channels resulted in a somewhat enhanced performance. Interestingly, Wu et al. [53] found that not only did Fp1-Fp2 result in the highest accuracies in comparison to the other frontal channels, but also that solely using Fp1-Fp2 resulted in similar performance to the case when features from four or six frontal channels were combined. The Fp1-Fp2 channel pair was thus chosen in this study for valence-related feature extraction.
Previous research has shown that positive emotions are associated with left frontal activity, whereas negative emotions are associated with right frontal activity [96,97]. Symmetric channel pairs from the left and right brain hemispheres have thus commonly been considered in the literature, being either subtracted or divided in order to create a single wave from which relevant features are calculated [61,98,99]. In the present study, the EEG features were extracted from a single time series signal computed as the difference between the Fp1 and Fp2 channels in order to measure the asymmetry in brain activity due to the valence emotional stimuli [67].
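The difference signal can be formed directly from the preprocessed DEAP recordings. The Python sketch below (the paper's experiments were run in MATLAB) assumes the preprocessed Python files of DEAP and their Geneva channel ordering, in which Fp1 and Fp2 are the first and seventeenth channels; the file path and indices are illustrative and should be verified against the dataset documentation.

```python
import pickle
import numpy as np

# Load the preprocessed DEAP recording of one subject (path is illustrative).
with open('data_preprocessed_python/s01.dat', 'rb') as f:
    subject = pickle.load(f, encoding='latin1')

eeg = subject['data']        # (40 trials, 40 channels, 8064 samples @ 128 Hz)
FP1, FP2 = 0, 16             # assumed indices in the Geneva channel order

# Single time series per trial measuring left/right prefrontal asymmetry.
diff_signal = eeg[:, FP1, :] - eeg[:, FP2, :]

# Drop the 3 s pre-trial baseline (3 s x 128 Hz = 384 samples), as in the text.
diff_signal = diff_signal[:, 3 * 128:]     # -> (40, 7680), i.e., 60 s per trial
```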

3.3. EEG Band Separation

Five third-order Butterworth band-pass filters were implemented to separate the delta (2–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (12–30 Hz), and gamma (30–60 Hz) frequency bands (Table 1). The Butterworth filter has previously been used for EEG band separation owing to its flat response, simplicity, and efficiency [5,40].
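A minimal SciPy sketch of this band separation step is given below. The band edges and the third-order Butterworth design follow the text, while the use of zero-phase filtering (filtfilt) is an implementation assumption.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128                                   # DEAP sampling rate (Hz)
BANDS = {'delta': (2, 4), 'theta': (4, 8), 'alpha': (8, 12),
         'beta': (12, 30), 'gamma': (30, 60)}

def band_filter(signal, low, high, fs=FS, order=3):
    """Third-order Butterworth band-pass filter, applied with zero phase."""
    nyq = fs / 2
    b, a = butter(order, [low / nyq, high / nyq], btype='band')
    return filtfilt(b, a, signal)

# Example: isolate the alpha band of a 60 s trial (placeholder signal).
trial = np.random.randn(60 * FS)
alpha = band_filter(trial, *BANDS['alpha'])
```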

3.4. Feature Extraction

Both time and frequency domain EEG features were initially computed from all the frequency bands (delta–theta–alpha–beta–gamma). Next, feature analysis was performed to determine which features were more suitable for valence emotion recognition, as well as the most relevant frequency band for feature extraction.
A. Hjorth Parameters
Hjorth parameters [100] were introduced by Bo Hjorth in 1970 to represent several statistical properties of a signal (Figure 7). Hjorth parameters have been successfully used in various EEG emotion recognition studies [5,32,35,36,37,38,39,40]. The three Hjorth parameters are activity (variance), mobility, and complexity, given by the following equations:
$$\mathrm{Activity} = \mathrm{var}\left(y(t)\right)$$
$$\mathrm{Mobility} = \sqrt{\frac{\mathrm{Activity}\left(\mathrm{d}y(t)/\mathrm{d}t\right)}{\mathrm{Activity}\left(y(t)\right)}}$$
$$\mathrm{Complexity} = \frac{\mathrm{Mobility}\left(\mathrm{d}y(t)/\mathrm{d}t\right)}{\mathrm{Mobility}\left(y(t)\right)}$$
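A minimal NumPy sketch of these three parameters, using a finite-difference approximation of the derivative (the paper's own implementation was in MATLAB):

```python
import numpy as np

def hjorth_parameters(y):
    """Return (activity, mobility, complexity) of a 1-D signal y."""
    dy = np.diff(y)                      # first derivative (finite difference)
    ddy = np.diff(dy)                    # second derivative
    activity = np.var(y)                 # Hjorth activity = variance
    mobility = np.sqrt(np.var(dy) / np.var(y))
    # complexity = mobility of the derivative divided by mobility of the signal
    complexity = np.sqrt(np.var(ddy) / np.var(dy)) / mobility
    return activity, mobility, complexity
```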
B. Zero-Crossings
The zero-crossings of a signal are the number of times the signal crosses the horizontal (time) axis, thereby changing sign. Zero-crossings measure the oscillatory behavior of a signal, indicating the degree of excitation within a specific frequency band.
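Counting zero-crossings amounts to counting sign changes between consecutive samples of the band-filtered signal, for example:

```python
import numpy as np

def zero_crossings(y):
    """Number of sign changes (axis crossings) between consecutive samples."""
    negative = np.signbit(y)             # True where the sample is negative
    return int(np.count_nonzero(negative[:-1] != negative[1:]))
```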
C. Power Spectral Density
Power spectral density (PSD) is among the most widely implemented EEG features for emotion recognition [72]. PSD describes the average signal power over its frequency bands. To obtain the total power, the FFT of the signal is multiplied by its complex conjugate, and the resulting squared magnitudes are summed.
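This can be sketched in a few lines of NumPy, following the multiply-by-the-conjugate-and-sum description above; the normalization factor is an assumption, since conventions differ.

```python
import numpy as np

def total_power(y):
    """Total power of a band-limited signal: FFT times its conjugate, summed."""
    Y = np.fft.fft(y)
    # By Parseval's theorem, sum(|Y|^2) = N * sum(|y|^2); dividing by N^2
    # yields the mean power (normalization conventions vary).
    return float(np.sum((Y * np.conj(Y)).real) / len(y) ** 2)
```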

4. Results

In the present study, an EEG-based subject-dependent valence emotion recognition approach is presented using the difference Fp1-Fp2 signal. Figure 8 illustrates the experimental workflow adopted in order to develop an efficient and reliable system that is suitable for wearable EEG. Initially, the Hjorth parameters (activity–mobility–complexity), zero-crossings, and PSD features were computed from the different frequency bands. Next, the strongest emotions per subject were considered for the feature analyses in which the EEG bands, timeslots, and features were determined. Finally, the selected feature set was used for the binary and multiclass valence emotion classification of the complete DEAP dataset. Since a subject-dependent approach was adopted in this work, all the classification experiments were repeated for each of the 32 subjects in the DEAP dataset, and the average accuracies of all subjects were reported as the final performance measure.
The kNN and SVM classifiers are the most commonly used for EEG emotion recognition [66,72]. The kNN classifier has the advantages of being simple while giving reliable results [45]. The SVM classifier can be easily tuned for optimal performance. A kNN classifier was used for the feature analyses, whereas both the kNN and the SVM with a radial basis function (rbf) kernel were considered in the final classification experiments. For the kNN classifier, several k values were compared, and k = 5 was chosen as it was found to give the best overall performance. In all cases, the Euclidean distance was used within the kNN classifier to determine the nearest neighbors. As for the SVM classifier, the hyperparameters (cost and gamma) were tuned separately for each subject in the different experiments using Bayesian optimization. Leave-one-out cross-validation (LOOCV) was used in all experiments. All feature computations and classification experiments were performed using MATLAB R2021a on an Intel Core i7-5500U CPU @2.4 GHz with 16 GB of RAM.
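The classification protocol described above can be sketched in scikit-learn as follows (kNN with k = 5 and Euclidean distance, rbf SVM, leave-one-out cross-validation). The grid search over C and gamma is a simple stand-in for the Bayesian optimization used in the paper, and the feature/label arrays are placeholders.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder per-subject data: 40 trials x 2 features (variance, PSD), binary labels.
X = np.random.randn(40, 2)
y = np.random.randint(0, 2, size=40)

loo = LeaveOneOut()

# kNN, k = 5, Euclidean distance.
knn = KNeighborsClassifier(n_neighbors=5, metric='euclidean')
knn_acc = cross_val_score(knn, X, y, cv=loo).mean()

# rbf SVM; cost (C) and gamma tuned per subject (grid search here as a
# stand-in for the Bayesian optimization reported in the paper).
svm = GridSearchCV(SVC(kernel='rbf'),
                   param_grid={'C': [0.1, 1, 10, 100],
                               'gamma': ['scale', 0.01, 0.1, 1]},
                   cv=5)
svm_acc = cross_val_score(svm, X, y, cv=loo).mean()

print(f'kNN LOOCV accuracy: {knn_acc:.3f}, SVM LOOCV accuracy: {svm_acc:.3f}')
```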

4.1. Feature Analyses

In this work, the aim of the feature analyses was to determine the most relevant (1) frequency band (delta–theta–alpha–beta–gamma), (2) timeslot (first 20 s–middle 20 s–last 20 s–complete 60 s), and (3) features (activity–mobility–complexity–zero-crossings–PSD) for EEG valence recognition. Sixteen videos per subject were included in the feature analyses, namely those with the eight highest and eight lowest self-rated valence scores. Considering only the strongest emotions assures a significant discrepancy between the two emotional classes (high valence and low valence), allowing for more reliable feature analyses. A similar approach was previously considered in [52,53].
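Selecting these sixteen strongest trials per subject can be done directly from the self-assessment ratings; a minimal sketch with placeholder ratings:

```python
import numpy as np

# Placeholder: valence self-ratings (1-9) for one subject's 40 trials.
valence = np.random.uniform(1, 9, size=40)

order = np.argsort(valence)
low_idx = order[:8]          # eight lowest-rated trials (strong low valence)
high_idx = order[-8:]        # eight highest-rated trials (strong high valence)

analysis_trials = np.concatenate([low_idx, high_idx])
analysis_labels = np.array([0] * 8 + [1] * 8)   # 0 = low, 1 = high valence
```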
A. Band/Feature Analysis
Feature/band analysis was performed in order to determine the frequency bands and features most suitable for valence classification. The three Hjorth parameters, zero-crossings, and PSD features were calculated from the five EEG frequency bands (delta, theta, alpha, beta, gamma). A kNN classifier was then used to classify the one-minute trials into high or low valence. Figure 9 summarizes the valence (happy/sad) classification performance for the different experiments. For all the EEG frequency bands, the variance (Hjorth activity) and PSD were found to result in the highest accuracies. Roshdy et al. [101] previously showed that the standard deviation, which is the square root of the variance, is highly correlated with valence emotion. PSD is among the most widely accepted measures for valence recognition in the literature [102]. The results of the feature analysis are thus in agreement with previous literature.
Table 3 summarizes the variance and PSD accuracies for the five different frequency bands. The results indicate that, for both features, the alpha band gave the most reliable performance, closely followed by the delta band. These results are in agreement with several studies that showed that the alpha [32,72] and delta [85] bands are relevant for valence emotion detection. The low accuracies attained by the gamma band features were, however, unexpected, as the gamma band was previously shown to be suitable for emotion recognition [55,87]. The gamma band was thus further divided into three subbands, namely 30–40 Hz, 40–50 Hz, and 50–60 Hz, and the previous analyses were repeated. The results summarized in Table 4 indicate a significant improvement in performance when the gamma band was subdivided into the three subbands. The best results were attained by the fast gamma subband (50–60 Hz), for which accuracies of 99.02% and 98.63% were achieved for the variance and PSD, respectively, thereby outperforming the results attained by the same features for the delta and alpha bands.
Based on the feature/band analysis, it can be deduced that the variance (Hjorth activity) and PSD calculated from the delta, alpha, and fast gamma frequency bands result in the most consistent performance. Further experiments performed in this study will thus only use the indicated features and frequency bands.
B. Time Slot Analysis
In the DEAP dataset, a one-minute EEG recording is provided for each video stimulus per subject. Several previous works considered only the middle time slot, omitting the first part to allow emotions to settle and the last part to avoid fatigue effects [56,58]. Others used only the last thirty seconds under the assumption that it yields better results [53,67]. In order to test these presumptions, the variance (Hjorth activity) and PSD features were calculated from the first, middle, and last 20 seconds (s) of the EEG recordings for the delta, alpha, and fast gamma bands. Valence classification results for the three indicated timeslots, in comparison to using the complete one minute, are summarized in Table 5. Overall, better valence classification performance was achieved by the alpha and fast gamma bands (~97–99%) than by the delta band (~95–96%). For the delta band, results from the different slots were somewhat close; however, the first timeslot gave slightly better results than the complete one minute. As for the alpha and fast gamma bands, the results indicate that the middle time slot gave more reliable performance than the first and last timeslots. Nevertheless, considering the full one-minute EEG signal resulted in overall better performance than any of the 20 s time slots. The full one-minute signal will thus be considered for more consistent performance in all the upcoming experiments.
C. Feature Boxplots
At the beginning of this section, the activity (variance), mobility, complexity, zero-crossings, and PSD features were computed from the five EEG frequency bands. Classification results considering the strongest emotions showed that the variance and PSD were the most relevant for valence emotion recognition regardless of the frequency band. Specifically, experimentation results showed that the variance and PSD computed from the delta, alpha, and fast gamma full 1 minute EEG signals resulted in the most reliable valence emotion classification performance in comparison to the other considered cases.
In this subsection, the boxplots of the variance and PSD were generated (Figure 10) to illustrate the features’ distributions for the two valence classes: low valence (sad) and high valence (happy). Boxplots display a five-number summary of the data including the minimum, first quartile, second quartile (median), third quartile, and maximum. For both features, the boxplots demonstrate significant discrepancy between the two valence classes which emphasizes their relevance as previously shown in the different classification experiments within the previous subsections.

4.2. Valence Classifications

In this section, the subject-dependent valence emotion classifications were performed considering all forty video trials included in the DEAP dataset. The variance (Hjorth activity) and PSD features were computed from the full one-minute delta, alpha, and fast gamma bands, which were found in the previous section to be the most relevant for valence classification. Variance and PSD were used both individually and combined, and results are given for each case. The kNN classifier and the SVM with rbf kernel were considered in all experiments.
Table 6 and Table 7 summarize the binary classification accuracies for the kNN and SVM classifiers, respectively. Overall, the SVM classifier gave better accuracies than the kNN classifier. The alpha band is shown to give consistently better results, closely followed by the delta band, whereas the fast gamma band results are almost 10% lower for both classifiers. Fast gamma is thus shown to be reliable when discriminating between strong sad and happy emotions, attaining accuracies as high as 99% (Table 4), yet less useful when milder emotional states are additionally involved.
Generally, the variance (Hjorth activity) and PSD gave close results in all experiments. For the alpha and delta bands, all achieved accuracies were greater than or equal to 95%, indicating the efficacy of the considered features for valence emotion recognition. Variance did, however, give slightly better results than PSD in most cases. Combining the two features resulted in an overall more consistent performance. The best results were achieved when the combined features were calculated from the alpha band, resulting in accuracies of 96.33% and 97.42% for the kNN and SVM classifiers, respectively. Several studies have shown that the frontal channels’ alpha band is significantly affected by a person’s happiness and sadness emotions [28,103]. The findings of this work, in which the alpha band was found to be more reliable than the other frequency bands for valence recognition, are thus in agreement with previous literature.
For the sake of attaining a more comprehensive insight into the performance of the proposed method, the valence classification accuracies per subject for the combined variance and PSD features for the delta, alpha, and fast gamma bands are presented in Table 8. For the alpha band, twenty-eight and thirty of the total thirty-two DEAP subjects had their emotions recognized with an accuracy greater than or equal to 95% for the kNN and SVM classifiers, respectively, which indicates the reliability of the considered features.
In order to further investigate these results, the median, average, and standard deviation of the valence ratings of the two subjects with the lowest and highest SVM accuracies in the alpha band were inspected and summarized in Table 9. Furthermore, these statistical measures were also calculated over all thirty-two subjects in the DEAP dataset. For subject #27 (one of the subjects with the lowest accuracies), it is noticed that both the median and average of the valence ratings are higher than the threshold value considered in this work for the low/high valence class separation. Modifying this threshold to six instead of five, which is closer to subject #27’s median and average, indeed improved this subject’s emotion recognition accuracy by 5% to 97.5%. On the other hand, the increased threshold had little to no effect on the other considered subjects and minimal effect on the overall performance. These results indicate the robustness of the two implemented measures for valence emotion recognition whilst also highlighting the importance of considering subject variability for more reliable results.
Table 10 summarizes the three-class valence classification results using the variance, PSD, as well as both features calculated from the delta, alpha, and fast gamma bands. Similar to the binary classifications, best results were attained when the features were computed from the alpha band, closely followed by the delta band. For the alpha and delta bands, considering one of the features or both combined resulted in close accuracies ranging from 94.22% to 95.39%. Best performance (accuracy = 95.39%) was attained when the variance was computed from the alpha band.

5. Discussion

In the present study, an efficient EEG-based valence recognition method was presented that considers only the difference Fp1-Fp2 signal for feature extraction. The analyses showed that the variance and PSD computed from the one-minute alpha band were the most suitable for valence recognition. The final classification experiments considering the entire DEAP dataset resulted in accuracies of 97.42% and 95.39% for the two- and three-class valence classifications, respectively. Torres et al. [72] have reported that, in previous literature, accuracies were on average about 85% and 68% for two- and three-class EEG-based valence classifications, respectively. The performance of the proposed method thus surpasses the average performance of EEG-based valence detection methods by approximately 10% and 27% for the two- and three-class classifications, respectively, indicating the superiority of the implemented method.
The notion that a few simple handcrafted features can give promising results in EEG-based valence classification has been previously demonstrated in several research papers. In an early work by Sourina et al. [104], accuracies well above 90% were achieved for all subjects considering only three frontal channels, using music to evoke the emotional stimuli. In another work by Amin et al. [105], emotion recognition accuracies exceeding 98% were attained considering only the relative wavelet energy, which was calculated from the delta band of 128 electrodes. However, for both these works, performance could not be compared to other methods as private datasets were utilized. A later work by Thejaswini et al. [32] achieved an overall average accuracy of 91.2% upon classifying the SEED dataset into three classes: positive, neutral, and negative emotions. They implemented simple statistical features including RASM and the Hjorth parameters, but again considered twenty-seven electrode pairs for the feature computations.
The DEAP dataset, considered in this study, is reportedly the most widely utilized for EEG emotion recognition [72], which facilitates comparison between the different approaches. Table 11 summarizes the performance of several other EEG emotion recognition methods from the literature that also used the DEAP dataset. The comparison indicates the EEG channels and frequency bands considered in each approach, as well as the binary classification accuracy. For the sake of a fair comparison, all included valence emotion recognition methods are based on subject-dependent experiments, which is the approach considered in this work. Wu et al. [53], like this work, used only the Fp1 and Fp2 frontal channels, yet achieved a relatively low accuracy of 75.18%. Other methods used all the EEG channels, whether individually or in the form of channel pairs. In addition, most of the studies summarized in Table 11 considered all the frequency bands, thereby ignoring the significance of some bands over others for valence emotion recognition. Overall, the valence classification accuracies of the summarized approaches mostly range from 75.18% to 96.65%. The EEG valence emotion recognition method introduced in the present study results in an accuracy of 97.42%, thereby outperforming several state-of-the-art deep learning methods.
Nevertheless, the recent approach introduced by Cheng et al. [82], which is based on randomized CNNs and ensemble learning, resulted in an overall accuracy of 99.17%, which is 1.75% higher than the implemented method. In their work, they reported an average training time of 35.15 s. As for the proposed method, an average of 0.06 s was required for the feature computation, training, and classification. Thus, the proposed machine learning-based approach, even though not performing as well as Cheng et al.’s method, has the valuable merit of being simpler to reproduce.
The proposed EEG-based valence emotion recognition method was shown to result in reliable performance while relying on statistical measures that are simple to compute. In addition, it relies on standard machine learning algorithms that are easily configured. No image construction was required, and no complex neural networks needed to be trained. In the literature, several works have also shown that handcrafted features can achieve comparable performance to deep learning approaches with the former having the merit of reduced computational complexity which could be attractive in real-time applications [106,107,108]. Another advantage of the presented method is that unlike in other literature where all the frequency bands or the raw EEG signal were considered, only the alpha band was used for feature extraction. The alpha band was utilized in this work as it was shown in the analyses performed in Section 4.1 to be the most relevant for valence detection. Interestingly, several clinical studies have previously shown that there is indeed a relationship between the alpha activity measured from the prefrontal cortex and emotional response [109,110].
The proposed method considers only the Fp1-Fp2 channel pair, from which the alpha band’s variance and PSD were computed, thereby minimizing the computational overhead whilst achieving reliable performance, making it suitable for wearable EEG headsets used in real-time applications [26,111]. Overall, the results attained here are quite promising. Yet, there is still room for enhancement of the suggested method. Future work includes considering arousal along with valence recognition, as well as calculating other statistical features that are relevant to EEG-based emotion recognition such as entropy and RASM. In addition, the integration of handcrafted and deep features can be investigated. Explainable AI (XAI) methods can then be implemented to understand what the models are learning and why specific decisions were made. XAI can also be applied to investigate whether EEG-based emotion detection is gender or culture dependent, as is speech emotion recognition [112].

6. Conclusions

EEG-based subject-dependent valence emotion recognition is widely implemented in personalized emotion AI applications. In this work, the difference signal (Fp1-Fp2) was used to calculate the Hjorth parameters (activity, mobility, and complexity), zero-crossings, and PSD features for emotional valence detection using the benchmark DEAP dataset. Several analyses were performed to determine the features, frequency band, and timeslot most suitable for reliable subject-based valence recognition. Primarily, only the eight strongest high and eight strongest low valence emotions per subject were considered for analysis to assure a significant discrepancy between the two classes. Classification results indicated that the variance and PSD features were the most suitable for valence recognition regardless of the considered frequency band. Nevertheless, the delta, alpha, and fast gamma bands were shown to be the most relevant for valence recognition. Boxplots of the variance and PSD features for the most relevant frequency bands validated and supported the classification results. In addition, calculating the features from the complete one-minute EEG signal was found to give more reliable performance than when only a 20 s timeslot was used for feature computation. The best results were achieved when the variance and PSD were computed from the alpha band, resulting in accuracies of 97.42% and 95.0% for the binary and multiclass classifications, respectively. Comparison to previous literature showed that the implemented method outperformed several state-of-the-art approaches with the advantage of reduced computational complexity due to the reduced number of electrodes, features, and frequency bands considered. This approach would thus be highly attractive for practical EEG-based emotion AI systems relying on wearable EEG devices.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The DEAP dataset supporting reported results is a public dataset that can be found here: https://www.eecs.qmul.ac.uk/mmv/datasets/deap/download.html, accessed on 16 January 2023.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Somers, M. Emotion AI, Explained. 2019. Available online: https://mitsloan.mit.edu/ideas-made-to-matter/emotion-ai-explained (accessed on 21 May 2022).
  2. Gifford, C. The Problem with Emotion-Detection Technology. 2020. Available online: https://www.theneweconomy.com/technology/the-problem-with-emotion-detection-technology (accessed on 21 May 2022).
  3. Ekman, P.; Friesen, W.V. Constants across cultures in the face and emotion. J. Pers. Soc. Psychol. 1971, 17, 124. [Google Scholar] [CrossRef] [Green Version]
  4. Russell, J.A. A circumplex model of affect. J. Personal. Soc. Psychol. 1980, 39, 1161–1178. [Google Scholar] [CrossRef]
  5. Shashi Kumar, G.S.; Sampathila, N.; Shetty, H. Neural Network Approach for Classification of Human Emotions from EEG Signal. In Engineering Vibration, Communication and Information Processing; Ray, K., Sharan, S., Rawat, S., Jain, S., Srivastava, S., Bandyopadhyay, A., Eds.; Springer Singapore: Singapore, 2019; pp. 297–310. [Google Scholar]
  6. Tsiourti, C.; Weiss, A.; Wac, K.; Vincze, M. Multimodal Integration of Emotional Signals from Voice, Body, and Context: Effects of (In)Congruence on Emotion Recognition and Attitudes Towards Robots. Int. J. Soc. Robot. 2019, 11, 555–573. [Google Scholar] [CrossRef] [Green Version]
  7. Canal, F.Z.; Müller, T.R.; Matias, J.C.; Scotton, G.G.; Junior, A.R.d.S.; Pozzebon, E.; Sobieranski, A.C. A survey on facial emotion recognition techniques: A state-of-the-art literature review. Inf. Sci. 2021, 582, 593–617. [Google Scholar] [CrossRef]
  8. Abdel-Hamid, L.; Shaker, N.H.; Emara, I. Analysis of Linguistic and Prosodic Features of Bilingual Arabic–English Speakers for Speech Emotion Recognition. IEEE Access 2020, 8, 72957–72970. [Google Scholar] [CrossRef]
  9. Zubair, M.; Yoon, C. EEG based classification of human emotions using discrete wavelet transform. In IT Convergence and Security 2017; Springer: Singapore, 2018; pp. 21–28. [Google Scholar]
  10. Islam, M.S.; Hussain, I.; Rahman, M.; Park, S.J.; Hossain, A. Explainable Artificial Intelligence Model for Stroke Prediction Using EEG Signal. Sensors 2022, 22, 9859. [Google Scholar] [CrossRef]
  11. Arora, A.; Kaul, A.; Mittal, V. Mood Based Music Player. In Proceedings of the 2019 International Conference on Signal Processing and Communication (ICSC), NOIDA, India, 7–9 March 2019; pp. 333–337. [Google Scholar] [CrossRef]
  12. Guy-Evans, O.; Mcleod, S. What Does the Brain’s Cerebral Cortex Do? 2021. Available online: https://www.simplypsychology.org/what-is-the-cerebral-cortex.html (accessed on 18 April 2022).
  13. Alotaiby, T.N.; El-Samie, F.E.A.; Alshebeili, S.A.; Ahmad, I. A review of channel selection algorithms for EEG signal processing. EURASIP J. Adv. Signal Process. 2015, 2015, 66. [Google Scholar] [CrossRef] [Green Version]
  14. Kim, J.; Kim, C.; Yim, M.-S. An Investigation of Insider Threat Mitigation Based on EEG Signal Classification. Sensors 2020, 20, 6365. [Google Scholar] [CrossRef]
  15. Sinha Clinic. What Are Brainwaves? 2022. Available online: https://www.sinhaclinic.com/what-are-brainwaves/ (accessed on 2 April 2022).
  16. WebMD. What to Know about Gamma Brain Waves. In What to Know about Gamma Brain Waves. 2022. Available online: https://www.webmd.com/brain/what-to-know-about-gamma-brain-waves (accessed on 2 April 2022).
  17. Li, T.-M.; Chao, H.-C.; Zhang, J. Emotion classification based on brain wave: A survey. Human-Centric Comput. Inf. Sci. 2019, 9, 42. [Google Scholar] [CrossRef]
  18. Malik, A.S.; Amin, H.U. Chapter 1—Designing an EEG Experiment. In Designing EEG Experiments for Studying the Brain; Malik, A.S., Amin, H.U., Eds.; Academic Press: Cambridge, MA, USA, 2017; pp. 1–30. [Google Scholar]
  19. Casson, A.J. Wearable EEG and beyond. Biomed. Eng. Lett. 2019, 9, 53–71. [Google Scholar] [CrossRef]
  20. Hussain, I.; Park, S.J. HealthSOS: Real-Time Health Monitoring System for Stroke Prognostics. IEEE Access 2020, 8, 213574–213586. [Google Scholar] [CrossRef]
  21. Tang, J.; El Atrache, R.; Yu, S.; Asif, U.; Jackson, M.; Roy, S.; Mirmomeni, M.; Cantley, S.; Sheehan, T.; Schubach, S.; et al. Seizure detection using wearable sensors and machine learning: Setting a benchmark. Epilepsia 2021, 62, 1807–1819. [Google Scholar] [CrossRef] [PubMed]
  22. Hussain, I.; Hossain, A.; Jany, R.; Bari, A.; Uddin, M.; Kamal, A.R.M.; Ku, Y.; Kim, J.-S. Quantitative Evaluation of EEG-Biomarkers for Prediction of Sleep Stages. Sensors 2022, 22, 3079. [Google Scholar] [CrossRef] [PubMed]
  23. Hussain, I.; Young, S.; Park, S.-J. Driving-Induced Neurological Biomarkers in an Advanced Driver-Assistance System. Sensors 2021, 21, 6985. [Google Scholar] [CrossRef] [PubMed]
  24. Zgallai, W.; Brown, J.T.; Ibrahim, A.; Mahmood, F.; Mohammad, K.; Khalfan, M.; Mohammed, M.; Salem, M.; Hamood, N. Deep Learning AI Application to an EEG driven BCI Smart Wheelchair. In Proceedings of the 2019 Advances in Science and Engineering Technology International Conferences (ASET), Dubai, United Arab Emirates, 26 March–10 April 2019; pp. 1–5. [Google Scholar]
  25. Dadebayev, D.; Goh, W.W.; Tan, E.X. EEG-based emotion recognition: Review of commercial EEG devices and machine learning techniques. J. King Saud Univ. Comput. Inf. Sci. 2021, 34, 4385–4401. [Google Scholar] [CrossRef]
  26. Cai, J.; Xiao, R.; Cui, W.; Zhang, S.; Liu, G. Application of Electroencephalography-Based Machine Learning in Emotion Recognition: A Review. Front. Syst. Neurosci. 2021, 15, 729707. [Google Scholar] [CrossRef]
  27. Kim, M.; Yoo, S.; Kim, C. Miniaturization for wearable EEG systems: Recording hardware and data processing. Biomed. Eng. Lett. 2022, 12, 239–250. [Google Scholar] [CrossRef]
  28. Houssein, E.H.; Hammad, A.; Ali, A.A. Human emotion recognition from EEG-based brain–computer interface using machine learning: A comprehensive review. Neural Comput. Appl. 2022, 34, 12527–12557. [Google Scholar] [CrossRef]
  29. NeuroMat Random Structures in the Brain 102.jpg. Available online: https://commons.wikimedia.org/wiki/File:Random_Structures_in_the_Brain_102.jpg (accessed on 1 January 2023).
  30. SparkFun The MindWave Mobile from NeuroSky. Available online: https://learn.sparkfun.com/tutorials/hackers-in-residence---hacking-mindwave-mobile/what-is-the-mindwave-mobile (accessed on 1 January 2023).
  31. Souvik, P.; Sinha, N.; Ghosh, R. A Survey on Feature Extraction Methods for EEG Based Emotion Recognition. In Intelligent Techniques and Applications in Science and Technology; Dawn, S., Balas, V., Esposito, A., Gope, S., Eds.; Springer: Cham, Switzerland, 2020; pp. 31–45. [Google Scholar]
  32. Thejaswini, S.; Ravi Kumar, K.M.; Rupali, S.; Abijith, V. EEG Based Emotion Recognition Using Wavelets and Neural Net-works Classifier. In Cognitive Science and Artificial Intelligence: Advances and Applications; Gurumoorthy, S., Rao, B.N.K., Gao, X.-Z., Eds.; Springer: Singapore, 2018; pp. 101–112. [Google Scholar]
  33. Menezes, M.L.R.; Samara, A.; Galway, L.; Sant’Anna, A.; Verikas, A.; Alonso-Fernandez, F.; Wang, H.; Bond, R. Towards emotion recognition for virtual environments: An evaluation of eeg features on benchmark dataset. Pers. Ubiquitous Comput. 2017, 21, 1003–1013. [Google Scholar] [CrossRef]
  34. Yang, H.; Huang, S.; Guo, S.; Sun, G. Multi-Classifier Fusion Based on MI–SFFS for Cross-Subject Emotion Recognition. Entropy 2022, 24, 705. [Google Scholar] [CrossRef]
  35. Joshi, V.M.; Ghongade, R.B. EEG Based Emotion Investigation from Various Brain Region Using Deep Learning Algorithm. In ICDSMLA 2020; Kumar, A., Senatore, S., Gunjan, V.K., Eds.; Springer: Singapore, 2022; pp. 395–402. [Google Scholar]
  36. Parui, S.; Roshan Bajiya, A.K.; Samanta, D.; Chakravorty, N. Emotion Recognition from EEG Signal using XGBoost Algorithm. In Proceedings of the 2019 IEEE 16th India Council International Conference (INDICON), Rajkot, India, 13–15 December 2019; pp. 1–4. [Google Scholar]
  37. Gao, Q.; Yang, Y.; Kang, Q.; Tian, Z.; Song, Y. EEG-based Emotion Recognition with Feature Fusion Networks. Int. J. Mach. Learn. Cybern. 2021, 13, 421–429. [Google Scholar] [CrossRef]
  38. Patil, A.; Deshmukh, C.; Panat, A.R. Feature extraction of EEG for emotion recognition using Hjorth features and higher order crossings. In Proceedings of the 2016 Conference on Advances in Signal Processing (CASP) IEEE, Pune, India, 9–11 June 2016; pp. 429–434. [Google Scholar]
  39. Khateeb, M.; Anwar, S.M.; Alnowami, M. Multi-Domain Feature Fusion for Emotion Classification Using DEAP Dataset. IEEE Access 2021, 9, 12134–12142. [Google Scholar] [CrossRef]
  40. Elamir, M.M.; Al-Atabany, W.; Eldosoky, M.A. Emotion recognition via physiological signals using higher order crossing and Hjorth parameter. Res. J. Life Sci. Bioinform. Pharm. Chem. Sci. 2019, 5, 839–846. [Google Scholar]
  41. Oh, S.-H.; Lee, Y.-R.; Kim, H.-N. A novel EEG feature extraction method using Hjorth parameter. Int. J. Electron. Electr. Eng. 2014, 2, 106–110. [Google Scholar] [CrossRef] [Green Version]
  42. Jenke, R.; Peer, A.; Buss, M. Feature Extraction and Selection for Emotion Recognition from EEG. IEEE Trans. Affect. Comput. 2014, 5, 327–339. [Google Scholar] [CrossRef]
  43. Liu, Y.; Sourina, O. EEG-based subject-dependent emotion recognition algorithm using fractal dimension. In Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Diego, CA, USA, 5–8 October 2014; pp. 3166–3171. [Google Scholar]
  44. Martínez-Tejada, L.A.; Yoshimura, N.; Koike, Y. Classifier comparison using EEG features for emotion recognition process. In Proceedings of the 2020 IEEE 18th World Symposium on Applied Machine Intelligence and Informatics (SAMI), Herlany, Slovakia, 23–25 January 2020; pp. 225–230. [Google Scholar]
  45. Alhalaseh, R.; Alasasfeh, S. Machine-Learning-Based Emotion Recognition System Using EEG Signals. Computers 2020, 9, 95. [Google Scholar] [CrossRef]
  46. Alarcao, S.M.; Fonseca, M.J. Emotions Recognition Using EEG Signals: A Survey. IEEE Trans. Affect. Comput. 2017, 3045, 1–20. [Google Scholar] [CrossRef]
  47. Yang, Y.X.; Gao, Z.K.; Wang, X.M.; Li, Y.L.; Han, J.W.; Marwan, N.; Kurths, J. A recurrence quantification analysis-based channel-frequency convolutional neural network for emotion recognition from EEG. Chaos: Interdiscip. J. Nonlinear Sci. 2018, 28, 085724. [Google Scholar] [CrossRef]
  48. Yin, Y.; Zheng, X.; Hu, B.; Zhang, Y.; Cui, X. EEG emotion recognition using fusion model of graph convolutional neural networks and LSTM. Appl. Soft Comput. 2020, 100, 106954. [Google Scholar] [CrossRef]
  49. Mahajan, R. Emotion Recognition via EEG Using Neural Network Classifier. In Soft Computing: Theories and Applications; Pant, M., Ray, K., Sharma, T., Rawat, S., Bandyopadhyay, A., Eds.; Springer: Singapore, 2018; pp. 429–438. [Google Scholar]
  50. Jirayucharoensak, S.; Pan-Ngum, S.; Israsena, P. EEG-Based Emotion Recognition Using Deep Learning Network with Principal Component Based Covariate Shift Adaptation. Sci. World J. 2014, 2014, 627892. [Google Scholar] [CrossRef]
  51. Thammasan, N.; Fukui, K.; Numao, M. Application of deep belief networks in eeg-based dynamic music-emotion recognition. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 881–888. [Google Scholar]
  52. Li, Z.; Tian, X.; Shu, L.; Xu, X.; Hu, B. Emotion Recognition from EEG Using RASM and LSTM. In Internet Multimedia Computing and Service; Huet, B., Nie, L., Hong, R., Eds.; Springer: Singapore, 2018; pp. 310–318. [Google Scholar]
  53. Wu, S.; Xu, X.; Shu, L.; Hu, B. Estimation of valence of emotion using two frontal EEG channels. In Proceedings of the 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Kansas City, MO, USA, 13–16 November 2017; pp. 1127–1130. [Google Scholar]
  54. Stancin, I.; Cifrek, M.; Jovic, A. A Review of EEG Signal Features and Their Application in Driver Drowsiness Detection Systems. Sensors 2021, 21, 3786. [Google Scholar] [CrossRef] [PubMed]
  55. Mohammadi, Z.; Frounchi, J.; Amiri, M. Wavelet-based emotion recognition system using EEG signal. Neural Comput. Appl. 2016, 28, 1985–1990. [Google Scholar] [CrossRef]
  56. Jie, X.; Cao, R.; Li, L. Emotion recognition based on the sample entropy of EEG. Bio-Med. Mater. Eng. 2014, 24, 1185–1192. [Google Scholar] [CrossRef] [PubMed]
  57. Wagh, K.P.; Vasanth, K. Performance evaluation of multi-channel electroencephalogram signal (EEG) based time frequency analysis for human emotion recognition. Biomed. Signal Process. Control. 2022, 78, 103966. [Google Scholar] [CrossRef]
  58. Zhang, Y.; Cheng, C.; Zhang, Y. Multimodal Emotion Recognition Using a Hierarchical Fusion Convolutional Neural Network. IEEE Access 2021, 9, 7943–7951. [Google Scholar] [CrossRef]
  59. Alhagry, S.; Fahmy, A.A.; El-Khoribi, R.A. Emotion recognition based on EEG using LSTM recurrent neural network. Emotion 2017, 8, 355–358. [Google Scholar] [CrossRef] [Green Version]
  60. Yang, Y.; Wu, Q.; Qiu, M.; Wang, Y.; Cheng, X. Emotion Recognition from Multi-Channel EEG through Parallel Convolutional Recurrent Neural Network. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–7. [Google Scholar]
  61. Huang, D.; Chen, S.; Liu, C.; Zheng, L.; Tian, Z.; Jiang, D. Differences first in asymmetric brain: A bi-hemisphere discrepancy convolutional neural network for EEG emotion recognition. Neurocomputing 2021, 448, 140–151. [Google Scholar] [CrossRef]
  62. Aslan, M. CNN based efficient approach for emotion recognition. J. King Saud Univ.—Comput. Inf. Sci. 2021, 34, 7335–7346. [Google Scholar] [CrossRef]
  63. Chaudhary, S.; Taran, S.; Bajaj, V.; Sengur, A. Convolutional Neural Network Based Approach Towards Motor Imagery Tasks EEG Signals Classification. IEEE Sens. J. 2019, 19, 4494–4500. [Google Scholar] [CrossRef]
  64. Pandey, P.; Seeja, K.R. Subject independent emotion recognition system for people with facial deformity: An EEG based approach. J. Ambient. Intell. Humaniz. Comput. 2020, 12, 2311–2320. [Google Scholar] [CrossRef]
  65. Garg, D.; Verma, G.K. Emotion Recognition in Valence-Arousal Space from Multi-channel EEG data and Wavelet based Deep Learning Framework. Procedia Comput. Sci. 2020, 171, 857–867. [Google Scholar] [CrossRef]
  66. Rahman, M.; Sarkar, A.K.; Hossain, A.; Hossain, S.; Islam, R.; Hossain, B.; Quinn, J.M.; Moni, M.A. Recognition of human emotions using EEG signals: A review. Comput. Biol. Med. 2021, 136, 104696. [Google Scholar] [CrossRef] [PubMed]
  67. Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.-S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A Database for Emotion Analysis; Using Physiological Signals. IEEE Trans. Affect. Comput. 2011, 3, 18–31. [Google Scholar] [CrossRef] [Green Version]
  68. Liu, W.; Qiu, J.-L.; Zheng, W.-L.; Lu, B.-L. Comparing Recognition Performance and Robustness of Multimodal Deep Learning Models for Multimodal Emotion Recognition. IEEE Trans. Cogn. Dev. Syst. 2021, 14, 715–729. [Google Scholar] [CrossRef]
  69. Zheng, W.-L.; Liu, W.; Lu, Y.; Lu, B.-L.; Cichocki, A. EmotionMeter: A Multimodal Framework for Recognizing Human Emotions. IEEE Trans. Cybern. 2018, 49, 1110–1122. [Google Scholar] [CrossRef]
  70. Soleymani, M.; Lichtenauer, J.; Pun, T.; Pantic, M. A Multimodal Database for Affect Recognition and Implicit Tagging. IEEE Trans. Affect. Comput. 2011, 3, 42–55. [Google Scholar] [CrossRef] [Green Version]
  71. Katsigiannis, S.; Ramzan, N. DREAMER: A Database for Emotion Recognition Through EEG and ECG Signals from Wireless Low-cost Off-the-Shelf Devices. IEEE J. Biomed. Healthc. Inform. 2017, 22, 98–107. [Google Scholar] [CrossRef] [Green Version]
  72. Torres, E.P.; Torres, E.A.; Hernández-Álvarez, M.; Yoo, S.G. EEG-Based BCI Emotion Recognition: A Survey. Sensors 2020, 20, 5083. [Google Scholar] [CrossRef] [PubMed]
  73. Nath, D.; Anubhav; Singh, M.; Sethia, D. A Comparative Study of Subject-Dependent and Subject-Independent Strategies for EEG-Based Emotion Recognition Using LSTM Network. In Proceedings of the 2020 4th International Conference on Compute and Data Analysis, Association for Computing Machinery, New York, NY, USA, 9–12 March 2020; pp. 142–147. [Google Scholar]
  74. Lew, W.-C.L.; Wang, D.; Shylouskaya, K.; Zhang, Z.; Lim, J.-H.; Ang, K.K.; Tan, A.-H. EEG-based Emotion Recognition Using Spatial-Temporal Representation via Bi-GRU. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 116–119. [Google Scholar]
  75. Putra, A.E.; Atmaji, C.; Ghaleb, F. EEG-Based Emotion Classification Using Wavelet Decomposition and K-Nearest Neighbor. In Proceedings of the 2018 4th International Conference on Science and Technology (ICST), Yogyakarta, Indonesia, 7–8 August 2018; pp. 1–4. [Google Scholar]
  76. Zhuang, N.; Zeng, Y.; Tong, L.; Zhang, C.; Zhang, H.; Yan, B. Emotion Recognition from EEG Signals Using Multidimensional Information in EMD Domain. BioMed Res. Int. 2017, 2017, 8317357. [Google Scholar] [CrossRef] [Green Version]
  77. Choi, E.J.; Kim, D.K. Arousal and Valence Classification Model Based on Long Short-Term Memory and DEAP Data for Mental Healthcare Management. Healthc. Inform. Res. 2018, 24, 309–316. [Google Scholar] [CrossRef]
  78. Xing, X.; Li, Z.; Xu, T.; Shu, L.; Hu, B.; Xu, X. SAE+LSTM: A New Framework for Emotion Recognition from Multi-Channel EEG. Front. Neurorobotics 2019, 13, 37. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  79. Cui, H.; Liu, A.; Zhang, X.; Chen, X.; Wang, K.; Chen, X. EEG-based emotion recognition using an end-to-end regional-asymmetric convolutional neural network. Knowl.-Based Syst. 2020, 205, 106243. [Google Scholar] [CrossRef]
  80. Anubhav; Nath, D.; Singh, M.; Sethia, D.; Kalra, D.; Indu, S. An Efficient Approach to EEG-Based Emotion Recognition using LSTM Network. In Proceedings of the 2020 16th IEEE International Colloquium on Signal Processing & Its Applications (CSPA), Langkawi, Malaysia, 28–29 February 2020; pp. 88–92. [Google Scholar]
  81. Ozdemir, M.A.; Degirmenci, M.; Izci, E.; Akan, A. EEG-based emotion recognition with deep convolutional neural networks. Biomed. Eng./Biomed. Tech. 2020, 66, 43–57. [Google Scholar] [CrossRef]
  82. Cheng, W.X.; Gao, R.; Suganthan, P.; Yuen, K.F. EEG-based emotion recognition using random Convolutional Neural Networks. Eng. Appl. Artif. Intell. 2022, 116, 105349. [Google Scholar] [CrossRef]
  83. Bazgir, O.; Mohammadi, Z.; Habibi, S.A.H. Emotion Recognition with Machine Learning Using EEG Signals. In Proceedings of the 2018 25th National and 3rd International Iranian Conference on Biomedical Engineering (ICBME), Qom, Iran, 29–30 November 2018; pp. 1–5. [Google Scholar]
  84. Zhao, G.; Zhang, Y.; Ge, Y. Frontal EEG Asymmetry and Middle Line Power Difference in Discrete Emotions. Front. Behav. Neurosci. 2018, 12, 225. [Google Scholar] [CrossRef] [Green Version]
  85. Şengür, D.; Siuly, S. Efficient approach for EEG-based emotion recognition. Electron. Lett. 2020, 56, 1361–1364. [Google Scholar] [CrossRef]
  86. Liu, Y.-J.; Yu, M.; Zhao, G.; Song, J.; Ge, Y.; Shi, Y. Real-Time Movie-Induced Discrete Emotion Recognition from EEG Signals. IEEE Trans. Affect. Comput. 2017, 9, 550–562. [Google Scholar] [CrossRef]
  87. Elamir, M.; Alatabany, W.; Aldosoky, M. Intelligent emotion recognition system using recurrence quantification analysis (RQA). In Proceedings of the 2018 35th National Radio Science Conference (NRSC), Cairo, Egypt, 20–22 March 2018; pp. 205–213. [Google Scholar]
  88. Sarma, P.; Barma, S. Emotion recognition by distinguishing appropriate EEG segments based on random matrix theory. Biomed. Signal Process. Control. 2021, 70, 102991. [Google Scholar] [CrossRef]
  89. Apicella, A.; Arpaia, P.; Mastrati, G.; Moccaldi, N. EEG-based detection of emotional valence towards a reproducible measurement of emotions. Sci. Rep. 2021, 11, 21615. [Google Scholar] [CrossRef]
  90. Liu, J.; Wu, G.; Luo, Y.; Qiu, S.; Yang, S.; Li, W.; Bi, Y. EEG-Based Emotion Classification Using a Deep Neural Network and Sparse Autoencoder. Front. Syst. Neurosci. 2020, 14, 43. [Google Scholar] [CrossRef]
  91. Liang, Z.; Oba, S.; Ishii, S. An unsupervised EEG decoding system for human emotion recognition. Neural Netw. 2019, 116, 257–268. [Google Scholar] [CrossRef] [PubMed]
  92. He, H.; Tan, Y.; Ying, J.; Zhang, W. Strengthen EEG-based emotion recognition using firefly integrated optimization algorithm. Appl. Soft Comput. 2020, 94, 106426. [Google Scholar] [CrossRef]
  93. Zhang, J.; Chen, M.; Hu, S.; Cao, Y.; Kozma, R. PNN for EEG-based Emotion Recognition. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 2319–2323. [Google Scholar]
  94. Salama, E.S.; El-Khoribi, R.A.; Shoman, M.; Shalaby, M.A.W. EEG-Based Emotion Recognition using 3D Convolutional Neural Networks. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 329. [Google Scholar] [CrossRef] [Green Version]
  95. Cheah, K.H.; Nisar, H.; Yap, V.V.; Lee, C.-Y. Short-time-span EEG-based personalized emotion recognition with deep convolutional neural network. In Proceedings of the 2019 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), Kuala Lumpur, Malaysia, 17–19 December 2019; pp. 78–83. [Google Scholar]
  96. Coan, J.A.; Allen, J.J.B.; Harmon-Jones, E. Voluntary facial expression and hemispheric asymmetry over the frontal cortex. Psychophysiology 2001, 38, 912–925. [Google Scholar] [CrossRef]
  97. Dimond, S.J.; Farrington, L.; Johnson, P. Differing emotional response from right and left hemispheres. Nature 1976, 261, 690–692. [Google Scholar] [CrossRef]
  98. Liu, Y.; Sourina, O. Real-Time Fractal-Based Valence Level Recognition from EEG. In Transactions on Computational Science XVIII; Gavrilova, M.L., Tan, C.J.K., Kuijper, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 101–120. [Google Scholar]
  99. Duan, R.; Zhu, J.; Lu, B. Differential entropy feature for EEG-based emotion classification. In Proceedings of the 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER), San Diego, CA, USA, 6–8 November 2013; pp. 81–84. [Google Scholar]
  100. Hjorth, B. EEG analysis based on time domain properties. Electroencephalogr. Clin. Neurophysiol. 1970, 29, 306–310. [Google Scholar] [CrossRef]
  101. Roshdy, A.; Alkork, S.; Karar, A.S.; Mhalla, H.; Beyrouthy, T.; Al Barakeh, Z.; Nait-ali, A. Statistical Analysis of Multi-channel EEG Signals for Digitizing Human Emotions. In Proceedings of the 2021 4th International Conference on Bio-Engineering for Smart Technologies (BioSMART), Paris, France, 8–10 December 2021; pp. 1–4. [Google Scholar]
  102. Mert, A.; Akan, A. Emotion recognition from EEG signals by using multivariate empirical mode decomposition. Pattern Anal. Appl. 2016, 21, 81–89. [Google Scholar] [CrossRef]
  103. Hu, W.; Huang, G.; Li, L.; Zhang, L.; Zhang, Z.; Liang, Z. Video-triggered EEG-emotion public databases and current methods: A survey. Brain Sci. Adv. 2020, 6, 255–287. [Google Scholar] [CrossRef]
  104. Sourina, O.; Liu, Y. A fractal-based algorithm of emotion recognition from EEG using arousal-valence model. In Proceedings of the International Conference on Bio-Inspired Systems and SIGNAL Processing, SciTePress, Rome, Italy, 26–29 January 2011; pp. 209–214. [Google Scholar]
  105. Amin, H.U.; Malik, A.S.; Ahmad, R.F.; Badruddin, N.; Kamel, N.; Hussain, M.; Chooi, W.-T. Feature extraction and classification for EEG signals using wavelet transform and machine learning techniques. Australas. Phys. Eng. Sci. Med. 2015, 38, 139–149. [Google Scholar] [CrossRef]
  106. Bozkurt, F. A deep and handcrafted features-based framework for diagnosis of COVID-19 from chest x-ray images. Concurr. Comput. Pr. Exp. 2021, 34, e6725. [Google Scholar] [CrossRef]
  107. Loddo, A.; Di Ruberto, C. On the Efficacy of Handcrafted and Deep Features for Seed Image Classification. J. Imaging 2021, 7, 171. [Google Scholar] [CrossRef] [PubMed]
  108. De Miras, J.R.; Ibáñez-Molina, A.; Soriano, M.; Iglesias-Parro, S. Schizophrenia classification using machine learning on resting state EEG signal. Biomed. Signal Process. Control. 2023, 79, 104233. [Google Scholar] [CrossRef]
  109. Ramirez, R.; Planas, J.; Escude, N.; Mercade, J.; Farriols, C. EEG-based analysis of the emotional effect of music therapy on palliative care cancer patients. Front. Psychol. 2018, 9, 254. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  110. Schmidt, L.A.; Trainor, L.J. Frontal brain electrical activity (EEG) distinguishes valence and intensity of musical emotions. Cogn. Emot. 2001, 15, 487–500. [Google Scholar] [CrossRef]
  111. Chatterjee, S.; Byun, Y.-C. EEG-Based Emotion Classification Using Stacking Ensemble Approach. Sensors 2022, 22, 8550. [Google Scholar] [CrossRef]
  112. Abdel-Hamid, L. Egyptian Arabic speech emotion recognition using prosodic, spectral and wavelet features. Speech Commun. 2020, 122, 19–30. [Google Scholar] [CrossRef]
Figure 1. Valence-arousal model [6].
Figure 2. The cerebral cortex divided into the frontal, temporal, parietal, and occipital lobes [13].
Figure 3. The international 10/20 system for electrode placement [14].
Figure 6. Emotion AI system diagram.
Figure 7. Characteristic changes in an arbitrary reference signal, illustrating their relation to the different Hjorth parameters [100].
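Figure 7 illustrates the three Hjorth parameters defined in [100], of which the activity (signal variance) is one of the two features used in the experiments below. As a point of reference only, the following minimal NumPy sketch shows how the three parameters could be computed from a 1-D EEG segment; the function name, the finite-difference derivative, and the synthetic test segment are illustrative assumptions and not the paper's code.

```python
import numpy as np

def hjorth_parameters(x):
    """Illustrative computation of Hjorth activity, mobility, and complexity [100].

    Activity is the signal variance; mobility is the ratio of the standard
    deviations of the first derivative and the signal; complexity relates the
    mobility of the derivative to that of the signal.
    """
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)           # first derivative approximated by finite differences
    ddx = np.diff(dx)         # second derivative
    var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
    activity = var_x
    mobility = np.sqrt(var_dx / var_x)
    complexity = np.sqrt(var_ddx / var_dx) / mobility
    return activity, mobility, complexity

# Example on a synthetic 1 s segment at 128 Hz (the DEAP preprocessed sampling rate)
fs = 128
t = np.arange(fs) / fs
segment = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(fs)
print(hjorth_parameters(segment))
```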
Figure 8. Experimental workflow.
Figure 9. Valence classification accuracies for the different features and EEG frequency bands.
Figure 10. Boxplots of the variance and PSD features for the delta, alpha, and fast gamma bands considering the full 1 minute EEG signal.
Table 1. Characteristics of the five basic brain waves.

Band  | Symbol | Frequency Range | Psychological State
Delta | δ      | <4 Hz           | Unconsciousness; deep sleep
Theta | θ      | 4–8 Hz          | Subconsciousness; light sleep and meditation
Alpha | α      | 8–12 Hz         | Consciousness; normal relaxed yet alert adult
Beta  | β      | 12–30 Hz        | Daily activities
Gamma | γ      | >30 Hz          | Complex brain activities
Table 2. Summary of EEG-based emotion recognition approaches that utilize the DEAP dataset.

Research Paper | Channels | EEG Bands | Features | Classifier | Dep./Indep. | Val. Acc. % | Arl. Acc. %
Mohammadi et al., 2017 [55] | Fp1, Fp2 | Gamma | Wavelet features | kNN | Indep. | 80.68 | 74.60
Mohammadi et al., 2017 [55] | Fp1, Fp2, F7, F8, F3, F4, FC5, FC6, FC1, FC2 | Gamma | Wavelet features | kNN | Indep. | 86.75 | 84.05
Alhagry et al., 2017 [59] | All | Raw | Deep features (LSTM) | Sigmoid | Dep. | 85.45 | 85.65
Wu et al., 2017 [53] | Fp1, Fp2 | All | Frequency, WT features | GBDT | Dep. | 75.18 | –
Zhuang et al., 2017 [76] | Fp1, Fp2, F7, F8, T7, T8, P7, P8 | Beta, Gamma | Time (EMD) | SVM | Dep. | 69.10 | 71.99
Choi and Kim, 2018 [77] | Fp1, Fp2, F3, F4, T7, T8, P3, P4 * | Raw | Deep features (LSTM) | Sigmoid | Indep. | 78.00 | 74.65
Putra et al., 2018 [75] | All | All except delta | Wavelet features | kNN | Dep. | 59.00 | 65.70
Putra et al., 2018 [75] | All | All except delta | Wavelet features | kNN | Indep. | 58.90 | 64.30
Yang et al., 2018 [60] | All | Raw | Deep features (LSTM, CNN) | Softmax | Dep. | 90.80 | 91.03
Parui et al., 2019 [36] | All | Raw; All | Time and WT features; frequency features | XGBoost | Indep. | 75.97 | 74.20
Xing et al., 2019 [78] | All | All except delta | Frequency features | LSTM | Indep. | 81.10 | 74.38
Cui et al., 2020 [79] | Symmetric channels | All except delta | Regional-asymmetric CNN (RACNN) | Softmax | Dep. | 96.65 | 97.11
Garg and Verma, 2020 [65] | All | Raw | Scalogram images | GoogleNet (pretrained) | Indep. | 92.19 | 61.23
Nath et al., 2020 [73,80] | All | All | Band power | LSTM | Dep. | 94.69 | 93.13
Nath et al., 2020 [73,80] | All | All | Band power | SVM | Indep. | 72.19 | 71.25
Aslan, 2021 [62] | All | Raw | Scalogram images | GoogleNet (pretrained) + SVM | Indep. | 91.20 | 93.70
Ozdemir et al., 2021 [81] | All | Alpha, Beta, Gamma | Multi-spectral topology images | CNN, LSTM + Softmax | Indep. | 90.62 | 86.13
Huang et al., 2021 [61] | Symmetric channels | Raw | Bi-hemisphere spatial features | CNN | Dep. | 94.38 | 94.72
Huang et al., 2021 [61] | Symmetric channels | Raw | Bi-hemisphere spatial features | CNN | Indep. | 68.14 | 63.94
Yin et al., 2021 [48] | All | Raw | Differential entropy cube | GCNN, LSTM | Dep. | 90.45 | 90.60
Yin et al., 2021 [48] | All | Raw | Differential entropy cube | GCNN, LSTM | Indep. | 84.81 | 85.27
Zhang et al., 2021 [58] | Fp1, Fp2, F3, F4, AF3, AF4 * | All; Raw | Time and frequency features; deep features (HFCNN) | Softmax | Indep. | 84.71 | 83.28
Cheng et al., 2022 [82] | All | Raw | Deep features (randomized CNN) | Ensemble | Dep. | 99.19 | 99.25
Gao et al., 2022 [37] | All | All except delta | Time and frequency features | CNN + SVM | Indep. | 80.52 | 75.22
Table 3. Valence classification accuracies (%) for the different EEG bands using activity and PSD.

Feature  | All (2–60 Hz) | Delta (2–4 Hz) | Theta (4–8 Hz) | Alpha (8–12 Hz) | Beta (12–30 Hz) | Gamma (30–60 Hz)
Variance | 62.50 | 95.51 | 84.77 | 98.24 | 84.57 | 68.56
PSD      | 61.33 | 95.12 | 84.38 | 97.85 | 84.38 | 70.31
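Table 3 reports accuracies for the variance (Hjorth activity) and PSD features computed per EEG band. The following SciPy sketch shows one way such per-band features could be extracted, assuming the single frontal time series is the Fp1–Fp2 difference sampled at 128 Hz; the band edges follow Table 3, but the Butterworth filter order and the Welch PSD settings are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 128  # DEAP preprocessed sampling rate (Hz)
BANDS = {"delta": (2, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 60)}

def bandpass(x, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter (illustrative settings)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def band_features(fp1, fp2, fs=FS):
    """Variance and mean Welch PSD per band from the Fp1-Fp2 difference signal."""
    x = np.asarray(fp1, dtype=float) - np.asarray(fp2, dtype=float)
    feats = {}
    for name, (lo, hi) in BANDS.items():
        xb = bandpass(x, lo, hi, fs)
        f, pxx = welch(xb, fs=fs, nperseg=2 * fs)
        feats[f"{name}_variance"] = np.var(xb)
        feats[f"{name}_psd"] = pxx[(f >= lo) & (f <= hi)].mean()
    return feats

# Example with 60 s of synthetic data (one DEAP trial length)
rng = np.random.default_rng(0)
fp1, fp2 = rng.standard_normal(60 * FS), rng.standard_normal(60 * FS)
print(band_features(fp1, fp2))
```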
Table 4. Valence classification accuracies (%) for the different gamma subbands using activity and PSD.

Feature  | 30–60 Hz | 30–40 Hz | 40–50 Hz | 50–60 Hz
Variance | 68.56 | 91.99 | 91.40 | 99.02
PSD      | 70.31 | 91.99 | 91.02 | 98.63
Table 5. Strongest emotion classification accuracies (%) for different EEG time slots.

Time Slot | Delta (2–4 Hz) Variance | Delta (2–4 Hz) PSD | Alpha (8–12 Hz) Variance | Alpha (8–12 Hz) PSD | Gamma (50–60 Hz) Variance | Gamma (50–60 Hz) PSD
1–20 s  | 96.29 | 96.09 | 97.46 | 97.07 | 97.66 | 97.66
20–40 s | 95.51 | 95.70 | 97.46 | 97.46 | 98.05 | 98.05
40–60 s | 94.92 | 95.51 | 96.68 | 97.27 | 98.05 | 97.85
1–60 s  | 95.51 | 95.12 | 98.24 | 98.24 | 99.02 | 98.63
Table 6. Valence classification accuracies (%) for the complete DEAP dataset (kNN).

Feature        | Delta (2–4 Hz) | Alpha (8–12 Hz) | Fast Gamma (50–60 Hz)
Variance       | 95.08 | 96.09 | 85.23
PSD            | 95.08 | 96.25 | 84.76
Variance + PSD | 95.00 | 96.33 | 85.55
Table 7. Valence classification accuracies (%) for the complete DEAP dataset (SVM-rbf).

Feature        | Delta (2–4 Hz) | Alpha (8–12 Hz) | Fast Gamma (50–60 Hz)
Variance       | 96.95 | 97.26 | 87.58
PSD            | 95.55 | 96.80 | 87.50
Variance + PSD | 97.19 | 97.42 | 87.11
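Tables 6 and 7 compare kNN and an RBF-kernel SVM trained on the same per-band features. A schematic scikit-learn setup for such a comparison is sketched below; the placeholder feature matrix, the hyperparameters, and the 10-fold cross-validation stand-in are assumptions for illustration and do not reproduce the paper's exact evaluation protocol.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: rows = trials, columns = per-band features (e.g., alpha-band
# variance and PSD), labels = binary valence (0 = negative, 1 = positive).
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 2))   # 40 trials per DEAP subject
y = np.repeat([0, 1], 20)          # balanced dummy labels; replace with real ratings

classifiers = {
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM (rbf)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale")),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10)  # accuracy per fold
    print(f"{name}: {100 * scores.mean():.2f}% (+/- {100 * scores.std():.2f})")
```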
Table 8. Valence classification accuracies (%) per subject for the combined variance and PSD features considering the complete DEAP dataset.

Subject | kNN Delta | kNN Alpha | kNN Fast Gamma | SVM (rbf) Delta | SVM (rbf) Alpha | SVM (rbf) Fast Gamma
1  | 95.0 | 97.5 | 77.5 | 97.5 | 97.5 | 77.5
2  | 92.5 | 95.0 | 77.5 | 87.5 | 95.0 | 82.5
3  | 95.0 | 97.5 | 72.5 | 97.5 | 97.5 | 75.0
4  | 100  | 92.5 | 60.0 | 97.5 | 95.0 | 72.5
5  | 95.0 | 90.0 | 90.0 | 97.5 | 95.0 | 92.5
6  | 100  | 95.0 | 97.5 | 100  | 97.5 | 95.0
7  | 95.0 | 95.0 | 87.5 | 97.5 | 100  | 92.5
8  | 95.0 | 97.5 | 77.5 | 97.5 | 100  | 85.0
9  | 95.0 | 97.5 | 82.5 | 97.5 | 97.5 | 87.5
10 | 95.0 | 97.5 | 90.0 | 100  | 95.0 | 90.0
11 | 90.0 | 92.5 | 90.0 | 90.0 | 95.0 | 87.5
12 | 95.0 | 100  | 85.0 | 100  | 100  | 85.0
13 | 97.5 | 95.0 | 70.0 | 100  | 97.5 | 67.5
14 | 95.0 | 97.5 | 97.5 | 100  | 97.5 | 100
15 | 95.0 | 97.5 | 82.5 | 97.5 | 97.5 | 82.5
16 | 100  | 95.0 | 75.0 | 100  | 95.0 | 77.5
17 | 95.0 | 97.5 | 85.0 | 97.5 | 100  | 85.0
18 | 97.5 | 100  | 90.0 | 97.5 | 100  | 92.5
19 | 95.0 | 95.0 | 95.0 | 97.5 | 92.5 | 95.0
20 | 95.0 | 97.5 | 85.0 | 97.5 | 97.5 | 90.0
21 | 95.0 | 97.5 | 90.0 | 100  | 97.5 | 97.5
22 | 97.5 | 100  | 90.0 | 100  | 100  | 85.0
23 | 92.5 | 100  | 95.0 | 95.0 | 100  | 95.0
24 | 97.5 | 97.5 | 77.5 | 95.0 | 97.5 | 77.5
25 | 95.0 | 95.0 | 87.5 | 97.5 | 97.5 | 85.0
26 | 95.0 | 95.0 | 100  | 97.5 | 97.5 | 100
27 | 97.5 | 90.0 | 95.0 | 100  | 92.5 | 95.0
28 | 87.5 | 97.5 | 97.5 | 90.0 | 97.5 | 97.5
29 | 95.0 | 95.0 | 97.5 | 97.5 | 95.0 | 97.5
30 | 95.0 | 97.5 | 87.5 | 97.5 | 100  | 90.0
31 | 85.0 | 97.5 | 82.5 | 92.5 | 100  | 85.0
32 | 95.0 | 97.5 | 70.0 | 100  | 100  | 70.0
Average | 95.0 | 96.33 | 85.55 | 97.19 | 97.42 | 87.11
Table 9. Valence ratings statistical measures and classification accuracies for different valence thresholds, given for the subjects with the lowest and highest performance as well as for the complete DEAP dataset.

Measure | Subject #12 (highest acc.) | Subject #22 (highest acc.) | Subject #19 (lowest acc.) | Subject #27 (lowest acc.) | All Subjects
Valence ratings: Median         | 5.04 | 5.00 | 5.04 | 6.08 | 5.04
Valence ratings: Average        | 4.88 | 4.69 | 5.23 | 6.08 | 5.25
Valence ratings: Std. deviation | 2.24 | 2.44 | 1.80 | 2.18 | 2.13
Accuracy (SVM): Threshold = 5   | 100  | 100  | 92.5 | 92.5 | 97.42
Accuracy (SVM): Threshold = 6   | 97.5 | 100  | 92.5 | 97.5 | 96.56
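Table 9 contrasts binarizing the 1–9 DEAP valence self-ratings at a threshold of 5 versus 6. The labeling step itself is simple; the sketch below shows one possible convention, where ratings at or above the threshold are treated as high valence. This boundary handling is an assumption for illustration, as the exact convention is a detail of the experimental setup.

```python
import numpy as np

def binarize_valence(ratings, threshold=5.0):
    """Map continuous 1-9 valence self-ratings to binary labels.

    Assumption: ratings >= threshold are labeled 1 (high/positive valence),
    otherwise 0 (low/negative valence).
    """
    ratings = np.asarray(ratings, dtype=float)
    return (ratings >= threshold).astype(int)

ratings = np.array([2.1, 4.9, 5.0, 6.3, 8.7])
print(binarize_valence(ratings, threshold=5))  # [0 0 1 1 1]
print(binarize_valence(ratings, threshold=6))  # [0 0 0 1 1]
```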
Table 10. Valence three-class accuracies (%) for the complete DEAP dataset (SVM-rbf).

Feature        | Delta (2–4 Hz) | Alpha (8–12 Hz) | Fast Gamma (50–60 Hz)
Variance       | 94.30 | 95.39 | 78.13
PSD            | 94.69 | 94.22 | 78.28
Variance + PSD | 94.92 | 95.00 | 78.44
Table 11. Valence (happy/sad) classification performance for the DEAP dataset.

Research Paper | Year | Method | Channels | Bands | Acc. %
Wu et al. [53]      | 2017 | FFT and WT features with GBDT     | Fp1, Fp2                | All              | 75.18
Alhagry et al. [59] | 2017 | LSTM and RNN                      | All                     | Raw              | 85.45
Yang et al. [60]    | 2018 | LSTM and CNN                      | All                     | All              | 90.80
Cui et al. [79]     | 2020 | Differential entropy + SVM        | Symmetric channel pairs | All except delta | 89.09
Cui et al. [79]     | 2020 | Multilayer perceptron (MLP)       | Symmetric channel pairs | All except delta | 92.57
Cui et al. [79]     | 2020 | Regional-asymmetric CNN (RACNN)   | Symmetric channel pairs | All except delta | 96.65
Nath et al. [80]    | 2020 | Band power with LSTM              | All                     | All              | 94.69
Yin et al. [48]     | 2021 | Differential entropy with ECLGCNN | All                     | All              | 80.52
Huang et al. [61]   | 2021 | Bi-hemisphere discrepancy CNN     | Symmetric channel pairs | Raw              | 94.38
Cheng et al. [82]   | 2022 | Ensemble deep randomized CNN      | All                     | Raw              | 99.19
Proposed            | 2022 | Variance + PSD with SVM           | Fp1-Fp2                 | Alpha            | 97.42