Article

Cross-Day EEG-Based Emotion Recognition Using Transfer Component Analysis

1 Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou 450001, China
2 Troops 73616 of PLA, Shanghai 201100, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(4), 651; https://doi.org/10.3390/electronics11040651
Submission received: 22 January 2022 / Revised: 15 February 2022 / Accepted: 17 February 2022 / Published: 19 February 2022
(This article belongs to the Topic Machine and Deep Learning)

Abstract

EEG-based emotion recognition can enable more natural human-computer interaction, but the temporal non-stationarity of EEG signals limits the robustness of EEG-based emotion recognition models. Most existing studies train and test models on emotional EEG data collected in the same session; once such a model is applied to data collected from the same subject at a different time, its recognition accuracy decreases significantly. To address the problem of cross-day EEG-based emotion recognition, this paper constructs a database of emotional EEG signals collected over six days for each subject, using stimuli from the Chinese Affective Video System and a self-built video library; to date, it covers the largest number of collection days per subject. To study the cross-day neural patterns of emotions, brain topographies are analyzed, showing that a stable neural pattern of emotions exists across days. The Transfer Component Analysis (TCA) algorithm is then used to adaptively determine the optimal dimensionality of the TCA transformation and to match the best-correlated emotion features across the time domains formed by EEG signals from different days. The experimental results show that the TCA-based domain adaptation strategy improves the accuracy of cross-day emotion recognition by 3.55% and 2.34% in the joy-sadness and joy-anger classification tasks, respectively. The emotion recognition model and the brain topographies verify that the database provides a reliable data basis for emotion recognition across different time domains. The EEG database will be opened to more researchers to promote the practical application of emotion recognition.

1. Introduction

Emotions are an important part of the human psychological makeup. As human-computer interaction technology develops, affective computing [1,2] seeks to establish a harmonious human-computer environment by enabling computers to perceive, recognize, understand, express, and adapt to human emotions, giving computers a higher and more comprehensive level of intelligence; it is an important hallmark of natural and intelligent human-computer interaction [3,4,5].
One of the important prerequisites for emotion research is to elicit objective, stable, and reliable emotions. Researchers use a variety of stimuli, such as images [6,7], sounds, and videos [8,9], to induce emotions. Video materials of different emotions are widely used because the combined visual and auditory stimulation makes subjects feel as if they were personally on the scene. Existing publicly available EEG databases based on emotional video stimuli include DEAP [10], MAHNOB-HCI [11], and SEED [12].
Based on the above publicly available EEG datasets, various feature extraction methods have been developed to recognize emotions from EEG signals. These methods include (1) time-domain features: statistical features [13], non-linear features such as fractal dimensions [14,15], sample entropy [16], and non-stationary indices [17], Hjorth features [18], and higher-order crossing features [19]; (2) time-frequency features: the energy, power, power spectral density, and differential entropy (DE) [20] of a given frequency band, extracted after applying the short-time Fourier transform (STFT) [21,22], Hilbert-Huang transform [23,24], or discrete wavelet transform [25,26,27] to the EEG signals; in some cases, the high-frequency bands, such as the Beta (16–32 Hz) and Gamma (32–64 Hz) bands, give better emotion recognition results [10,28]; (3) features based on empirical mode decomposition (EMD) [29,30]: EMD decomposes the EEG signal into multiple intrinsic mode functions (IMFs), and the waveform difference, phase difference, and normalized energy of the IMFs are extracted as features for emotion recognition. As deep learning techniques develop, neural networks are increasingly used in emotion recognition. Zheng et al. [31] extracted DE features from different frequency bands and channels as inputs and used deep belief networks (DBN) to classify positive, neutral, and negative emotions. Yang et al. [32] and Zhang et al. [33] both used neural network structures with DE as the input feature to recognize positive, neutral, and negative emotions on the SEED database. Li et al. [34] arranged DE features into a two-dimensional matrix and used an HCNN to perform emotion recognition on the SEED database. Xing et al. [35] used the power spectral density of five frequency bands as features to recognize emotions on the DEAP database with an LSTM-RNN. Many studies choose DE as the feature and obtain effective recognition accuracy; therefore, we choose DE as the main feature for emotion recognition in this paper.
Most of the above studies employ traditional machine learning methods or neural networks for emotion recognition and obtain fairly satisfactory accuracy. However, most of them focus on time-specific emotion recognition, and the accuracy of a model developed under such a condition decreases significantly once it is applied in a complex real-world environment, because the temporal non-stationarity of the EEG signal is a major factor limiting the robustness of EEG-based emotion recognition models. Hormone levels, the external environment, diet, and sleep can all lead to differences in physiological signals [36]; therefore, even for the same emotional state, EEG signals vary across days. In practical applications there is inevitably a time lag between building an emotion recognition model and recognizing emotional states, so the study of cross-day emotion recognition is crucial and is a necessary step from the laboratory to practical applications.
Currently, there are few studies on cross-day emotion recognition. Zheng et al. [12] investigated the brain regions and electrodes that remain important for emotion recognition over time. Lin et al. [37] proposed a robust principal component analysis (RPCA)-based signal filtering strategy and validated it on a binary emotion classification task (happiness vs. sadness) using a five-day EEG dataset of 12 subjects. Liu et al. [38] collected EEG data over several days and found that emotion recognition performance can be improved by adding EEG data from different days. However, the problem of how to effectively and adaptively select cross-day emotion features with intrinsically implicit associations has not been well addressed, and the robustness of emotion recognition across time domains is worth studying. In this paper, we construct an EEG database containing six days of EEG signals from each of 12 subjects. Based on this database, we first analyze the difference in emotion recognition performance between the intra-day and cross-day cases. We then apply the TCA algorithm as a domain adaptation method and show its effectiveness in cross-day emotion recognition tasks. Finally, we analyze the brain topographies, which show that there is a stable cross-day neural pattern of emotions.

2. Materials and Methods

EEG-based emotion recognition mainly includes the following steps: emotion induction, EEG signal collection, EEG signal pre-processing, extraction and analysis of emotion-related EEG features, emotion computational modeling, and detection and recognition of emotional states, as shown in Figure 1.

2.1. Experimental Design

Due to the non-stationarity of EEG, EEG signals from different days can be regarded as signals from different time domains under the same cognitive emotion recognition task. To address emotion recognition based on EEG signals from different time domains (different days), this study designed a cross-day emotion EEG experiment [39], which allows us to collect sufficient emotional samples for deep neural network studies and to investigate the properties of EEG signals across days. We chose emotional video materials for the experiment because they provide both visual and auditory stimulation, making the subjects feel as if they were in a real-life situation.
A total of 36 video clips covering the four emotion types of joy, sadness, anger, and fear were selected for the experiment from the Chinese Affective Video System [9] and a self-built affective video library. The self-built affective video library is a standardized, multi-sensory library of emotional stimuli constructed with psychological methods and provided by our partner, Peking University. It includes various domestic and foreign comedy, romance, crime, war, documentary, and horror films. The clips were selected according to principles such as clear meaning, clear picture, good sound quality, and clear subtitles, and the elicitation validity of the stimulus material was tested.
The experiment is conducted in three parts, namely A, B, and C, with each part containing 12 clips covering the 4 emotions. As shown in Table A1, Table A2, and Table A3 of Appendix A, there are three clips for each emotion type, and the durations of the video clips range from 50 s to 335 s. Videos whose names begin with an uppercase letter are from the affective video library of Peking University, and those beginning with a lowercase letter are from the Chinese Affective Video System.
The three parts A, B, and C are conducted in separate sessions, as shown in Figure 2. The order of the sessions is randomly balanced to avoid effects of a fixed A-B-C order, and the interval between any two sessions is one week. Each subject performs the experiment in Figure 2 twice, with an interval of six months between the two rounds, so a total of six days of EEG data are collected for each participating subject.
Parts A, B, and C each contain 12 films covering the four discrete emotion categories. Each emotion category is played as a block, the four blocks corresponding to the four categories are played in random order, and the films within each block are also played at random. The 12 film clips correspond to 12 trials, and the flow of each trial is as follows.
  • Before the movie clip starts, there will be a 10-s hint to inform the subject of the number of the current movie clip.
  • Present the white fixation cross on a black background for 5 s.
  • Play the emotion stimulus movie clips.
  • The subject will self-assess the valence and arousal of the movie clip with reference to the Self-Assessment Manikin (SAM) scale. The valence scale ranges from 1 (extremely unpleasant) to 9 (extremely pleasant), and the arousal scale ranges from 1 (calm) to 9 (extremely excited); the subject clicks the corresponding number keys on the keyboard to enter the ratings directly.
While switching between emotion categories, the subject will have a 5-min rest to fully eliminate the effect of the previous emotion category on the current one.

2.2. Data Collection

Before the experiment, we selected subjects through questionnaires and interviews based on the Beck anxiety inventory (BAI) [40], the Hamilton anxiety rating scale (HARS) [41], and the Hamilton depression scale (HAMD) [42], excluding subjects with anxious or depressive mood, mental or physical abnormalities (i.e., physical disease or physical defect), or use of sedative agents and psychotropic drugs. Fourteen subjects (8 males, 6 females) with normal or corrected-to-normal visual acuity were selected for the experiment from our current students. Prior to the experiment, all subjects were informed in detail about the content of the experiment, filled out an information registration form, and signed an informed consent form.
We used the 64-channel HIamp active-electrode EEG cap from g.tec (Austria) to collect EEG signals. Among the 64 channels, the reference electrode was placed on the right earlobe, AFz served as GND, and Fz was used for internal calculation by the device, leaving 61 effective EEG channels. The experimental stimuli were presented with E-Prime, a professional psychological stimulus presentation software.

2.3. Data Preprocessing

For all subjects, the 61-channel EEG data were pre-processed as follows.
  • Data extraction. Extracted the EEG data corresponding to the film clips being played (Pre-stimulus duration was 5 s, and post-stimulus duration was that of the video stimulus material).
  • Bad channel averaging. Checked for corrupted channels where no EEG data had been collected, and replaced the data from the corrupted channel with the average data from the adjacent channels.
  • Artifact removal. EEG was decomposed into independent components using ICA algorithm to remove artifacts such as EOG, EMG, and ECG, and then reconstructed to obtain an artifact-free EEG signal.
  • Re-reference calculation. The data were re-referenced according to the reference electrode standardization technique (REST) proposed by Yao et al. [43,44], using a three-concentric-sphere head model.
  • Signal filtering. The signal passed through a bandpass filter of 0.1–64 Hz.
  • Baseline correction. Baseline correction was performed using the 5 s interval before the movie stimulus started.
Among all the participants, the EEG signals of 12 subjects (8 males, 4 females; mean age 22.50 years, standard deviation 1.98) were qualified, while the signals of two subjects were excluded: one subject's signal had severe drift, and the other subject moved too much during two of the collection sessions. For artifact removal, this paper used the FastICA algorithm [45] to decompose the EEG signal into independent components; one of the remaining 12 subjects required removal of two artifact components after ICA decomposition, and each of the other 11 subjects required removal of one artifact component. After the artifact components were selected and removed, the EEG signal without artifact interference was reconstructed.
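The pre-processing chain above can be approximated with open-source tooling. Below is a minimal sketch using MNE-Python, assuming the raw 61-channel recording has already been loaded into an `mne.io.Raw` object; the artifact component indices, the number of ICA components, and the use of an average reference in place of REST (which additionally requires a head model) are illustrative assumptions, not the authors' exact settings.

```python
import mne

def preprocess_raw(raw, bad_channels, artifact_components, n_ica_components=20):
    """Sketch of the pre-processing chain above for one recording (MNE-Python).

    raw: mne.io.Raw with the 61 effective EEG channels.
    bad_channels: names of corrupted channels to replace by neighbour interpolation.
    artifact_components: ICA component indices judged to be EOG/EMG/ECG artifacts.
    """
    # Bad channel handling: mark corrupted channels and rebuild them from
    # neighbouring electrodes (the paper uses the neighbour average).
    raw.info["bads"] = list(bad_channels)
    raw.interpolate_bads(reset_bads=True)

    # Artifact removal with FastICA: decompose, drop the selected artifact
    # components, and reconstruct the artifact-free signal.
    ica = mne.preprocessing.ICA(n_components=n_ica_components,
                                method="fastica", random_state=0)
    ica.fit(raw)
    ica.exclude = list(artifact_components)   # 1 or 2 components per subject here
    raw = ica.apply(raw)

    # Re-referencing: an average reference stands in for REST, which would
    # additionally require a (three-concentric-sphere) head model.
    raw.set_eeg_reference("average")

    # Band-pass filtering, 0.1-64 Hz; the 5 s pre-stimulus baseline correction
    # is applied later on the epoched data and is omitted here.
    raw.filter(l_freq=0.1, h_freq=64.0)
    return raw
```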
The lengths of the film videos range from 50 s to 180 s, and for all videos the emotion elicitation is most intense, and therefore most effective, towards the end of the clip. To keep the EEG signal lengths consistent across all emotion types, the last 50 s of every video were taken for analysis. Considering applications in real-time emotion recognition, the EEG signal was segmented with reference to [46]: each sample was 2 s of EEG signal, and the window slid forward by 1 s each time, giving a 1 s overlap between adjacent samples, as shown in Figure 3. Thus, the 50 s EEG signal could be divided into 49 samples.
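As a concrete illustration of this windowing, the sketch below (NumPy, with arrays assumed to be channels × samples) cuts the last 50 s of a clip into 2 s windows with a 1 s step, yielding the 49 samples per video mentioned above.

```python
import numpy as np

def segment_last_50s(eeg, fs=512, win_s=2, step_s=1, keep_s=50):
    """Cut the last keep_s seconds of one clip (channels x samples) into 2 s
    windows sliding forward by 1 s, i.e. 1 s overlap -> 49 samples per video."""
    eeg = eeg[:, -keep_s * fs:]                           # keep the last 50 s
    win, step = win_s * fs, step_s * fs
    starts = range(0, eeg.shape[1] - win + 1, step)       # 0, 512, ..., 48*512
    return np.stack([eeg[:, s:s + win] for s in starts])  # (49, channels, 1024)
```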
For each sample, the differential entropy (DE) of the Delta (1–4 Hz), Theta (4–8 Hz), Alpha (8–12 Hz), Beta (12–30 Hz), and Gamma (30–64 Hz) frequency bands was extracted. For an EEG sample of 2 s (sampling rate of 512 Hz, i.e., 1024 sample points), a 128-point Hann window with 50% overlap was used, the short-time Fourier transform was implemented with Matlab's spectrogram function, and the differential entropy was then calculated for each of the five frequency bands. Since there were 61 effective EEG channels, each 2 s EEG sample yielded 61 × 5 = 305 features.
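A rough Python equivalent of this feature extraction step is sketched below, with SciPy's `stft` standing in for Matlab's `spectrogram`; the Gaussian approximation used to turn band power into differential entropy (DE = 0.5·ln(2πeσ²)) is a common convention and an assumption here, not a detail stated by the authors.

```python
import numpy as np
from scipy.signal import stft

BANDS = {"Delta": (1, 4), "Theta": (4, 8), "Alpha": (8, 12),
         "Beta": (12, 30), "Gamma": (30, 64)}

def de_features(sample, fs=512):
    """305-dimensional DE feature vector for one 2 s sample (61 x 1024).

    STFT with a 128-point Hann window and 50% overlap; with fs = 512 Hz the
    frequency resolution is 4 Hz, so band edges map onto the nearest bins.
    """
    f, _, Z = stft(sample, fs=fs, window="hann", nperseg=128, noverlap=64)
    power = np.abs(Z) ** 2                            # (channels, freqs, frames)
    feats = []
    for lo, hi in BANDS.values():
        idx = (f >= lo) & (f <= hi)
        band_var = power[:, idx, :].mean(axis=(1, 2))  # per-channel variance proxy
        feats.append(0.5 * np.log(2 * np.pi * np.e * band_var))
    return np.concatenate(feats)                      # 61 channels x 5 bands = 305
```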
In the future, we plan to make this EEG database, including the original and preprocessed EEG signals, available to more researchers in order to promote the development of EEG-based emotion recognition technology.

2.4. Transfer Component Analysis

Due to individual differences and the non-stationary nature of EEG signals, it is difficult to generalize a classification model across different domains. Transfer learning is a family of machine learning methods that can be used to reduce the distribution differences between domains. Domain adaptation, one branch of transfer learning, solves a learning problem in a target domain by utilizing training data from a different but related source domain, and can thus be used to reduce differences in EEG data distribution between domains. Along this line, the research group of Qiang Yang at the Hong Kong University of Science and Technology proposed the Transfer Component Analysis (TCA) algorithm [47], a feature-based transfer learning method. When the source and target domains have different data distributions, TCA maps the data of both domains into a high-dimensional Reproducing Kernel Hilbert Space (RKHS) in which the maximum mean discrepancy (MMD) between the source and target is minimized while their respective internal properties are preserved as much as possible. TCA is one of the most widely used domain adaptation methods and generalizes well in many fields.
As shown later, under the cross-day case there is a relatively stable neural pattern in the EEG signals of the same subject under different emotional conditions. Therefore, implicit common correlations exist between the EEG signals of the same subject in different time domains, and owing to such correlations there may be implicit features in the data that carry the categorical information. Thus, if an abstract common feature representation φ(X) in the implicit feature space were known for both the training set X_train and the test set X_test, the difference between the probability distributions P(φ(X_train)) and P(φ(X_test)) of the two data domains would be greatly reduced. Therefore, in this paper TCA is applied to cross-day EEG emotion recognition.
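For reference, a compact sketch of the TCA computation described above is given below (following Pan et al. [47]): an RBF kernel is built over the pooled source and target samples, the MMD and centering matrices are formed, and the transfer components are the leading eigenvectors of (KLK + μI)⁻¹KHK. The kernel choice and the hyper-parameters μ and γ are illustrative assumptions.

```python
import numpy as np

def tca_fit_transform(Xs, Xt, dim=30, mu=1.0, gamma=1.0):
    """Minimal TCA sketch (after Pan et al. [47]) with an RBF kernel.

    Xs: source-domain features (n_s x d); Xt: target-domain features (n_t x d).
    Returns the transferred source and target features of dimensionality `dim`.
    """
    X = np.vstack([Xs, Xt])
    n_s, n_t = len(Xs), len(Xt)
    n = n_s + n_t

    # RBF kernel over the pooled samples.
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

    # MMD coefficient matrix L and centering matrix H.
    e = np.vstack([np.full((n_s, 1), 1.0 / n_s), np.full((n_t, 1), -1.0 / n_t)])
    L = e @ e.T
    H = np.eye(n) - np.ones((n, n)) / n

    # Transfer components: leading eigenvectors of (K L K + mu*I)^-1 (K H K),
    # i.e. minimize the kernel MMD while preserving data variance.
    M = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    vals, vecs = np.linalg.eig(M)
    W = np.real(vecs[:, np.argsort(-np.real(vals))[:dim]])

    Z = K @ W                                 # embedded features for all samples
    return Z[:n_s], Z[n_s:]
```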

2.5. Emotion Recognition

In this paper, six days of EEG data were collected for each subject, and five evaluation cases are considered.
  • Intra-day case
Data for each subject were classified within each single experiment (day). There were 3 video clips for each emotion in each experiment, and 3-fold cross-validation was performed across videos to ensure that the training and test sets were uncorrelated.
  • Cross-day case
Cross-day emotion recognition included four cases, namely Train1_V1_Test4, Train2_V1_Test3, Train3_V1_Test2, and Train4_V1_Test1. For instance, in Train1_V1_Test4, the data of days 1 and 2 were used for training: one day's data served as the validation set and the other day's as the training set (two combinations in total), the data of days 3 to 6 were used for testing, and the final emotion recognition accuracy was the average over the two combinations. In this case, the number of training samples for each emotion type was 147 (one day's EEG data contained 3 movies per emotion category, and each movie yielded 49 samples, so the number of samples per emotion type was 49 × 3 = 147).
Following the pattern of Train1_V1_Test4, the remaining three cases, Train2_V1_Test3, Train3_V1_Test2, and Train4_V1_Test1, correspond to selecting 3, 2, and 1 days from the 6 days as the test set, with the remaining days used as the training and validation sets; a sketch of these day assignments is given after this list. The number of training samples per emotion type was 147 × 2, 147 × 3, and 147 × 4, respectively.
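The following sketch makes the day assignments concrete. It fixes the test days and rotates the validation day among the remaining days, which reproduces the two combinations of Train1_V1_Test4 described above; extending the same rotation to the other cases is our assumption, since the text does not spell out their combination counts.

```python
def cross_day_splits(days=(1, 2, 3, 4, 5, 6), n_test=4):
    """Enumerate train/validation/test day assignments for one cross-day case.

    The test days are held out and the validation role rotates over the
    remaining days; for n_test=4 this yields exactly the two Train1_V1_Test4
    combinations described above, whose accuracies are then averaged.
    """
    train_val_days = days[:len(days) - n_test]
    test_days = days[len(days) - n_test:]
    splits = []
    for val_day in train_val_days:
        train_days = tuple(d for d in train_val_days if d != val_day)
        splits.append({"train": train_days, "val": (val_day,), "test": test_days})
    return splits

# Example: cross_day_splits(n_test=4) ->
# [{'train': (2,), 'val': (1,), 'test': (3, 4, 5, 6)},
#  {'train': (1,), 'val': (2,), 'test': (3, 4, 5, 6)}]
```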
Since the experimental stimuli covered the four discrete emotion types of joy, sadness, anger, and fear, emotion recognition in this paper was also performed on discrete emotions. Binary classification of positive-negative, joy-sadness, joy-anger, and joy-fear emotions was performed in the five cases Intra-day, Train1_V1_Test4, Train2_V1_Test3, Train3_V1_Test2, and Train4_V1_Test1. For the positive-negative classification, joy was treated as the positive emotion, while sadness, anger, and fear were treated as negative emotions, so the negative training samples outnumbered the positive ones; the three tasks of joy-sadness, joy-anger, and joy-fear were all trained with balanced samples.
First, for the above five cases, this paper uses an SVM to recognize emotions, implemented with LIBSVM and a linear kernel. Parameter optimization of the kernel function was performed on the training and validation sets: the optimal value of the penalty parameter C was searched over powers of two from 2^-8 to 2^8 with a stride of 1 on the exponent. The classification was subject-dependent, and the final classification accuracy is the average of the recognition accuracies of all subjects.
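A sketch of this classifier and parameter search is shown below, with scikit-learn's `SVC` standing in for LIBSVM; the exponent grid matches the 2^-8 to 2^8 range described above, while retraining on the merged training and validation data is an illustrative choice.

```python
import numpy as np
from sklearn.svm import SVC

def train_linear_svm(X_train, y_train, X_val, y_val):
    """Linear-kernel SVM with the C grid used above (2^-8 ... 2^8, exponent
    stride 1), selected on the validation day(s)."""
    best_c, best_acc = 1.0, -1.0
    for exp in range(-8, 9):
        c = 2.0 ** exp
        acc = SVC(kernel="linear", C=c).fit(X_train, y_train).score(X_val, y_val)
        if acc > best_acc:
            best_c, best_acc = c, acc
    # Refit with the selected C on the pooled training + validation data.
    X = np.vstack([X_train, X_val])
    y = np.concatenate([y_train, y_val])
    return SVC(kernel="linear", C=best_c).fit(X, y)
```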
Then, this paper employs TCA to adaptively match the EEG features across time domains. The original feature dimension was 305, and the optimal transformed dimensionality L_opt of TCA was searched with a stride of 20 in the range from 10 to 300 using the training and validation sets. After L_opt was determined, the training and validation sets were merged into a new training set X_train = [X_train, X_validate]^T, the new training set X_train and the test set X_test were analyzed with the TCA algorithm under the parameter L_opt, and the features obtained after transfer component analysis were used for emotion recognition.
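Putting the pieces together, the following sketch searches the TCA dimensionality L_opt on the training/validation days and then re-runs TCA on the merged set against the test days, reusing the `tca_fit_transform` sketch above; the fixed C value inside the search loop is an illustrative simplification of the C search described earlier.

```python
import numpy as np
from sklearn.svm import SVC

def cross_day_tca_pipeline(X_train, y_train, X_val, y_val, X_test):
    """Search L_opt on the training/validation days, then transfer the merged
    train+validation set against the test days with TCA and classify."""
    best_dim, best_acc = 10, -1.0
    for d in range(10, 301, 20):                       # stride of 20 over 10..300
        Z_tr, Z_val = tca_fit_transform(X_train, X_val, dim=d)
        acc = SVC(kernel="linear", C=1.0).fit(Z_tr, y_train).score(Z_val, y_val)
        if acc > best_acc:
            best_dim, best_acc = d, acc

    # Merge training and validation sets, re-run TCA against the test domain
    # with L_opt, and train the final classifier on the transferred features.
    X_merged = np.vstack([X_train, X_val])
    y_merged = np.concatenate([y_train, y_val])
    Z_merged, Z_test = tca_fit_transform(X_merged, X_test, dim=best_dim)
    return SVC(kernel="linear", C=1.0).fit(Z_merged, y_merged).predict(Z_test)
```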
This classification scheme has two advantages: first, overlap between the training and test data is avoided because they come from different days; second, the amount of cross-day EEG training data increases progressively, allowing us to validate the practicality and robustness of the domain adaptation algorithm for cross-day emotion recognition.

3. Results

3.1. Cross-Day and Intra-Day Emotion Recognition

This paper uses an SVM to recognize emotions, and Figure 4 gives the average classification accuracy over all subjects for the four classification tasks in the different cases. For intra-day emotion recognition, the average recognition accuracies for positive-negative, joy-sadness, joy-anger, and joy-fear are 90.18%, 86.38%, 88.12%, and 85.13%, respectively. As the time gap grows, the accuracy of emotion recognition decreases significantly in the four cases Train1_V1_Test4, Train2_V1_Test3, Train3_V1_Test2, and Train4_V1_Test1, indicating that the EEG signals of the subjects change across days. In Train1_V1_Test4, the case with the fewest training days, the average recognition accuracies for positive-negative, joy-sadness, joy-anger, and joy-fear are 76.06%, 69.23%, 68.90%, and 63.66%, respectively. The recognition accuracies also show that classification accuracy can be effectively improved if models are trained with EEG signals from more days: as shown in Figure 4, the accuracies for Train3_V1_Test2 and Train4_V1_Test1 are higher than those for Train1_V1_Test4 and Train2_V1_Test3. When four days of training data are used (Train4_V1_Test1), the average recognition accuracies for positive-negative, joy-sadness, joy-anger, and joy-fear are 81.31%, 72.15%, 71.57%, and 72.38%, respectively.
If negative emotions can be detected effectively in daily life, timely intervention and positive regulation can be carried out to improve quality of life and work efficiency, so a core goal of emotion classification is the correct identification of negative emotions. We therefore selected sensitivity, specificity, and the receiver operating characteristic (ROC) curve of the positive-negative task as evaluation metrics to verify the robustness of the classification model.
Sensitivity reflects the ability of the model to recognize the positive samples, and the formula for computing sensitivity is:
Sensitivity = TP/(TP + FN)
where, TP is the number of positive samples predicted to be positive and FN is the number of positive samples predicted to be negative.
Specificity reflects the ability of the model to recognize negative samples and is calculated as follows:
Specificity = TN/(TN + FP)
where, TN is the number of negative samples predicted to be negative and FP is the number of negative samples predicted to be positive.
ROC curves typically plot the false positive rate (FPR) on the X axis and the true positive rate (TPR) on the Y axis, and the area under the curve (AUC) measures how good the classification model is. Figure 5 presents the ROC curves for the five cases; the AUC value is 0.9422 for the intra-day case and 0.7196, 0.7648, 0.7399, and 0.7829 for Train1_V1_Test4, Train2_V1_Test3, Train3_V1_Test2, and Train4_V1_Test1, respectively. A larger AUC value indicates a more robust classification model, so the most robust of the five models is the intra-day model rather than any of the cross-day models.
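These metrics can be computed directly from the predictions, for example as in the following sketch (scikit-learn), where continuous classifier outputs (e.g. SVM decision values) are assumed to be available for the ROC/AUC computation.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate_binary(y_true, y_pred, scores):
    """Sensitivity, specificity and AUC for the positive-negative task.
    Labels: 1 = positive (joy), 0 = negative; `scores` are continuous
    classifier outputs used for the ROC/AUC."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)       # Sensitivity = TP / (TP + FN)
    specificity = tn / (tn + fp)       # Specificity = TN / (TN + FP)
    auc = roc_auc_score(y_true, scores)
    return sensitivity, specificity, auc
```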
Table 1 gives the actual and predicted numbers of all positive and negative test samples in the five cases, together with the sensitivity, specificity, and accuracy of the model. The results show that the imbalance between positive and negative training samples (more negative samples) makes the model better at classifying negative samples than positive ones. In the cross-day cases, the specificity increases as the amount of training data increases, meaning that the recognition of negative emotions keeps improving; this is consistent with the change in recognition accuracy.

3.2. Cross-Day Emotion Recognition Based on the Domain Adaption Algorithm

The accuracy of emotion recognition decreases because the EEG signal of the same individual varies randomly across days. Domain adaptation of features across time domains via the TCA algorithm effectively improves the accuracy of emotion recognition, as shown in Figure 6: in the Train4_V1_Test1 case, the average recognition accuracies of positive-negative, joy-sadness, joy-anger, and joy-fear using the TCA algorithm are 83.03%, 75.70%, 73.91%, and 72.79%, improvements of 1.72%, 3.55%, 2.34%, and 0.41%, respectively, over SVM alone. The TCA-based domain adaptation strategy significantly improves the accuracy of emotion recognition in the three tasks of positive-negative (t-test, p = 0.046), joy-sadness (t-test, p = 0.039), and joy-anger (t-test, p = 0.042).
Since the positive and negative samples are not balanced, the sensitivity, specificity, and ROC curves of the positive-negative task are again presented to verify the robustness of the classification model. Table 2 gives the actual and predicted numbers of all positive and negative test samples, together with the sensitivity, specificity, and accuracy of the model. The results again show that the imbalance between the positive and negative training samples (more negative samples) makes the model better at classifying negative samples than positive ones. They also show that introducing the TCA algorithm improves the ability of the model to recognize both positive and negative samples. Figure 7 presents the ROC curves; the AUC values are 0.7829 for SVM and 0.8166 for TCA + SVM, and the larger AUC value indicates the more robust classification model.
Figure 8 and Figure 9 present the confusion matrices of SVM and TCA + SVM under the three classification tasks of joy-sadness, joy-anger, and joy-fear. It can be seen from the figures that the TCA algorithm has improved the classification ability for both joy and the three negative emotions.

3.3. Analysis of Brain Topography

To support cross-day emotion recognition at the theoretical level, we study the cross-day neural patterns of emotions based on EEG signals. We present the average brain topographies of all subjects at different times during the first three days of the experiment (sessions A, B, and C), with an interval of one week between any two sessions.
Figure 10 shows the cross-day DE features of all subjects in the Gamma band (30–64 Hz). For positive emotions, the energy in the central regions of the temporal, occipital, and parietal lobes is higher than for negative emotions, while the energy in the prefrontal lobe is lower, which may be related to negative emotions requiring more prefrontal cognitive resources for actions such as defense and escape. Negative emotions also show an energy asymmetry between the two temporal lobes, with higher energy in the left temporal lobe than in the right. These patterns appear in all three experiments A, B, and C conducted at different times, indicating that even though the amplitude of an individual's EEG signals shows some superficial changes, a relatively stable cross-day neural pattern of emotions remains. This further verifies that the database can provide a reliable data basis for emotion recognition across different time domains.
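For readers reproducing this analysis, a minimal sketch of such a topographic plot is shown below using MNE-Python; the montage name and the channel list are assumptions, since the exact electrode layout file is not specified here.

```python
import matplotlib.pyplot as plt
import mne

def plot_gamma_de_topomap(gamma_de, ch_names, sfreq=512.0):
    """Scalp map of average Gamma-band DE values (one value per channel).

    gamma_de: 1-D array of length n_channels (e.g. the 61 effective channels);
    ch_names: matching channel names; a standard 10-05 montage is assumed.
    """
    info = mne.create_info(ch_names, sfreq=sfreq, ch_types="eeg")
    info.set_montage("standard_1005")             # assumed electrode layout
    fig, ax = plt.subplots()
    mne.viz.plot_topomap(gamma_de, info, axes=ax, show=False)
    return fig
```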

4. Discussion

This study performs emotion recognition on six days of emotional EEG data collected from each subject and obtains the following findings.
  • The performance of emotion recognition within the same experiment is better than that of cross-day emotion recognition.
The time effect of EEG affects the accuracy of emotion recognition: Liu et al. [38] found that the accuracy of emotion recognition decreases significantly when the training and test EEG samples come from different time domains (different days). Consistent with that study, the performance of cross-day emotion recognition is significantly lower in the four cases Train1_V1_Test4, Train2_V1_Test3, Train3_V1_Test2, and Train4_V1_Test1 than in emotion recognition within a single experiment (intra-day) (t-test, p < 0.05).
Although EEG signals change across days, the average brain topographies show that a stable neural pattern remains, and the accuracy gradually improves as the number of days in the training set increases. When four days of data are used for training (Train4_V1_Test1: 147 × 4 training samples per emotion), the emotion recognition accuracy improves by 5.25%, 2.92%, 2.67%, and 8.73% under the positive-negative, joy-sadness, joy-anger, and joy-fear classifications, respectively, compared with the case where only one day of data is used for training (Train1_V1_Test4: 147 training samples per emotion).
  • Domain adaptation algorithm can improve the performance of cross-day emotion recognition
For cross-day emotion recognition, the TCA algorithm is used to match features across time domains; EEG data from different days serve as the training and validation sets, the optimal transformed dimensionality of TCA is determined adaptively, and the emotion recognition performance is thereby optimized. In the Train4_V1_Test1 case, the average recognition accuracies of positive-negative, joy-sadness, joy-anger, and joy-fear using the TCA algorithm are 83.03%, 75.70%, 73.91%, and 72.79%, improvements of 1.72%, 3.55%, 2.34%, and 0.41%, respectively, over SVM alone. In the binary classification of positive versus negative emotions, the TCA algorithm improves the recognition of both positive and negative samples and improves the robustness of the model, with the AUC value rising from 0.7829 to 0.8166. In the three tasks of positive-negative, joy-sadness, and joy-anger, domain adaptation via TCA significantly improves the accuracy of emotion recognition (t-test, p < 0.05). This paper also finds that, compared with using a fixed dimensionality, selecting the optimal dimensionality of TCA through the validation set improves the performance of emotion recognition.
  • The EEG database can provide a reliable data basis for emotion recognition across time domains
To study the cross-day neural patterns of emotions based on EEG signals, brain topographies are analyzed in this paper and show that a stable cross-day neural pattern of emotions exists. The TCA-based emotion recognition model and the brain topographies verify that the database can provide a reliable data basis for emotion recognition across time domains, and the EEG database will be opened to more researchers to promote the practical application of emotion recognition. Based on the self-built cross-day EEG database, this paper proposes a strategy that uses cross-day EEG data to determine the dimensionality of the TCA transformation, which can solve the problem of matching EEG features in different time domains and effectively improve cross-day emotion recognition performance. However, there are still some limitations: the emotion recognition rate can be further improved, the TCA algorithm reduces the classification performance for fear, and only binary classification is studied. Building on this work, we will continue to study more robust emotion recognition algorithms and subsequently investigate multi-class and fine-grained classification algorithms.

5. Conclusions

In this study, we constructed an emotional EEG database using stimuli from the Chinese Affective Video System and a self-built video library, with EEG signals collected over six days for each subject. This database provides a sound signal foundation for emotion recognition studies across time domains. Based on this database, we proposed using the TCA algorithm to match EEG emotion features across multiple time domains: EEG data from different days were used as the training and validation sets to adaptively determine the optimal transformed dimensionality of TCA, which effectively improved the recognition accuracy for joy, sadness, anger, and fear and validated the effectiveness of the TCA strategy in improving emotion recognition performance across time domains. In future studies, we will further investigate deep learning methods for cross-day emotion recognition.

Author Contributions

Conceptualization, Z.H., N.Z. and Y.Z.; methodology, Z.H.; software, Z.H.; validation, Z.H. and N.Z.; investigation, Y.Z.; resources, Y.Z. and B.Y.; data curation, Z.H.; writing—original draft preparation, Z.H.; writing—review and editing, Z.H. and G.B.; supervision, B.Y. and Y.Z.; All authors have read and agreed to the published version of the manuscript.

Funding

This research has been supported by the National Key Research and Development Plan of China under grant 2017YFB1002502, the National Natural Science Foundation of China (No. 61701089), the Natural Science Foundation of Henan Province of China (No.162300410333) and the Fundamental Research Funds for the Central Universities (No. ZYGX2016J127).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Henan Provincial People’s Hospital.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Acknowledgments

We thank Li Sheng’s team at Peking University for providing the standardized video stimuli materials; and Luo Yuejia’s team at School of Psychology, Shenzhen University for providing the Chinese Affective Video System.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The experiment was conducted in three parts, namely A, B, and C, with each part corresponding to one of the following tables (Table A1, Table A2, and Table A3).
  • Part A
Table A1. Number labels of emotion videos.

Category of Emotion | Name of the Video | Label | Time (ms)
Joy | j2.avi (Eat Hot Tofu Slowly) | 1 | 109,000
Joy | j3.avi (A Big Potato) | 2 | 142,000
Joy | j5.avi (Flirting Scholar) | 3 | 112,000
Sadness | s12.avi (Roots and Branches) | 4 | 146,000
Sadness | s14.avi (My Beloved) | 5 | 137,000
Sadness | s15.avi (Warm Spring) | 6 | 102,000
Anger | a23.avi (Fist of Fury (2)) | 7 | 66,000
Anger | a24.avi (Kang Xi Kingdom) | 8 | 94,000
Anger | a25.avi (Conman In Tokyo) | 9 | 107,000
Fear | f27.avi (Save Me) | 10 | 50,000
Fear | f28.avi (The Game of Killing (1)) | 11 | 159,000
Fear | f31.avi (Help) | 12 | 247,000
  • Part B
Table A2. Number labels of emotion videos.

Category of Emotion | Name of the Video | Label | Time (ms)
Joy | H2.avi (East Meets West, Hong Qi expressed love to his cousin-sister) | 1 | 228,000
Joy | H3.avi (A World Without Thieves, the clip of robbing) | 2 | 191,000
Joy | H5.avi (Chaplin Comedy) | 3 | 244,000
Sadness | S2.avi (Darling, Tian Wenjun looked for his son) | 4 | 182,000
Sadness | S3.avi (Aftershock) | 5 | 335,000
Sadness | S4.avi (Darling, Mom watched her daughter through the window) | 6 | 120,000
Anger | A1.avi (Yip Man 2, The boxing champion mocked Chinese martial arts) | 7 | 172,000
Anger | A2.avi (Never Talk to Strangers) | 8 | 205,000
Anger | a22.avi (Fist of Fury (1)) | 9 | 258,000
Fear | F1.avi (Lights out, the film clip of shadows after lights out) | 10 | 134,000
Fear | F5.avi (F_05, Four men lying on the ground at the beginning of the film) | 11 | 291,000
Fear | F7.avi (The film clip of big snake eating people) | 12 | 158,000
  • Part C
Table A3. Number labels of emotion videos.

Category of Emotion | Name of the Video | Label | Time (ms)
Joy | H1.avi (Lost on Journey, Check-in part) | 1 | 281,000
Joy | H6.avi (Home with Kids) | 2 | 187,000
Joy | j4.avi (East Meets West, Hong Qi jumped off the cliff part) | 3 | 53,000
Sadness | S5.avi (English movie, a man calling in the snow) | 4 | 142,000
Sadness | S8.avi (Echoes of the Rainbow, the part of typhoon blowing) | 5 | 241,000
Sadness | s13.avi (Rob-B-Hood, saving the baby part) | 6 | 234,000
Anger | A3.avi (The film clip of Japanese invasion) | 7 | 96,000
Anger | A4.avi (Blind Mountain, villagers stopped the abducted woman from being saved) | 8 | 275,000
Anger | A5.avi (Wildlife hunt) | 9 | 148,000
Fear | F2.avi (Lying in bed and the quilt lifted by itself) | 10 | 162,000
Fear | F3.avi (Ju-on: The Grudge, Japanese girl watching TV) | 11 | 167,000
Fear | F6.avi (F_06, A woman hanging around with a gun) | 12 | 190,000

References

  1. Picard, R.W. Affective Computing; MIT Press: Cambridge, MA, USA, 1997. [Google Scholar]
  2. Minsky, M. Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind; Simon & Schuster: New York, NY, USA, 2007. [Google Scholar]
  3. Lazar, J.; Feng, H.J.; Hochheiser, H. Case studies. In Research Methods in Human Computer Interaction II; Elsevier: Amsterdam, The Netherlands, 2017; pp. 153–185. [Google Scholar]
  4. Mühl, C.; Allison, B.; Nijholt, A.; Chanel, G. A survey of affective brain computer interfaces: Principles, state-of-the-art, and challenges. Brain Comput. Interfaces 2014, 1, 66–84. [Google Scholar] [CrossRef] [Green Version]
  5. Shanechi, M.M. Brain–machine interfaces from motor to mood. Nat. Neurosci. 2019, 22, 1554–1564. [Google Scholar] [CrossRef]
  6. Lang, P.J.; Bradley, M.M.; Cuthbert, B.N. International Affective Picture System (IAPS): Technical Manual and Affective Ratings; Technical Report A-4; University of Florida: Gainesville, FL, USA, 1999. [Google Scholar]
  7. Lang, P.; Bradley, M.; Cuthbert, B. International Affective Picture System (IAPS): Affective Ratings of Pictures and Instruction Manual (Rep. No. A-8); Technical Report A-8; University of Florida: Gainesville, FL, USA, 2008. [Google Scholar]
  8. Gross, J.J. Emotion elicitation using films: Cognition and Emotion. Cogn. Emot. 1995, 9, 87–108. [Google Scholar] [CrossRef]
  9. Liu, T.S.; Luo, Y.J.; Ma, H.; Huang, Y.X. The establishment and assessment of a native affective sound system. Psychol. Sci. 2006, 2, 406–408. [Google Scholar]
  10. Koelstra, S. DEAP: A Database for Emotion Analysis; Using Physiological Signals. IEEE Trans. Affect. Comput. 2012, 3, 18–31. [Google Scholar] [CrossRef] [Green Version]
  11. Soleymani, M. A Multimodal Database for Affect Recognition and Implicit Tagging. IEEE Trans. Affect. Comput. 2012, 3, 42–55. [Google Scholar] [CrossRef] [Green Version]
  12. Zheng, W.L.; Zhu, J.Y.; Lu, B.L. Identifying Stable Patterns over Time for Emotion Recognition from EEG. IEEE Trans. Affect. Comput. 2017, 10, 417–429. [Google Scholar] [CrossRef] [Green Version]
  13. Takahashi, K. Remarks on emotion recognition from multi-modal bio potential signals. JES Ergon. 2010, 3, 1138–1143. [Google Scholar]
  14. Sourina, O.; Liu, Y. A Fractal-based Algorithm of Emotion Recognition from EEG using Arousal-Valence Model. In Proceedings of the Biosignals-International Conference on Bio-Inspired Systems & Signal Processing, Rome, Italy, 26–29 January 2011. [Google Scholar]
  15. Liu, Y.; Sourina, O. Real-Time Subject-Dependent EEG-Based Emotion Recognition Algorithm. In Transactions on Computational Science XXIII; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  16. Jie, X.; Cao, R.; Li, L. Emotion recognition based on the sample entropy of EEG. Bio-Med. Mater. Eng. 2014, 24, 1185. [Google Scholar] [CrossRef] [PubMed]
  17. Kroupi, E.; Yazdani, A.; Ebrahimi, T. EEG Correlates of Different Emotional States Elicited during Watching Music Videos. In Affective Computing and Intelligent Interaction; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  18. Hjorth, B. EEG analysis based on time domain properties. Electroencephalogr. Clin. Neurophysiol. 1970, 29, 306–310. [Google Scholar] [CrossRef]
  19. Petrantonakis, P.C.; Hadjileontiadis, L.J. Emotion Recognition from EEG Using Higher Order Crossings. IEEE Trans. Inf. Technol. Biomed. 2010, 14, 186. [Google Scholar] [CrossRef]
  20. Duan, R.N.; Zhu, J.Y.; Lu, B.L. Differential entropy feature for EEG-based emotion classification. In Proceedings of the 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER), San Diego, CA, USA, 6–8 November 2013. [Google Scholar]
  21. Lin, Y.P.; Wang, C.H.; Jung, T.P.; Wu, T.L.; Jeng, S.K.; Duann, J.R.; Chen, J.H. EEG-based emotion recognition in music listening. IEEE Trans. Biomed. Eng. 2010, 57, 1798–1806. [Google Scholar]
  22. Chanel, G.; Ansari-Asl, K.; Pun, T. Valence-arousal evaluation using physiological signals in an emotion recall paradigm. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Montreal, QC, Canada, 7–10 October 2007. [Google Scholar]
  23. Uzun, S.; Yildirim, S.; Yildirim, E. Emotion primitives estimation from EEG signals using Hilbert Huang Transform. In Proceedings of the 2012 IEEE-EMBS International Conference on Biomedical and Health Informatics, Hong Kong, China, 5–7 January 2012. [Google Scholar] [CrossRef]
  24. Huang, N.E.; Shen, Z.; Long, S.R.; Wu, M.C.; Shih, H.H.; Zheng, Q.; Yen, N.-C.; Tung, C.C.; Liu, H.H. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc. Math. Phys. Eng. Sci. 1998, 454, 903–995. [Google Scholar] [CrossRef]
  25. Özerdem, M.S.; Polat, H. Emotion recognition based on EEG features in movie clips with channel selection. Brain Inform. 2017, 4, 241–252. [Google Scholar] [CrossRef]
  26. Mohammadi, Z.; Frounchi, J.; Amiri, M. Wavelet-based emotion recognition system using EEG signal. Neural Comput. Appl. 2017, 28, 1985–1990. [Google Scholar] [CrossRef]
  27. Murugappan, M. Human emotion classification using wavelet transform and KNN. In Proceedings of the 2011 International Conference on Pattern Analysis and Intelligent Robotics, ICPAIR 2011, Kuala Lumpur, Malaysia, 28–29 June 2011; Volume 1. [Google Scholar] [CrossRef]
  28. Wichakam, I.; Vateekul, P. An evaluation of feature extraction in EEG-based emotion prediction with support vector machines. In Proceedings of the 2014 11th International Joint Conference on Computer Science and Software Engineering (JCSSE), Chon Buri, Thailand, 14–16 May 2014. [Google Scholar]
  29. Mert, A.; Akan, A. Emotion recognition from EEG signals by using multivariate empirical mode decomposition. Pattern Anal. Appl. 2016, 21, 81–89. [Google Scholar] [CrossRef]
  30. Ning, Z.; Ying, Z.; Li, T.; Chi, Z.; Hanming, Z.; Bin, Y. Emotion Recognition from EEG Signals Using Multidimensional Information in EMD Domain. BioMed Res. Int. 2017, 2017, 8317357. [Google Scholar]
  31. Zheng, W.L.; Lu, B.L. Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175. [Google Scholar] [CrossRef]
  32. Yang, Y.; Jonathan Wu, Q.M.; Zheng, W.L.; Lu, B.L. EEG-Based Emotion Recognition Using Hierarchical Network with Subnetwork Nodes. IEEE Trans. Cogn. Dev. Syst. 2018, 10, 408–419. [Google Scholar] [CrossRef]
  33. Zhang, T.; Zheng, W.; Cui, Z.; Zong, Y.; Li, Y. Spatial–Temporal Recurrent Neural Network for Emotion Recognition. IEEE Trans. Cybern. 2018, 49, 839–847. [Google Scholar] [CrossRef] [Green Version]
  34. Li, J.; Zhang, Z.; He, H. Hierarchical Convolutional Neural Networks for EEG-Based Emotion Recognition. Cogn. Comput. 2017, 10, 368–380. [Google Scholar] [CrossRef]
  35. Xing, X.; Li, Z.; Xu, T.; Shu, L.; Xu, X. SAE+LSTM: A New Framework for Emotion Recognition from Multi-Channel EEG. Front. Neurorobotics 2019, 13, 37. [Google Scholar] [CrossRef] [PubMed]
  36. Picard, R.W.; Vyzas, E.; Healey, J. Toward machine emotional intelligence: Analysis of affective physiological state. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 1175–1191. [Google Scholar] [CrossRef] [Green Version]
  37. Lin, Y.-P.; Jao, P.-K.; Yang, Y.-H. Improving Cross-Day EEG-Based Emotion Classification Using Robust Principal Component Analysis. Front. Comput. Neurosci. 2017, 11, 64. [Google Scholar] [CrossRef]
  38. Liu, S.; Chen, L.; Guo, D.; Liu, X.; Sheng, Y.; Ke, Y.; Xu, M.; An, X.; Yang, J.; Ming, D. Incorporation of Multiple-Days Information to Improve the Generalization of EEG-Based Emotion Recognition Over Time. Front. Hum. Neurosci. 2018, 12, 267. [Google Scholar] [CrossRef] [PubMed]
  39. Bao, G.; Zhuang, N.; Tong, L.; Yan, B.; Shu, J.; Wang, L.; Zeng, Y.; Shen, Z. Two-Level Domain Adaptation Neural Network for EEG-Based Emotion Recognition. Front. Hum. Neurosci. 2021, 14, 605246. [Google Scholar] [CrossRef]
  40. Julian, L.J. Measures of anxiety: State-Trait Anxiety Inventory (STAI), Beck Anxiety Inventory (BAI), and Hospital Anxiety and Depression Scale-Anxiety (HADS-A). Arthritis Care Res. 2011, 63, S467–S472. [Google Scholar] [CrossRef] [Green Version]
  41. Shear, M.K.; Bilt, J.V.; Rucci, P.; Endicott, J.; Lydiard, B.; Otto, M.W.; Pollack, M.H.; Chandler, L.; Williams, J.; Ali, A.; et al. Reliability and validity of a structured interview guide for the Hamilton Anxiety Rating Scale (SIGH-A). Depress. Anxiety 2001, 13, 166–178. [Google Scholar] [CrossRef]
  42. Williams, J.B.W. A structured interview guide for the Hamilton Depression Rating Scale. Arch. Gen. Psychiatry 1988, 45, 742. [Google Scholar] [CrossRef] [PubMed]
  43. Yao, D. A method to standardize a reference of scalp EEG recordings to a point at infinity. Physiol. Meas. 2001, 22, 693–711. [Google Scholar] [CrossRef] [PubMed]
  44. Yao, D.; Qin, Y.; Hu, S.; Dong, L.; Vega, M.L.B.; Sosa, P.A.V. Which Reference Should We Use for EEG and ERP practice? Brain Topogr. 2019, 32, 530–549. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Hyvarinen, A. Fast and robust fixed-point algorithms for independent component analysis. IEEE Trans. Neural Netw. 2002, 10, 626–634. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Liu, Y.-J.; Yu, M.; Zhao, G.; Song, J.; Ge, Y.; Shi, Y. Real-Time Movie-Induced Discrete Emotion Recognition from EEG Signals. IEEE Trans. Affect. Comput. 2018, 9, 550–562. [Google Scholar] [CrossRef]
  47. Pan, S.J.; Tsang, I.; Kwok, J.T.; Yang, Q. Domain Adaptation via Transfer Component Analysis. IEEE Trans. Neural Netw. 2011, 22, 199–210. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The process of emotion recognition based on EEG signals.
Figure 2. The experiment paradigm for cross-day EEG-based emotion recognition.
Figure 3. Segmentation of EEG signals. EEG data are segmented using a window of 2 s with 50% overlap between two consecutive windows.
Figure 4. The intra-day and cross-day emotion recognition performance.
Figure 5. The average ROC curves of all the subjects in the binary classification of positive and negative emotions.
Figure 6. Performance of cross-day emotion recognition using the domain adaptation algorithm and SVM, where "Positive-Negative", "Joy-Sadness", "Joy-Anger", and "Joy-Fear" indicate the binary classification of positive and negative emotions, joy and sadness, joy and anger, and joy and fear, respectively. "SVM" denotes applying a short-time Fourier transform to the EEG signals, extracting differential entropy features, and recognizing them with an SVM. "TCA + SVM" denotes applying a short-time Fourier transform to the EEG signals, domain-adaptively matching the EEG features with the TCA algorithm, and recognizing the transformed features with an SVM.
Figure 7. The average ROC curve of all the subjects in the binary classification of positive and negative emotions in the case of Train4_V1_Test1.
Figure 8. Average confusion matrices for all subjects in the binary classification of emotions using SVM under Train4_V1_Test1. (a) Average confusion matrix for the joy-sadness classification. (b) Average confusion matrix for the joy-anger classification. (c) Average confusion matrix for the joy-fear classification.
Figure 9. Average confusion matrices for all subjects in the binary classification of emotions using TCA + SVM under Train4_V1_Test1. (a) Average confusion matrix for the joy-sadness classification. (b) Average confusion matrix for the joy-anger classification. (c) Average confusion matrix for the joy-fear classification.
Figure 10. The average brain topography of all subjects under different emotional states, showing the differential entropy features of all subjects in the Gamma band (30–64 Hz) across days.
Table 1. Performance of the binary classification of positive-negative emotions in the intra-day and cross-day cases (TP: positive samples predicted positive; FP: negative samples predicted positive; FN: positive samples predicted negative; TN: negative samples predicted negative).

Case | TP | FP | FN | TN | Sensitivity (%) | Specificity (%) | Accuracy (%)
Intra-Day | 8081 | 1657 | 2503 | 30,095 | 76.35 | 94.78 | 90.17
Train1_V1_Test4 | 8129 | 7728 | 7747 | 39,900 | 51.20 | 83.77 | 75.63
Train2_V1_Test3 | 6284 | 5684 | 7828 | 36,652 | 44.53 | 86.57 | 76.06
Train3_V1_Test2 | 6511 | 4284 | 7601 | 38,052 | 46.14 | 89.88 | 78.95
Train4_V1_Test1 | 3934 | 1709 | 4886 | 24,751 | 44.60 | 93.54 | 81.31
Table 2. Performance of the binary classification of positive-negative emotions in the case of Train4_V1_Test1 (TP: positive samples predicted positive; FP: negative samples predicted positive; FN: positive samples predicted negative; TN: negative samples predicted negative).

Method | TP | FP | FN | TN | Sensitivity (%) | Specificity (%) | Accuracy (%)
SVM | 3934 | 1709 | 4886 | 24,751 | 44.60 | 93.54 | 81.31
TCA + SVM | 4362 | 1530 | 4458 | 24,930 | 49.46 | 94.22 | 83.03
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

