Article

Quantitative Evaluation of EEG-Biomarkers for Prediction of Sleep Stages

Iqram Hussain, Md Azam Hossain, Rafsan Jany, Md Abdul Bari, Musfik Uddin, Abu Raihan Mostafa Kamal, Yunseo Ku and Jik-Soo Kim

1 Department of Biomedical Engineering, Medical Research Center, College of Medicine, Seoul National University, Seoul 03080, Korea
2 Network and Data Analysis Group, Department of Computer Science and Engineering, Islamic University of Technology (IUT), Gazipur 1704, Bangladesh
3 Department of Biomedical Engineering, College of Medicine, Chungnam National University, Daejeon 35015, Korea
4 Department of Computer Engineering, Myongji University, Yongin 17058, Korea
* Author to whom correspondence should be addressed.
Sensors 2022, 22(8), 3079; https://doi.org/10.3390/s22083079
Submission received: 23 March 2022 / Revised: 11 April 2022 / Accepted: 15 April 2022 / Published: 17 April 2022
(This article belongs to the Section Wearables)

Abstract

Electroencephalography (EEG) responds immediately and sensitively to the neurological changes that accompany sleep stages and is therefore considered a computational tool for understanding the association between neurological outcomes and sleep stages. Compared with multimodal physiological-signal-based polysomnography, EEG is expected to be an efficient approach for predicting sleep stages outside a highly equipped clinical setting. This study aims to quantify neurological EEG biomarkers and predict five-class sleep stages using sleep EEG data. We investigated the three-channel EEG sleep recordings of 154 individuals (mean age of 53.8 ± 15.4 years) from the Haaglanden Medisch Centrum (HMC, The Hague, The Netherlands) open-access public dataset of PhysioNet. The power of the fast-wave alpha, beta, and gamma rhythms decreases, and the power of the slow-wave delta and theta oscillations gradually increases, as sleep becomes deeper. Delta-wave power ratios (DAR, DTR, and DTABR) may be considered biomarkers because of their characteristic attenuation in NREM sleep and subsequent increase in REM sleep. The overall accuracies of the C5.0, Neural Network, and CHAID machine-learning models are 91%, 89%, and 84%, respectively, for multi-class classification of the sleep stages. The EEG-based sleep stage prediction approach is expected to be utilized in wearable sleep monitoring systems.

1. Introduction

Sleep is a biological activity that occurs spontaneously in humans and influences task performance, physical and mental health, and overall quality of life. Sleep accounts for about one-third of an individual's lifetime. Sleep deprivation is a major contributor to insomnia, anxiety, schizophrenia, and other mental illnesses. Moreover, drowsiness, an outcome of sleep deprivation, is responsible for around one-fifth of vehicle accidents and injuries. Sleep is a dynamic phenomenon comprising a variety of sleep phases: wake (W), non-rapid eye movement (NREM) sleep, and rapid eye movement (REM) sleep. NREM sleep is further classified into stages NREM-1 (N1), NREM-2 (N2), and NREM-3 (N3) [1]. A healthy sleeper goes through multiple NREM and REM cycles throughout the night. The N1 stage occurs when the individual feels sleepy and marks the shift from the awake state. In stage N2, the dynamics of vital signals, such as ocular movements, heart rate, body temperature, and brain activity, start to attenuate. The N3 stage is considered deep sleep or slow-wave sleep; no eye or muscular movement occurs, and muscles and tissues are healed. The last stage, referred to as the REM state, is characterized by rapid eye movements and fast breathing. During this period, the body becomes relaxed, and dreaming begins. About 75% of the sleeping period is spent in NREM sleep, while REM sleep accounts for about 25% [2].
Physiological signal monitoring can be employed for health monitoring, disease prognostics, and assessing functional responses in daily activities, such as sleeping, driving, walking, and working [3,4,5,6,7,8,9,10]. Tracking brainwaves is one of the essential methods for assessing cognitive load, and electroencephalography (EEG) is the physiological tool for measuring the electrical potential from the scalp that directly reflects activities originating in the brain [11,12,13,14]. Polysomnography (PSG) has traditionally been used in the clinical environment to examine sleep quality and sleep disorders. PSG is a multi-sensor recording technique that collects physiological signals from the sleeping individual and is considered the primary tool in the diagnosis of sleep disorders. These biomedical signals include EEG, electromyography (EMG), electrooculography (EOG), and electrocardiography (ECG) [15]. These biosignals are measured to understand physiological activity: EEG for brain activity, EOG to track eye movement, EMG to measure muscle activity, and ECG to monitor the electrical activity of the heart [15]. Each 30-second epoch of the PSG signal is analyzed and assigned a sleep stage by a sleep specialist. The American Academy of Sleep Medicine (AASM) classifies sleep-wake cycles into five phases: wake (W), REM sleep, and three types of NREM sleep: N1, N2, and N3. The N1 and N2 stages are considered light sleep, and N3 is considered deep sleep or slow-wave sleep [1].
The gold-standard PSG sleep scoring procedure is time-consuming and labor-intensive; it requires a human expert to score a whole night of sleep data manually by examining signal patterns. Additionally, a patient must attend a lab or clinic and spend the whole night undergoing PSG recording in a clinical environment, which is often an expensive and complex process. Aside from that, PSG recording is very inconvenient and uncomfortable for individuals, since highly sticky electrodes and cabling are continually attached to the body. These challenges have compelled physicians to depend only on subjective questionnaires to determine sleep quality for neurological therapy and a variety of sleep disorder diagnostic methods. Therefore, developing an automated sleep staging system that is simple to use and reliable would make a significant contribution to this field [16,17,18,19]. HealthSOS, a wearable health monitoring system consisting of an eye mask embedded with EEG and EOG electrodes, has been reported as an alternative system for sleep monitoring [12,18]. Big-ECG, a cyber-physical cardiac monitoring system consisting of a wearable ECG sensor, a big data platform, and health advisory services, has been studied for disease prediction in the resting state, sleep, and other daily activities [15]. Several studies have reported that EEG signals are more helpful during sleep scoring than any other kind of PSG signal [20]. EEG signals directly track the brain's activity and differentiate various sleep patterns [8,18,19].
Our study attempts to automate this sleep scoring process by utilizing data from three representative EEG channels at three cortical positions (F4, C4, and O2), representing the frontal lobe, the central region, and the occipital lobe, respectively. We hypothesized that sleep-stage-dependent responses of the central nervous system would be immediately sensed by the EEG. Signal processing, feature extraction, and a machine-learning approach are likely reliable methods to explore the physiological and neurological patterns of sleep stages.
We aim to investigate the EEG activity and identify the physiological biomarkers during sleeping. We developed the neurological state prediction model to classify the neurological responses in different phases of sleep. The key contributions of this paper can be summarized as follows:
  • EEG biomarkers, consisting of frequency spectral measures for sleep stages, have been identified using statistical analysis.
  • Machine-learning models have been developed to classify the neurological states in different sleep stages.
We organized the remainder of this article into four sections. The datasets and the methodology for EEG pre-processing, feature extraction, and statistical and machine-learning analysis are described in Section 2. The results are reported in Section 3, followed by the discussion in Section 4. Lastly, we state our conclusions in Section 5.

2. Materials and Methods

To identify the physiological biomarkers of sleep stages and develop a machine-learning-based prediction model to classify the sleep stages, we performed EEG data pre-processing, feature extraction, feature selection, statistical analysis of features, and machine-learning classification (Figure 1). Details about the EEG data processing, statistical analysis, and machine-learning classification methods are presented in the following subsections.

2.1. Dataset

We utilized the sleep recordings of Haaglanden Medisch Centrum (HMC, The Hague, The Netherlands), available as an open-access public dataset on PhysioNet [21,22]. The data were collected in 2018 and published on 1 July 2021. The dataset includes whole-night PSG sleep recordings of 154 people (88 male, 66 female) with a mean age of 53.8 ± 15.4 years. Patient recordings were chosen at random and represent a diverse group of people who were referred for PSG examinations in the context of various sleep disorders. All signals were captured at 256 Hz using Ag/AgCl electrodes on SOMNOscreen PSG, PSG+, and EEG 10–20 recorders (SOMNOmedics, Randersacker, Germany). Each recording consists of four-channel EEG (F4/M1, C4/M1, O2/M1, and C3/M2), two-channel EOG (E1/M2 and E2/M2), one-channel bipolar chin EMG, and one-channel ECG. The recordings also contain sleep scores (W, N1, N2, N3, and R) for each 30 s epoch, manually assigned by well-trained sleep technicians according to the AASM guidelines [1]. We used three EEG channels (F4, C4, and O2), named according to the international 10–20 EEG system, in this study.

2.2. Pre-Processing

The EEG signal was filtered to remove 60 Hz AC noise from the nearby electrical grid. Independent component analysis (ICA), implemented with the FastICA method [23], was used to separate and remove eye-blink (ocular) and muscle artifacts from the EEG signal, with the EOG and EMG recordings used as references. Low-frequency motion artifacts are produced by head and sensor movement close to the skin; a signal-to-noise ratio (SNR) was obtained for each signal by calculating the power ratio of the movement-affected EEG signal to the undisturbed measurement [24]. A band-pass filter was used to restrict the EEG waveform to the frequency range of 0.5–44 Hz. The pre-processing and feature extraction of EEG data were carried out using the AcqKnowledge version 5.0 program (Biopac Systems Inc., Goleta, CA, USA).
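For readers reproducing this pipeline outside AcqKnowledge, the steps above map onto standard open-source tooling. The following is a minimal sketch using MNE-Python; the file name, channel labels, and excluded-component list are illustrative assumptions, not the exact configuration used in this study.

```python
# Sketch of the pre-processing chain: notch filter, FastICA artifact removal,
# and 0.5-44 Hz band-pass. Assumes an EDF recording with hypothetical channel
# labels; not the AcqKnowledge implementation itself.
import mne

raw = mne.io.read_raw_edf("SN001.edf", preload=True)   # hypothetical file name
raw.pick(["EEG F4-M1", "EEG C4-M1", "EEG O2-M1"])      # three study channels

raw.notch_filter(freqs=60.0)                           # mains interference
raw.filter(l_freq=0.5, h_freq=44.0)                    # band-pass 0.5-44 Hz

# FastICA-based removal of ocular/muscle components; in practice the
# components to exclude are identified against the EOG and EMG recordings.
ica = mne.preprocessing.ICA(n_components=3, method="fastica", random_state=0)
ica.fit(raw)
ica.exclude = []   # indices of artifact components, chosen after inspection
ica.apply(raw)
```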

2.3. Feature Extraction

EEG can be characterized in terms of frequency and power within different frequency bands: the delta (δ) band ranges from 0.5 to 4.0 Hz, the theta (θ) band from 4.0 to 8.0 Hz, the alpha (α) band from 8.0 to 13.0 Hz, the beta (β) band from 13.0 to 30.0 Hz, and the gamma (γ) band from 30.0 to 44.0 Hz [25,26]. EEG features were extracted from the EEG signals using the fast Fourier transform (FFT) and other methods to study the power within the EEG data. The power spectrum was computed as the power spectral density (PSD) for each time epoch using the Welch periodogram technique [27]. For each epoch, the mean power, median frequency, mean frequency, spectral edge, and peak frequency features were extracted from this PSD. The epoch width was specified as 30 s. The extracted EEG features are summarized in Table 1; in total, the dataset contains 89 EEG features.

2.3.1. EEG Frequency-Domain Features

EEG frequency analysis was performed using the FFT and the Welch periodogram [27] on artifact-free EEG signals with a 10% Hamming window, and absolute power was extracted in the following spectral frequency bands: delta (0.5–4.0 Hz), theta (4.0–8.0 Hz), alpha (8.0–13.0 Hz), beta (13.0–30.0 Hz), and gamma (30.0–44.0 Hz). The average power of the power spectrum within the epoch was defined as the mean power. The median frequency was defined as the frequency at which half of the total power in the epoch is attained. The mean frequency was defined as the frequency at which the epoch's average power is obtained. The spectral edge is the frequency below which 90% of the total power inside the epoch is attained. The frequency at which the maximum power occurs throughout the epoch was identified as the peak frequency. To normalize the amplitudes of distinct EEG bands, relative power (RP) was computed as the ratio of each band's power to the total power of all bands. All band power features were calculated for every 30 s epoch. The spectral power density of an EEG time-series signal x(t) at frequency j is defined in Equation (1):
$$E_j = \lim_{t \to \infty} \frac{1}{t} \left| \hat{x}_t(j) \right|^2 \qquad (1)$$

where $\hat{x}_t(j)$ is the Fourier transform of x(t) at frequency j (in Hz), estimated using the Welch periodogram. The EEG band relative power is defined in Equation (2):
$$e_j = \frac{E(j_1, j_2)}{\sum_{j=0.5}^{44} E_j} \qquad (2)$$

where $E_j$ is the absolute spectral power density at frequency j (with j = 0.5, ..., 44), $j_1$ and $j_2$ are the low and high band-edge frequencies in Hz, respectively, and $(j_1, j_2)$ is defined as δ (0.5, 4), θ (4, 8), α (8, 13), β (13, 30), and γ (30, 44) [28].
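As an illustration, the band-power computation in Equations (1) and (2) can be sketched with SciPy's Welch estimator; the segment length and window choice below are assumptions, not the AcqKnowledge settings.

```python
# Minimal sketch of relative band power per 30 s epoch (Equations (1)-(2)),
# assuming a 256 Hz signal as in the HMC dataset.
import numpy as np
from scipy.signal import welch
from scipy.integrate import simpson

FS = 256  # sampling rate of the HMC recordings, Hz
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0), "alpha": (8.0, 13.0),
         "beta": (13.0, 30.0), "gamma": (30.0, 44.0)}

def relative_band_powers(epoch):
    """Relative power e_j of each band for one 30 s epoch (1-D array)."""
    freqs, psd = welch(epoch, fs=FS, window="hamming", nperseg=4 * FS)
    in_band = (freqs >= 0.5) & (freqs <= 44.0)
    total = simpson(psd[in_band], x=freqs[in_band])  # denominator of Eq. (2)
    rel = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs <= hi)
        rel[name] = simpson(psd[mask], x=freqs[mask]) / total
    return rel
```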

2.3.2. DAR, DTR, and DTABR

The delta-alpha ratio (DAR), defined as the ratio of delta to alpha band power, was calculated according to Equation (3). The delta-theta ratio (DTR), defined as the ratio of delta band power to theta band power, was computed according to Equation (4). Equation (5) defines the (Delta + Theta)/(Alpha + Beta) ratio (DTABR), the ratio of the summed slow-wave (delta and theta) power to the summed fast-wave (alpha and beta) power [29].
$$\mathrm{DAR} = \frac{e_{j=\delta}}{e_{j=\alpha}} \qquad (3)$$

$$\mathrm{DTR} = \frac{e_{j=\delta}}{e_{j=\theta}} \qquad (4)$$

$$\mathrm{DTABR} = \frac{e_{j=\delta} + e_{j=\theta}}{e_{j=\alpha} + e_{j=\beta}} \qquad (5)$$

where j is the spectral frequency range: delta (δ) spans 0.5–4.0 Hz, theta (θ) spans 4.0–8.0 Hz, and alpha (α) spans 8.0–13.0 Hz; $e_{j=\delta}$ and $e_{j=\alpha}$ are the EEG band relative delta and alpha powers, respectively, in the different sleep stages (W, N1, N2, N3, and R).
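Equations (3)–(5) then translate directly into code; this is a continuation of the sketch above, where rel is the dictionary of relative band powers.

```python
# Delta power ratios (Equations (3)-(5)) from the relative band powers.
def delta_ratios(rel):
    return {
        "DAR": rel["delta"] / rel["alpha"],
        "DTR": rel["delta"] / rel["theta"],
        "DTABR": (rel["delta"] + rel["theta"]) / (rel["alpha"] + rel["beta"]),
    }
```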

2.4. Feature Selection

Feature selection greatly reduces the time and memory required for data processing, enabling machine-learning algorithms to focus on only the most important features. F-statistics [30] were used to determine the relevance of each feature on a scale ranging from zero to one. We used the p-value based on the F-statistic of a one-way ANOVA F-test for each continuous predictor to identify the most contributing features. In the first step, we eliminated any features that had constant or missing values. The significance of each feature was measured by its effectiveness in independently predicting the target class. In this study, features with a feature importance (1 − p) of more than 0.95 were selected, where p is the F-test p-value.
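A comparable selection step can be expressed in scikit-learn (the study used SPSS Modeler; this is an assumed equivalent, with X as the epochs-by-features matrix and y the stage labels).

```python
# Sketch of ANOVA-F feature selection: drop constant/missing columns, then
# keep features with importance 1 - p > 0.95, i.e., p < 0.05.
import numpy as np
from sklearn.feature_selection import f_classif

def select_features(X, y, alpha=0.05):
    valid = ~np.isnan(X).any(axis=0) & (np.std(X, axis=0) > 0)
    X = X[:, valid]                     # remove missing/constant features
    _, pvals = f_classif(X, y)          # one-way ANOVA F-test per feature
    keep = pvals < alpha                # feature importance (1 - p) > 0.95
    return X[:, keep], np.flatnonzero(valid)[keep]
```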

2.5. Classification Algorithms

Machine-learning algorithms were used to classify neurological features during wakefulness and stages N1, N2, N3, and R. Eighty percent of the EEG feature data (restricted to the selected features) was used for training, while 20% was used for testing the classification algorithms. The Neural Network, CHAID, and C5.0 models were used to distinguish the neurological features of the sleep stages. As the N1 stage dataset is smaller than the other sleep stage datasets, we implemented a "class weighting" technique [31], heavily weighting the N1 stage and under-weighting the majority classes to deal with the imbalanced dataset.
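In scikit-learn terms, the class-weighting step might look like the following sketch; y_train is an assumed array of stage labels for the training epochs, and inverse-frequency ("balanced") weights stand in for the weighting scheme used in SPSS Modeler.

```python
# Sketch of class weighting for the under-represented N1 stage [31]:
# "balanced" weights give the rarest class the largest weight.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

classes = np.unique(y_train)   # e.g., ["N1", "N2", "N3", "R", "W"]
weights = compute_class_weight("balanced", classes=classes, y=y_train)
class_weight = dict(zip(classes, weights))  # pass via a classifier's class_weight
```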

2.5.1. The Neural Network Model

A neural network is a data analysis technique that makes predictions by passing inputs through a multi-layered network of weighted connections [32]. In this research, we employed a multilayer perceptron (MLP) neural network. This model is capable of approximating a broad variety of analytical models with minimal requirements on model structure and assumptions. It comprises an input layer, one or more hidden layers, and an output layer.
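A minimal scikit-learn analogue of such an MLP is sketched below; the hidden-layer sizes and iteration budget are illustrative assumptions, not the SPSS Modeler configuration, and X_train/y_train/X_test/y_test are the assumed 80/20 splits.

```python
# Sketch of an MLP sleep-stage classifier with two hidden layers.
from sklearn.neural_network import MLPClassifier

mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)                         # fit on the training split
print("test accuracy:", mlp.score(X_test, y_test))
```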

2.5.2. Chi-Squared Automatic Interaction Detector (CHAID) Model

The chi-squared automatic interaction detector (CHAID) method builds a decision tree by recursively splitting the data into two or more child nodes, starting with the entire dataset [33]. The optimal partition at each node is obtained by merging pairs of predictor categories until no statistically significant difference with respect to the target remains. As a decision tree model, the output of the CHAID model is visually appealing and simple to read in a clinical decision support system. This technique is commonly used in applications involving biological data analysis.

2.5.3. C5.0 Model

The C5.0 model is a supervised data analysis method that constructs decision trees or rule sets [34]. The model partitions the data according to the field with the greatest gain ratio and constructs a decision tree, which is then pruned to reduce the tree's estimated error rate. This model requires little training time and is resilient to missing data and large numbers of input variables.

2.6. Data Analysis

This study employed descriptive statistics to compare the participants' demographic data. The characteristics of the EEG spectral features are shown in bar charts with error bars, where each bar represents the mean value of the data with its 95% confidence interval (CI). Statistical analysis consisted of descriptive statistics and hypothesis tests. The independent-samples t-test was used to compare EEG features among sleep stages, and a p-value of less than 0.05 was marked as statistically significant. Statistical analyses were accomplished using SPSS 26 software (IBM, Armonk, NY, USA). For the classification of sleep phases, we utilized state-of-the-art machine-learning methods. The EEG feature dataset was partitioned into a training and a testing dataset. We trained the machine-learning algorithms on the training dataset to build the classification models, which were then used for prediction on the EEG testing dataset. To limit overfitting, we used non-exhaustive k-fold (k = 10) cross-validation on the training dataset. For the machine-learning evaluations, we utilized IBM SPSS Modeler 18 software (IBM, Armonk, NY, USA).
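The split-plus-cross-validation protocol can be sketched as follows, with scikit-learn as an assumed stand-in for SPSS Modeler; a CART decision tree substitutes for C5.0/CHAID, which scikit-learn does not provide.

```python
# Sketch of the evaluation protocol: a stratified 80/20 train/test split with
# non-exhaustive 10-fold cross-validation on the training portion.
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

tree = DecisionTreeClassifier(class_weight="balanced", random_state=0)
cv_scores = cross_val_score(tree, X_train, y_train, cv=10)  # k = 10 folds
print(f"10-fold CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")
```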

3. Results

3.1. Statistical Analysis

3.1.1. EEG Biomarkers for Sleep Stages

The EEG waveform varied during sleep with the change in sleep stages. Figure 2 shows bar charts with 95% confidence interval (CI) error bars for the EEG frequency-band features during sleep stages W, N1, N2, N3, and R. The global data indicate the average measures of the features of the frontal, central, and occipital lobes. The horizontal bars (brown) are the outcomes of the hypothesis tests and indicate significant differences (p < 0.05) in EEG features among the sleep stages.
Alpha was highest in the wake stage and lowest in the N3 (deep sleep) stage at all cortical positions, gradually weakening as sleep becomes deeper and regaining strength in the REM sleep stage. Beta was likewise dominant in the wake stage and lowest in the N3 stage at all cortical positions; it gradually attenuates as sleep progresses from light to deep sleep and increases again in REM sleep.
Theta was highest in the REM stage and lowest in the N3 stage at all cortical positions; it increases in light sleep and regains strength in REM sleep. Delta was highest in the deep sleep (N3) stage and lowest in the wake stage at the frontal and occipital cortical positions. An exception was observed only in the central region, where delta was highest in the wake stage, dropped sharply in the N1 and N2 stages, and then gradually increased again as sleep became deeper, rising further in REM sleep. Gamma was highest in the wake stage and lowest in the N3 stage at all cortical positions, gradually weakening as sleep becomes deeper and becoming dominant again in the REM sleep stage. Statistical results (mean and standard deviation) of the EEG spectral features (δ, θ, α, β, and γ) during the sleep stages are reported in Table 2.

3.1.2. Association of DAR, DTR, and DTABR with Sleep Stages

Delta power ratios, such as DAR and DTR, were explored during sleep stages W, N1, N2, N3, and R (Table 3). Figure 3 shows bar charts with 95% confidence interval error bars for DAR, DTR, and DTABR across the sleep stages. The global delta ratio parameters (DAR, DTR, and DTABR) were dominant in the wake and N1 stages and decreased sharply in the N2 and N3 stages. In the REM sleep stage, DAR, DTR, and DTABR increased again compared with the deep sleep (N3) stage.

3.2. Machine Learning Analysis

Machine-learning algorithms were utilized to predict the physiological states of the various sleep stages. The machine-learning analysis comprised three steps: feature selection, model training, and model testing (or validation). During the feature selection process, F-statistics were used to assess the relevance of the sleep EEG features, and features with a feature importance (1 − p) larger than 0.95 were retained for the classification investigation. The confusion matrix, also known as the error matrix, demonstrates the prediction outcomes for all target classes. Other performance parameters are computed from the confusion matrix, including accuracy, sensitivity, and precision. Accuracy was defined as the ratio of correct predictions to total observations and was regarded as the most intuitive performance metric for identifying the optimal model. The following standard formulas, Equations (6)–(10), are used to compute the performance evaluation metrics:
$$\mathrm{Sensitivity} = \frac{TP}{TP + FN} \qquad (6)$$

$$\mathrm{Specificity} = \frac{TN}{TN + FP} \qquad (7)$$

$$\mathrm{Precision} = \frac{TP}{TP + FP} \qquad (8)$$

$$\mathrm{Negative\ predictive\ value\ (NPV)} = \frac{TN}{TN + FN} \qquad (9)$$

$$\mathrm{Accuracy} = \frac{TN + TP}{TN + TP + FN + FP} \qquad (10)$$

where TP stands for true positives, TN for true negatives, FP for false positives, and FN for false negatives.
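For concreteness, these per-class metrics can be derived from a multi-class confusion matrix in a one-vs-rest fashion, as in this sketch following Equations (6)–(10).

```python
# Sketch: per-class sensitivity, specificity, precision, NPV, and accuracy
# from a multi-class confusion matrix (one-vs-rest counts of TP/TN/FP/FN).
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_metrics(y_true, y_pred, labels=("N1", "N2", "N3", "R", "W")):
    cm = confusion_matrix(y_true, y_pred, labels=list(labels))
    results = {}
    for i, label in enumerate(labels):
        tp = cm[i, i]
        fn = cm[i, :].sum() - tp          # actual class i, predicted otherwise
        fp = cm[:, i].sum() - tp          # predicted class i, actually otherwise
        tn = cm.sum() - tp - fn - fp
        results[label] = {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "precision": tp / (tp + fp),
            "NPV": tn / (tn + fn),
            "accuracy": (tp + tn) / cm.sum(),
        }
    return results
```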

Multi-Class Classification of Sleep Stages

We utilized the machine-learning algorithms for multi-class classification of the sleep stages W, N1, N2, N3, and R. The confusion matrices of the three machine-learning models (C5.0, Neural Network, and CHAID) are shown in Table 4, Table 5 and Table 6 as the outcomes of the prediction performance for the sleep stages. The performances of the three models in classifying the sleep stages using the training and testing datasets of EEG features are shown in Figure 4.
The C5.0 model showed 94% accuracy on the training dataset and 87% accuracy on the testing dataset for multi-class classification of sleep stages (Table 7). The N3 and W stages were classified most accurately, with training accuracies of 95% and 96% and testing accuracies of 91% and 92%, respectively. The wake stage was classified with the highest sensitivity for training (92%) and testing (81%), while the sensitivity of the C5.0 model was lowest for the N1 stage. Moreover, the wake stage was classified with the highest precision for training (86%) and testing (77%). Furthermore, the negative predictive value of the C5.0 model was highest for the wake stage for training (98%) and testing (96%).
The Neural Network model showed 89% accuracy on both the training and testing datasets for multi-class classification of sleep stages (Table 8). The N3, REM, and W stages were classified most accurately, with training and testing accuracies of 91%, 91%, and 92%, respectively. The wake stage was classified with the highest sensitivity for training (86%) and testing (86%), while the sensitivity of the Neural Network model was lowest for the N1 stage. Moreover, the wake stage was classified with the highest precision for training (75%) and testing (76%). Furthermore, the negative predictive value of the Neural Network model was highest for the wake stage for training (97%) and testing (97%).
The CHAID model showed 84% accuracy on both the training and testing datasets for multi-class classification of sleep stages (Table 9). The W stage was classified most accurately, with training and testing accuracies of 90%. The wake stage was classified with the highest sensitivity for training (72%) and testing (71%), while the sensitivity of the CHAID model was lowest for the N1 stage. Moreover, the wake stage was classified with the highest precision for training (73%) and testing (73%). Furthermore, the negative predictive value of the CHAID model was highest for the wake stage for training (94%) and testing (94%).

4. Discussion

In our study, we characterized the neurological changes across sleep stages and classified the sleep stages using three EEG channels located over the frontal (F4), central (C4), and occipital (O2) regions in a diverse group of adults. The extent of neurological change depends on the individual's sleep pattern, the dynamics of sleep stage transitions, and the individual's overall lifestyle. We evaluated the neurological biomarkers through EEG in every sleep stage. Patient recordings were randomly chosen and reflect a broad sample of individuals referred for PSG exams for a variety of sleep disorders. Sleep is classified as REM or NREM sleep, with stages N1, N2, and N3 corresponding to NREM sleep. Different sleep phases must be characterized and classified to identify sleep-related diseases. For instance, detecting REM sleep is essential for diagnosing REM sleep behavior disorder, and classification of wake and sleep states is required for sleep monitoring. This study addresses these demands by classifying the W, N1, N2, N3, and REM stages.
Alpha rhythm, one of the basic features of human EEG, is prominent in the relaxed eye-closed awake state, N1, and REM sleep [35]. Alpha attenuates during high arousal states. In our study, alpha oscillation is higher in the resting awake state and decreases in the light sleep stage. The alpha activity also increases during REM sleep due to the short bursts of alpha rhythm [36]. A similar nature was observed for beta activity in sleep stages. Theta rhythm increases in light sleep (N1 and N2) stages relative to the wake stage and attenuates in the slow-wave sleep (N3) stage. A rise in delta activity was observed in the slow-wave deep-sleep (N3) stage compared with light sleep stages. Delta wave is considered an indicator of slow-wave deep sleep [37].
The classification rates for the N1 and N2 sleep stages were observed to be lower; distinguishing them is one of the most challenging tasks. The N2 sleep stage is usually the transition between the light sleep and deep sleep stages [38]. As both the N1 and N2 stages are light sleep states, automated sleep staging algorithms often confuse them, mislabeling N2 epochs as N1 and vice versa [39]. Moreover, gamma rhythms are nearly identical in the light sleep stages (N1 and N2) and REM sleep, which may also lead to misclassification among the N1, N2, and REM sleep stages. Furthermore, human sleep is a combination of distinct sleep phases with an unequal distribution of sleep epochs. Table 10 presents a comparative study of methodologies and results between the current work and previous machine-learning-based sleep studies. As Table 10 shows, our proposed approach achieves a notable improvement in prediction performance compared with existing state-of-the-art work on five-class sleep state classification, and our classification performance is considerably higher than that of the other multi-class classification studies.
We analyzed only three-channel EEG data to understand the neurological changes in EEG due to sleep stages, focusing on single-channel data from each frontal, central, and occipital lobe. Although a standard sleep study consists of multimodal biosignals, we did not study all EEG channels to simplify the automatic sleep stage prediction suitable for a wearable sleep monitoring system. In the future, we plan to extend this study with multimodal signals to enhance the accuracy of the prediction models.

5. Conclusions

Prediction of sleep stages is considered an assistive technology for machine-learning-enabled wearable sleep monitoring systems. The neurological biomarkers of sleep stages were quantified through the EEG signal of polysomnography. In NREM sleep, attenuation of the alpha, beta, and gamma rhythms and a rise of the theta and delta rhythms were observed relative to the awake state, with a subsequent increase in the alpha and beta rhythms in REM sleep. Delta-wave power ratios (DAR, DTR, and DTABR) may be considered biomarkers for their characteristic decrease in NREM sleep and subsequent increase in REM sleep. The overall accuracies of the C5.0, Neural Network, and CHAID models are 91%, 89%, and 84%, respectively, for multi-class classification of the sleep stages. This EEG-based sleep stage prediction technique is a promising candidate for further neuroscience research in wearable sleep monitoring systems.

Author Contributions

Conceptualization, I.H.; methodology, I.H. and M.A.H.; software, I.H., R.J., M.A.B., M.U., Y.K. and A.R.M.K.; validation, I.H. and M.A.H.; formal analysis, I.H., R.J., M.A.B., M.U. and A.R.M.K.; investigation, I.H.; resources, J.-S.K.; data curation, I.H., R.J., M.A.B., M.U. and A.R.M.K.; writing—original draft preparation, I.H.; writing—review and editing, I.H., M.A.H., R.J., M.A.B., M.U., A.R.M.K., Y.K. and J.-S.K.; visualization, I.H.; supervision, J.-S.K.; project administration, I.H., M.A.H. and J.-S.K.; funding acquisition, J.-S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No.NRF-2019R1A2C1005360) and the Ministry of Education (NRF-2020S1A3A2A02103899).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Acknowledgments

This study received funding support from the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) and the Ministry of Education, Korea.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Berry, R.B.; Brooks, R.; Gamaldo, C.E.; Harding, S.M.; Marcus, C.; Vaughn, B.V. The AASM Manual for the Scoring of Sleep and Associated Events. In Rules, Terminology and Technical Specifications; American Academy of Sleep Medicine: Darien, IL, USA, 2012.
  2. Carskadon, M.A.; Dement, W.C. Normal Human Sleep: An Overview. Princ. Pract. Sleep Med. 2005, 4, 13–23.
  3. Park, S.J.; Hussain, I.; Hong, S.; Kim, D.; Park, H.; Benjamin, H.C.M. Real-Time Gait Monitoring System for Consumer Stroke Prediction Service. In Proceedings of the IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 4–6 January 2020.
  4. Park, H.; Hong, S.; Hussain, I.; Kim, D.; Seo, Y.; Park, S.J. Gait Monitoring System for Stroke Prediction of Aging Adults. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, Washington, DC, USA, 24–28 July 2019.
  5. Hong, S.; Kim, D.; Park, H.; Seo, Y.; Iqram, H.; Park, S. Gait Feature Vectors for Post-Stroke Prediction Using Wearable Sensor. Sci. Emot. Sensib. 2019, 22, 55–64.
  6. Park, S.J.; Hong, S.; Kim, D.; Hussain, I.; Seo, Y. Intelligent In-Car Health Monitoring System for Elderly Drivers in Connected Car. In Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018), Florence, Italy, 26–28 August 2018.
  7. Park, S.J.; Hong, S.; Kim, D.; Seo, Y.; Hussain, I. Knowledge Based Health Monitoring During Driving. In Proceedings of the International Conference on Human-Computer Interaction (HCI), Las Vegas, NV, USA, 15–18 July 2018.
  8. Park, S.J.; Hong, S.; Kim, D.; Seo, Y.; Hussain, I.; Won, S.T.; Jung, J. Real-Time Medical Examination System; Korean Intellectual Property Office: Daejeon, Korea, 2019.
  9. Park, S.J.; Hong, S.; Kim, D.; Seo, Y.; Hussain, I.; Hur, J.H.; Jin, W. Development of a Real-Time Stroke Detection System for Elderly Drivers Using Quad-Chamber Air Cushion and IoT Devices. In Proceedings of the WCX World Congress Experience, Detroit, MI, USA, 10–12 April 2018.
  10. Le, N.Q.K.; Hung, T.N.K.; Do, D.T.; Lam, L.H.T.; Dang, L.H.; Huynh, T.T. Radiomics-Based Machine Learning Model for Efficiently Classifying Transcriptome Subtypes in Glioblastoma Patients from MRI. Comput. Biol. Med. 2021, 132, 104320.
  11. Kim, D.; Hong, S.; Hussain, I.; Seo, Y.; Park, S.J. Analysis of Bio-Signal Data of Stroke Patients and Normal Elderly People for Real-Time Monitoring. In Proceedings of the 20th Congress of the International Ergonomics Association, Florence, Italy, 26–30 August 2019.
  12. Hussain, I.; Park, S.J. HealthSOS: Real-Time Health Monitoring System for Stroke Prognostics. IEEE Access 2020, 8, 213574–213586.
  13. Hussain, I.; Young, S.; Kim, C.; Benjamin, H.; Park, S. Quantifying Physiological Biomarkers of a Microwave Brain Stimulation Device. Sensors 2021, 21, 1896.
  14. Park, S.J.; Hong, S.; Kim, D.; Hussain, I.; Seo, Y.; Kim, M.K. Physiological Evaluation of a Non-Invasive Wearable Vagus Nerve Stimulation (VNS) Device. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, Washington, DC, USA, 24–28 July 2019.
  15. Hussain, I.; Park, S.J. Big-ECG: Cardiographic Predictive Cyber-Physical System for Stroke Management. IEEE Access 2021, 9, 123146–123164.
  16. Park, S.J.; Hong, S.; Hussain, I.; Kim, D.; Seo, Y. Wearable Sleep Monitoring System Using Air-Mattress and Microwave-Radar Sensor: A Study; Korean Society of Emotion and Sensibility (KOSES): Daejeon, Korea, 2019.
  17. Park, S.; Hong, S.; Kim, D.; Yu, J.; Hussain, I.; Park, H.; Benjamin, H. Development of Intelligent Stroke Monitoring System for the Elderly During Sleeping. Sleep Med. 2019, 64, S294.
  18. Park, S.J.; Hussain, I.; Kyoung, B. Wearable Sleep Monitoring System and Method; Korean Intellectual Property Office: Daejeon, Korea, 2021.
  19. Park, S.J.; Hussain, I. Monitoring System for Stroke During Sleep; Korean Intellectual Property Office: Daejeon, Korea, 2020.
  20. Boostani, R.; Karimzadeh, F.; Nami, M. A Comparative Review on Sleep Stage Classification Methods in Patients and Healthy Individuals. Comput. Methods Programs Biomed. 2016, 140, 77–91.
  21. Alvarez-Estevez, D.; Rijsman, R.M. Haaglanden Medisch Centrum Sleep Staging Database (Version 1.0.1); PhysioNet, 2021. Available online: https://physionet.org/ (accessed on 22 March 2022).
  22. Alvarez-Estevez, D.; Rijsman, R.M. Inter-Database Validation of a Deep Learning Approach for Automatic Sleep Scoring. PLoS ONE 2021, 16, e0256111.
  23. Hyvarinen, A. Fast and Robust Fixed-Point Algorithms for Independent Component Analysis. IEEE Trans. Neural Netw. 1999, 10, 626–634.
  24. Oliveira, A.S.; Schlink, B.R.; Hairston, W.D.; König, P.; Ferris, D.P. Induction and Separation of Motion Artifacts in EEG Data Using a Mobile Phantom Head Device. J. Neural Eng. 2016, 13, 036014.
  25. Sanei, S.; Chambers, J.A. EEG Signal Processing and Machine Learning; John Wiley & Sons: Hoboken, NJ, USA, 2021.
  26. Hussain, I. Prediction of Stroke-Impaired Neurological, Cardiac, and Neuromuscular Changes Using Physiological Signals and Machine-Learning Approach. Ph.D. Thesis, University of Science and Technology, Daejeon, Korea, 2022.
  27. Welch, P. The Use of Fast Fourier Transform for the Estimation of Power Spectra: A Method Based on Time Averaging over Short, Modified Periodograms. IEEE Trans. Audio Electroacoust. 1967, 15, 70–73.
  28. Hussain, I.; Young, S.; Park, S.-J. Driving-Induced Neurological Biomarkers in an Advanced Driver-Assistance System. Sensors 2021, 21, 6985.
  29. Hussain, I.; Park, S.-J. Quantitative Evaluation of Task-Induced Neurological Outcome after Stroke. Brain Sci. 2021, 11, 900.
  30. Snedecor, G.W.; Cochran, W.G. Statistical Methods; John Wiley & Sons: Hoboken, NJ, USA, 1991.
  31. King, G.; Zeng, L. Logistic Regression in Rare Events Data. Political Anal. 2001, 9, 137–163.
  32. Bishop, C.M. Neural Networks for Pattern Recognition; Oxford University Press: Oxford, UK, 1995.
  33. Kass, G.V. An Exploratory Technique for Investigating Large Quantities of Categorical Data. J. R. Stat. Soc. 1980, 29, 119–127.
  34. Quinlan, J.R. Data Mining Tools See5 and C5.0. Available online: www.rulequest.com/see5-info.html (accessed on 12 March 2022).
  35. Johnson, L.C. A Psychophysiology for All States. Psychophysiology 1970, 6, 501–516.
  36. Cantero, J.L.; Atienza, M.; Salas, R.M. Human Alpha Oscillations in Wakefulness, Drowsiness Period, and REM Sleep: Different Electroencephalographic Phenomena within the Alpha Band. Neurophysiol. Clin. 2002, 32, 54–71.
  37. Simon, C.W.; Emmons, W.H. EEG, Consciousness, and Sleep. Science 1956, 124, 1066–1069.
  38. Anderer, P.; Gruber, G.; Parapatics, S.; Woertz, M.; Miazhynskaia, T.; Klosch, G.; Saletu, B.; Zeitlhofer, J.; Barbanoj, M.J.; Danker-Hopfe, H.; et al. An E-Health Solution for Automatic Sleep Classification According to Rechtschaffen and Kales: Validation Study of the Somnolyzer 24 × 7 Utilizing the Siesta Database. Neuropsychobiology 2005, 51, 115–133.
  39. Himanen, S.-L.; Hasan, J. Limitations of Rechtschaffen and Kales. Sleep Med. Rev. 2000, 4, 149–167.
  40. Giannakeas, N.; Tzimourta, K.D.; Tsilimbaris, A.; Tzioukalia, K.; Tzallas, A.T.; Tsipouras, M.G.; Astrakas, L.G. EEG-Based Automatic Sleep Stage Classification. Biomed. J. Sci. Tech. Res. 2018, 1, 6.
  41. Ghasemzadeh, P.; Kalbkhani, H.; Shayesteh, M.G. Sleep Stages Classification from EEG Signal Based on Stockwell Transform. IET Signal Process. 2019, 13, 242–252.
  42. Tripathy, R.K.; Ghosh, S.K.; Gajbhiye, P.; Acharya, U.R. Development of Automated Sleep Stage Classification System Using Multivariate Projection-Based Fixed Boundary Empirical Wavelet Transform and Entropy Features Extracted from Multichannel EEG Signals. Entropy 2020, 22, 1141.
  43. Widasari, E.R.; Tanno, K.; Tamura, H. Automatic Sleep Disorders Classification Using Ensemble of Bagged Tree Based on Sleep Quality Features. Electronics 2020, 9, 512.
  44. Wang, I.N.; Lee, C.H.; Kim, H.J.; Kim, H.; Kim, D.J. An Ensemble Deep Learning Approach for Sleep Stage Classification via Single-Channel EEG and EOG. In Proceedings of the 2020 International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Korea, 19–21 October 2020.
  45. Sharma, M.; Tiwari, J.; Acharya, U. Automatic Sleep-Stage Scoring in Healthy and Sleep Disorder Patients Using Optimal Wavelet Filter Bank Technique with EEG Signals. Int. J. Environ. Res. Public Health 2021, 18, 3087.
Figure 1. Methodology of EEG-based sleep stage classification using a machine-learning approach.
Figure 2. Results from EEG spectral power features during sleep stages W, N1, N2, N3, and R. The bar chart describes the relative mean power of the EEG waves, and the vertical error bar (black) is the 95% CI. (a) Alpha relative power for sleep stages in the frontal lobe, central lobe, occipital lobe, and global. (b) Beta relative power for sleep stages in the frontal lobe, central lobe, occipital lobe, and global. (c) Theta relative power for sleep stages in the frontal lobe, central lobe, occipital lobe, and global. (d) Delta relative power for sleep stages in the frontal lobe, central lobe, occipital lobe, and global. (e) Gamma relative power for sleep stages in the frontal lobe, central lobe, occipital lobe, and global. Global indicates the average measures of features of the frontal, central, and occipital lobes. The horizontal bars (brown) are the outcomes of the hypothesis tests and indicate significant differences (p < 0.05) in EEG features among the sleep stages.
Figure 3. Results from DAR, DTR, and DTABR during sleep stages W, N1, N2, N3, and R. The bar chart describes the mean values, and the vertical error bar (black) is the 95% CI. Global indicates the average measures of features of the frontal, central, and occipital lobes. The horizontal bars (brown) are the outcomes of the hypothesis tests and indicate significant differences (p < 0.05) in EEG features among the sleep stages.
Figure 4. Performance of the three machine-learning models (C5.0, Neural Network, and CHAID) to classify the sleep stages W, N1, N2, N3, and R using training and testing datasets of EEG features.
Table 1. Features extracted from the EEG signal. The Global channel is averaged over the F4, C4, and O2 electrodes.

EEG Channel | EEG Spectral Waves | EEG Feature | Number of Features
F4, C4, and O2 | δ, θ, α, β, and γ | Mean Power | 15
F4, C4, and O2 | δ, θ, α, β, and γ | Median Frequency | 15
F4, C4, and O2 | δ, θ, α, β, and γ | Mean Frequency | 15
F4, C4, and O2 | δ, θ, α, β, and γ | Spectral Edge | 15
F4, C4, and O2 | δ, θ, α, β, and γ | Peak Frequency | 15
Global | δ, θ, α, β, and γ | Mean Power | 5
F4, C4, and O2 | DAR (δ/α) and DTR (δ/θ) | Mean Power | 6
F4, C4, and O2 | – | Total Mean Power | 3
Table 2. Statistical results (mean and standard deviation) of EEG spectral features (δ, θ, α, β, and γ) in the frontal, central, and occipital lobes during sleep stages W, N1, N2, N3, and R. Global indicates the average measures of features of the frontal, central, and occipital lobes.

Lobe | EEG Feature | N1 Mean | N1 SD | N2 Mean | N2 SD | N3 Mean | N3 SD | R Mean | R SD | W Mean | W SD
Frontal | Alpha | 0.102 | 0.056 | 0.082 | 0.042 | 0.048 | 0.028 | 0.089 | 0.042 | 0.112 | 0.079
Frontal | Beta | 0.113 | 0.070 | 0.070 | 0.045 | 0.033 | 0.028 | 0.088 | 0.051 | 0.140 | 0.092
Frontal | Theta | 0.137 | 0.064 | 0.130 | 0.051 | 0.093 | 0.037 | 0.147 | 0.058 | 0.125 | 0.078
Frontal | Delta | 0.613 | 0.186 | 0.694 | 0.144 | 0.813 | 0.108 | 0.648 | 0.148 | 0.570 | 0.234
Frontal | Gamma | 0.036 | 0.061 | 0.024 | 0.070 | 0.013 | 0.061 | 0.028 | 0.060 | 0.053 | 0.071
Central | Alpha | 0.115 | 0.062 | 0.092 | 0.045 | 0.053 | 0.032 | 0.102 | 0.044 | 0.137 | 0.087
Central | Beta | 0.126 | 0.075 | 0.083 | 0.048 | 0.040 | 0.033 | 0.100 | 0.050 | 0.169 | 0.097
Central | Theta | 0.151 | 0.069 | 0.147 | 0.053 | 0.104 | 0.043 | 0.169 | 0.060 | 0.141 | 0.084
Central | Delta | 3.922 | 27.494 | 1.572 | 19.725 | 1.954 | 10.219 | 1.982 | 25.349 | 4.922 | 32.353
Central | Gamma | 0.048 | 0.092 | 0.037 | 0.101 | 0.021 | 0.085 | 0.043 | 0.101 | 0.067 | 0.092
Occipital | Alpha | 0.112 | 0.064 | 0.096 | 0.046 | 0.057 | 0.032 | 0.108 | 0.048 | 0.142 | 0.097
Occipital | Beta | 0.117 | 0.074 | 0.086 | 0.050 | 0.043 | 0.033 | 0.102 | 0.048 | 0.161 | 0.101
Occipital | Theta | 0.144 | 0.071 | 0.153 | 0.064 | 0.116 | 0.052 | 0.156 | 0.060 | 0.137 | 0.086
Occipital | Delta | 0.580 | 0.207 | 0.620 | 0.170 | 0.759 | 0.136 | 0.590 | 0.156 | 0.499 | 0.261
Occipital | Gamma | 0.047 | 0.091 | 0.046 | 0.112 | 0.025 | 0.084 | 0.045 | 0.100 | 0.061 | 0.089
Global | Alpha | 0.109 | 0.058 | 0.090 | 0.041 | 0.052 | 0.028 | 0.100 | 0.041 | 0.130 | 0.084
Global | Beta | 0.119 | 0.070 | 0.080 | 0.045 | 0.039 | 0.029 | 0.097 | 0.046 | 0.156 | 0.090
Global | Theta | 0.144 | 0.064 | 0.143 | 0.050 | 0.104 | 0.040 | 0.157 | 0.054 | 0.134 | 0.079
Global | Delta | 1.701 | 9.200 | 0.960 | 6.591 | 1.175 | 3.422 | 1.071 | 8.468 | 1.994 | 10.825
Global | Gamma | 0.043 | 0.071 | 0.036 | 0.084 | 0.020 | 0.068 | 0.038 | 0.076 | 0.060 | 0.075
Table 3. Statistical results (mean and standard deviation) of EEG delta features (DAR, DTR, and DTABR) in the global cortex during sleep stages W, N1, N2, N3, and R. Global indicates the average measures of features of the frontal, central, and occipital lobes.

Feature (Global) | N1 Mean | N1 SD | N2 Mean | N2 SD | N3 Mean | N3 SD | R Mean | R SD | W Mean | W SD
DAR | 296.0 | 3326.7 | 103.0 | 1917.7 | 73.6 | 723.5 | 180.5 | 3406.5 | 292.8 | 2914.9
DTR | 89.8 | 824.5 | 31.2 | 486.4 | 24.3 | 195.8 | 48.9 | 790.4 | 96.6 | 748.6
DTABR | 166.0 | 1912.7 | 62.5 | 1219.8 | 48.6 | 440.1 | 105.1 | 1950.1 | 153.8 | 1678.6
Table 4. Confusion matrix of the C5.0 model using training and testing datasets for the classification of EEG features of the sleep stages W, N1, N2, N3, and R.
C5.0Prediction
N1N2N3REMWakeN1N2N3REMWake
ActualN15760152978705145174863242383585
N283727,58118498424814935713858548226
N388226214,65666633897630833820
REM665111010311,046233400676552060117
Wake6224433715214,19239123725863170
Table 5. Confusion matrix of the Neural Network model using training and testing datasets for the classification of EEG features of the sleep stages W, N1, N2, N3, and R.
Neural NetworkPrediction
N1N2N3REMWakeN1N2N3REMWake
ActualN121972634881948265655066629470675
N284524,7463078188310381966149753483257
N322425012,647421747103930661330
REM6992504869331537176624212372115
Wake9807966831813,28424321223783353
Table 6. Confusion matrix of the CHAID model using training and testing datasets for the classification of EEG features of the sleep stages W, N1, N2, N3, and R.
CHAIDPrediction
N1N2N3REMWakeN1N2N3REMWake
ActualN1210929462701817238154174154451603
N2139221,3804679317596433853051121834240
N380583510,91314716016141826593131
REM13664547422621061235411521031560139
Wake1970169719947911,101535407661242777
Table 7. Classification performance parameters of the C5.0 model using training (average accuracy = 94%) and testing (average accuracy = 87%) datasets for the classification of EEG features of the sleep stages W, N1, N2, N3, and R.

Stage | Train Accuracy | Train Sensitivity | Train Specificity | Train Precision | Train NPV | Test Accuracy | Test Sensitivity | Test Specificity | Test Precision | Test NPV
N1 | 0.93 | 0.60 | 0.971 | 0.72 | 0.95 | 0.86 | 0.31 | 0.931 | 0.36 | 0.92
N2 | 0.89 | 0.87 | 0.903 | 0.84 | 0.93 | 0.78 | 0.73 | 0.817 | 0.69 | 0.84
N3 | 0.95 | 0.86 | 0.970 | 0.88 | 0.96 | 0.91 | 0.74 | 0.944 | 0.76 | 0.94
R | 0.96 | 0.84 | 0.976 | 0.86 | 0.97 | 0.89 | 0.62 | 0.942 | 0.66 | 0.93
W | 0.96 | 0.92 | 0.969 | 0.86 | 0.98 | 0.92 | 0.81 | 0.946 | 0.77 | 0.96
Table 8. Classification performance parameters of the Neural Network model using training (average accuracy = 89%) and testing (average accuracy = 89%) datasets for the classification of EEG features of the sleep stages W, N1, N2, N3, and R.

Stage | Train Accuracy | Train Sensitivity | Train Specificity | Train Precision | Train NPV | Test Accuracy | Test Sensitivity | Test Specificity | Test Precision | Test NPV
N1 | 0.89 | 0.23 | 0.97 | 0.46 | 0.91 | 0.89 | 0.23 | 0.97 | 0.47 | 0.91
N2 | 0.80 | 0.78 | 0.82 | 0.71 | 0.87 | 0.80 | 0.78 | 0.82 | 0.71 | 0.87
N3 | 0.91 | 0.74 | 0.95 | 0.79 | 0.94 | 0.91 | 0.74 | 0.95 | 0.79 | 0.94
R | 0.91 | 0.71 | 0.94 | 0.69 | 0.95 | 0.91 | 0.72 | 0.94 | 0.69 | 0.95
W | 0.92 | 0.86 | 0.94 | 0.75 | 0.97 | 0.92 | 0.86 | 0.94 | 0.76 | 0.97
Table 9. Classification performance parameters of the CHAID model using training (average accuracy = 84%) and testing (average accuracy = 84%) datasets for the classification of EEG features of the sleep stages W, N1, N2, N3, and R.

Stage | Train Accuracy | Train Sensitivity | Train Specificity | Train Precision | Train NPV | Test Accuracy | Test Sensitivity | Test Specificity | Test Precision | Test NPV
N1 | 0.86 | 0.22 | 0.94 | 0.30 | 0.91 | 0.86 | 0.23 | 0.94 | 0.30 | 0.91
N2 | 0.71 | 0.68 | 0.73 | 0.59 | 0.80 | 0.71 | 0.68 | 0.73 | 0.59 | 0.80
N3 | 0.86 | 0.64 | 0.92 | 0.66 | 0.91 | 0.87 | 0.64 | 0.92 | 0.66 | 0.91
R | 0.86 | 0.47 | 0.92 | 0.53 | 0.91 | 0.85 | 0.47 | 0.92 | 0.52 | 0.91
W | 0.90 | 0.72 | 0.94 | 0.73 | 0.94 | 0.90 | 0.71 | 0.94 | 0.73 | 0.94
Table 10. Comparative analysis of the methods and outcomes of the proposed work with other sleep studies.

Study | Year | Subjects | Dataset (Year)/Signal | Classes | Algorithm | Accuracy (%)
Tzimourta et al. [40] | 2018 | 100 | ISRUC-Sleep (2009–2013)/EEG | Five-class {W, N1, N2, N3, REM} | Random Forest | 75.29
Kalbkhani et al. [41] | 2018 | 100 | ISRUC-Sleep (2009–2013)/EEG | Five-class {W, N1, N2, N3, REM} | SVM | 82.33
Tripathy et al. [42] | 2020 | 25 | Cyclic Alternating Pattern (CAP) (2001)/EEG | Six-class {W, S1, S2, S3, S4, REM} | Hybrid Classifier | 71.68
Widasari et al. [43] | 2020 | 51 | Cyclic Alternating Pattern (CAP) (2001)/EEG | Four-class {W, Light sleep (S1 + S2), Deep sleep (S3 + S4), REM} | Ensemble of Bagged Trees (EBT) | 86.26
Wang et al. [44] | 2020 | 157 | Sleep-EDF Expanded (Sleep-EDFX) (2000)/EEG and EOG | Five-class {W, N1, N2, N3, REM} | Ensemble of EEGNet-BiLSTM | 82
Sharma et al. [45] | 2021 | 80 | Cyclic Alternating Pattern (CAP) (2001)/EEG | Six-class {W, S1, S2, S3, S4, REM} | Ensemble of Bagged Trees (EBT) | 85.3
Proposed work | 2022 | 154 | Haaglanden Medisch Centrum (HMC) (2021)/EEG | Five-class {W, N1, N2, N3, REM} | C5.0, Neural Network, and CHAID | C5.0 (91), Neural Network (89), and CHAID (84)

