
Intelligent Biosignal Analysis Methods

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (15 May 2021) | Viewed by 56774

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor


Guest Editor
Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, HR-10000, Zagreb, Croatia
Interests: biosignal analysis; machine learning; data mining; knowledge representation; software engineering

Special Issue Information

Dear colleagues,

At present, intelligent biosignal analysis methods are involved in a wide range of significant biomedical applications, ranging from clinical decision support systems and portable personal devices used for patient screening and monitoring to organism state and disorder modeling. The choice of biosignal analysis methods depends on the goal of the analysis, hardware and software availability, and potential real-time requirements. Biosignals are usually measured using electrodes attached to analysis-specific parts of the body, but they may also be acquired using other types of sensors. The analysis methods usually form a pipeline that starts with the raw measured biosignals and ends with an intelligent decision.

The focus of this Special Issue is on different types of methods used for intelligent analysis of biosignals. This includes preprocessing methods, feature extraction methods, feature selection methods, and data modeling methods. Data modeling methods involve traditional machine learning methods and more recent deep learning approaches. All of the analysis methods are continuously evolving and improving within the field of artificial intelligence, and their application to biosignals is highly significant for improving patients’ health and healthcare in general.
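Such a pipeline can be illustrated with a minimal sketch. The stage functions below are generic stand-ins, not any particular method from this issue; the feature set, threshold, and sampling rate are assumptions chosen for illustration only:

```python
import numpy as np

def preprocess(signal):
    # Remove the DC offset and normalize amplitude (a minimal stand-in
    # for filtering and artifact removal).
    x = signal - np.mean(signal)
    return x / (np.std(x) + 1e-12)

def extract_features(signal):
    # A few generic time-domain features; real pipelines use many more.
    return np.array([np.mean(np.abs(signal)), np.std(signal),
                     np.max(signal) - np.min(signal)])

def select_features(features, keep_idx):
    # Keep only the feature indices chosen by some selection method.
    return features[keep_idx]

def decide(features, threshold=1.0):
    # Toy decision stage standing in for a trained classifier.
    return int(features.sum() > threshold)

fs = 250                                            # assumed sampling rate, Hz
raw = np.sin(2 * np.pi * 1.0 * np.arange(fs) / fs) + 0.5
decision = decide(select_features(extract_features(preprocess(raw)), [0, 1]))
```

Each stage here maps onto one step named above: preprocessing, feature extraction, feature selection, and data modeling.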

The applications of intelligent biosignal processing methods to electrocardiogram, electroencephalogram, electromyogram, skin-resistance, oxygen saturation, and other types of biosignals may be considered for publication in this Special Issue. Differences in the application of processing methods to data from fixed, portable, wearable, and implantable sensors may also be considered.

Both original and review papers are welcome.

Prof. Dr. Alan Jović
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Biosignal processing
  • Modeling methods
  • Time-series analysis
  • Machine learning
  • Deep learning
  • Feature extraction
  • Feature selection

Published Papers (13 papers)


Editorial


5 pages, 180 KiB  
Editorial
Intelligent Biosignal Analysis Methods
by Alan Jovic
Sensors 2021, 21(14), 4743; https://doi.org/10.3390/s21144743 - 12 Jul 2021
Cited by 1 | Viewed by 1663
Abstract
This Editorial presents the accepted manuscripts for the special issue “Intelligent Biosignal Analysis Methods” of the Sensors MDPI journal [...] Full article
(This article belongs to the Special Issue Intelligent Biosignal Analysis Methods)

Research


18 pages, 2157 KiB  
Article
Event-Centered Data Segmentation in Accelerometer-Based Fall Detection Algorithms
by Goran Šeketa, Lovro Pavlaković, Dominik Džaja, Igor Lacković and Ratko Magjarević
Sensors 2021, 21(13), 4335; https://doi.org/10.3390/s21134335 - 24 Jun 2021
Cited by 3 | Viewed by 2314
Abstract
Automatic fall detection systems ensure that elderly people get prompt assistance after experiencing a fall. Fall detection systems based on accelerometer measurements are widely used because of their portability and low cost. However, the ability of these systems to differentiate falls from Activities of Daily Living (ADL) is still not acceptable for everyday usage at a large scale. More work is still needed to raise the performance of these systems. In our research, we explored an essential but often neglected part of accelerometer-based fall detection systems: data segmentation. The aim of our work was to explore how different configurations of windows for data segmentation affect the detection accuracy of a fall detection system and to find the best-performing configuration. For this purpose, we designed a testing environment for fall detection based on a Support Vector Machine (SVM) classifier and evaluated the influence of the number and duration of segmentation windows on the overall detection accuracy. An event-centered approach to data segmentation was used, in which windows are set relative to a potential fall event detected in the input data. Fall and ADL data records from three publicly available datasets were utilized for the test. We found that a configuration of three sequential windows (pre-impact, impact, and post-impact) provided the highest detection accuracy on all three datasets. The best results were obtained when either a 0.5 s or a 1 s long impact window was used, combined with pre- and post-impact windows of 3.5 s or 3.75 s. Full article
(This article belongs to the Special Issue Intelligent Biosignal Analysis Methods)
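The event-centered segmentation idea can be sketched as follows; the detection threshold and sampling rate are illustrative assumptions, not the paper's values, while the window durations follow the best configuration reported above:

```python
import numpy as np

def segment_around_event(acc_mag, fs, pre_s=3.5, impact_s=1.0, post_s=3.5,
                         threshold=2.5):
    """Split an acceleration-magnitude signal (in g) into pre-impact,
    impact, and post-impact windows around the first sample exceeding
    a threshold. Returns None if no candidate event is found."""
    peaks = np.flatnonzero(acc_mag > threshold)
    if peaks.size == 0:
        return None
    t = peaks[0]                                   # candidate fall event
    pre_n, imp_n, post_n = (int(s * fs) for s in (pre_s, impact_s, post_s))
    start = max(t - pre_n, 0)
    return (acc_mag[start:t],                      # pre-impact window
            acc_mag[t:t + imp_n],                  # impact window
            acc_mag[t + imp_n:t + imp_n + post_n]) # post-impact window

fs = 100                                           # assumed sampling rate, Hz
signal = np.ones(10 * fs)                          # ~1 g at rest
signal[5 * fs] = 3.0                               # synthetic impact spike
windows = segment_around_event(signal, fs)
```

Features for the SVM would then be computed per window, so that pre-fall motion, the impact itself, and post-fall inactivity each contribute separately.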

15 pages, 2473 KiB  
Article
An Improved Deep Residual Network Prediction Model for the Early Diagnosis of Alzheimer’s Disease
by Haijing Sun, Anna Wang, Wenhui Wang and Chen Liu
Sensors 2021, 21(12), 4182; https://doi.org/10.3390/s21124182 - 18 Jun 2021
Cited by 27 | Viewed by 3610
Abstract
The early diagnosis of Alzheimer’s disease (AD) can allow patients to take preventive measures before irreversible brain damage occurs. It can be seen from cross-sectional imaging studies of AD that the features of the lesion areas in AD patients, as observed by magnetic resonance imaging (MRI), show significant variation, and these features are distributed throughout the image space. Since the convolutional layer of a general convolutional neural network (CNN) cannot satisfactorily extract long-distance correlations in the feature space, a deep residual network (ResNet) model, based on spatial transformer networks (STN) and the non-local attention mechanism, is proposed in this study for the early diagnosis of AD. In this ResNet model, the new Mish activation function is selected to replace the ReLU function in the ResNet-50 backbone, an STN is introduced between the input layer and the improved ResNet-50 backbone, and a non-local attention mechanism is introduced between the fourth and fifth stages of the improved ResNet-50 backbone. This model can extract more information from the layers by deepening the network structure through the deep ResNet. The introduced STN can transform the spatial information in MRI images of Alzheimer’s patients into another space while retaining the key information. The introduced non-local attention mechanism can find the relationships between the lesion areas and normal areas in the feature space. This model can thus solve the problem of local information loss in a traditional CNN and extract long-distance correlations in the feature space. The proposed method was validated using the ADNI (Alzheimer’s Disease Neuroimaging Initiative) experimental dataset and compared with several other models. The experimental results show that the classification accuracy of the proposed algorithm reaches 97.1%, the macro precision 95.5%, the macro recall 95.3%, and the macro F1 value 95.4%, making the proposed model more effective than the other algorithms. Full article
(This article belongs to the Special Issue Intelligent Biosignal Analysis Methods)
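The Mish activation mentioned above has the closed form x · tanh(softplus(x)); a minimal NumPy sketch of it (the test inputs are arbitrary):

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x)).
    return np.logaddexp(0.0, x)

def mish(x):
    # Mish(x) = x * tanh(softplus(x)); smooth and non-monotonic,
    # unlike ReLU, which zeroes all negative inputs.
    return x * np.tanh(softplus(x))

x = np.array([-2.0, 0.0, 2.0])
y = mish(x)
```

Unlike ReLU, Mish lets small negative values pass through attenuated rather than clipping them to zero, which is the property the substitution in the backbone relies on.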

29 pages, 18749 KiB  
Article
The Effects of Individual Differences, Non-Stationarity, and the Importance of Data Partitioning Decisions for Training and Testing of EEG Cross-Participant Models
by Alexander Kamrud, Brett Borghetti and Christine Schubert Kabban
Sensors 2021, 21(9), 3225; https://doi.org/10.3390/s21093225 - 06 May 2021
Cited by 17 | Viewed by 3186
Abstract
EEG-based deep learning models have trended toward models that are designed to perform classification on any individual (cross-participant models). However, because EEG varies across participants due to non-stationarity and individual differences, certain guidelines must be followed for partitioning data into training, validation, and testing sets in order for cross-participant models to avoid overestimation of model accuracy. Despite this necessity, the majority of EEG-based cross-participant models have not adopted such guidelines. Furthermore, some data repositories may unwittingly contribute to the problem by providing partitioned test and non-test datasets for reasons such as competition support. In this study, we demonstrate how improper dataset partitioning and the resulting improper training, validation, and testing of a cross-participant model lead to overestimated model accuracy. We demonstrate this mathematically and empirically, using five publicly available datasets. To build the cross-participant models for these datasets, we replicate published results and demonstrate how the model accuracies are significantly reduced when proper EEG cross-participant model guidelines are followed. Our empirical results show that by not following these guidelines, error rates of cross-participant models can be underestimated by between 35% and 3900%. This misrepresentation of model performance for the general population potentially slows scientific progress toward truly high-performing classification models. Full article
(This article belongs to the Special Issue Intelligent Biosignal Analysis Methods)
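The partitioning guideline advocated above amounts to keeping every participant's trials in exactly one partition. A dependency-free sketch (participant IDs, split ratios, and trial format are illustrative assumptions):

```python
import random

def participant_wise_split(trials, train_frac=0.6, val_frac=0.2, seed=0):
    """Partition (participant_id, sample) pairs so that no participant
    appears in more than one of train/validation/test. Splitting the
    trials themselves at random would leak participant-specific EEG
    characteristics across partitions and inflate test accuracy."""
    ids = sorted({pid for pid, _ in trials})
    random.Random(seed).shuffle(ids)
    n_train = int(len(ids) * train_frac)
    n_val = int(len(ids) * val_frac)
    train_ids = set(ids[:n_train])
    val_ids = set(ids[n_train:n_train + n_val])
    test_ids = set(ids[n_train + n_val:])

    def pick(keep):
        return [t for t in trials if t[0] in keep]

    return pick(train_ids), pick(val_ids), pick(test_ids)

# 10 hypothetical participants with 5 trials each.
trials = [(pid, f"trial-{pid}-{k}") for pid in range(10) for k in range(5)]
train, val, test = participant_wise_split(trials)
```

The key invariant is that the participant-ID sets of the three partitions are pairwise disjoint, which is what a naive per-trial random split violates.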

21 pages, 4252 KiB  
Article
Constructing an Emotion Estimation Model Based on EEG/HRV Indexes Using Feature Extraction and Feature Selection Algorithms
by Kei Suzuki, Tipporn Laohakangvalvit, Ryota Matsubara and Midori Sugaya
Sensors 2021, 21(9), 2910; https://doi.org/10.3390/s21092910 - 21 Apr 2021
Cited by 19 | Viewed by 5514
Abstract
In human emotion estimation using an electroencephalogram (EEG) and heart rate variability (HRV), there are two main issues as far as we know. The first is that measurement devices for physiological signals are expensive and not easy to wear. The second is that unnecessary physiological indexes have not been removed, which is likely to decrease the accuracy of machine learning models. In this study, we used a single-channel EEG sensor and a photoplethysmography (PPG) sensor, which are inexpensive and easy to wear. We collected data from 25 participants (18 males and 7 females) and used a deep learning algorithm to construct an emotion classification model based on the Arousal–Valence space, using several feature combinations obtained from physiological indexes selected according to our criteria, including our proposed feature selection methods. We then performed accuracy verification, applying a stratified 10-fold cross-validation method to the constructed models. The results showed that model accuracies are as high as 90% to 99% when the feature selection methods we proposed are applied, which suggests that a small number of physiological indexes, even from inexpensive sensors, can be used to construct an accurate emotion classification model if an appropriate feature selection method is applied. Our results contribute to emotion classification models that are more accurate, less costly, and less time consuming, with the potential to be applied to various other areas. Full article
(This article belongs to the Special Issue Intelligent Biosignal Analysis Methods)
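The stratified 10-fold cross-validation used above keeps class proportions roughly equal in every fold. A dependency-free sketch of the fold assignment (class sizes are illustrative):

```python
from collections import defaultdict

def stratified_folds(labels, k=10):
    """Assign each sample index to one of k folds so that every class
    is spread evenly across folds (round-robin within each class)."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for j, i in enumerate(indices):
            folds[j % k].append(i)
    return folds

# 40 samples of class 0 and 20 of class 1: each fold gets 4 + 2.
labels = [0] * 40 + [1] * 20
folds = stratified_folds(labels, k=10)
```

With imbalanced emotion classes, plain (unstratified) folds can leave some folds almost empty of a minority class, which distorts per-fold accuracy estimates; stratification avoids that.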

21 pages, 17830 KiB  
Article
Wearable Technologies for Mental Workload, Stress, and Emotional State Assessment during Working-Like Tasks: A Comparison with Laboratory Technologies
by Andrea Giorgi, Vincenzo Ronca, Alessia Vozzi, Nicolina Sciaraffa, Antonello di Florio, Luca Tamborra, Ilaria Simonetti, Pietro Aricò, Gianluca Di Flumeri, Dario Rossi and Gianluca Borghini
Sensors 2021, 21(7), 2332; https://doi.org/10.3390/s21072332 - 26 Mar 2021
Cited by 28 | Viewed by 7818
Abstract
The capability of monitoring a user’s performance represents a crucial aspect of improving the safety and efficiency of several human-related activities. Human errors are indeed among the major causes of work-related accidents. Assessing human factors (HFs) could prevent these accidents through the evaluation of specific neurophysiological signals, but laboratory sensors require highly specialized operators and imply a certain degree of invasiveness, which could negatively interfere with the worker’s activity. On the contrary, consumer wearables are characterized by their ease of use and comfort, as well as being cheaper than laboratory technologies. Therefore, wearable sensors could represent an ideal substitute for laboratory technologies for the real-time assessment of human performance in ecological settings. The present study aimed at assessing the reliability and capability of consumer wearable devices (i.e., Empatica E4 and Muse 2) in discriminating specific mental states compared to laboratory equipment. The electrooculographic (EOG), electrodermal activity (EDA), and photoplethysmographic (PPG) signals were acquired from a group of 17 volunteers who took part in the experimental protocol, in which different working scenarios were simulated to induce different levels of mental workload, stress, and emotional state. The results demonstrated that the parameters computed by the consumer wearable and laboratory sensors were positively and significantly correlated and exhibited the same evidence in terms of mental state discrimination. Full article
(This article belongs to the Special Issue Intelligent Biosignal Analysis Methods)
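The wearable-versus-laboratory comparison above rests on correlating the same parameter computed from both systems. A minimal sketch with synthetic data (the parameter, its distribution, and the noise level are assumptions; only the 17-volunteer count comes from the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-subject parameter (e.g., some heart-rate-derived index)
# as measured by a laboratory system, and by a wearable with added noise.
lab = rng.normal(60.0, 5.0, size=17)           # 17 volunteers, as above
wearable = lab + rng.normal(0.0, 1.0, size=17) # wearable = lab + noise

r = np.corrcoef(lab, wearable)[0, 1]           # Pearson correlation
```

A high positive r between the two measurement chains is the kind of evidence the study uses to argue that wearables can stand in for laboratory sensors.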

15 pages, 3493 KiB  
Article
EEG-Based Sleep Staging Analysis with Functional Connectivity
by Hui Huang, Jianhai Zhang, Li Zhu, Jiajia Tang, Guang Lin, Wanzeng Kong, Xu Lei and Lei Zhu
Sensors 2021, 21(6), 1988; https://doi.org/10.3390/s21061988 - 11 Mar 2021
Cited by 18 | Viewed by 4776
Abstract
Sleep staging is important in sleep research since it is the basis for sleep evaluation and disease diagnosis. Related works have achieved many desirable outcomes. However, most current studies use time-domain or frequency-domain measures as classification features computed from single or very few channels, which capture only local features and ignore the global information exchanged between different brain regions. Meanwhile, brain functional connectivity is considered to be closely related to brain activity and can be used to study the interactions between brain areas. To explore the electroencephalography (EEG)-based brain mechanisms of sleep stages through functional connectivity, especially in different frequency bands, we applied the phase-locking value (PLV) to build functional connectivity networks and analyze brain interactions during sleep stages in different frequency bands. We then applied feature-level, decision-level, and hybrid fusion methods to compare the performance of different frequency bands for sleep staging. The results show that (1) PLV increases in the lower frequency bands (delta and alpha) and decreases in the higher bands during different stages of non-rapid eye movement (NREM) sleep; (2) the alpha band shows better discriminative ability for sleep stages; and (3) the classification accuracy of feature-level fusion (six frequency bands) reaches 96.91% and 96.14% for intra-subject and inter-subject evaluation, respectively, outperforming the decision-level and hybrid fusion methods. Full article
(This article belongs to the Special Issue Intelligent Biosignal Analysis Methods)
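The phase-locking value (PLV) used above measures phase synchrony between two channels: the magnitude of the mean complex phase difference, 1.0 for perfectly locked phases and near 0 for unrelated ones. A sketch using an FFT-based analytic signal so that only NumPy is needed (signal frequencies and sampling rate are illustrative):

```python
import numpy as np

def analytic_signal(x):
    # Hilbert transform via FFT: zero negative frequencies, double
    # positive ones (the same construction scipy.signal.hilbert uses).
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def plv(x, y):
    """Phase-locking value between two signals: |mean(exp(i*dphi))|."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

t = np.arange(0, 2, 1 / 250)          # 2 s at 250 Hz (assumed)
a = np.sin(2 * np.pi * 10 * t)        # 10 Hz oscillation
b = np.sin(2 * np.pi * 10 * t + 0.8)  # same frequency, fixed phase lag
locked = plv(a, b)
```

Computing PLV per band and per channel pair yields the connectivity networks whose band-wise differences the study analyzes across sleep stages.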

17 pages, 6051 KiB  
Article
Detection of Myocardial Infarction Using ECG and Multi-Scale Feature Concatenate
by Jia-Zheng Jian, Tzong-Rong Ger, Han-Hua Lai, Chi-Ming Ku, Chiung-An Chen, Patricia Angela R. Abu and Shih-Lun Chen
Sensors 2021, 21(5), 1906; https://doi.org/10.3390/s21051906 - 09 Mar 2021
Cited by 14 | Viewed by 4487
Abstract
Diverse computer-aided diagnosis systems based on convolutional neural networks have been applied to automate the detection of myocardial infarction (MI) in the electrocardiogram (ECG) for early diagnosis and prevention. However, certain issues, particularly overfitting and underfitting, were not taken into account; in other words, it was unclear whether the network structure was too simple or too complex. To this end, the proposed models were developed by starting with the simplest structure: a multi-lead features-concatenate narrow network (N-Net), in which only two convolutional layers are included in each lead branch. Additionally, multi-scale features-concatenate networks (MSN-Net) were implemented, in which larger features are extracted by pooling the signals. The best structure was obtained by tuning both the number of filters in the convolutional layers and the number of input signal scales. As a result, the N-Net reached 95.76% accuracy in the MI detection task, whereas the MSN-Net reached an accuracy of 61.82% in the MI locating task. Both networks give a higher average accuracy, with a significant difference of p < 0.001 evaluated by the U test, compared with the state of the art. The models are also smaller in size and thus suitable for wearable devices for offline monitoring. In conclusion, testing across both simple and complex network structures is indispensable. However, the handling of the class imbalance problem and the quality of the extracted features remain to be discussed. Full article
(This article belongs to the Special Issue Intelligent Biosignal Analysis Methods)
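"Pooling the signals" to obtain larger-scale inputs, as described above, can be sketched as repeated average pooling; the pool size, number of scales, and toy input are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def avg_pool(x, size=2):
    # Non-overlapping average pooling; truncates a trailing remainder.
    n = (len(x) // size) * size
    return x[:n].reshape(-1, size).mean(axis=1)

def multi_scale(x, n_scales=3):
    """Return the signal at successively coarser scales. Each scale
    halves the length, so fixed-size convolutional filters applied to
    later scales effectively see wider (larger) ECG features."""
    scales = [x]
    for _ in range(n_scales - 1):
        scales.append(avg_pool(scales[-1]))
    return scales

ecg = np.arange(8, dtype=float)   # toy stand-in for one ECG lead
scales = multi_scale(ecg)
```

Each scale would feed its own branch, and the branches' feature maps are then concatenated, which is the "features-concatenate" part of the network names.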

13 pages, 2455 KiB  
Article
Wearable Sensors for Assessing the Role of Olfactory Training on the Autonomic Response to Olfactory Stimulation
by Alessandro Tonacci, Lucia Billeci, Irene Di Mambro, Roberto Marangoni, Chiara Sanmartin and Francesca Venturi
Sensors 2021, 21(3), 770; https://doi.org/10.3390/s21030770 - 24 Jan 2021
Cited by 26 | Viewed by 3019
Abstract
Wearable sensors are nowadays largely employed to assess physiological signals derived from the human body without representing a burden in terms of obtrusiveness. One of the most intriguing fields of application for such systems is the assessment of physiological responses to sensory stimuli. In this specific regard, it is not yet known what the main psychophysiological drivers of olfactory-related pleasantness are, as the current literature has demonstrated the relationship between odor familiarity and odor valence but has not clarified the causal direction between the two domains. Here, we enrolled a group of university students who received olfactory training lasting three months. Through the analysis of electrocardiogram (ECG) and galvanic skin response (GSR) signals at the beginning and at the end of the training period, we observed different autonomic responses, with a higher parasympathetically mediated response at the end of the period with respect to the first evaluation. This possibly suggests that increased familiarity with the proposed stimuli leads to a higher tendency toward relaxation. Such results could suggest potential applications in other domains, including personalized treatments based on odors and foods in neuropsychiatric and eating disorders. Full article
(This article belongs to the Special Issue Intelligent Biosignal Analysis Methods)
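One commonly used marker of parasympathetically mediated (vagal) activity in ECG-derived HRV analysis is RMSSD, the root mean square of successive RR-interval differences; whether it is the exact index used in this study is not stated, and the interval values below are synthetic:

```python
import math

def rmssd(rr_ms):
    """RMSSD over RR intervals in milliseconds; higher values indicate
    stronger beat-to-beat (parasympathetically driven) variability."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

relaxed = [800, 840, 790, 850, 805, 845]   # ms, larger beat-to-beat change
stressed = [800, 805, 798, 803, 801, 804]  # ms, flatter rhythm
```

A rise in such a marker between the pre- and post-training sessions is the kind of evidence behind the "higher parasympathetically mediated response" reported above.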

19 pages, 2229 KiB  
Article
EEG-Based Estimation on the Reduction of Negative Emotions for Illustrated Surgical Images
by Heekyung Yang, Jongdae Han and Kyungha Min
Sensors 2020, 20(24), 7103; https://doi.org/10.3390/s20247103 - 11 Dec 2020
Cited by 5 | Viewed by 2095
Abstract
Electroencephalogram (EEG) biosignals are widely used to measure human emotional reactions. The recent progress of deep learning-based classification models has improved the accuracy of emotion recognition from EEG signals. We apply a deep learning-based emotion recognition model to EEG biosignals to show that illustrated surgical images reduce the negative emotional reactions that photographic surgical images generate. The strong negative emotional reactions caused by surgical images, which show the internal structure of the human body (including blood, flesh, muscle, fatty tissue, and bone), act as an obstacle when explaining the images to patients or discussing them with non-professionals. We claim that the negative emotional reactions generated by illustrated surgical images are less severe than those caused by raw surgical images. To demonstrate the difference in emotional reaction, we produce several illustrated surgical images from photographs and measure the emotional reactions they engender using EEG biosignals; a deep learning-based emotion recognition model is applied to extract the emotional reactions. Through this experiment, we show that the negative emotional reactions associated with photographic surgical images are much stronger than those caused by illustrated versions of the identical images. We further conduct a self-assessed user survey to show that the emotions recognized from EEG signals effectively represent user-annotated emotions. Full article
(This article belongs to the Special Issue Intelligent Biosignal Analysis Methods)

17 pages, 16167 KiB  
Article
Robust T-End Detection via T-End Signal Quality Index and Optimal Shrinkage
by Pei-Chun Su, Elsayed Z. Soliman and Hau-Tieng Wu
Sensors 2020, 20(24), 7052; https://doi.org/10.3390/s20247052 - 09 Dec 2020
Cited by 1 | Viewed by 2011
Abstract
Automatic, accurate T-wave end (T-end) annotation of the electrocardiogram (ECG) has several important clinical applications. While several algorithms have been proposed, their performance usually deteriorates when the signal is noisy; therefore, new techniques are needed to support noise robustness in T-end detection. We propose a new algorithm based on a signal quality index (SQI) for the T-end, coined tSQI, and optimal shrinkage (OS). For segments with low tSQI, OS is applied to enhance the signal-to-noise ratio (SNR). We validated the proposed method using eleven short-term ECG recordings from the QT database available at PhysioNet, as well as four 14-day ECG recordings that were visually annotated at a central ECG core laboratory. We evaluated the correlation between the real-world signal quality for the T-end and tSQI, and the robustness of the proposed algorithm to various additive noises of different types and SNRs. The performance of the proposed algorithm on arrhythmic signals was also illustrated on the MITDB arrhythmia database. The labeled signal quality is well captured by tSQI, and the proposed OS denoising helps stabilize existing T-end detection algorithms under noisy conditions by decreasing the mean detection error. Even when applied to ECGs with arrhythmia, the proposed algorithm still performed well when a proper metric was applied. We have thus proposed a new T-end annotation algorithm whose efficiency and accuracy make it a good fit for clinical applications and large ECG databases. This study is limited by the small size of the annotated datasets. Full article
(This article belongs to the Special Issue Intelligent Biosignal Analysis Methods)

15 pages, 3599 KiB  
Article
Multi-Branch Convolutional Neural Network for Automatic Sleep Stage Classification with Embedded Stage Refinement and Residual Attention Channel Fusion
by Tianqi Zhu, Wei Luo and Feng Yu
Sensors 2020, 20(22), 6592; https://doi.org/10.3390/s20226592 - 18 Nov 2020
Cited by 10 | Viewed by 3264
Abstract
Automatic sleep stage classification of multi-channel sleep signals can help clinicians efficiently evaluate an individual’s sleep quality and assist in diagnosing a possible sleep disorder. To obtain accurate sleep classification results, a processing flow of signal preprocessing and machine-learning-based classification is typically employed, after which the classification results are refined based on sleep transition rules. Neural networks, i.e., machine learning algorithms, are powerful at solving classification problems, and some methods apply them to the first two processes above; however, the refinement process continues to be based on traditional methods. In this study, the sleep stage refinement process was incorporated into the neural network model to form true end-to-end processing. In addition, for multi-channel signals, a multi-branch convolutional neural network was combined with a proposed residual attention method. This approach further improved the model’s classification accuracy. The proposed method was evaluated on the Sleep-EDF Expanded Database (Sleep-EDFx) and the University College Dublin Sleep Apnea Database (UCDDB), achieving respective accuracy rates of 85.7% and 79.4%. The results also showed that sleep stage refinement based on a neural network is more effective than the traditional refinement method. Moreover, the proposed residual attention method was determined to have a more robust channel-information fusion ability than the respective average and concatenation methods. Full article
(This article belongs to the Special Issue Intelligent Biosignal Analysis Methods)
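The traditional refinement step that the model above replaces applies sleep transition rules to the raw per-epoch predictions. One simple such rule, sketched here as a majority smoother over adjacent epochs (the specific rule and window size are illustrative, not the paper's baseline):

```python
from collections import Counter

def refine_stages(stages, window=3):
    """Smooth a per-epoch stage sequence: replace each prediction with
    the majority vote in a small neighborhood, removing isolated
    single-epoch stage flips that violate typical transition rules."""
    half = window // 2
    out = []
    for i in range(len(stages)):
        lo, hi = max(0, i - half), min(len(stages), i + half + 1)
        out.append(Counter(stages[lo:hi]).most_common(1)[0][0])
    return out

raw = ["N2", "N2", "REM", "N2", "N2", "N3", "N3"]  # isolated REM flip
refined = refine_stages(raw)
```

The paper's contribution is to fold this kind of sequential correction into the network itself rather than applying a fixed rule as post-processing.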

Review


29 pages, 1317 KiB  
Review
A Review of EEG Signal Features and Their Application in Driver Drowsiness Detection Systems
by Igor Stancin, Mario Cifrek and Alan Jovic
Sensors 2021, 21(11), 3786; https://doi.org/10.3390/s21113786 - 30 May 2021
Cited by 85 | Viewed by 11051
Abstract
Detecting drowsiness in drivers, especially multi-level drowsiness, is a difficult problem that is often approached using neurophysiological signals as the basis for building a reliable system. In this context, electroencephalogram (EEG) signals are the most important source of data to achieve successful detection. In this paper, we first review EEG signal features used in the literature for a variety of tasks, then we focus on reviewing the applications of EEG features and deep learning approaches in driver drowsiness detection, and finally we discuss the open challenges and opportunities in improving driver drowsiness detection based on EEG. We show that the number of studies on driver drowsiness detection systems has increased in recent years and that future systems need to consider the wide variety of EEG signal features and deep learning approaches to increase the accuracy of detection. Full article
(This article belongs to the Special Issue Intelligent Biosignal Analysis Methods)
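Many of the EEG features reviewed in such work are spectral band powers. A sketch computing a theta/beta ratio, a feature family often associated with drowsiness (the band limits, sampling rate, and synthetic signal are illustrative assumptions):

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Average power of `x` in the [lo, hi] Hz band via the FFT."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 250
t = np.arange(0, 4, 1 / fs)
# Synthetic "drowsy" EEG: strong 6 Hz theta, weak 20 Hz beta.
eeg = 2.0 * np.sin(2 * np.pi * 6 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)

theta = band_power(eeg, fs, 4, 8)   # theta band, ~4-8 Hz
beta = band_power(eeg, fs, 13, 30)  # beta band, ~13-30 Hz
ratio = theta / beta
```

Features of this kind, computed per channel and per epoch, are typical inputs both to classical classifiers and to the deep learning approaches the review surveys.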
