Deep Learning in Biomedical Informatics and Healthcare

A topical collection in Sensors (ISSN 1424-8220). This collection belongs to the section "Biomedical Sensors".

Viewed by 42077

Editors


Co-Guest Editor
Department of Molecular Medicine, Sapienza University of Rome, 00185 Rome, Italy
Interests: cognitive neuroscience; behavioural neuroscience; neuropsychology; biosignals processing; brain-computer interface; human-machine interaction; human factor; road safety

Co-Guest Editor
Department of Molecular Medicine, Sapienza University of Rome, 00185 Rome, Italy
Interests: neuroimaging; passive brain-computer interface; human factors; machine learning; applied neuroscience; cooperation

Co-Guest Editor
Associate Professor, School of Innovation Design and Engineering (IDT), Mälardalen University, 72220 Västerås, Sweden
Interests: deep learning; case-based reasoning; data mining; fuzzy logic; other machine learning and machine intelligence approaches for analytics, especially in big data

Co-Guest Editor
Department of Psychology, International Faculty of the University of Sheffield, CITY College, 54626 Thessaloniki, Greece
Interests: brain networks; affective and personality neuroscience; applied neuroscience; EEG signal processing; machine learning; graph theory

Topical Collection Information

Dear Colleagues,

Artificial intelligence (AI) is already part of our everyday lives, and over the past few years the field has grown explosively. Much of that growth is due to the wide availability of GPUs, which make parallel processing ever faster, cheaper, and more powerful. Deep learning (DL) has enabled many practical applications of machine learning and, by extension, of the overall field of AI. DL has been applied to numerous research areas, such as mental state prediction and classification, image and speech recognition, vision, and predictive healthcare. The main advantage of DL algorithms is that they build a computational model of a large dataset by learning and representing the data at multiple levels; deep learning models can therefore provide insight into the complex structure of large datasets. The effectiveness of DL algorithms in controlled settings and on curated datasets has been widely demonstrated and recognized, but realistic healthcare contexts present additional issues and limitations, for example, the invasiveness and cost of the biomedical signal recording systems required to achieve high performance.

In this regard, the aim of this Topical Collection is to collect the latest DL algorithms and applications for everyday life, realistic contexts, and various research areas in which biomedical signals, for example, the electroencephalogram (EEG), electrocardiogram (ECG), and galvanic skin response (GSR), are considered and possibly combined for monitoring the user’s mental states during realistic tasks (i.e., passive BCI), for adapting Human–Machine Interactions (HMIs), and for an objective and comprehensive assessment of the user’s wellbeing, as in remote and predictive healthcare applications.

Areas covered by this Collection include, but are not limited to, the following:

  • Modelling and assessment of mental states and physical/psychological impairments/disorders;
  • Remote and predictive healthcare;
  • Transfer learning;
  • Human–Machine Interactions (HMIs);
  • Adaptive automation;
  • Human Performance Envelope (HPE);
  • Passive Brain-Computer Interaction (pBCI);
  • Wearable technologies;
  • Multimodality for neurophysiological assessment.

All types of manuscripts are considered, including original basic science reports, translational research, clinical studies, review articles, and methodology papers.

Dr. Gianluca Borghini
Dr. Gianluca Di Flumeri
Dr. Nicolina Sciaraffa
Assoc. Prof. Mobyen Uddin Ahmed
Assoc. Prof. Manousos Klados
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • healthcare
  • deep learning
  • transfer learning
  • diagnosis
  • mental states
  • biomedical signal fusion
  • machine learning
  • artificial intelligence
  • adaptive automation
  • passive brain–computer interface

Published Papers (13 papers)

2023

27 pages, 7890 KiB  
Review
Electroencephalography Signal Processing: A Comprehensive Review and Analysis of Methods and Techniques
by Ahmad Chaddad, Yihang Wu, Reem Kateb and Ahmed Bouridane
Sensors 2023, 23(14), 6434; https://doi.org/10.3390/s23146434 - 16 Jul 2023
Cited by 15 | Viewed by 6731
Abstract
The electroencephalography (EEG) signal is a noninvasive and complex signal that has numerous applications in biomedical fields, including sleep and the brain–computer interface. Given its complexity, researchers have proposed several advanced preprocessing and feature extraction methods to analyze EEG signals. In this study, we present a comprehensive review and analysis of numerous articles related to EEG signal processing. We searched the major scientific and engineering databases and summarized the results of our findings. Our survey encompassed the entire process of EEG signal processing, from acquisition and pretreatment (denoising) to feature extraction, classification, and application. We present a detailed discussion and comparison of various methods and techniques used for EEG signal processing. Additionally, we identify the current limitations of these techniques and analyze their future development trends. We conclude by offering some suggestions for future research in the field of EEG signal processing.
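As a rough illustration of the kind of pipeline the review surveys (not code from the review itself), the sketch below band-pass filters a multichannel EEG array and extracts per-band power features with SciPy; the channel count, sampling rate, and band definitions are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def bandpass(eeg, fs, low=1.0, high=40.0, order=4):
    """Zero-phase band-pass filter applied channel-wise (channels x samples)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def band_powers(eeg, fs, bands=((1, 4), (4, 8), (8, 13), (13, 30))):
    """Average power spectral density in each band for every channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[..., mask].mean(axis=-1))
    return np.stack(feats, axis=-1)  # shape: (channels, n_bands)

if __name__ == "__main__":
    fs = 256                                   # assumed sampling rate (Hz)
    eeg = np.random.randn(8, fs * 10)          # stand-in for 10 s of 8-channel EEG
    features = band_powers(bandpass(eeg, fs), fs)
    print(features.shape)                      # (8, 4)
```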

15 pages, 3589 KiB  
Article
A New Method for Heart Disease Detection: Long Short-Term Feature Extraction from Heart Sound Data
by Mesut Guven and Fatih Uysal
Sensors 2023, 23(13), 5835; https://doi.org/10.3390/s23135835 - 23 Jun 2023
Cited by 3 | Viewed by 1577
Abstract
Heart sounds have been extensively studied for heart disease diagnosis for several decades. Traditional machine learning algorithms applied in the literature have typically partitioned heart sounds into small windows and employed feature extraction methods to classify samples. However, as there is no optimal window length that can effectively represent the entire signal, windows may not provide a sufficient representation of the underlying data. To address this issue, this study proposes a novel approach that integrates window-based features with features extracted from the entire signal, thereby improving the overall accuracy of traditional machine learning algorithms. Specifically, feature extraction is carried out using two different time scales. Short-term features are computed from five-second fragments of heart sound instances, whereas long-term features are extracted from the entire signal. The long-term features are combined with the short-term features to create a feature pool known as long short-term features, which is then employed for classification. To evaluate the performance of the proposed method, various traditional machine learning algorithms with various models are applied to the PhysioNet/CinC Challenge 2016 dataset, which is a collection of diverse heart sound data. The experimental results demonstrate that the proposed feature extraction approach increases the accuracy of heart disease diagnosis by nearly 10%.
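The combination of window-level (short-term) and whole-recording (long-term) features described above can be sketched roughly as follows; the five-second window follows the abstract, while the statistical descriptors and the random-forest classifier are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.ensemble import RandomForestClassifier

def describe(segment):
    """Simple statistical descriptors of one signal segment."""
    return np.array([segment.mean(), segment.std(), skew(segment), kurtosis(segment)])

def long_short_term_features(signal, fs, win_sec=5):
    """Concatenate features of 5-second windows with features of the whole recording."""
    win = win_sec * fs
    short_term = [describe(signal[i:i + win]) for i in range(0, len(signal) - win + 1, win)]
    short_term = np.mean(short_term, axis=0)          # summarize window-level features
    long_term = describe(signal)                       # whole-signal features
    return np.concatenate([short_term, long_term])

if __name__ == "__main__":
    fs = 2000                                          # assumed sampling rate
    rng = np.random.default_rng(0)
    X = np.array([long_short_term_features(rng.standard_normal(fs * 20), fs) for _ in range(40)])
    y = rng.integers(0, 2, size=40)                    # stand-in labels (normal/abnormal)
    clf = RandomForestClassifier(n_estimators=100).fit(X, y)
    print(clf.score(X, y))
```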

2022

13 pages, 4830 KiB  
Article
Robust Iris-Localization Algorithm in Non-Cooperative Environments Based on the Improved YOLO v4 Model
by Qi Xiong, Xinman Zhang, Xingzhu Wang, Naosheng Qiao and Jun Shen
Sensors 2022, 22(24), 9913; https://doi.org/10.3390/s22249913 - 16 Dec 2022
Cited by 5 | Viewed by 1411
Abstract
Iris localization in non-cooperative environments is challenging and essential for accurate iris recognition. Motivated by the traditional iris-localization algorithm and the robustness of the YOLO model, we propose a novel iris-localization algorithm. First, we design a novel iris detector with a modified you only look once v4 (YOLO v4) model, which approximates the position of the pupil center. Then, we use a modified integro-differential operator to precisely locate the inner and outer boundaries of the iris. Experiment results show that iris-detection accuracy can reach 99.83% with this modified YOLO v4 model, which is higher than that of a traditional YOLO v4 model. The accuracy in locating the inner and outer boundaries of the iris without glasses reaches 97.72% at a short distance and 98.32% at a long distance; with glasses, the localization accuracy is 93.91% and 84%, respectively. These results are much higher than those of the traditional Daugman algorithm. Extensive experiments conducted on multiple datasets demonstrate the effectiveness and robustness of our method for iris localization in non-cooperative environments.
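The refinement step, an integro-differential operator that searches for the radius where the circular intensity integral changes most sharply, can be sketched as below; the YOLO v4 detector is omitted (its rough center estimate is simulated), and this simplified operator is an assumption, not the authors' implementation.

```python
import numpy as np

def circle_mean(img, cx, cy, r, n_points=64):
    """Mean intensity along a circle of radius r centered at (cx, cy)."""
    angles = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def integro_differential(img, cx, cy, r_min, r_max):
    """Radius that maximizes the change of the circular integral (Daugman-style operator)."""
    radii = np.arange(r_min, r_max)
    integrals = np.array([circle_mean(img, cx, cy, r) for r in radii])
    gradient = np.abs(np.diff(integrals))
    gradient = np.convolve(gradient, np.ones(3) / 3, mode="same")   # light smoothing
    return radii[np.argmax(gradient) + 1]

if __name__ == "__main__":
    # Synthetic eye-like image: dark disc (pupil/iris) on a bright background.
    yy, xx = np.mgrid[0:200, 0:200]
    img = np.where((xx - 100) ** 2 + (yy - 100) ** 2 < 40 ** 2, 50.0, 200.0)
    cx, cy = 102, 98          # stand-in for the detector's rough center estimate
    print(integro_differential(img, cx, cy, r_min=20, r_max=80))    # close to 40
```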

10 pages, 884 KiB  
Article
A Personalized User Authentication System Based on EEG Signals
by Christos Stergiadis, Vasiliki-Despoina Kostaridou, Simos Veloudis, Dimitrios Kazis and Manousos A. Klados
Sensors 2022, 22(18), 6929; https://doi.org/10.3390/s22186929 - 13 Sep 2022
Cited by 8 | Viewed by 2563
Abstract
Conventional biometrics have been employed in high-security user-authentication systems for over 20 years. However, some of these modalities face low-security issues in common practice. Brainwave-based user authentication has emerged as a promising alternative method, as it overcomes some of these drawbacks and allows for continuous user authentication. In the present study, we address the problem of individual user variability by proposing a data-driven Electroencephalography (EEG)-based authentication method. We introduce machine learning techniques in order to reveal the optimal classification algorithm that best fits the data of each individual user, in a fast and efficient manner. A set of 15 power spectral features (delta, theta, lower alpha, higher alpha, and alpha) is extracted from three EEG channels. The results show that our approach can reliably grant or deny access to the user (mean accuracy of 95.6%), while at the same time representing a viable option for real-time applications, as the total time of the training procedure was kept under one minute.
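A data-driven search for the classifier that best fits each user's EEG features might look roughly like this; the candidate models, the cross-validation setup, and the synthetic 15-dimensional feature vectors are assumptions made for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

def best_model_for_user(X, y):
    """Try several classifiers and keep the one with the best cross-validated accuracy."""
    candidates = {
        "svm": SVC(kernel="rbf"),
        "knn": KNeighborsClassifier(n_neighbors=5),
        "rf": RandomForestClassifier(n_estimators=100),
    }
    scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in candidates.items()}
    best = max(scores, key=scores.get)
    return best, candidates[best].fit(X, y), scores[best]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((120, 15))        # 15 spectral features per trial (as in the abstract)
    y = rng.integers(0, 2, size=120)          # 1 = genuine user, 0 = impostor (stand-in labels)
    name, model, acc = best_model_for_user(X, y)
    print(name, round(acc, 3))
```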

16 pages, 5931 KiB  
Article
Acne Detection by Ensemble Neural Networks
by Hang Zhang and Tianyi Ma
Sensors 2022, 22(18), 6828; https://doi.org/10.3390/s22186828 - 09 Sep 2022
Cited by 5 | Viewed by 3373
Abstract
Acne detection, utilizing prior knowledge to diagnose acne severity, number, or position through facial images, plays a very important role in medical diagnosis and treatment for patients with skin problems. Recently, deep learning algorithms were introduced in acne detection to improve detection precision. However, it remains challenging to diagnose acne based on the facial images of patients due to the complex context and special application scenarios. Here, we provide an ensemble neural network composed of two modules: (1) a classification module that estimates the acne severity and number; (2) a localization module that computes the detection boxes. This ensemble model can precisely predict the acne severity, number, and position simultaneously, and can be an effective tool to help patients self-test and to assist doctors in diagnosis.

13 pages, 2404 KiB  
Article
Inter-Patient Congestive Heart Failure Detection Using ECG-Convolution-Vision Transformer Network
by Taotao Liu, Yujuan Si, Weiyi Yang, Jiaqi Huang, Yongheng Yu, Gengbo Zhang and Rongrong Zhou
Sensors 2022, 22(9), 3283; https://doi.org/10.3390/s22093283 - 25 Apr 2022
Cited by 11 | Viewed by 2701
Abstract
An attack of congestive heart failure (CHF) can cause symptoms such as difficulty breathing, dizziness, or fatigue, which can be life-threatening in severe cases. An electrocardiogram (ECG) is a simple and economical method for diagnosing CHF. Due to the inherent complexity of ECGs and the subtle differences in the ECG waveform, misdiagnosis happens often. At present, research on automatic CHF detection methods based on machine learning has become a research hotspot. However, the existing research focuses on an intra-patient experimental scheme and lacks performance evaluation under noise, which cannot meet application requirements. To solve these issues, we propose a novel method to identify CHF using the ECG-Convolution-Vision Transformer Network (ECVT-Net). The algorithm combines the characteristics of a Convolutional Neural Network (CNN) and a Vision Transformer, which can automatically extract high-dimensional abstract features from ECGs with simple pre-processing. In this study, the model reached an accuracy of 98.88% for the inter-patient scheme. Furthermore, we added different degrees of noise to the original ECGs to verify the model’s noise robustness. The model’s performance in these experiments proved that it can effectively identify CHF ECGs and can work under a certain level of noise.
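A minimal PyTorch sketch of the general CNN-plus-Transformer pattern (convolutional feature extraction over the ECG followed by self-attention) is shown below; the layer sizes and overall structure are illustrative assumptions, not the ECVT-Net architecture.

```python
import torch
import torch.nn as nn

class ConvTransformerECG(nn.Module):
    """1-D CNN front end + Transformer encoder + classification head."""
    def __init__(self, n_classes=2, d_model=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, d_model, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                    # x: (batch, 1, samples)
        feats = self.cnn(x)                  # (batch, d_model, tokens)
        feats = feats.transpose(1, 2)        # (batch, tokens, d_model)
        feats = self.encoder(feats)
        return self.head(feats.mean(dim=1))  # average pooling over tokens

if __name__ == "__main__":
    model = ConvTransformerECG()
    ecg = torch.randn(8, 1, 1000)            # stand-in batch of 1000-sample ECG segments
    print(model(ecg).shape)                  # torch.Size([8, 2])
```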

2021

24 pages, 5059 KiB  
Article
Vision-Based Driver’s Cognitive Load Classification Considering Eye Movement Using Machine Learning and Deep Learning
by Hamidur Rahman, Mobyen Uddin Ahmed, Shaibal Barua, Peter Funk and Shahina Begum
Sensors 2021, 21(23), 8019; https://doi.org/10.3390/s21238019 - 30 Nov 2021
Cited by 8 | Viewed by 2548
Abstract
Due to the advancement of science and technology, modern cars are highly technical, more activity occurs inside the car, and driving is faster; however, statistics show that the number of road fatalities has increased in recent years because of drivers’ unsafe behaviors. Therefore, to make the traffic environment safe, it is important to keep the driver alert and awake, both in human-driven and autonomous cars. A driver’s cognitive load is considered a good indication of alertness, but determining cognitive load is challenging, and wired sensor solutions are not well accepted in real-world driving scenarios. The recent development of non-contact approaches through image processing and decreasing hardware prices enables new solutions, and several interesting features related to the driver’s eyes are currently being explored in research. This paper presents a vision-based method to extract useful parameters from a driver’s eye movement signals, using both manual feature extraction based on domain knowledge and automatic feature extraction using deep learning architectures. Five machine learning models and three deep learning architectures are developed to classify a driver’s cognitive load. The results show that the highest classification accuracy achieved is 92%, by the support vector machine model with a linear kernel function, followed by 91% by the convolutional neural network model. This non-contact technology can be a potential contributor to advanced driver assistance systems.
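Classifying cognitive load from handcrafted eye-movement features with a linear-kernel SVM, the best-performing classical model reported above, could be sketched as follows; the particular features (pupil diameter statistics, blink count, fixation durations) and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def eye_features(pupil_diameter, blink_events, fixation_durations):
    """Handcrafted features from one driving segment (illustrative choice of features)."""
    return np.array([
        pupil_diameter.mean(), pupil_diameter.std(),
        len(blink_events),                      # blink count in the segment
        np.mean(fixation_durations), np.max(fixation_durations),
    ])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = np.array([eye_features(rng.normal(4, 0.5, 300),
                               rng.integers(0, 300, rng.integers(2, 10)),
                               rng.uniform(0.1, 1.5, 20)) for _ in range(60)])
    y = rng.integers(0, 2, size=60)             # 0 = low load, 1 = high load (stand-in labels)
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    print(cross_val_score(clf, X, y, cv=5).mean())
```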

15 pages, 1296 KiB  
Article
Neonatal Jaundice Diagnosis Using a Smartphone Camera Based on Eye, Skin, and Fused Features with Transfer Learning
by Alhanoof Althnian, Nada Almanea and Nourah Aloboud
Sensors 2021, 21(21), 7038; https://doi.org/10.3390/s21217038 - 23 Oct 2021
Cited by 13 | Viewed by 3954
Abstract
Neonatal jaundice is a common condition worldwide. Failure of timely diagnosis and treatment can lead to death or brain injury. Current diagnostic approaches include a painful and time-consuming invasive blood test and non-invasive tests using costly transcutaneous bilirubinometers. Since periodic monitoring is crucial, multiple efforts have been made to develop non-invasive diagnostic tools using a smartphone camera. However, existing works rely either on skin or eye images using statistical or traditional machine learning methods. In this paper, we adopt a deep transfer learning approach based on eye, skin, and fused images. We also trained well-known traditional machine learning models, including multi-layer perceptron (MLP), support vector machine (SVM), decision tree (DT), and random forest (RF), and compared their performance with that of the transfer learning model. We collected our dataset using a smartphone camera. Moreover, unlike most of the existing contributions, we report accuracy, precision, recall, f-score, and area under the curve (AUC) for all the experiments and analyzed their significance statistically. Our results indicate that the transfer learning model performed the best with skin images, while traditional models achieved the best performance with eyes and fused features. Further, we found that the transfer learning model with skin features performed comparably to the MLP model with eye features.
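A typical deep transfer-learning setup of this kind, an ImageNet-pretrained CNN whose final layer is replaced and fine-tuned on a small set of medical images, can be sketched with torchvision as below; the ResNet-18 backbone and the training details are assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_transfer_model(n_classes=2, freeze_backbone=True):
    """ResNet-18 pretrained on ImageNet, with a new head for jaundiced vs. normal."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, n_classes)   # new trainable head
    return model

if __name__ == "__main__":
    model = build_transfer_model()
    images = torch.randn(4, 3, 224, 224)        # stand-in batch of skin/eye photos
    logits = model(images)
    loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0, 1]))
    loss.backward()                             # only the new head receives gradients
    print(logits.shape)
```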

15 pages, 2003 KiB  
Article
Performance Evaluation of Machine Learning Frameworks for Aphasia Assessment
by Seedahmed S. Mahmoud, Akshay Kumar, Youcun Li, Yiting Tang and Qiang Fang
Sensors 2021, 21(8), 2582; https://doi.org/10.3390/s21082582 - 07 Apr 2021
Cited by 12 | Viewed by 2771
Abstract
Speech assessment is an essential part of the rehabilitation procedure for patients with aphasia (PWA). It is a comprehensive and time-consuming process that aims to discriminate between healthy individuals and aphasic patients, determine the type of aphasia syndrome, and determine the patients’ impairment severity levels (these are referred to here as aphasia assessment tasks). Hence, the automation of aphasia assessment tasks is essential. In this study, the performance of three automatic speech assessment models based on the speech dataset-type was investigated. Three types of datasets were used: healthy subjects’ dataset, aphasic patients’ dataset, and a combination of healthy and aphasic datasets. Two machine learning (ML)-based frameworks, classical machine learning (CML) and deep neural network (DNN), were considered in the design of the proposed speech assessment models. In this paper, the DNN-based framework was based on a convolutional neural network (CNN). Direct or indirect transformation of these models to achieve the aphasia assessment tasks was investigated. Comparative performance results for each of the speech assessment models showed that quadrature-based high-resolution time-frequency images with a CNN framework outperformed all the CML frameworks over the three dataset-types. The CNN-based framework reported an accuracy of 99.23 ± 0.003% with the healthy individuals’ dataset and 67.78 ± 0.047% with the aphasic patients’ dataset. Moreover, direct or transformed relationships between the proposed speech assessment models and the aphasia assessment tasks are attainable, given a suitable dataset-type, a reasonably sized dataset, and appropriate decision logic in the ML framework.
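The general pattern behind the best-performing framework, turning speech into a time-frequency image and classifying it with a CNN, might look like this; the plain log-spectrogram and the small network are illustrative assumptions, not the authors' quadrature-based high-resolution representation.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

def tf_image(speech, fs, size=64):
    """Log-magnitude spectrogram cropped/padded to a fixed square image."""
    _, _, sxx = spectrogram(speech, fs=fs, nperseg=256, noverlap=128)
    img = np.log(sxx + 1e-8)[:size, :size]
    img = np.pad(img, ((0, size - img.shape[0]), (0, size - img.shape[1])))
    return torch.tensor(img, dtype=torch.float32).unsqueeze(0)       # (1, size, size)

cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 3),   # e.g., healthy vs. aphasia types (stand-in classes)
)

if __name__ == "__main__":
    fs = 16000
    batch = torch.stack([tf_image(np.random.randn(fs * 2), fs) for _ in range(4)])
    print(cnn(batch).shape)                      # torch.Size([4, 3])
```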

14 pages, 3474 KiB  
Article
A Video-Based Technique for Heart Rate and Eye Blinks Rate Estimation: A Potential Solution for Telemonitoring and Remote Healthcare
by Vincenzo Ronca, Andrea Giorgi, Dario Rossi, Antonello Di Florio, Gianluca Di Flumeri, Pietro Aricò, Nicolina Sciaraffa, Alessia Vozzi, Luca Tamborra, Ilaria Simonetti and Gianluca Borghini
Sensors 2021, 21(5), 1607; https://doi.org/10.3390/s21051607 - 25 Feb 2021
Cited by 13 | Viewed by 2942
Abstract
Current telemedicine and remote healthcare applications foresee different interactions between the doctor and the patient relying on the use of commercial and medical wearable sensors and internet-based video conferencing platforms. Nevertheless, the existing applications necessarily require contact between the patient and the sensors for an objective evaluation of the patient’s state. The proposed study explored an innovative video-based solution for monitoring the neurophysiological parameters of potential patients and assessing their mental state. In particular, we investigated the possibility of estimating the heart rate (HR) and eye blink rate (EBR) of participants while performing laboratory tasks by means of facial video analysis. The objectives of the study were focused on: (i) assessing the effectiveness of the proposed technique in estimating the HR and EBR by comparing them with laboratory sensor-based measures, and (ii) assessing the capability of the video-based technique to discriminate between the participants’ resting state (Nominal condition) and their active state (Non-nominal condition). The results demonstrated that the HR and EBR estimated through the facial video technique and through the laboratory equipment did not differ statistically (p > 0.1), and that these neurophysiological parameters allowed discrimination between the Nominal and Non-nominal states (p < 0.02).
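The core of such contactless estimation is extracting a pulse-like trace from the face region over time and measuring its dominant periodicity; a hedged sketch is given below, with face detection omitted and a synthetic green-channel trace standing in for real video.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def heart_rate_from_trace(green_means, fps):
    """Estimate HR (bpm) from the mean green-channel value of a face ROI over time."""
    b, a = butter(3, [0.7 / (fps / 2), 3.0 / (fps / 2)], btype="band")   # ~42-180 bpm band
    pulse = filtfilt(b, a, green_means - np.mean(green_means))
    peaks, _ = find_peaks(pulse, distance=fps * 0.4)                     # at most ~150 bpm
    duration_s = len(green_means) / fps
    return 60.0 * len(peaks) / duration_s

if __name__ == "__main__":
    fps, seconds, true_bpm = 30, 20, 72
    t = np.arange(fps * seconds) / fps
    # Synthetic ROI trace: weak cardiac oscillation plus noise (stands in for real video).
    trace = 0.5 * np.sin(2 * np.pi * (true_bpm / 60) * t) + np.random.randn(t.size) * 0.2
    print(round(heart_rate_from_trace(trace, fps), 1))                   # close to 72
```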

2020

17 pages, 860 KiB  
Article
InstanceEasyTL: An Improved Transfer-Learning Method for EEG-Based Cross-Subject Fatigue Detection
by Hong Zeng, Jiaming Zhang, Wael Zakaria, Fabio Babiloni, Borghini Gianluca, Xiufeng Li and Wanzeng Kong
Sensors 2020, 20(24), 7251; https://doi.org/10.3390/s20247251 - 17 Dec 2020
Cited by 15 | Viewed by 2798
Abstract
The electroencephalogram (EEG) is an effective indicator for the detection of driver fatigue. Due to the significant differences in EEG signals across subjects, and the difficulty of collecting sufficient EEG samples for analysis during driving, detecting fatigue across subjects using EEG signals remains a challenge. EasyTL is a transfer-learning model that has demonstrated good performance in the field of image recognition but has not yet been applied in cross-subject EEG-based applications. In this paper, we propose an improved EasyTL-based classifier, InstanceEasyTL, to perform EEG-based analysis for cross-subject fatigue mental-state detection. Experimental results show that InstanceEasyTL not only requires less EEG data, but also obtains better accuracy and robustness than EasyTL as well as existing machine-learning models such as the Support Vector Machine (SVM), Transfer Component Analysis (TCA), Geodesic Flow Kernel (GFK), and Domain-Adversarial Neural Networks (DANN).
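EasyTL and InstanceEasyTL themselves are beyond a short snippet, but the cross-subject protocol they target (training on some subjects, testing on a subject never seen during training) is easy to sketch; the SVM classifier and synthetic EEG features below are placeholders, not the authors' transfer-learning method.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n_subjects, trials_per_subject, n_features = 10, 50, 30
    X = rng.standard_normal((n_subjects * trials_per_subject, n_features))   # stand-in EEG features
    y = rng.integers(0, 2, size=len(X))                                      # alert vs. fatigued (stand-in)
    groups = np.repeat(np.arange(n_subjects), trials_per_subject)            # subject ID per trial

    # Leave-one-subject-out: each fold tests on a subject unseen during training,
    # which is exactly the cross-subject setting that transfer learning tries to improve.
    clf = make_pipeline(StandardScaler(), SVC())
    scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
    print(scores.mean())
```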

25 pages, 1016 KiB  
Article
The Probability of Ischaemic Stroke Prediction with a Multi-Neural-Network Model
by Yan Liu, Bo Yin and Yanping Cong
Sensors 2020, 20(17), 4995; https://doi.org/10.3390/s20174995 - 03 Sep 2020
Cited by 10 | Viewed by 3528
Abstract
As is known, cerebral stroke has become one of the main diseases endangering people’s health; ischaemic strokes account for approximately 85% of cerebral strokes. According to research, early prediction and prevention can effectively reduce the incidence rate of the disease. However, it is difficult to predict ischaemic stroke because the data related to the disease are multi-modal. To achieve high prediction accuracy and to combine the stroke risk predictors obtained by previous researchers, a method for predicting the probability of stroke occurrence based on a multi-model fusion convolutional neural network structure is proposed. In this way, the accuracy of ischaemic stroke prediction is improved by processing multi-modal data through multiple end-to-end neural networks. In this method, feature extraction from structured data (age, gender, history of hypertension, etc.) and streaming data (heart rate, blood pressure, etc.) based on a convolutional neural network is first realized. A neural network model for feature fusion is then constructed to fuse the features of the structured and streaming data. Finally, a predictive model for the probability of stroke is obtained by training. As the experimental results show, the accuracy of ischaemic stroke prediction reached 98.53%. Such a high prediction accuracy will be helpful for preventing the occurrence of stroke.
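The fusion idea, with one branch for structured risk factors and one convolutional branch for streaming physiological signals whose features are concatenated before the final prediction, can be sketched roughly as follows; the branch sizes and input dimensions are assumptions.

```python
import torch
import torch.nn as nn

class FusionStrokeNet(nn.Module):
    """Two feature extractors whose outputs are fused for a single risk prediction."""
    def __init__(self, n_structured=8, stream_len=500):
        super().__init__()
        self.structured_branch = nn.Sequential(nn.Linear(n_structured, 32), nn.ReLU())
        self.stream_branch = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),                   # -> (batch, 16)
        )
        self.fusion = nn.Sequential(nn.Linear(32 + 16, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, structured, stream):
        fused = torch.cat([self.structured_branch(structured), self.stream_branch(stream)], dim=1)
        return torch.sigmoid(self.fusion(fused))     # probability of ischaemic stroke

if __name__ == "__main__":
    model = FusionStrokeNet()
    structured = torch.randn(4, 8)                   # age, blood pressure history, etc. (stand-ins)
    stream = torch.randn(4, 1, 500)                  # heart rate / blood pressure samples (stand-ins)
    print(model(structured, stream).shape)           # torch.Size([4, 1])
```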

24 pages, 5220 KiB  
Article
A Robust Multilevel DWT Densely Network for Cardiovascular Disease Classification
by Gong Zhang, Yujuan Si, Weiyi Yang and Di Wang
Sensors 2020, 20(17), 4777; https://doi.org/10.3390/s20174777 - 24 Aug 2020
Cited by 6 | Viewed by 2167
Abstract
Cardiovascular disease is the leading cause of death worldwide. Immediate and accurate diagnosis of cardiovascular disease is essential for saving lives. Although most previously reported works have tried to classify heartbeats accurately based on the intra-patient paradigm, they suffer from category imbalance issues, since abnormal heartbeats appear much less frequently than normal heartbeats. Furthermore, most existing methods rely on data preprocessing steps, such as noise removal and R-peak location. In this study, we present a robust classification system using a multilevel discrete wavelet transform densely network (MDD-Net) for the accurate detection of normal, coronary artery disease (CAD), myocardial infarction (MI), and congestive heart failure (CHF) recordings. First, the raw ECG signals from different databases are divided into same-size segments using an original adaptive sample frequency segmentation algorithm (ASFS). Then, fusion features are extracted from the MDD-Net to achieve great classification performance. We evaluated the proposed method considering the intra-patient and inter-patient paradigms. The average accuracy, positive predictive value, sensitivity, and specificity were 99.74%, 99.09%, 98.67%, and 99.83%, respectively, under the intra-patient paradigm, and 96.92%, 92.17%, 89.18%, and 97.77%, respectively, under the inter-patient paradigm. Moreover, the experimental results demonstrate that our model is robust to noise and class imbalance issues.
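The multilevel DWT front end, which decomposes each ECG segment into wavelet coefficients at several scales before any learning, can be illustrated with PyWavelets; the db4 wavelet, four decomposition levels, and the simple downstream classifier are assumptions rather than the MDD-Net design.

```python
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression

def multilevel_dwt_features(segment, wavelet="db4", level=4):
    """Simple statistics of the approximation and detail coefficients at each DWT level."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    return np.concatenate([[c.mean(), c.std(), np.abs(c).max()] for c in coeffs])

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    fs = 360                                            # assumed ECG sampling rate
    X = np.array([multilevel_dwt_features(rng.standard_normal(fs * 3)) for _ in range(80)])
    y = rng.integers(0, 4, size=80)                     # normal / CAD / MI / CHF (stand-in labels)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(X.shape, clf.score(X, y))
```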
