Selected Papers from the 38th Annual Conference of the Spanish Society of Biomedical Engineering

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Entropy and Biology".

Deadline for manuscript submissions: closed (30 April 2021) | Viewed by 18004

Special Issue Editors


Dr. Raúl Alcaraz
Guest Editor

Dr. Raimon Jané
Guest Editor
Biomedical Signal Processing and Interpretation Group, Institute for Bioengineering of Catalonia (IBEC), Universitat Politècnica de Catalunya·BarcelonaTech (UPC), 08019 Barcelona, Spain
Interests: multimodal and multiscale biomedical signal processing in cardiorespiratory and neurological diseases and sleep disorders

Dr. Elisabete Aramendi
Guest Editor
Department of Communications Engineering, University of the Basque Country (UPV/EHU), 48013 Bilbao, Spain
Interests: biomedical signal processing; automated algorithms during resuscitation; management of large resuscitation datasets

Prof. Dr. Jesús Poza
Guest Editor
Biomedical Engineering Group, University of Valladolid, C/Plaza de Santa Cruz, 8, 47002 Valladolid, Spain
Interests: Alzheimer’s disease; electroencephalography (EEG); magnetoencephalography (MEG); biomedical engineering; signal processing

Special Issue Information

Dear Colleagues,

Every year, the Spanish Society of Biomedical Engineering (SEIB) organizes a conference to bring together researchers, students, and professionals working in biomedical engineering. The objective is to close the gap between engineering and medicine and, thus, to advance the diagnosis, monitoring, and therapy of a variety of diseases. This year, the SEIB Annual Conference (CASEIB) will be held virtually on 25–27 November and, as in the previous two years, will be supported by Entropy, an international journal devoted to the development and application of entropy and information-theoretic concepts in a wide variety of fields (for more details, see https://www.mdpi.com/journal/entropy/about). Accordingly, this Special Issue will collect the most relevant papers dealing with entropy- and information theory-based applications presented at the conference. More details can be found on the website: http://caseib.es/2020/revista-entropy/

We look forward to receiving your support for this initiative.

Dr. Raúl Alcaraz
Dr. Raimon Jané
Dr. Elisabete Aramendi
Prof. Dr. Jesús Poza
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (6 papers)


Research

18 pages, 2964 KiB  
Article
Choosing Strategies to Deal with Artifactual EEG Data in Children with Cognitive Impairment
by Ana Tost, Carolina Migliorelli, Alejandro Bachiller, Inés Medina-Rivera, Sergio Romero, Ángeles García-Cazorla and Miguel A. Mañanas
Entropy 2021, 23(8), 1030; https://doi.org/10.3390/e23081030 - 11 Aug 2021
Cited by 4 | Viewed by 1730
Abstract
Rett syndrome is a disease that involves acute cognitive impairment and, consequently, a complex and varied symptomatology. This study evaluates the EEG signals of twenty-nine patients and classifies them according to the level of movement artifact. The main goal is to achieve an artifact rejection strategy that performs well in all signals, regardless of the artifact level. Two different methods have been studied: one based on the data distribution and the other based on an energy function, with entropy as its main component. The method based on the data distribution shows poor performance with signals containing high-amplitude outliers. On the contrary, the method based on the energy function is more robust to outliers: as it does not depend on the data distribution, it is not affected by artifactual events. A double rejection strategy was chosen, applied first to a motion signal (accelerometer or EEG low-pass filtered between 1 and 10 Hz) and then to the EEG signal. The results showed higher performance when the two artifact rejection methods were combined: the energy-based method to isolate motion artifacts, and the data-distribution-based method to eliminate the remaining lower-amplitude artifacts. In conclusion, a new method that proves robust for all types of signals was designed.
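As a rough illustration of the two-stage rejection strategy described in this abstract, the sketch below flags artifactual windows with a simple entropy-based score, first on a 1–10 Hz filtered "motion" signal and then on the EEG itself. The window length, percentile threshold, and entropy score are assumptions for illustration only; the paper's actual energy function and data-distribution criterion are not reproduced here.

```python
# Illustrative two-stage artifact rejection; window length, threshold, and the
# entropy-based score are assumptions, not the authors' exact energy function.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import entropy

def window_entropy(x, n_bins=32):
    """Shannon entropy of the amplitude histogram of one window."""
    hist, _ = np.histogram(x, bins=n_bins, density=True)
    return entropy(hist[hist > 0])

def flag_artifact_windows(signal, fs, win_s=1.0, thr_pct=95):
    """Boolean mask marking windows whose entropy score exceeds a percentile threshold."""
    win = int(win_s * fs)
    n_win = len(signal) // win
    scores = np.array([window_entropy(signal[i * win:(i + 1) * win]) for i in range(n_win)])
    return scores > np.percentile(scores, thr_pct)

def two_stage_rejection(eeg, fs):
    """Stage 1: a 1-10 Hz filtered version of the EEG as a motion proxy;
    stage 2: the EEG itself. A window is rejected if either stage flags it."""
    b, a = butter(4, [1, 10], btype="bandpass", fs=fs)
    motion = filtfilt(b, a, eeg)
    return flag_artifact_windows(motion, fs) | flag_artifact_windows(eeg, fs)
```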

16 pages, 3314 KiB  
Article
Supervised Domain Adaptation for Automated Semantic Segmentation of the Atrial Cavity
by Marta Saiz-Vivó, Adrián Colomer, Carles Fonfría, Luis Martí-Bonmatí and Valery Naranjo
Entropy 2021, 23(7), 898; https://doi.org/10.3390/e23070898 - 14 Jul 2021
Cited by 3 | Viewed by 2448
Abstract
Atrial fibrillation (AF) is the most common cardiac arrhythmia. At present, cardiac ablation is the main treatment procedure for AF. To guide and plan this procedure, it is essential for clinicians to obtain patient-specific 3D geometrical models of the atria. For this, there is an interest in automatic image segmentation algorithms, such as deep learning (DL) methods, as opposed to manual segmentation, an error-prone and time-consuming approach. However, to optimize DL algorithms, many annotated examples are required, increasing acquisition costs. The aim of this work is to develop automatic, high-performance computational models for left and right atrium (LA and RA) segmentation from a few labelled volumetric MRI images with a 3D Dual U-Net algorithm. For this, a supervised domain adaptation (SDA) method is introduced to transfer knowledge from late gadolinium enhanced (LGE) MRI volumetric training samples (80 annotated LA samples) to a network trained with balanced steady-state free precession (bSSFP) MR images with a limited number of annotations (19 annotated RA and LA samples). The resulting knowledge-transferred SDA model outperformed the same network trained from scratch in both the RA (Dice = 0.9160) and LA (Dice = 0.8813) segmentation tasks.
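A minimal sketch of the supervised domain adaptation idea outlined above (pretrain on the larger LGE-MRI set, then fine-tune the same weights on the small bSSFP set) is shown below. The Tiny3DSeg network, the checkpoint path, and the data loader are hypothetical placeholders; the paper's 3D Dual U-Net is not reproduced here.

```python
# Sketch of SDA as weight transfer plus fine-tuning; Tiny3DSeg is a hypothetical
# stand-in for the paper's 3D Dual U-Net, and the checkpoint/loader names are placeholders.
import torch
import torch.nn as nn

class Tiny3DSeg(nn.Module):
    """Toy 3D segmentation network with a background/atrium output."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, n_classes, 1),
        )
    def forward(self, x):
        return self.net(x)

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between binary masks, the metric reported in the abstract."""
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def fine_tune(model, target_loader, epochs=10, lr=1e-4):
    """Fine-tune a source-domain (LGE) model on the small target-domain (bSSFP) set."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for vol, mask in target_loader:   # vol: (B,1,D,H,W), mask: (B,D,H,W) long
            opt.zero_grad()
            loss = loss_fn(model(vol), mask)
            loss.backward()
            opt.step()
    return model

model = Tiny3DSeg()
# model.load_state_dict(torch.load("lge_pretrained.pt"))  # hypothetical LGE checkpoint
# fine_tune(model, bssfp_loader)                          # hypothetical bSSFP DataLoader
```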

16 pages, 1751 KiB  
Article
A Machine Learning Model for the Prognosis of Pulseless Electrical Activity during Out-of-Hospital Cardiac Arrest
by Jon Urteaga, Elisabete Aramendi, Andoni Elola, Unai Irusta and Ahamed Idris
Entropy 2021, 23(7), 847; https://doi.org/10.3390/e23070847 - 30 Jun 2021
Cited by 6 | Viewed by 4149
Abstract
Pulseless electrical activity (PEA) is characterized by the disassociation of the mechanical and electrical activity of the heart and appears as the initial rhythm in 20–30% of out-of-hospital cardiac arrest (OHCA) cases. Predicting whether a patient in PEA will convert to return of spontaneous circulation (ROSC) is important because different therapeutic strategies are needed depending on the type of PEA. The aim of this study was to develop a machine learning model to differentiate PEA with unfavorable (unPEA) and favorable (faPEA) evolution to ROSC. An OHCA dataset of 1921 five-second PEA signal segments from defibrillator files was used: 703 faPEA segments from 107 patients with ROSC and 1218 unPEA segments from 153 patients without ROSC. The solution consisted of a signal-processing stage for the ECG and the thoracic impedance (TI) and the extraction of the TI circulation component (ICC), which is associated with ventricular wall movement. A set of 17 features was then obtained from the ECG and ICC signals, and a random forest classifier was used to differentiate faPEA from unPEA. All models were trained and tested using patient-wise and stratified 10-fold cross-validation partitions. The best model showed a median (interquartile range) area under the curve (AUC) of 85.7(9.8)% and a balanced accuracy of 78.8(9.8)%, improving on previously available solutions by more than four points in AUC and three points in balanced accuracy. It was demonstrated that the evolution of PEA can be predicted using the ECG and TI signals, opening the possibility of targeted PEA treatment in OHCA.
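The sketch below illustrates the evaluation scheme described above (a random forest with patient-wise, stratified 10-fold cross-validation) using synthetic stand-ins for the 17 ECG/ICC features; the feature extraction itself, the labels, and the number of segments per patient are assumptions.

```python
# Patient-wise, stratified 10-fold evaluation of a random forest on precomputed
# features; X, y, and patient_id are synthetic placeholders for the study's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedGroupKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1921, 17))            # 17 features per 5 s PEA segment (toy data)
y = rng.integers(0, 2, size=1921)          # 1 = favorable PEA (ROSC), 0 = unfavorable
patient_id = rng.integers(0, 260, 1921)    # segments grouped by patient

aucs = []
cv = StratifiedGroupKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y, groups=patient_id):
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"median AUC: {100 * np.median(aucs):.1f}%")
```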

13 pages, 2701 KiB  
Article
Real-Time Tool Detection for Workflow Identification in Open Cranial Vault Remodeling
by Alicia Pose Díez de la Lastra, Lucía García-Duarte Sáenz, David García-Mato, Luis Hernández-Álvarez, Santiago Ochandiano and Javier Pascau
Entropy 2021, 23(7), 817; https://doi.org/10.3390/e23070817 - 26 Jun 2021
Cited by 2 | Viewed by 2581
Abstract
Deep learning is a recent technology that has shown excellent capabilities for recognition and identification tasks. This study applies these techniques to open cranial vault remodeling surgeries performed to correct craniosynostosis. The objective was to automatically recognize surgical tools in real time and estimate the surgical phase based on those predictions. For this purpose, we implemented, trained, and tested three algorithms based on previously proposed convolutional neural network architectures (VGG16, MobileNetV2, and InceptionV3) and one new architecture with fewer parameters (CranioNet). A novel 3D Slicer module was specifically developed to implement these networks and recognize surgical tools in real time via video streaming. The training and test data were acquired during a surgical simulation using a 3D-printed, patient-based realistic phantom of an infant’s head. The results showed that CranioNet presents the lowest accuracy for tool recognition (93.4%), while the highest accuracy is achieved by the MobileNetV2 model (99.6%), followed by VGG16 and InceptionV3 (98.8% and 97.2%, respectively). Regarding phase detection, InceptionV3 and VGG16 obtained the best results (94.5% and 94.4%), whereas MobileNetV2 and CranioNet presented lower values (91.1% and 89.8%). Our results prove the feasibility of applying deep learning architectures for real-time tool detection and phase estimation in craniosynostosis surgeries.
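As an illustration of the transfer-learning setup typical of the networks compared above (here for the MobileNetV2 case), the sketch below replaces the ImageNet classification head with a tool-class head and fine-tunes the network on video frames. The number of tool classes, learning rate, and training loop are placeholders, not the authors' exact configuration.

```python
# Sketch of fine-tuning MobileNetV2 for frame-wise surgical tool classification;
# NUM_TOOLS and the training settings are hypothetical, not the paper's setup.
import torch
import torch.nn as nn
from torchvision import models

NUM_TOOLS = 5  # hypothetical number of surgical tool classes

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.last_channel, NUM_TOOLS)  # replace ImageNet head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(frames, labels):
    """One optimization step on a batch of video frames (B, 3, 224, 224)."""
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time, the per-frame tool predictions could then be aggregated over time to estimate the surgical phase, mirroring the abstract's approach of deriving phase from tool recognition.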

23 pages, 6369 KiB  
Article
Assessment of Classification Models and Relevant Features on Nonalcoholic Steatohepatitis Using Random Forest
by Rafael García-Carretero, Roberto Holgado-Cuadrado and Óscar Barquero-Pérez
Entropy 2021, 23(6), 763; https://doi.org/10.3390/e23060763 - 17 Jun 2021
Cited by 12 | Viewed by 2818
Abstract
Nonalcoholic fatty liver disease (NAFLD) is the hepatic manifestation of metabolic syndrome and is the most common cause of chronic liver disease in developed countries. Certain conditions, including mild inflammation biomarkers, dyslipidemia, and insulin resistance, can trigger a progression to nonalcoholic steatohepatitis (NASH), a condition characterized by inflammation and liver cell damage. We demonstrate the usefulness of machine learning with a case study that analyzes the most important features in random forest (RF) models for predicting patients at risk of developing NASH. We collected data from patients who attended the Cardiovascular Risk Unit of Mostoles University Hospital (Madrid, Spain) from 2005 to 2021. We reviewed electronic health records to assess the presence of NASH, which was used as the outcome. We chose RF as the algorithm to develop six models using different pre-processing strategies. The performance metrics were evaluated to choose an optimized model. Finally, several interpretability techniques, such as feature importance, the contribution of each feature to predictions, and partial dependence plots, were used to understand and explain the model and its predictions. In total, 1525 patients met the inclusion criteria. The mean age was 57.3 years, and 507 patients had NASH (prevalence of 33.2%). Filter methods (the chi-square and Mann–Whitney–Wilcoxon tests) did not produce additional insight in terms of interactions, contributions, or relationships among variables and their outcomes. The random forest model correctly classified patients with NASH with an accuracy of 0.87 in the best model and 0.79 in the worst one. Four features were the most relevant: insulin resistance, ferritin, serum levels of insulin, and triglycerides. The contribution of each feature was assessed via partial dependence plots. Random forest-based modeling demonstrated that machine learning can be used to improve interpretability, produce an understanding of the modeled behavior, and show how much certain features contribute to predictions.
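The snippet below sketches the interpretability workflow mentioned above (impurity-based feature importance and partial dependence plots around a random forest) using synthetic data and the four features highlighted in the abstract; it is not the study's actual model or dataset, and the plot call assumes matplotlib is installed.

```python
# Random forest interpretability sketch; feature names are taken from the abstract,
# while the data and labels are synthetic placeholders for the clinical records.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
features = ["insulin_resistance", "ferritin", "insulin", "triglycerides"]
X = pd.DataFrame(rng.normal(size=(1525, 4)), columns=features)
y = rng.integers(0, 2, size=1525)  # 1 = NASH present (synthetic labels)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Global, impurity-based feature importance.
for name, imp in sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")

# Partial dependence of the predicted NASH probability on each feature.
PartialDependenceDisplay.from_estimator(rf, X, features=features)
```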

15 pages, 2262 KiB  
Article
Foveal Pit Morphology Characterization: A Quantitative Analysis of the Key Methodological Steps
by David Romero-Bascones, Maitane Barrenechea, Ane Murueta-Goyena, Marta Galdós, Juan Carlos Gómez-Esteban, Iñigo Gabilondo and Unai Ayala
Entropy 2021, 23(6), 699; https://doi.org/10.3390/e23060699 - 01 Jun 2021
Cited by 3 | Viewed by 3006
Abstract
Disentangling the cellular anatomy that gives rise to human visual perception is one of the main challenges of ophthalmology. Of particular interest is the foveal pit, a concave depression located at the center of the retina that captures light from the gaze center. In recent years, there has been growing interest in studying the morphology of the foveal pit by extracting geometrical features from optical coherence tomography (OCT) images. Despite this, research has devoted little attention to comparing existing approaches for two key methodological steps: the location of the foveal center and the mathematical modelling of the foveal pit. Building upon a dataset of 185 healthy subjects imaged twice, the present paper first studies the image alignment accuracy of four different foveal center location methods. Secondly, state-of-the-art foveal pit mathematical models are compared in terms of fitting error, repeatability, and bias. The results indicate the importance of using a robust foveal center location method to align images. Moreover, we show that foveal pit models can improve the agreement between different acquisition protocols. Nevertheless, they can also introduce important biases in the parameter estimates that should be considered.
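As a toy example of the model-fitting step discussed above, the sketch below fits a single-Gaussian pit profile to a synthetic radial thickness curve and reports the RMSE as a fitting error; the state-of-the-art models actually compared in the paper are more elaborate, and all parameter values here are illustrative assumptions.

```python
# Toy fit of a single-Gaussian foveal pit model to a synthetic radial thickness
# profile; model form, parameter values, and noise level are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_pit(r, rim_thickness, pit_depth, pit_width):
    """Retinal thickness (um) as a function of radial distance r (mm) from the foveal center."""
    return rim_thickness - pit_depth * np.exp(-(r / pit_width) ** 2)

# Synthetic thickness profile over a 0-2 mm radial range.
r = np.linspace(0, 2, 100)
rng = np.random.default_rng(0)
thickness = gaussian_pit(r, 320, 120, 0.5) + rng.normal(0, 3, r.size)

params, _ = curve_fit(gaussian_pit, r, thickness, p0=[300, 100, 0.4])
rmse = np.sqrt(np.mean((gaussian_pit(r, *params) - thickness) ** 2))  # one possible fitting-error metric
print("fitted [rim, depth, width]:", params, "RMSE:", rmse)
```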
