
AI for Biomedical Sensing and Imaging

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (31 May 2023) | Viewed by 23823

Special Issue Editors


Guest Editor
Department of Data Science and Knowledge Engineering, Maastricht University, 6229 Maastricht, The Netherlands
Interests: machine learning; deep learning; (bio)signal processing and analysis; medical imaging; electroencephalography

Guest Editor
HUman Robotics Group, Universitat d'Alacant, Alicante, Spain
Interests: neurorehabilitation; myoelectric control or brain control; human–robot interaction

Special Issue Information

Dear Colleagues,

Current advances in artificial intelligence techniques are having an important impact on the development of new approaches to improving biomedicine. AI (particularly, but not limited to, deep learning) has boosted numerous applications in the field of biomedicine, facilitating the detection, analysis, and treatment of multiple diseases and disorders.

In that sense, biosignals are a core source of information. Using, for example, measurements of the electrical activity of the brain or muscles, researchers have proposed solutions to problems such as mental disorders and the rehabilitation of motor impairments. Similarly, current developments in deep learning have drawn great interest in the medical imaging field, where researchers have applied deep learning techniques to a plethora of applications, including 2D/3D/4D segmentation, the diagnosis of tumours/cancer, and the detection and classification of diseases, among others.

This Special Issue addresses innovative developments related to the application of artificial intelligence in the field of biomedicine, particularly in (1) biosignal processing and analysis and (2) medical imaging. It welcomes submissions of original research articles, case studies, and critical reviews on a wide range of topics including, but not limited to:

  • Biomedical applications using biosignals (EEG, EMG, etc.);
  • AI-boosted assistive technologies using biosignals (EEG, EMG, etc.);
  • Kinematics and motion analysis using biosignals and AI;
  • Artificial intelligence solutions for the analysis of medical images;
  • AI-based detection and classification of diseases;
  • Image segmentation, reconstruction/enhancement, and classification using deep learning;
  • Artificial Intelligence in tumour/cancer imaging diagnoses;
  • AI applications for improving cancer treatment.

Dr. Enrique Hortal
Dr. Andres Ubeda
Prof. Dr. Miguel Cazorla
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (9 papers)


Research

14 pages, 12696 KiB  
Communication
Explainable Automated TI-RADS Evaluation of Thyroid Nodules
by Alisa Kunapinun, Dittapong Songsaeng, Sittaya Buathong, Matthew N. Dailey, Chadaporn Keatmanee and Mongkol Ekpanyapong
Sensors 2023, 23(16), 7289; https://doi.org/10.3390/s23167289 - 21 Aug 2023
Viewed by 5131
Abstract
A thyroid nodule, a common abnormal growth within the thyroid gland, is often identified through ultrasound imaging of the neck. These growths may be solid or fluid-filled, and their treatment is influenced by factors such as size and location. The Thyroid Imaging Reporting and Data System (TI-RADS) is a classification method that categorizes thyroid nodules into risk levels based on features such as size, echogenicity, margin, shape, and calcification. It guides clinicians in deciding whether a biopsy or other further evaluation is needed. Machine learning (ML) can complement TI-RADS classification, thereby improving the detection of malignant tumors. When combined with expert rules (TI-RADS) and explanations, ML models may uncover elements that TI-RADS misses, especially when TI-RADS training data are scarce. In this paper, we present an automated system for classifying thyroid nodules according to TI-RADS and assessing malignancy effectively. We use ResNet-101 and DenseNet-201 models to classify thyroid nodules according to TI-RADS and malignancy. By analyzing the models’ last layer using the Grad-CAM algorithm, we demonstrate that these models can identify risk areas and detect nodule features relevant to the TI-RADS score. By integrating Grad-CAM results with feature probability calculations, we provide a precise heat map, visualizing specific features within the nodule and potentially assisting doctors in their assessments. Our experiments show that the utilization of ResNet-101 and DenseNet-201 models, in conjunction with Grad-CAM visualization analysis, improves TI-RADS classification accuracy by up to 10%. This enhancement, achieved through iterative analysis and re-training, underscores the potential of machine learning in advancing thyroid nodule diagnosis, offering a promising direction for further exploration and clinical application. Full article
(This article belongs to the Special Issue AI for Biomedical Sensing and Imaging)
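The Grad-CAM step described in the abstract reduces to a weighted combination of the last convolutional layer's activation maps. The paper's full pipeline (ResNet-101/DenseNet-201, feature probability integration) is not reproduced here; this is only a minimal sketch of the standard Grad-CAM computation on toy arrays:

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Standard Grad-CAM combination step for one image.

    activations, gradients: shape (channels, H, W), taken from the last
    convolutional layer and the gradient of the class score w.r.t. it.
    """
    # Channel weights alpha_k: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum of activation maps; ReLU keeps positive evidence only.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:                 # normalize to [0, 1] for overlaying
        cam /= cam.max()
    return cam

# Toy tensors standing in for real network outputs: 4 channels of 8x8 maps.
rng = np.random.default_rng(0)
acts, grads = rng.random((4, 8, 8)), rng.random((4, 8, 8))
heatmap = grad_cam(acts, grads)
```

The resulting map can be upsampled to the ultrasound image size and overlaid as the heat map the authors describe.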

12 pages, 2823 KiB  
Article
Thermal Time Constant CNN-Based Spectrometry for Biomedical Applications
by Maria Strąkowska and Michał Strzelecki
Sensors 2023, 23(15), 6658; https://doi.org/10.3390/s23156658 - 25 Jul 2023
Cited by 1 | Viewed by 611
Abstract
This paper presents a novel method based on a convolutional neural network to recover thermal time constants from a temperature–time curve after thermal excitation. The thermal time constants are then used to detect the pathological states of the skin. The thermal system is modeled as a Foster Network consisting of R-C thermal elements. Each component is represented by a time constant and an amplitude that can be retrieved using the deep learning system. The presented method was verified on artificially generated training data and then tested on real, measured thermographic signals from a patient suffering from psoriasis. The results show proper estimation both in time constants and in temperature evaluation over time. The error of the recovered time constants is below 1% for noiseless input data, and it does not exceed 5% for noisy signals. Full article
(This article belongs to the Special Issue AI for Biomedical Sensing and Imaging)
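The Foster network underlying this method models the skin's thermal step response as a sum of exponential terms, one amplitude/time-constant pair per R-C stage. A minimal sketch of how such synthetic training curves can be generated (the amplitudes and time constants below are arbitrary illustrative values, not fitted to skin):

```python
import numpy as np

def foster_response(t, amplitudes, taus):
    """Step-response temperature rise of a Foster R-C thermal network:
    T(t) = sum_i a_i * (1 - exp(-t / tau_i))."""
    t = np.asarray(t, dtype=float)[:, None]
    a = np.asarray(amplitudes, dtype=float)[None, :]
    tau = np.asarray(taus, dtype=float)[None, :]
    return (a * (1.0 - np.exp(-t / tau))).sum(axis=1)

# Synthetic training curve: two thermal stages with well-separated constants.
t = np.linspace(0.0, 10.0, 200)
curve = foster_response(t, amplitudes=[1.5, 0.5], taus=[0.8, 4.0])
```

The network's task is then the inverse mapping: from a (possibly noisy) `curve` back to the amplitudes and time constants.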

16 pages, 2968 KiB  
Article
Influence of the Tikhonov Regularization Parameter on the Accuracy of the Inverse Problem in Electrocardiography
by Tiantian Wang, Joël Karel, Pietro Bonizzi and Ralf L. M. Peeters
Sensors 2023, 23(4), 1841; https://doi.org/10.3390/s23041841 - 07 Feb 2023
Cited by 2 | Viewed by 1707
Abstract
The electrocardiogram (ECG) is the standard method in clinical practice to non-invasively analyze the electrical activity of the heart, from electrodes placed on the body’s surface. The ECG can provide a cardiologist with relevant information to assess the condition of the heart and the possible presence of cardiac pathology. Nonetheless, the global view of the heart’s electrical activity given by the ECG cannot provide fully detailed and localized information about abnormal electrical propagation patterns and corresponding substrates on the surface of the heart. Electrocardiographic imaging, also known as the inverse problem in electrocardiography, tries to overcome these limitations by non-invasively reconstructing the heart surface potentials, starting from the corresponding body surface potentials, and the geometry of the torso and the heart. This problem is ill-posed, and regularization techniques are needed to achieve a stable and accurate solution. The standard approach is to use zero-order Tikhonov regularization and the L-curve approach to choose the optimal value for the regularization parameter. However, different methods have been proposed for computing the optimal value of the regularization parameter. Moreover, regardless of the estimation method used, this may still lead to over-regularization or under-regularization. In order to gain a better understanding of the effects of the choice of regularization parameter value, in this study, we first focused on the regularization parameter itself, and investigated its influence on the accuracy of the reconstruction of heart surface potentials, by assessing the reconstruction accuracy with high-precision simultaneous heart and torso recordings from four dogs. For this, we analyzed a sufficiently large range of parameter values. Secondly, we evaluated the performance of five different methods for the estimation of the regularization parameter, also in view of the results of the first analysis. Thirdly, we investigated the effect of using a fixed value of the regularization parameter across all reconstructed beats. Accuracy was measured in terms of the quality of reconstruction of the heart surface potentials and estimation of the activation and recovery times, when compared with ground truth recordings from the experimental dog data. Results show that values of the regularization parameter in the range (0.01–0.03) provide the best accuracy, and that the three best-performing estimation methods (L-Curve, Zero-Crossing, and CRESO) give values in this range. Moreover, a fixed value of the regularization parameter could achieve very similar performance to the beat-specific parameter values calculated by the different estimation methods. These findings are relevant as they suggest that regularization parameter estimation methods may provide the accurate reconstruction of heart surface potentials only for specific ranges of regularization parameter values, and that using a fixed value of the regularization parameter may represent a valid alternative, especially when computational efficiency or consistency across time is required. Full article
(This article belongs to the Special Issue AI for Biomedical Sensing and Imaging)
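Zero-order Tikhonov regularization, the standard approach discussed above, has the closed-form solution x = (AᵀA + λ²I)⁻¹Aᵀb. A toy sketch using a deliberately ill-conditioned diagonal matrix as a stand-in for the real torso-heart transfer matrix (the study uses measured geometries and recordings, and the λ values here are illustrative only):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Zero-order Tikhonov solution x = (A^T A + lam^2 I)^(-1) A^T b,
    the regularized least-squares estimate discussed above."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Deliberately ill-conditioned diagonal "transfer matrix": singular values
# span eight decades, mimicking the ill-posedness of the inverse problem.
rng = np.random.default_rng(1)
A = np.diag(np.logspace(0, -8, 20))
x_true = rng.standard_normal(20)
b = A @ x_true + 1e-3 * rng.standard_normal(20)   # noisy body-surface data

x_noreg = tikhonov_solve(A, b, 1e-8)   # under-regularized: noise blows up
x_reg = tikhonov_solve(A, b, 1e-2)     # moderate lambda: stable estimate
```

Sweeping λ over such a range and plotting the residual norm against the solution norm gives the L-curve from which the corner (optimal λ) is picked.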

12 pages, 5034 KiB  
Article
CNN–RNN Network Integration for the Diagnosis of COVID-19 Using Chest X-ray and CT Images
by Isoon Kanjanasurat, Kasi Tenghongsakul, Boonchana Purahong and Attasit Lasakul
Sensors 2023, 23(3), 1356; https://doi.org/10.3390/s23031356 - 25 Jan 2023
Cited by 12 | Viewed by 2433
Abstract
The 2019 coronavirus disease (COVID-19) has rapidly spread across the globe. It is crucial to identify positive cases as rapidly as possible to provide appropriate treatment for patients and prevent the pandemic from spreading further. Both chest X-ray and computed tomography (CT) images are capable of accurately diagnosing COVID-19. To distinguish lung illnesses (i.e., COVID-19 and pneumonia) from normal cases using chest X-ray and CT images, we combined convolutional neural network (CNN) and recurrent neural network (RNN) models by replacing the fully connected layers of the CNN with a version of an RNN. In this framework, the attributes of CNNs were utilized to extract features and those of RNNs to calculate dependencies and classification based on extracted features. The CNN models VGG19, ResNet152V2, and DenseNet121 were combined with the long short-term memory (LSTM) and gated recurrent unit (GRU) RNN models, which are convenient to develop because these networks are all available as features on many platforms. The proposed method was evaluated using a large dataset totaling 16,210 X-ray and CT images (5252 COVID-19 images, 6154 pneumonia images, and 4804 normal images) taken from several databases, with various image sizes, brightness levels, and viewing angles. Image quality was enhanced via normalization, gamma correction, and contrast-limited adaptive histogram equalization. The ResNet152V2 with GRU model achieved the best results, with an accuracy of 93.37%, an F1 score of 93.54%, a precision of 93.73%, and a recall of 93.47%. The experimental results show that the proposed method is highly effective in distinguishing lung diseases. Furthermore, both CT and X-ray images can be used as input for classification, allowing for the rapid and easy detection of COVID-19. Full article
(This article belongs to the Special Issue AI for Biomedical Sensing and Imaging)
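Two of the image-enhancement steps mentioned in the abstract, normalization and gamma correction, can be sketched as follows. CLAHE, the third step, would typically come from an image library (e.g. OpenCV's `cv2.createCLAHE`) and is omitted; the gamma value below is an illustrative assumption, not the paper's setting:

```python
import numpy as np

def preprocess(img: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """Min-max normalize a grayscale chest image to [0, 1], then apply
    gamma correction (gamma < 1 brightens dark lung regions)."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    norm = (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    return norm ** gamma

# Tiny 2x2 stand-in for an 8-bit chest X-ray.
x = np.array([[0, 64], [128, 255]], dtype=np.uint8)
y = preprocess(x)
```

Such steps bring images of different sizes and brightness levels onto a common intensity scale before they are fed to the CNN backbone.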

20 pages, 14769 KiB  
Article
Frequency-Domain-Based Structure Losses for CycleGAN-Based Cone-Beam Computed Tomography Translation
by Suraj Pai, Ibrahim Hadzic, Chinmay Rao, Ivan Zhovannik, Andre Dekker, Alberto Traverso, Stylianos Asteriadis and Enrique Hortal
Sensors 2023, 23(3), 1089; https://doi.org/10.3390/s23031089 - 17 Jan 2023
Viewed by 1619
Abstract
Research exploring CycleGAN-based synthetic image generation has recently accelerated in the medical community due to its ability to leverage unpaired images effectively. However, a commonly established drawback of the CycleGAN, the introduction of artifacts in generated images, makes it unreliable for medical imaging use cases. In an attempt to address this, we explore the effect of structure losses on the CycleGAN and propose a generalized frequency-based loss that aims at preserving the content in the frequency domain. We apply this loss to the use-case of cone-beam computed tomography (CBCT) translation to computed tomography (CT)-like quality. Synthetic CT (sCT) images generated from our methods are compared against baseline CycleGAN along with other existing structure losses proposed in the literature. Our methods (MAE: 85.5, MSE: 20433, NMSE: 0.026, PSNR: 30.02, SSIM: 0.935) quantitatively and qualitatively improve over the baseline CycleGAN (MAE: 88.8, MSE: 24244, NMSE: 0.03, PSNR: 29.37, SSIM: 0.935) across all investigated metrics and are more robust than existing methods. Furthermore, no observable artifacts or loss in image quality were observed. Finally, we demonstrated that sCTs generated using our methods have superior performance compared to the original CBCT images on selected downstream tasks. Full article
(This article belongs to the Special Issue AI for Biomedical Sensing and Imaging)
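A frequency-based content loss of the kind proposed above can be sketched as an L1 distance between magnitude spectra. The authors' exact formulation is not reproduced here, so this is only an illustrative stand-in:

```python
import numpy as np

def frequency_l1_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """L1 distance between the magnitude spectra of two images: differences
    in frequency content are penalized regardless of phase."""
    f_pred = np.abs(np.fft.fft2(pred))
    f_target = np.abs(np.fft.fft2(target))
    return float(np.mean(np.abs(f_pred - f_target)))

a = np.ones((8, 8))                       # toy stand-in for a CT slice
identical = frequency_l1_loss(a, a)       # 0.0 for identical images
shifted = frequency_l1_loss(a, a + 1.0)   # only the DC bin differs
```

In a CycleGAN this term would be added to the usual adversarial and cycle-consistency losses to discourage the generator from altering the image's frequency content.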

15 pages, 4232 KiB  
Article
Wearable Epileptic Seizure Prediction System Based on Machine Learning Techniques Using ECG, PPG and EEG Signals
by David Zambrana-Vinaroz, Jose Maria Vicente-Samper, Juliana Manrique-Cordoba and Jose Maria Sabater-Navarro
Sensors 2022, 22(23), 9372; https://doi.org/10.3390/s22239372 - 01 Dec 2022
Cited by 12 | Viewed by 4641
Abstract
Epileptic seizures have a great impact on the quality of life of people who suffer from them and further limit their independence. For this reason, a device able to monitor a patient’s health status and warn them of a possible epileptic seizure would improve their quality of life. With this aim, this article proposes the first seizure-predictive model based on ear EEG, ECG and PPG signals obtained by means of a device that can be used in both static and outpatient settings. This device has been tested with epileptic people in a clinical environment. By processing these data and using supervised machine learning techniques, different predictive models capable of classifying the state of the epileptic person into normal, pre-seizure and seizure have been developed. Subsequently, a reduced model based on Boosted Trees has been validated, obtaining a prediction accuracy of 91.5% and a sensitivity of 85.4%. Thus, based on the accuracy of the predictive model obtained, it can potentially serve as a support tool to determine the patient’s epileptic state and prevent a seizure, thereby improving the quality of life of these people. Full article
(This article belongs to the Special Issue AI for Biomedical Sensing and Imaging)
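Supervised models like the Boosted Trees classifier above are trained on feature vectors computed over signal windows. A minimal sketch of such windowed feature extraction; the feature set, window length, and sampling rate are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def window_features(sig: np.ndarray, fs: int, win_s: float = 2.0) -> np.ndarray:
    """Split a 1-D biosignal into non-overlapping windows and compute simple
    time-domain features (mean, standard deviation, RMS) per window."""
    step = int(fs * win_s)
    n_win = len(sig) // step
    wins = sig[: n_win * step].reshape(n_win, step)
    return np.column_stack([wins.mean(1), wins.std(1), np.sqrt((wins**2).mean(1))])

fs = 100                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t)     # stand-in for a real ECG recording
X = window_features(ecg_like, fs)          # one feature row per 2-s window
```

Each row of `X` would be labeled normal, pre-seizure, or seizure and passed to the classifier.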

14 pages, 5577 KiB  
Article
Deep-E Enhanced Photoacoustic Tomography Using Three-Dimensional Reconstruction for High-Quality Vascular Imaging
by Wenhan Zheng, Huijuan Zhang, Chuqin Huang, Kaylin McQuillan, Huining Li, Wenyao Xu and Jun Xia
Sensors 2022, 22(20), 7725; https://doi.org/10.3390/s22207725 - 12 Oct 2022
Cited by 4 | Viewed by 1616
Abstract
Linear-array-based photoacoustic computed tomography (PACT) has been widely used in vascular imaging due to its low cost and high compatibility with current ultrasound systems. However, linear-array transducers have inherent limitations for three-dimensional imaging due to the poor elevation resolution. In this study, we introduced a deep learning-assisted data process algorithm to enhance the image quality in linear-array-based PACT. Compared to our earlier study where training was performed on 2D reconstructed data, here, we utilized 2D and 3D reconstructed data to train the two networks separately. We then fused the image data from both 2D and 3D training to get features from both algorithms. The numerical and in vivo validations indicate that our approach can improve elevation resolution, recover the true size of the object, and enhance deep vessels. Our deep learning-assisted approach can be applied to translational imaging applications that require detailed visualization of vascular features. Full article
(This article belongs to the Special Issue AI for Biomedical Sensing and Imaging)
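The fusion of the 2D-trained and 3D-trained network outputs can be illustrated, in heavily simplified form, as a pixelwise blend of the two reconstructed volumes. The paper fuses image data from both pipelines; the weighted average below is only a conceptual stand-in, not the authors' fusion rule:

```python
import numpy as np

def fuse_volumes(vol_a: np.ndarray, vol_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Pixelwise weighted blend of two reconstructed volumes, e.g. the
    outputs of the 2D-trained and 3D-trained enhancement networks."""
    assert vol_a.shape == vol_b.shape
    return alpha * vol_a + (1.0 - alpha) * vol_b

# Toy volumes standing in for the two networks' enhanced reconstructions.
vol_2d = np.zeros((4, 4, 4))
vol_3d = np.ones((4, 4, 4))
fused = fuse_volumes(vol_2d, vol_3d)
```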

38 pages, 1958 KiB  
Article
E-Prevention: Advanced Support System for Monitoring and Relapse Prevention in Patients with Psychotic Disorders Analyzing Long-Term Multimodal Data from Wearables and Video Captures
by Athanasia Zlatintsi, Panagiotis P. Filntisis, Christos Garoufis, Niki Efthymiou, Petros Maragos, Andreas Menychtas, Ilias Maglogiannis, Panayiotis Tsanakas, Thomas Sounapoglou, Emmanouil Kalisperakis, Thomas Karantinos, Marina Lazaridi, Vasiliki Garyfalli, Asimakis Mantas, Leonidas Mantonakis and Nikolaos Smyrnis
Sensors 2022, 22(19), 7544; https://doi.org/10.3390/s22197544 - 05 Oct 2022
Cited by 11 | Viewed by 2825
Abstract
Wearable technologies and digital phenotyping foster unique opportunities for designing novel intelligent electronic services that can address various well-being issues in patients with mental disorders (i.e., schizophrenia and bipolar disorder), thus having the potential to revolutionize psychiatry and its clinical practice. In this paper, we present e-Prevention, an innovative integrated system for medical support that facilitates effective monitoring and relapse prevention in patients with mental disorders. The technologies offered through e-Prevention include: (i) long-term continuous recording of biometric and behavioral indices through a smartwatch; (ii) video recordings of patients while being interviewed by a clinician, using a tablet; (iii) automatic and systematic storage of these data in a dedicated Cloud server; and (iv) the ability to detect and predict relapses. This paper focuses on the description of the e-Prevention system and the methodologies developed for the identification of feature representations that correlate with and can predict psychopathology and relapses in patients with mental disorders. Specifically, we tackle the problem of relapse detection and prediction using machine and deep learning techniques on all collected data. The results are promising, indicating that such predictions can be made, leading eventually to the prediction of psychopathology and the prevention of relapses. Full article
(This article belongs to the Special Issue AI for Biomedical Sensing and Imaging)

15 pages, 3600 KiB  
Article
Selection of the Best Set of Features for sEMG-Based Hand Gesture Recognition Applying a CNN Architecture
by Jorge Arturo Sandoval-Espino, Alvaro Zamudio-Lara, José Antonio Marbán-Salgado, J. Jesús Escobedo-Alatorre, Omar Palillero-Sandoval and J. Guadalupe Velásquez-Aguilar
Sensors 2022, 22(13), 4972; https://doi.org/10.3390/s22134972 - 30 Jun 2022
Cited by 5 | Viewed by 1842
Abstract
The classification of surface myoelectric signals (sEMG) remains a great challenge when focused on its implementation in an electromechanical hand prosthesis, due to their nonlinear and stochastic nature, as well as the great difference between models applied offline and online. In this work, we present the selection of the feature set that allowed us to obtain the best results for the classification of this type of signal. In order to compare the results obtained, the NinaPro DB2 and DB3 databases were used, which contain information on 50 different movements performed by 40 healthy subjects and 11 amputated subjects, respectively. The sEMG of each subject was acquired through 12 channels in a bipolar configuration. To carry out the classification, a convolutional neural network (CNN) was used, and four sets of features extracted in the time domain were compared, three of which have shown good performance in previous works and one of which was used for the first time to train this type of network. Set one is composed of six features in the time domain (TD1); set two has 10 features, also in the time domain, including the autoregression model (AR) (TD2); the third set has two features in the time domain derived from spectral moments (TD-PSD1); and the fourth set has five features carrying information on the power spectrum of the signal, obtained in the time domain (TD-PSD2). The selected features in each set were organized in four different ways for the formation of the training images. The results obtained show that the TD-PSD2 feature set achieved the best performance in all cases. With the proposed feature set and image formation, increases in model accuracy of 8.16% and 8.56% were obtained for the DB2 and DB3 databases, respectively, compared to the current state of the art using these databases. Full article
(This article belongs to the Special Issue AI for Biomedical Sensing and Imaging)
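The time-domain feature sets compared above build on classic sEMG descriptors. A sketch of four of the most common ones: mean absolute value (MAV), waveform length (WL), zero crossings (ZC), and slope-sign changes (SSC); the exact members of the paper's TD1/TD2/TD-PSD sets are not reproduced here:

```python
import numpy as np

def td_features(window: np.ndarray, thresh: float = 0.0) -> np.ndarray:
    """Four classic time-domain sEMG descriptors for one analysis window:
    mean absolute value (MAV), waveform length (WL), zero crossings (ZC)
    and slope-sign changes (SSC)."""
    mav = np.mean(np.abs(window))
    wl = np.sum(np.abs(np.diff(window)))                 # cumulative length
    zc = np.sum(window[:-1] * window[1:] < -thresh)      # sign flips
    d = np.diff(window)
    ssc = np.sum(d[:-1] * d[1:] < -thresh)               # slope reversals
    return np.array([mav, wl, zc, ssc], dtype=float)

w = np.array([0.1, -0.2, 0.3, -0.1, 0.2])   # toy 5-sample sEMG window
feats = td_features(w)
```

Per-channel feature vectors like `feats` are then arranged into 2-D arrays to form the training images fed to the CNN.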
