Artificial Intelligence in Medical Image Processing and Segmentation, 2nd Edition

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 30 June 2024 | Viewed by 5661

Special Issue Editors


Guest Editor
Department of Experimental and Clinical Medicine, Magna Graecia University, 88100 Catanzaro, Italy
Interests: medical image processing; radiotherapy; image guided surgery; artificial intelligence

Guest Editor
Institute of Biomedical Engineering, Karlsruhe Institute of Technology (KIT), D-76131 Karlsruhe, Germany
Interests: radiation therapy; biomedical imaging; 3D image processing; biomedical engineering

Special Issue Information

Dear Colleagues,

In recent years, Artificial Intelligence (AI) has profoundly transformed the field of medical image processing. Image segmentation, in particular, is the task that has benefited most from this innovation.

This progress has driven major advances in translating AI algorithms from the laboratory into real clinical practice, especially for computer-aided diagnosis and image-guided surgery.

As a result, the first medical devices relying on AI algorithms to treat or diagnose patients were recently introduced to the market.

We are pleased to invite you to submit your work to this Special Issue, which focuses on cutting-edge developments in AI applied to medical imaging.

The journal welcomes contributions (both original articles and reviews) centered mainly on the following topics:

    Medical image segmentation;

    AI-based medical image registration;

    Medical image recognition;

    Patient/treatment stratification based on AI image processing;

    Synthetic medical image generation;

    Image-guided surgery/radiotherapy based on AI;

    Radiomics;

    Explainable AI in medicine.

Dr. Paolo Zaffino
Prof. Dr. Maria Francesca Spadea
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical image processing
  • image segmentation
  • computer-aided diagnosis
  • image guided surgery
  • artificial intelligence

Published Papers (5 papers)


Research

12 pages, 2799 KiB  
Article
Development of the AI Pipeline for Corneal Opacity Detection
by Kenji Yoshitsugu, Eisuke Shimizu, Hiroki Nishimura, Rohan Khemlani, Shintaro Nakayama and Tadamasa Takemura
Bioengineering 2024, 11(3), 273; https://doi.org/10.3390/bioengineering11030273 - 12 Mar 2024
Viewed by 913
Abstract
Ophthalmological services face global inadequacies, especially in low- and middle-income countries, which are marked by a shortage of practitioners and equipment. This study employed a portable slit lamp microscope with video capabilities and cloud storage for more equitable global diagnostic resource distribution. To enhance accessibility and quality of care, this study targets corneal opacity, which is a global cause of blindness. This study has two purposes. The first is to detect corneal opacity from videos in which the anterior segment of the eye is captured. The other is to develop an AI pipeline to detect corneal opacities. First, we extracted image frames from videos and processed them using a convolutional neural network (CNN) model. Second, we manually annotated the images to extract only the corneal margins, adjusted the contrast with CLAHE, and processed them using the CNN model. Finally, we performed semantic segmentation of the cornea using annotated data. The results showed an accuracy of 0.8 for image frames and 0.96 for corneal margins. Dice and IoU achieved a score of 0.94 for semantic segmentation of the corneal margins. Although corneal opacity detection from video frames seemed challenging in the early stages of this study, manual annotation, corneal extraction, and CLAHE contrast adjustment significantly improved accuracy. The incorporation of manual annotation into the AI pipeline, through semantic segmentation, facilitated high accuracy in detecting corneal opacity. Full article
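The Dice and IoU scores reported above are the standard overlap metrics for evaluating segmentation masks. A minimal, illustrative computation on binary masks (NumPy-based; not the authors' implementation):

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou_score(pred, target):
    """Intersection over Union: |A∩B| / |A∪B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 1, 0], [0, 0, 0]])
print(dice_score(pred, gt))  # 0.8
print(iou_score(pred, gt))   # ~0.667
```

Both metrics range from 0 (no overlap) to 1 (perfect overlap); Dice weights the intersection more heavily, so it is always at least as large as IoU on the same pair of masks.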

17 pages, 3108 KiB  
Article
Deep Learning for Delineation of the Spinal Canal in Whole-Body Diffusion-Weighted Imaging: Normalising Inter- and Intra-Patient Intensity Signal in Multi-Centre Datasets
by Antonio Candito, Richard Holbrey, Ana Ribeiro, Christina Messiou, Nina Tunariu, Dow-Mu Koh and Matthew D. Blackledge
Bioengineering 2024, 11(2), 130; https://doi.org/10.3390/bioengineering11020130 - 29 Jan 2024
Viewed by 920
Abstract
Background: Whole-Body Diffusion-Weighted Imaging (WBDWI) is an established technique for staging and evaluating treatment response in patients with multiple myeloma (MM) and advanced prostate cancer (APC). However, WBDWI scans show inter- and intra-patient intensity signal variability. This variability poses challenges in accurately quantifying bone disease, tracking changes over follow-up scans, and developing automated tools for bone lesion delineation. Here, we propose a novel automated pipeline for inter-station, inter-scan image signal standardisation on WBDWI that utilizes robust segmentation of the spinal canal through deep learning. Methods: We trained and validated a supervised 2D U-Net model to automatically delineate the spinal canal (both the spinal cord and surrounding cerebrospinal fluid, CSF) in an initial cohort of 40 patients who underwent WBDWI for treatment response evaluation (80 scans in total). Expert-validated contours were used as the target standard. The algorithm was further semi-quantitatively validated on four additional datasets (three internal, one external, 207 scans total) by comparing the distributions of average apparent diffusion coefficient (ADC) and volume of the spinal cord derived from a two-component Gaussian mixture model of segmented regions. Our pipeline subsequently standardises WBDWI signal intensity through two stages: (i) normalisation of signal between imaging stations within each patient through histogram equalisation of slices acquired on either side of the station gap, and (ii) inter-scan normalisation through histogram equalisation of the signal derived within segmented spinal canal regions. This approach was semi-quantitatively validated in all scans available to the study (N = 287). Results: The test dice score, precision, and recall of the spinal canal segmentation model were all above 0.87 when compared to manual delineation. 
The average ADC for the spinal cord (1.7 × 10−3 mm2/s) showed no significant difference from the manual contours. Furthermore, no significant differences were found between the average ADC values of the spinal cord across the additional four datasets. The signal-normalised, high-b-value images were visualised using a fixed contrast window level and demonstrated qualitatively better signal homogeneity across scans than scans that were not signal-normalised. Conclusion: Our proposed intensity signal WBDWI normalisation pipeline successfully harmonises intensity values across multi-centre cohorts. The computational time required is less than 10 s, preserving contrast-to-noise and signal-to-noise ratios in axial diffusion-weighted images. Importantly, no changes to the clinical MRI protocol are expected, and there is no need for additional reference MRI data or follow-up scans. Full article
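The inter-scan normalisation step described above equalises histograms of the signal within matched regions. As a hedged sketch (not the authors' pipeline), the core operation can be written as generic histogram matching, mapping one image's empirical intensity CDF onto a reference's:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source intensities so their empirical CDF matches the
    reference's -- the generic form of the histogram-equalisation step
    used for inter-station / inter-scan signal normalisation."""
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size   # quantile of each source value
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source quantile, look up the reference intensity at that quantile.
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(source.shape)
```

In a pipeline like the one described, `source` would be the signal within the segmented spinal canal of one scan and `reference` the corresponding region of the reference scan, with the resulting mapping applied to the whole station.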

12 pages, 2785 KiB  
Article
Using AI Segmentation Models to Improve Foreign Body Detection and Triage from Ultrasound Images
by Lawrence Holland, Sofia I. Hernandez Torres and Eric J. Snider
Bioengineering 2024, 11(2), 128; https://doi.org/10.3390/bioengineering11020128 - 29 Jan 2024
Viewed by 929
Abstract
Medical imaging can be a critical tool for triaging casualties in trauma situations. In remote or military medicine scenarios, triage is essential for identifying how to use limited resources or prioritize evacuation for the most serious cases. Ultrasound imaging, while portable and often available near the point of injury, can only be used for triage if images are properly acquired, interpreted, and objectively triage scored. Here, we detail how AI segmentation models can be used for improving image interpretation and objective triage evaluation for a medical application focused on foreign bodies embedded in tissues at variable distances from critical neurovascular features. Ultrasound images previously collected in a tissue phantom with or without neurovascular features were labeled with ground truth masks. These image sets were used to train two different segmentation AI frameworks: YOLOv7 and U-Net segmentation models. Overall, both approaches were successful in identifying shrapnel in the image set, with U-Net outperforming YOLOv7 for single-class segmentation. Both segmentation models were also evaluated with a more complex image set containing shrapnel, artery, vein, and nerve features. YOLOv7 obtained higher precision scores across multiple classes whereas U-Net achieved higher recall scores. Using each AI model, a triage distance metric was adapted to measure the proximity of shrapnel to the nearest neurovascular feature, with U-Net more closely mirroring the triage distances measured from ground truth labels. Overall, the segmentation AI models were successful in detecting shrapnel in ultrasound images and could allow for improved injury triage in emergency medicine scenarios. Full article
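The triage distance metric, i.e., the proximity of segmented shrapnel to the nearest neurovascular feature, can be sketched with a Euclidean distance transform. This is an illustrative reconstruction, not the authors' code, and the `spacing` (physical pixel size) parameter is an assumption:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def min_distance_to_feature(shrapnel_mask, feature_mask, spacing=(1.0, 1.0)):
    """Minimum Euclidean distance (in physical units) from any shrapnel
    pixel to the nearest neurovascular-feature pixel."""
    # EDT of the feature mask's background gives, at every pixel, the
    # distance to the nearest feature pixel.
    dist = distance_transform_edt(~feature_mask.astype(bool), sampling=spacing)
    # Minimum of that distance map over the shrapnel region.
    return float(dist[shrapnel_mask.astype(bool)].min())
```

Comparing this distance, computed from each model's predicted masks, against the distance from ground-truth labels is one way to quantify which segmentation model better supports triage scoring.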

18 pages, 2992 KiB  
Article
Automatic Detection and Classification of Hypertensive Retinopathy with Improved Convolution Neural Network and Improved SVM
by Usharani Bhimavarapu, Nalini Chintalapudi and Gopi Battineni
Bioengineering 2024, 11(1), 56; https://doi.org/10.3390/bioengineering11010056 - 5 Jan 2024
Viewed by 1086
Abstract
Hypertensive retinopathy (HR) results from the microvascular retinal changes triggered by hypertension, a leading cause of preventable blindness worldwide. It is therefore necessary to develop an automated system for HR detection and evaluation using retinal images. We aimed to propose an automated approach to identify and categorize the various degrees of HR severity. A new network, the spatial convolution module (SCM), combines cross-channel and spatial information, and its convolution operations extract helpful features. The model is evaluated on the publicly accessible ODIR, INSPIREVR, and VICAVR datasets, with data augmentation applied to artificially enlarge the set of 1200 fundus images. The HR severity levels (normal, mild, moderate, severe, and malignant) are classified in less time than with existing models because the convolutional layers run only once on the input fundus images, speeding up detection of abnormalities in the vascular structure. According to the findings, the improved SVM achieved the highest detection and classification accuracy in vessel classification, 98.99%, and completed the task in 160.4 s. Ten-fold classification achieved the highest accuracy of 98.99%, 0.27 percentage points higher than five-fold classification, and the improved KNN classifier achieved an accuracy of 98.72%. When computational efficiency is a priority, the proposed model’s ability to quickly recognize different HR severity levels is significant. Full article

16 pages, 3084 KiB  
Article
COVID-19 Detection via Ultra-Low-Dose X-ray Images Enabled by Deep Learning
by Isah Salim Ahmad, Na Li, Tangsheng Wang, Xuan Liu, Jingjing Dai, Yinping Chan, Haoyang Liu, Junming Zhu, Weibin Kong, Zefeng Lu, Yaoqin Xie and Xiaokun Liang
Bioengineering 2023, 10(11), 1314; https://doi.org/10.3390/bioengineering10111314 - 14 Nov 2023
Cited by 1 | Viewed by 1203
Abstract
The detection of Coronavirus disease 2019 (COVID-19) is crucial for controlling the spread of the virus. Current research utilizes X-ray imaging and artificial intelligence for COVID-19 diagnosis. However, conventional X-ray scans expose patients to excessive radiation, rendering repeated examinations impractical. Ultra-low-dose X-ray imaging technology enables rapid and accurate COVID-19 detection with minimal additional radiation exposure. In this retrospective cohort study, ULTRA-X-COVID, a deep neural network specifically designed for automatic detection of COVID-19 infections using ultra-low-dose X-ray images, is presented. The study included a multinational and multicenter dataset consisting of 30,882 X-ray images obtained from approximately 16,600 patients across 51 countries. It is important to note that there was no overlap between the training and test sets. The data analysis was conducted from 1 April 2020 to 1 January 2022. To evaluate the effectiveness of the model, various metrics such as the area under the receiver operating characteristic curve, receiver operating characteristic, accuracy, specificity, and F1 score were utilized. In the test set, the model demonstrated an AUC of 0.968 (95% CI, 0.956–0.983), accuracy of 94.3%, specificity of 88.9%, and F1 score of 99.0%. Notably, the ULTRA-X-COVID model demonstrated a performance comparable to conventional X-ray doses, with a prediction time of only 0.1 s per image. These findings suggest that the ULTRA-X-COVID model can effectively identify COVID-19 cases using ultra-low-dose X-ray scans, providing a novel alternative for COVID-19 detection. Moreover, the model exhibits potential adaptability for diagnoses of various other diseases. Full article
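The reported accuracy, specificity, and F1 score all derive from the binary confusion matrix. A minimal sketch of the definitions (assuming label 1 marks COVID-positive; not the study's evaluation code):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy, specificity, and F1 from binary labels (1 = positive)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    accuracy = (tp + tn) / len(y_true)
    specificity = tn / (tn + fp)        # true-negative rate
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)             # sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, specificity, f1
```

The AUC, by contrast, is computed from the model's continuous scores rather than hard labels, by sweeping the decision threshold over the receiver operating characteristic curve.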
