Machine Learning Algorithms for Medical Image Processing

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Evolutionary Algorithms and Machine Learning".

Deadline for manuscript submissions: closed (15 December 2023) | Viewed by 7367

Special Issue Editor


Dr. Tahereh Hassanzadeh
Guest Editor
Department of Radiology, Leiden University Medical Center (LUMC), 2311 EZ Leiden, The Netherlands
Interests: artificial intelligence; medical image processing; deep learning; neuroevolutionary algorithms; optimization

Special Issue Information

Dear Colleagues,

The field of medical image processing has been revolutionized by the advent of machine learning algorithms, which have significantly enhanced the accuracy, efficiency, and potential of diagnostic and therapeutic procedures. This Special Issue aims to showcase the latest advancements in the application of machine learning algorithms to medical image processing, providing a comprehensive overview of the current state-of-the-art techniques and their impact on healthcare.

This Special Issue encompasses a wide range of medical imaging modalities, including, but not limited to, X-ray, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and positron emission tomography (PET). It focuses on the development and implementation of machine learning algorithms, such as deep learning, convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs), for various tasks in medical image analysis.

This Special Issue welcomes original research articles, reviews, and case studies that address these topics, providing valuable insights into the challenges, methodologies, and potential applications of machine learning algorithms in medical image processing.

In conclusion, this Special Issue on machine learning algorithms for medical image processing aims to showcase cutting-edge research and developments in this rapidly evolving field. It serves as a platform for researchers, clinicians, and industry professionals to exchange ideas, share innovative approaches, and promote collaboration to harness the full potential of machine learning algorithms for enhancing medical image analysis and improving patient outcomes.

Dr. Tahereh Hassanzadeh
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, you can access the submission form there. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • medical imaging
  • medical image analysis
  • convolutional neural networks (CNNs)
  • segmentation
  • classification
  • recognition
  • image registration
  • image fusion
  • image reconstruction
  • image enhancement
  • image synthesis
  • data augmentation

Published Papers (4 papers)

Research

21 pages, 2027 KiB  
Article
Deep Neural Networks for HER2 Grading of Whole Slide Images with Subclasses Levels
by Anibal Pedraza, Lucia Gonzalez, Oscar Deniz and Gloria Bueno
Algorithms 2024, 17(3), 97; https://doi.org/10.3390/a17030097 - 23 Feb 2024
Viewed by 1221
Abstract
HER2 overexpression is a prognostic and predictive factor observed in about 15% to 20% of breast cancer cases. The assessment of its expression directly affects the selection of treatment and prognosis. The measurement of HER2 status is performed by an expert pathologist who assigns a score of 0, 1, 2+, or 3+ based on the gene expression. There is a high probability of interobserver variability in this evaluation, especially when it comes to class 2+. This is reasonable, as the primary cause of error in multiclass classification problems typically arises in the intermediate classes. This work proposes a novel approach to expand the decision limit and divide it into two additional classes, that is, 1.5+ and 2.5+. This subdivision facilitates both feature learning and pathology assessment. The method was evaluated using various neural network models capable of performing patch-wise grading of HER2 whole slide images (WSI). Then, the outcomes of the 7-class classification were merged back into 5 classes in accordance with the pathologists’ criteria and to compare the results with the initial 5-class model. Optimal outcomes were achieved by employing colour transfer for data augmentation and the ResNet-101 architecture with 7 classes. A sensitivity of 0.91 was achieved for class 2+ and 0.97 for 3+. Furthermore, this model offers the highest level of confidence, ranging from 92% to 94% for 2+ and 96% to 97% for 3+. In contrast, a dataset containing only 5 classes demonstrates a sensitivity performance that is 5% lower for the same network.
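As a rough illustration of the merging step described in the abstract: the paper does not spell out the exact merge rule or label encoding, so the mapping below (intermediate subclasses folded into 2+, plus a placeholder for the remaining class) is an assumption for illustration only.

```python
# Hypothetical label names and merge rule; the paper only states that the
# 7-class outputs were merged back to 5 classes per the pathologists' criteria.
SUBCLASS_TO_CLINICAL = {
    "0": "0",
    "1+": "1+",
    "1.5+": "2+",     # assumed: lower intermediate subclass folds into 2+
    "2+": "2+",
    "2.5+": "2+",     # assumed: upper intermediate subclass folds into 2+
    "3+": "3+",
    "other": "other", # placeholder for the remaining class (e.g. non-tumour)
}

def merge_to_clinical(predictions):
    """Map patch-wise 7-class predictions onto the clinical 5-class scale."""
    return [SUBCLASS_TO_CLINICAL[p] for p in predictions]
```

With this mapping, patch predictions of 1.5+ and 2.5+ would both be reported as 2+, so sensitivity for 2+ and 3+ can be compared directly against the original 5-class model.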
(This article belongs to the Special Issue Machine Learning Algorithms for Medical Image Processing)

23 pages, 9853 KiB  
Article
Exploring the Efficacy of Base Data Augmentation Methods in Deep Learning-Based Radiograph Classification of Knee Joint Osteoarthritis
by Fabi Prezja, Leevi Annala, Sampsa Kiiskinen and Timo Ojala
Algorithms 2024, 17(1), 8; https://doi.org/10.3390/a17010008 - 24 Dec 2023
Viewed by 1677
Abstract
Diagnosing knee joint osteoarthritis (KOA), a major cause of disability worldwide, is challenging due to subtle radiographic indicators and the varied progression of the disease. Using deep learning for KOA diagnosis requires broad, comprehensive datasets. However, obtaining these datasets poses significant challenges due to patient privacy and data collection restrictions. Additive data augmentation, which enhances data variability, emerges as a promising solution. Yet, it is unclear which augmentation techniques are most effective for KOA. Our study explored data augmentation methods, including adversarial techniques. We used strategies like horizontal cropping and region of interest (ROI) extraction, alongside adversarial methods such as noise injection and ROI removal. Interestingly, rotations improved performance, while methods like horizontal split were less effective. We discovered potential confounding regions using adversarial augmentation, shown in our models’ accurate classification of extreme KOA grades, even without the knee joint. This indicated a potential model bias towards irrelevant radiographic features. Removing the knee joint paradoxically increased accuracy in classifying early-stage KOA. Grad-CAM visualizations helped elucidate these effects. Our study contributed to the field by pinpointing augmentation techniques that either improve or impede model performance, in addition to recognizing potential confounding regions within radiographic images of knee osteoarthritis.
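A minimal sketch of the two kinds of augmentation contrasted in the abstract, a base transform (rotation) and an adversarial one (ROI removal), operating on a plain 2D grid; the region coordinates and fill value are illustrative assumptions, not the study's actual pipeline.

```python
def rotate90(image):
    # Base augmentation: rotate a 2D image 90 degrees clockwise.
    return [list(row) for row in zip(*image[::-1])]

def remove_roi(image, top, left, height, width, fill=0):
    # Adversarial augmentation: blank out a region of interest
    # (e.g. the knee joint) to probe which areas the model relies on.
    out = [row[:] for row in image]
    for r in range(top, top + height):
        for c in range(left, left + width):
            out[r][c] = fill
    return out
```

If a model still classifies extreme KOA grades accurately after `remove_roi` has blanked the joint, that points to confounding signal elsewhere in the radiograph, which is the bias the study reports.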
(This article belongs to the Special Issue Machine Learning Algorithms for Medical Image Processing)

16 pages, 3171 KiB  
Article
Deep Learning-Based Visual Complexity Analysis of Electroencephalography Time-Frequency Images: Can It Localize the Epileptogenic Zone in the Brain?
by Navaneethakrishna Makaram, Sarvagya Gupta, Matthew Pesce, Jeffrey Bolton, Scellig Stone, Daniel Haehn, Marc Pomplun, Christos Papadelis, Phillip Pearl, Alexander Rotenberg, Patricia Ellen Grant and Eleonora Tamilia
Algorithms 2023, 16(12), 567; https://doi.org/10.3390/a16120567 - 15 Dec 2023
Viewed by 1676
Abstract
In drug-resistant epilepsy, a visual inspection of intracranial electroencephalography (iEEG) signals is often needed to localize the epileptogenic zone (EZ) and guide neurosurgery. The visual assessment of iEEG time-frequency (TF) images is an alternative to signal inspection, but subtle variations may escape the human eye. Here, we propose a deep learning-based metric of visual complexity to interpret TF images extracted from iEEG data and aim to assess its ability to identify the EZ in the brain. We analyzed interictal iEEG data from 1928 contacts recorded from 20 children with drug-resistant epilepsy who became seizure-free after neurosurgery. We localized each iEEG contact in the MRI, created TF images (1–70 Hz) for each contact, and used a pre-trained VGG16 network to measure their visual complexity by extracting unsupervised activation energy (UAE) from 13 convolutional layers. We identified points of interest in the brain using the UAE values via patient- and layer-specific thresholds (based on extreme value distribution) and using a support vector machine classifier. Results show that contacts inside the seizure onset zone exhibit lower UAE than outside, with larger differences in deep layers (L10, L12, and L13: p < 0.001). Furthermore, the points of interest identified using the support vector machine localized the EZ with 7 mm accuracy. In conclusion, we presented a pre-surgical computerized tool that facilitates the EZ localization in the patient’s MRI without requiring long-term iEEG inspection.
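One plausible reading of the "unsupervised activation energy" metric described above is the mean squared activation over a layer's feature maps, with contacts falling below a patient- and layer-specific threshold flagged as points of interest. The abstract does not give the exact formula or thresholding rule, so both choices below are assumptions for illustration.

```python
def activation_energy(feature_maps):
    # Assumed definition of UAE: mean squared activation across all
    # units of one convolutional layer (feature_maps: list of 2D maps).
    flat = [a for fmap in feature_maps for row in fmap for a in row]
    return sum(x * x for x in flat) / len(flat)

def points_of_interest(energies, threshold):
    # Contacts whose energy falls BELOW the threshold are flagged,
    # matching the finding that seizure-onset contacts show lower UAE.
    return [i for i, e in enumerate(energies) if e < threshold]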
(This article belongs to the Special Issue Machine Learning Algorithms for Medical Image Processing)

19 pages, 2441 KiB  
Article
Robustness of Single- and Dual-Energy Deep-Learning-Based Scatter Correction Models on Simulated and Real Chest X-rays
by Clara Freijo, Joaquin L. Herraiz, Fernando Arias-Valcayo, Paula Ibáñez, Gabriela Moreno, Amaia Villa-Abaunza and José Manuel Udías
Algorithms 2023, 16(12), 565; https://doi.org/10.3390/a16120565 - 12 Dec 2023
Viewed by 1347
Abstract
Chest X-rays (CXRs) represent the first tool globally employed to detect cardiopulmonary pathologies. These acquisitions are highly affected by scattered photons due to the large field of view required. Scatter in CXRs introduces background in the images, which reduces their contrast. We developed three deep-learning-based models to estimate and correct scatter contribution to CXRs. We used a Monte Carlo (MC) ray-tracing model to simulate CXRs from human models obtained from CT scans using different configurations (depending on the availability of dual-energy acquisitions). The simulated CXRs contained the separated contribution of direct and scattered X-rays in the detector. These simulated datasets were then used as the reference for the supervised training of several NNs. Three NN models (single and dual energy) were trained with the MultiResUNet architecture. The performance of the NN models was evaluated on CXRs obtained, with an MC code, from chest CT scans of patients affected by COVID-19. The results show that the NN models were able to estimate and correct the scatter contribution to CXRs with an error of <5%, being robust to variations in the simulation setup and improving contrast in soft tissue. The single-energy model was tested on real CXRs, providing robust estimations of the scatter-corrected CXRs.
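A minimal sketch of the correction step implied by the abstract: subtract the network's scatter estimate from the measured image to approximate the direct (primary) signal, and score the result with a relative-error metric in the spirit of the reported <5% figure. The subtraction model and the specific error formula are assumptions; the paper's exact definitions are not given here.

```python
def scatter_correct(measured, scatter_estimate):
    # Assumed correction model: direct signal ≈ measured minus the
    # NN's per-pixel scatter estimate (images flattened to 1D lists).
    return [m - s for m, s in zip(measured, scatter_estimate)]

def relative_error(estimate, reference):
    # Illustrative relative L1 error between a corrected image and the
    # known direct-signal reference from the Monte Carlo simulation.
    return (sum(abs(e - r) for e, r in zip(estimate, reference))
            / sum(abs(r) for r in reference))
```

In the simulated setting, the MC datasets provide the separated direct contribution as `reference`, which is what makes supervised training and a quantitative error figure possible.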
(This article belongs to the Special Issue Machine Learning Algorithms for Medical Image Processing)
