Deep Learning in Oncological Image Analysis

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (20 December 2023) | Viewed by 8830

Special Issue Editor


Prof. Dr. Ching-Wei Wang
Guest Editor
Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
Interests: computational pathology; precision pathology

Special Issue Information

Dear Colleagues,

Deep learning approaches to big data analysis open new possibilities in oncology and could have a substantial impact on clinical practice. Histopathology and cytology slides are routinely used to diagnose cancer subtypes and conduct tumor staging. Effective deep learning methods, combined with virtual microscopy of whole-slide images (WSIs) of tissue biopsies, enable high-throughput screening of large numbers of patients. However, WSIs are gigapixel images of extremely large dimensions, commonly larger than 100,000 by 100,000 pixels, and their massive size and complex content pose technical challenges for developing automated analysis technology fast and efficient enough for practical clinical use. Other hurdles, such as a lack of labelled data, also need to be overcome to make oncological image analysis available in clinical practice. This Special Issue aims to gather a collection of cutting-edge research on deep learning methods in oncological image analysis with specific implications for cancer diagnosis and treatment. We welcome submissions on the following topics, among others:

  • Diagnostic deep learning methods using whole-slide images (focusing on early detection is preferred);
  • Prognostic deep learning methods using whole-slide images (focusing on precision oncology is preferred);
  • Diagnostic or prognostic deep learning methods in quantitative cancer image analysis (PET, CT, MRI, etc.);
  • Diagnostic or prognostic deep learning methods using cytological, histopathological or liquid biopsy images (focusing on early detection is preferred);
  • Treatment outcome assessment and prediction.

Prof. Dr. Ching-Wei Wang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cancer diagnosis
  • precision oncology
  • deep learning
  • oncological image analysis

Published Papers (3 papers)


Research

22 pages, 4003 KiB  
Article
Dual Deep CNN for Tumor Brain Classification
by Aya M. Al-Zoghby, Esraa Mohamed K. Al-Awadly, Ahmad Moawad, Noura Yehia and Ahmed Ismail Ebada
Diagnostics 2023, 13(12), 2050; https://doi.org/10.3390/diagnostics13122050 - 13 Jun 2023
Cited by 5 | Viewed by 2888
Abstract
Brain tumors (BTs) are serious and potentially deadly diseases, and early detection and identification of tumor type and location are crucial for effective treatment and saving lives. Manual diagnosis is time-consuming and depends on expert radiologists; the growing number of new brain tumor cases makes it difficult to process such large amounts of imaging data quickly, and time is a critical factor in patients' lives. Artificial intelligence (AI) is therefore vital for characterizing the disease and its various types. Several studies have proposed techniques for BT detection and classification, based on either machine learning (ML) or deep learning (DL). ML-based methods require handcrafted or automatic feature extraction algorithms, whereas DL is superior in self-learning and more robust in classification and recognition tasks. This research focuses on classifying three types of tumors from MRI images: meningioma, glioma, and pituitary tumors. The proposed DCTN model relies on dual convolutional neural networks: a VGG-16 architecture concatenated with a custom CNN architecture. After approximately 22 experiments with different architectures and models, our model reached 100% accuracy during training and 99% during testing. The proposed methodology achieves a marked improvement over existing studies, and the approach could be extended to the classification of other diseases in the future.
(This article belongs to the Special Issue Deep Learning in Oncological Image Analysis)
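The dual-branch fusion the abstract describes (a pretrained VGG-16 branch concatenated with a custom CNN branch before classification) can be sketched abstractly. The stand-in feature extractors and their dimensions below are illustrative placeholders, not the paper's actual code:

```python
# Hedged sketch of the dual-branch idea: features from a pretrained
# backbone (VGG-16 in the paper) and a custom CNN are concatenated
# before the final classifier. Real extractors are replaced by toy
# functions; all names and sizes here are illustrative assumptions.

def backbone_features(image):
    # placeholder for VGG-16 feature extraction (a learned feature vector)
    return [sum(image) / len(image)] * 4

def custom_cnn_features(image):
    # placeholder for the custom CNN branch
    return [max(image), min(image)]

def dual_branch_features(image):
    # fusion by concatenation, the core of the described dual-CNN design
    return backbone_features(image) + custom_cnn_features(image)

feats = dual_branch_features([0.1, 0.5, 0.9])
print(len(feats))  # 6: 4 backbone dims + 2 custom-branch dims
```

In a real framework the concatenation would join two activation tensors along the channel dimension (e.g. `torch.cat`), with a dense classification head on top.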

11 pages, 2068 KiB  
Article
Body Composition to Define Prognosis of Cancers Treated by Anti-Angiogenic Drugs
by Pierre Decazes, Samy Ammari, Antoine De Prévia, Léo Mottay, Littisha Lawrance, Younes Belkouchi, Baya Benatsou, Laurence Albiges, Corinne Balleyguier, Pierre Vera and Nathalie Lassau
Diagnostics 2023, 13(2), 205; https://doi.org/10.3390/diagnostics13020205 - 05 Jan 2023
Cited by 2 | Viewed by 1324
Abstract
Background: Body composition could help to better define the prognosis of cancers treated with anti-angiogenics. The aim of this study is to evaluate the prognostic value of 3D and 2D anthropometric parameters in patients given anti-angiogenic treatments. Methods: 526 patients with different types of cancers were retrospectively included. The software Anthropometer3DNet was used to automatically measure fat body mass (FBM3D), muscle body mass (MBM3D), visceral fat mass (VFM3D) and subcutaneous fat mass (SFM3D) in 3D computed tomography. For comparison, equivalent two-dimensional measurements at the L3 level were also taken. The area under the curve (AUC) of the receiver operating characteristic (ROC) was used to determine the parameters' predictive power and optimal cut-offs. A univariate analysis was performed using Kaplan–Meier estimates of overall survival (OS). Results: In ROC analysis, all 3D parameters were statistically significant: VFM3D (AUC = 0.554, p = 0.02, cutoff = 0.72 kg/m2), SFM3D (AUC = 0.544, p = 0.047, cutoff = 3.05 kg/m2), FBM3D (AUC = 0.550, p = 0.03, cutoff = 4.32 kg/m2) and MBM3D (AUC = 0.565, p = 0.007, cutoff = 5.47 kg/m2), but only one 2D parameter was (visceral fat area, VFA2D: AUC = 0.548, p = 0.034). In log-rank tests, low VFM3D (p = 0.014), low SFM3D (p < 0.0001), low FBM3D (p = 0.00019) and low VFA2D (p = 0.0063) were found to be significant risk factors. Conclusion: Automatic 3D body composition measurement on pre-therapeutic CT is feasible and can improve prognostication in patients treated with anti-angiogenic drugs. Moreover, the 3D measurements appear to be more effective than their 2D counterparts.
(This article belongs to the Special Issue Deep Learning in Oncological Image Analysis)
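The per-parameter optimal cut-offs reported in the abstract are the kind of threshold obtained by sweeping an ROC curve. A minimal sketch of that procedure using Youden's J statistic, on toy data (the values below are illustrative, not from the study; the abstract does not state which cut-off criterion was used, so Youden's J is an assumption):

```python
# Sketch of deriving an ROC-based optimal cut-off: sweep candidate
# thresholds, compute sensitivity/specificity at each, and keep the
# threshold maximising Youden's J = sensitivity + specificity - 1.
# "Low mass = event" follows the abstract's risk-factor direction.

def roc_optimal_cutoff(values, labels):
    """Return (best_threshold, best_J); labels: 1 = event, 0 = no event."""
    pos = [v for v, l in zip(values, labels) if l == 1]
    neg = [v for v, l in zip(values, labels) if l == 0]
    best_t, best_j = None, -1.0
    for t in sorted(set(values)):
        sens = sum(v <= t for v in pos) / len(pos)  # low value flags event
        spec = sum(v > t for v in neg) / len(neg)
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

vals = [2.0, 3.1, 4.5, 5.2, 6.0, 7.3]   # toy body-mass values (kg/m2)
labs = [1, 1, 1, 0, 0, 0]               # toy event indicators
t, j = roc_optimal_cutoff(vals, labs)
print(t, j)  # threshold 4.5 separates the toy groups perfectly (J = 1.0)
```

In practice one would use a library routine (e.g. scikit-learn's `roc_curve`) over the real cohort rather than this hand-rolled sweep.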

20 pages, 4907 KiB  
Article
A Comparative Analysis of Deep Learning Models for Automated Cross-Preparation Diagnosis of Multi-Cell Liquid Pap Smear Images
by Yasmin Karasu Benyes, E. Celeste Welch, Abhinav Singhal, Joyce Ou and Anubhav Tripathi
Diagnostics 2022, 12(8), 1838; https://doi.org/10.3390/diagnostics12081838 - 29 Jul 2022
Cited by 11 | Viewed by 3324
Abstract
Routine Pap smears can facilitate early detection of cervical cancer and improve patient outcomes. The objective of this work is to develop an automated, clinically viable deep neural network for the multi-class Bethesda System diagnosis of multi-cell images in liquid Pap smear samples. Eight deep learning models were trained on a publicly available multi-class SurePath preparation dataset. These included the five best-performing transfer learning models, an ensemble, a novel convolutional neural network (CNN), and a CNN + autoencoder (AE). Additionally, each model was tested on a novel ThinPrep Pap dataset to determine model generalizability across different liquid Pap preparation methods, with and without Deep CORAL domain adaptation. All models achieved accuracies >90% when classifying SurePath images. The AE CNN model, 99.80% smaller than the average transfer model, maintained an accuracy of 96.54%. Across consecutive training attempts, individual transfer models showed high variability in performance, whereas the CNN, AE CNN, and ensemble did not. ThinPrep Pap classification accuracies were notably lower but increased with domain adaptation, with ResNet101 achieving the highest accuracy at 92.65%. This indicates a potential area for future improvement: development of a globally relevant model that can function across different slide preparation methods.
(This article belongs to the Special Issue Deep Learning in Oncological Image Analysis)
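The Deep CORAL adaptation the abstract mentions aligns the second-order statistics of feature activations between the source domain (SurePath) and the target domain (ThinPrep). A self-contained sketch of the CORAL loss on toy 2-D features (the feature values are illustrative, not the models' real activations):

```python
# Hedged sketch of the CORAL loss used in Deep CORAL: the squared
# Frobenius distance between source and target feature covariance
# matrices, scaled by 1/(4*d^2). Minimising it nudges the network to
# produce features with matching covariance across domains.

def covariance(batch):
    """Unbiased sample covariance of a list of equal-length feature rows."""
    n, d = len(batch), len(batch[0])
    means = [sum(row[j] for row in batch) / n for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for row in batch:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (row[i] - means[i]) * (row[j] - means[j])
    return [[c / (n - 1) for c in r] for r in cov]

def coral_loss(source, target):
    """Squared Frobenius distance between covariances, CORAL scaling."""
    d = len(source[0])
    cs, ct = covariance(source), covariance(target)
    frob = sum((cs[i][j] - ct[i][j]) ** 2 for i in range(d) for j in range(d))
    return frob / (4 * d * d)

src = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]  # toy source-domain features
tgt = [[0.0, 2.0], [1.0, 1.0], [2.0, 0.0]]  # toy target-domain features
print(coral_loss(src, src))  # 0.0: identical distributions incur no penalty
```

In the deep setting this term is added to the classification loss and backpropagated through the feature extractor, pulling the two preparation methods' feature distributions together.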
