Topic Editors

Dr. Mizuho Nishio
Department of Radiology, Kobe University Hospital, 7-5-2 Kusunokicho, Chuo-ku, Kobe 650-0017, Japan

Dr. Koji Fujimoto
Department of Advanced Imaging in Medical Magnetic Resonance, Kyoto University, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507, Japan

Deep Learning for Medical Image Analysis and Medical Natural Language Processing

Abstract submission deadline: 20 August 2024
Manuscript submission deadline: 20 November 2024
Viewed by 13,854

Topic Information

Dear Colleagues,

This Special Issue focuses mainly on the application of deep learning to medical image analysis and medical natural language processing. We welcome original research and review papers related to the topics listed below. In particular, we welcome papers in which medical image analysis and medical natural language processing are combined in multimodal deep learning (a minimal fusion sketch follows the topic list below).
Research Topics:

  • Cutting-edge deep learning methodologies and algorithms for medical image analysis and medical natural language processing.
  • Clinical applications of deep learning to medical image analysis and medical natural language processing, with a main focus on cancer diagnosis and treatment.
  • Open-source deep learning software used for cancer diagnosis and treatment.
  • Open data for medical image analysis and medical natural language processing that are useful for the development and validation of deep learning models.
  • Reproducibility and validation studies of open-source deep learning software used for cancer diagnosis and treatment.
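
To make the multimodal direction above concrete, below is a minimal sketch of late-fusion classification that combines an image embedding with a text embedding in PyTorch. It is a hypothetical illustration: the embedding dimensions, hidden size, and class count are placeholders, and it does not reproduce any method from the papers in this Topic.

```python
# Minimal late-fusion multimodal classifier (hypothetical dimensions;
# not taken from any paper in this Topic). Requires PyTorch.
import torch
import torch.nn as nn


class LateFusionClassifier(nn.Module):
    """Concatenate a precomputed image embedding and a text embedding,
    then classify with a small MLP."""

    def __init__(self, img_dim=512, txt_dim=768, hidden=256, n_classes=2):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, img_emb, txt_emb):
        # img_emb: (batch, img_dim), e.g. pooled CNN features of a scan
        # txt_emb: (batch, txt_dim), e.g. encoded radiology-report text
        return self.fuse(torch.cat([img_emb, txt_emb], dim=1))


if __name__ == "__main__":
    model = LateFusionClassifier()
    img = torch.randn(4, 512)     # stand-in image features
    txt = torch.randn(4, 768)     # stand-in report features
    print(model(img, txt).shape)  # torch.Size([4, 2])
```

Late fusion keeps the two encoders independent, which is convenient when the imaging and text pipelines are developed or validated separately.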

Dr. Mizuho Nishio
Dr. Koji Fujimoto
Topic Editors

Keywords

  • deep learning
  • medical image analysis
  • natural language processing
  • medical imaging
  • cancer

Participating Journals

Journal Name                 Impact Factor   CiteScore   Launched Year   First Decision (median)   APC
Applied Sciences (applsci)        2.7            4.5          2011              16.9 days           CHF 2400
Cancers (cancers)                 5.2            7.4          2009              17.9 days           CHF 2900
Diagnostics (diagnostics)         3.6            3.6          2011              20.7 days           CHF 2600
Tomography (tomography)           1.9            2.3          2015              24.5 days           CHF 2400

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics cooperates with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of this by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your ideas with a time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (8 papers)

21 pages, 24110 KiB  
Article
Magnifying Networks for Histopathological Images with Billions of Pixels
by Neofytos Dimitriou, Ognjen Arandjelović and David J. Harrison
Diagnostics 2024, 14(5), 524; https://doi.org/10.3390/diagnostics14050524 - 01 Mar 2024
Viewed by 588
Abstract
Amongst the other benefits conferred by the shift from traditional to digital pathology is the potential to use machine learning for diagnosis, prognosis, and personalization. A major challenge in the realization of this potential emerges from the extremely large size of digitized images, which are often in excess of 100,000 × 100,000 pixels. In this paper, we tackle this challenge head-on by diverging from the existing approaches in the literature—which rely on the splitting of the original images into small patches—and introducing magnifying networks (MagNets). By using an attention mechanism, MagNets identify the regions of the gigapixel image that benefit from an analysis on a finer scale. This process is repeated, resulting in an attention-driven coarse-to-fine analysis of only a small portion of the information contained in the original whole-slide images. Importantly, this is achieved using minimal ground truth annotation, namely, using only global, slide-level labels. The results from our tests on the publicly available Camelyon16 and Camelyon17 datasets demonstrate the effectiveness of MagNets—as well as the proposed optimization framework—in the task of whole-slide image classification. Importantly, MagNets process at least five times fewer patches from each whole-slide image than any of the existing end-to-end approaches. Full article
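
As a rough, self-contained illustration of the attention-driven coarse-to-fine idea described in this abstract, the sketch below scores low-resolution tiles and recursively zooms into the top-k regions. The mean-intensity scoring heuristic, tile size, and recursion depth are placeholders; this is not the authors' MagNet implementation.

```python
# Toy coarse-to-fine tile selection for a large image. A mean-intensity
# heuristic stands in for a learned attention network; parameters are
# illustrative placeholders, not the authors' MagNet settings.
import numpy as np


def tile_scores(image, tile):
    """Score each non-overlapping tile of the image."""
    rows, cols = image.shape[0] // tile, image.shape[1] // tile
    scores = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            patch = image[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            scores[r, c] = patch.mean()
    return scores


def coarse_to_fine(image, tile=256, top_k=4, levels=2, origin=(0, 0)):
    """Keep only the top-k highest-scoring tiles per level and return
    absolute (row_px, col_px, tile_size) regions to analyse more finely."""
    regions = []
    scores = tile_scores(image, tile)
    for idx in np.argsort(scores, axis=None)[::-1][:top_k]:
        r, c = np.unravel_index(idx, scores.shape)
        top, left = origin[0] + r * tile, origin[1] + c * tile
        regions.append((top, left, tile))
        if levels > 1:  # recurse into the selected tile at a finer scale
            sub = image[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            regions += coarse_to_fine(sub, tile // 2, top_k,
                                      levels - 1, origin=(top, left))
    return regions


if __name__ == "__main__":
    thumbnail = np.random.rand(1024, 1024)  # stand-in for one WSI pyramid level
    print(coarse_to_fine(thumbnail)[:5])
```

Only the selected regions would be read at higher magnification, which is what keeps the number of processed patches small.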

16 pages, 4554 KiB  
Article
Identifying Diabetic Retinopathy in the Human Eye: A Hybrid Approach Based on a Computer-Aided Diagnosis System Combined with Deep Learning
by Şükran Yaman Atcı, Ali Güneş, Metin Zontul and Zafer Arslan
Tomography 2024, 10(2), 215-230; https://doi.org/10.3390/tomography10020017 - 05 Feb 2024
Viewed by 852
Abstract
Diagnosing and screening for diabetic retinopathy is a well-known issue in the biomedical field. A component of computer-aided diagnosis that has advanced significantly over the past few years as a result of the development and effectiveness of deep learning is the use of medical imagery from a patient’s eye to identify the damage caused to blood vessels. Issues with unbalanced datasets, incorrect annotations, a lack of sample images, and improper performance evaluation measures have negatively impacted the performance of deep learning models. Using three benchmark datasets of diabetic retinopathy, we conducted a detailed comparison study comparing various state-of-the-art approaches to address the effect caused by class imbalance, with precision scores of 93%, 89%, 81%, 76%, and 96%, respectively, for normal, mild, moderate, severe, and DR phases. The analyses of the hybrid modeling, including CNN analysis and SHAP model derivation results, are compared at the end of the paper, and ideal hybrid modeling strategies for deep learning classification models for automated DR detection are identified. Full article
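
One standard mitigation for the class imbalance discussed above is class weighting; the sketch below computes balanced weights with scikit-learn. The class counts, features, and classifier are synthetic stand-ins, not the study's data or hybrid model.

```python
# Balanced class weights for an imbalanced five-grade labelling problem.
# The label counts and features below are invented for the example.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight
from sklearn.linear_model import LogisticRegression

classes = np.array([0, 1, 2, 3, 4])              # DR severity grades
y = np.repeat(classes, [700, 60, 150, 40, 50])   # imbalanced toy labels
X = np.random.rand(len(y), 32)                   # stand-in image features

weights = compute_class_weight(class_weight="balanced", classes=classes, y=y)
print(dict(zip(classes.tolist(), np.round(weights, 2))))

# Many estimators accept the same idea directly via class_weight="balanced".
clf = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y)
print(clf.score(X, y))
```

In a deep learning setting the same weights would typically be passed to the loss function or used to build a weighted sampler.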

13 pages, 2117 KiB  
Article
Real-Time Protozoa Detection from Microscopic Imaging Using YOLOv4 Algorithm
by İdris Kahraman, İsmail Rakıp Karaş and Muhammed Kamil Turan
Appl. Sci. 2024, 14(2), 607; https://doi.org/10.3390/app14020607 - 10 Jan 2024
Viewed by 1183
Abstract
Protozoa detection and classification from freshwater samples using microscopic imaging are critical components of environmental monitoring, parasitology, and biological research. Bacterial and parasitic contamination of water is an important public health concern. Conventional methods often rely on manual identification, resulting in time-consuming analyses and limited scalability. In this study, we propose a real-time protozoa detection framework using the YOLOv4 algorithm, a state-of-the-art deep learning model known for its exceptional speed and accuracy. Our dataset consists of instances of protozoan species such as Bdelloid Rotifera, Stylonychia Pustulata, Paramecium, Hypotrich Ciliate, Colpoda, Lepocinclis Acus, and Clathrulina Elegans, which occur in freshwater and differ in shape, size, and movement. A key feature of our work is the creation of a dataset built from different cultures grown from various water sources, such as rainwater and puddles. Our network architecture is carefully tailored to optimize the detection of protozoa, ensuring precise localization and classification of individual organisms. To validate our approach, extensive experiments were conducted using real-world microscopic image datasets. The results demonstrate that the YOLOv4-based model achieves outstanding detection accuracy and significantly outperforms traditional methods in terms of speed and precision. The real-time capabilities of our framework enable rapid analysis of large-scale datasets, making it highly suitable for dynamic environments and time-sensitive applications. Furthermore, we introduce a user-friendly interface that allows researchers and environmental professionals to effortlessly deploy our YOLOv4-based protozoa detection tool. The model achieved an F1-score of 0.95, a precision of 0.92, a sensitivity of 0.98, an mAP of 0.9752, and an overall accuracy of 97%. A desktop application was then developed to make the trained model available for testing. The proposed framework's speed and accuracy have significant implications for various fields, from supporting paramesiology/parasitology studies to water quality assessment, offering a powerful tool to enhance our understanding and preservation of ecosystems. Full article
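
For readers who want to run a comparable detector, the sketch below applies a trained YOLOv4 model to a microscope image with OpenCV's DNN module. The configuration file, weights, class list, and image path are hypothetical placeholders, not artifacts released by the authors.

```python
# Running a trained YOLOv4 detector on one microscope frame with OpenCV.
# All file paths and the class list are hypothetical placeholders.
import cv2

CLASSES = ["bdelloid_rotifera", "paramecium", "colpoda"]  # illustrative subset

net = cv2.dnn.readNetFromDarknet("yolov4-protozoa.cfg", "yolov4-protozoa.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1.0 / 255, swapRB=True)

frame = cv2.imread("microscope_frame.jpg")
class_ids, confidences, boxes = model.detect(frame, confThreshold=0.5,
                                             nmsThreshold=0.4)
for cid, conf, (x, y, w, h) in zip(class_ids, confidences, boxes):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, f"{CLASSES[int(cid)]} {float(conf):.2f}", (x, y - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
cv2.imwrite("detections.jpg", frame)
```

Precision, sensitivity, and F1-score of the kind reported above would then be computed by matching such detections against annotated ground-truth boxes.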

12 pages, 895 KiB  
Article
Artificial Intelligence and Panendoscopy—Automatic Detection of Clinically Relevant Lesions in Multibrand Device-Assisted Enteroscopy
by Francisco Mendes, Miguel Mascarenhas, Tiago Ribeiro, João Afonso, Pedro Cardoso, Miguel Martins, Hélder Cardoso, Patrícia Andrade, João P. S. Ferreira, Miguel Mascarenhas Saraiva and Guilherme Macedo
Cancers 2024, 16(1), 208; https://doi.org/10.3390/cancers16010208 - 01 Jan 2024
Cited by 1 | Viewed by 847
Abstract
Device-assisted enteroscopy (DAE) is capable of evaluating the entire gastrointestinal tract, identifying multiple lesions. Nevertheless, DAE’s diagnostic yield is suboptimal. Convolutional neural networks (CNN) are multi-layer architecture artificial intelligence models suitable for image analysis, but there is a lack of studies about their application in DAE. Our group aimed to develop a multidevice CNN for panendoscopic detection of clinically relevant lesions during DAE. In total, 338 exams performed in two specialized centers were retrospectively evaluated, with 152 single-balloon enteroscopies (Fujifilm®, Porto, Portugal), 172 double-balloon enteroscopies (Olympus®, Porto, Portugal) and 14 motorized spiral enteroscopies (Olympus®, Porto, Portugal); then, 40,655 images were divided in a training dataset (90% of the images, n = 36,599) and testing dataset (10% of the images, n = 4066) used to evaluate the model. The CNN’s output was compared to an expert consensus classification. The model was evaluated by its sensitivity, specificity, positive (PPV) and negative predictive values (NPV), accuracy and area under the precision recall curve (AUC-PR). The CNN had an 88.9% sensitivity, 98.9% specificity, 95.8% PPV, 97.1% NPV, 96.8% accuracy and an AUC-PR of 0.97. Our group developed the first multidevice CNN for panendoscopic detection of clinically relevant lesions during DAE. The development of accurate deep learning models is of utmost importance for increasing the diagnostic yield of DAE-based panendoscopy. Full article
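
The figures quoted in this abstract (sensitivity, specificity, PPV, NPV, accuracy, and AUC-PR) can be computed from raw predictions as in the sketch below; the labels and scores are synthetic stand-ins, not the study's data.

```python
# Computing sensitivity, specificity, PPV, NPV, accuracy, and AUC-PR
# from binary predictions; labels and scores below are synthetic.
import numpy as np
from sklearn.metrics import confusion_matrix, average_precision_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                         # 0 = normal, 1 = lesion
y_score = np.clip(0.7 * y_true + rng.normal(0.2, 0.25, 1000), 0, 1)
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
accuracy = (tp + tn) / (tp + tn + fp + fn)
auc_pr = average_precision_score(y_true, y_score)              # area under the PR curve

print(f"sens={sensitivity:.3f} spec={specificity:.3f} ppv={ppv:.3f} "
      f"npv={npv:.3f} acc={accuracy:.3f} auc_pr={auc_pr:.3f}")
```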

18 pages, 32324 KiB  
Article
DTR-GAN: An Unsupervised Bidirectional Translation Generative Adversarial Network for MRI-CT Registration
by Aolin Yang, Tiejun Yang, Xiang Zhao, Xin Zhang, Yanghui Yan and Chunxia Jiao
Appl. Sci. 2024, 14(1), 95; https://doi.org/10.3390/app14010095 - 21 Dec 2023
Viewed by 763
Abstract
Medical image registration is a fundamental and indispensable element in medical image analysis, which can establish spatial consistency among corresponding anatomical structures across various medical images. Since images with different modalities exhibit different features, it remains a challenge to find their exact correspondence. Most of the current methods based on image-to-image translation cannot fully leverage the available information, which will affect the subsequent registration performance. To solve the problem, we develop an unsupervised multimodal image registration method named DTR-GAN. Firstly, we design a multimodal registration framework via a bidirectional translation network to transform the multimodal image registration into a unimodal registration, which can effectively use the complementary information of different modalities. Then, to enhance the quality of the transformed images in the translation network, we design a multiscale encoder–decoder network that effectively captures both local and global features in images. Finally, we propose a mixed similarity loss to encourage the warped image to be closer to the target image in deep features. We extensively evaluate methods for MRI-CT image registration tasks of the abdominal cavity with advanced unsupervised multimodal image registration approaches. The results indicate that DTR-GAN obtains a competitive performance compared to other methods in MRI-CT registration. Compared with DFR, DTR-GAN has not only obtained performance improvements of 2.35% and 2.08% in the dice similarity coefficient (DSC) of MRI-CT registration and CT-MRI registration on the Learn2Reg dataset but has also decreased the average symmetric surface distance (ASD) by 0.33 mm and 0.12 mm on the Learn2Reg dataset. Full article
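
The two registration metrics reported above, the Dice similarity coefficient (DSC) and the average symmetric surface distance (ASD), can be computed from binary masks as in this sketch; the masks and voxel spacing are synthetic placeholders.

```python
# DSC and ASD between two binary 3D masks; masks and spacing are synthetic.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt


def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())


def surface(mask):
    """Boundary voxels: the mask minus its erosion."""
    return np.logical_and(mask, ~binary_erosion(mask))


def asd(a, b, spacing=(1.0, 1.0, 1.0)):
    sa, sb = surface(a), surface(b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    d_to_b = distance_transform_edt(~sb, sampling=spacing)
    d_to_a = distance_transform_edt(~sa, sampling=spacing)
    return (d_to_b[sa].sum() + d_to_a[sb].sum()) / (sa.sum() + sb.sum())


if __name__ == "__main__":
    a = np.zeros((64, 64, 64), bool); a[20:44, 20:44, 20:44] = True
    b = np.zeros((64, 64, 64), bool); b[22:46, 22:46, 22:46] = True
    print(f"DSC = {dice(a, b):.3f}, ASD = {asd(a, b):.3f} mm")
```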

13 pages, 5402 KiB  
Review
Assessment of Computed Tomography Perfusion Research Landscape: A Topic Modeling Study
by Burak B. Ozkara, Mert Karabacak, Konstantinos Margetis, Vivek S. Yedavalli, Max Wintermark and Sotirios Bisdas
Tomography 2023, 9(6), 2016-2028; https://doi.org/10.3390/tomography9060158 - 01 Nov 2023
Viewed by 2906
Abstract
The number of scholarly articles continues to rise. The continuous increase in scientific output poses a challenge for researchers, who must devote considerable time to collecting and analyzing these results. The topic modeling approach emerges as a novel response to this need. Considering the swift advancements in computed tomography perfusion (CTP), we deem it essential to launch an initiative focused on topic modeling. We conducted a comprehensive search of the Scopus database from 1 January 2000 to 16 August 2023, to identify relevant articles about CTP. Using the BERTopic model, we derived a group of topics along with their respective representative articles. For the 2020s, linear regression models were used to identify and interpret trending topics. From the most to the least prevalent, the topics that were identified include “Tumor Vascularity”, “Stroke Assessment”, “Myocardial Perfusion”, “Intracerebral Hemorrhage”, “Imaging Optimization”, “Reperfusion Therapy”, “Postprocessing”, “Carotid Artery Disease”, “Seizures”, “Hemorrhagic Transformation”, “Artificial Intelligence”, and “Moyamoya Disease”. The model provided insights into the trends of the current decade, highlighting “Postprocessing” and “Artificial Intelligence” as the most trending topics. Full article
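
The core of the workflow described above reduces to a few BERTopic calls, sketched below. The toy corpus is a placeholder for the Scopus abstracts used in the study, and BERTopic downloads its default sentence-embedding model on first use.

```python
# Minimal BERTopic run on a toy corpus (placeholder for the Scopus
# abstracts analysed in the study). Requires the bertopic package.
from bertopic import BERTopic

base = [
    "CT perfusion thresholds for ischemic core estimation in acute stroke",
    "Deconvolution-based postprocessing of brain CT perfusion maps",
    "Myocardial CT perfusion for the assessment of coronary artery disease",
    "Deep learning denoising of low-dose CT perfusion imaging",
    "CT perfusion in moyamoya disease before and after revascularization",
]
docs = [f"{text} (cohort {i})" for i in range(40) for text in base]  # 200 toy docs

topic_model = BERTopic(verbose=False)
topics, _ = topic_model.fit_transform(docs)

print(topic_model.get_topic_info().head())  # topic sizes and keyword labels
```

Trending topics can then be identified, as in the study, by regressing each topic's yearly share of publications on time and ranking the slopes.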

13 pages, 1034 KiB  
Article
Real-World Evidence on the Clinical Characteristics and Management of Patients with Chronic Lymphocytic Leukemia in Spain Using Natural Language Processing: The SRealCLL Study
by Javier Loscertales, Pau Abrisqueta-Costa, Antonio Gutierrez, José Ángel Hernández-Rivas, Rafael Andreu-Lapiedra, Alba Mora, Carolina Leiva-Farré, María Dolores López-Roda, Ángel Callejo-Mellén, Esther Álvarez-García and José Antonio García-Marco
Cancers 2023, 15(16), 4047; https://doi.org/10.3390/cancers15164047 - 10 Aug 2023
Cited by 1 | Viewed by 2305
Abstract
The SRealCLL study aimed to obtain real-world evidence on the clinical characteristics and treatment patterns of patients with chronic lymphocytic leukemia (CLL) using natural language processing (NLP). Electronic health records (EHRs) from seven Spanish hospitals (January 2016–December 2018) were analyzed using EHRead® technology, based on NLP and machine learning. A total of 534 CLL patients were assessed. No treatment was detected in 270 (50.6%) patients (watch-and-wait, W&W). First-line (1L) treatment was identified in 230 (43.1%) patients and relapsed/refractory (2L) treatment was identified in 58 (10.9%). The median age ranged from 71 to 75 years, with a uniform male predominance (54.8–63.8%). The main comorbidities included hypertension (W&W: 35.6%; 1L: 38.3%; 2L: 39.7%), diabetes mellitus (W&W: 24.4%; 1L: 24.3%; 2L: 31%), cardiac arrhythmia (W&W: 16.7%; 1L: 17.8%; 2L: 17.2%), heart failure (W&W 16.3%, 1L 17.4%, 2L 17.2%), and dyslipidemia (W&W: 13.7%; 1L: 18.7%; 2L: 19.0%). The most common antineoplastic treatment was ibrutinib in 1L (64.8%) and 2L (62.1%), followed by bendamustine + rituximab (12.6%), obinutuzumab + chlorambucil (5.2%), rituximab + chlorambucil (4.8%), and idelalisib + rituximab (3.9%) in 1L and venetoclax (15.5%), idelalisib + rituximab (6.9%), bendamustine + rituximab (3.5%), and venetoclax + rituximab (3.5%) in 2L. This study expands the information available on patients with CLL in Spain, describing the diversity in patient characteristics and therapeutic approaches in clinical practice. Full article
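
EHRead® is a proprietary NLP system; as a loose, simplified illustration of the extraction step, the sketch below pulls CLL treatment mentions out of free-text notes with pattern matching. The drug lexicon and example notes are invented for the example.

```python
# Simplified extraction of CLL treatment mentions from free-text notes.
# This is NOT EHRead(R); the lexicon and notes are invented examples.
import re
from collections import Counter

DRUGS = ["ibrutinib", "venetoclax", "rituximab", "obinutuzumab",
         "chlorambucil", "bendamustine", "idelalisib"]
PATTERN = re.compile(r"\b(" + "|".join(DRUGS) + r")\b", re.IGNORECASE)

notes = [
    "Se inicia tratamiento con ibrutinib 420 mg/dia.",
    "Relapse after bendamustine + rituximab; venetoclax started.",
    "Watch and wait, no treatment indicated at this time.",
]

counts = Counter()
for note in notes:
    for drug in {m.group(1).lower() for m in PATTERN.finditer(note)}:
        counts[drug] += 1  # count each drug once per note

print(counts.most_common())
```

A production system additionally needs negation handling, date resolution, and mapping of mentions to treatment lines, which is where machine learning-based approaches come in.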

24 pages, 11744 KiB  
Review
A Review of Machine Learning Techniques for the Classification and Detection of Breast Cancer from Medical Images
by Reem Jalloul, H. K. Chethan and Ramez Alkhatib
Diagnostics 2023, 13(14), 2460; https://doi.org/10.3390/diagnostics13142460 - 24 Jul 2023
Cited by 5 | Viewed by 3506
Abstract
Cancer is an incurable disease driven by unregulated cell division. Breast cancer is the most prevalent cancer in women worldwide, and early detection can lower death rates. Medical images provide essential information for locating and diagnosing breast cancer. This paper reviews the history of the discipline and examines how machine learning and deep learning are applied to detect breast cancer. The classification of breast cancer using several medical imaging modalities is covered, and classification systems for tumors, non-tumors, and dense masses across these modalities are thoroughly explained. The differences between the various medical image types are first examined using a variety of study datasets; the machine learning and deep learning methods used for diagnosing and classifying breast cancer are then surveyed. Finally, the review addresses the challenges of classification and detection and summarizes the best results achieved by the different approaches. Full article
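
As a minimal, concrete counterpart to the pipelines surveyed in this review, the sketch below trains a classical classifier on scikit-learn's built-in breast cancer dataset (tabular features derived from digitized fine-needle aspirate images). It is an illustration only, not one of the reviewed methods.

```python
# Classical ML baseline on scikit-learn's built-in breast cancer dataset;
# illustrative only, not one of the methods reviewed above.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["malignant", "benign"]))
```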
