Artificial Intelligence in Radiology: Present and Future Perspectives

A special issue of Journal of Clinical Medicine (ISSN 2077-0383). This special issue belongs to the section "Nuclear Medicine & Radiology".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 29408

Special Issue Editor


Guest Editor
Division of Breast Radiology IEO, European Institute of Oncology IRCCS, Milan, Italy
Interests: artificial intelligence; personalized medicine; radiomics; oncology; magnetic resonance imaging; ultrasound; computed tomography; mammography; X-rays; interventional radiology; innovations; education

Special Issue Information

Dear Colleagues,

One of the most promising areas of health innovation is the application of artificial intelligence (AI) in medical imaging, including but not limited to image processing and interpretation. Indeed, AI may find multiple applications, from image acquisition and processing to aided reporting, follow-up planning, data storage, data mining, and many others. Due to this wide range of applications, AI is expected to have a massive impact on radiologists' daily practice, and its potential benefits in radiology are immense.

This Special Issue aims to collect papers that discuss AI applications, analyze various aspects of the integration of AI into the radiological workflow, and provide an overview of the balance between the threats and opportunities AI presents for radiologists. In this Special Issue, original studies, meta-analyses, reviews, pictorial reviews, and letters investigating the new frontiers of AI in medical imaging will be evaluated.

Awareness of this trend is a necessity, especially for the younger generations who will have to face this revolution. Radiologists should be aware of the basic principles of machine learning and deep learning systems, of the characteristics of the datasets used to train them, and of their limitations.

Dr. Filippo Pesapane
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Clinical Medicine is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • personalized medicine
  • radiomics
  • oncology
  • magnetic resonance imaging
  • ultrasound
  • computed tomography
  • mammography
  • X-rays
  • interventional radiology
  • innovations
  • education

Published Papers (6 papers)

Research

Jump to: Review, Other

12 pages, 2296 KiB  
Article
Pathological Diagnosis of Adult Craniopharyngioma on MR Images: An Automated End-to-End Approach Based on Deep Neural Networks Requiring No Manual Segmentation
by Yuen Teng, Xiaoping Ran, Boran Chen, Chaoyue Chen and Jianguo Xu
J. Clin. Med. 2022, 11(24), 7481; https://doi.org/10.3390/jcm11247481 - 16 Dec 2022
Cited by 1 | Viewed by 1484
Abstract
Purpose: The goal of this study was to develop end-to-end convolutional neural network (CNN) models that can noninvasively discriminate papillary craniopharyngioma (PCP) from adamantinomatous craniopharyngioma (ACP) on MR images, requiring no manual segmentation. Materials and methods: A total of 97 patients diagnosed with ACP or PCP were included. Pretreatment contrast-enhanced T1-weighted images were collected and used as the input of the CNNs. Six models were established based on six networks: VGG16, ResNet18, ResNet50, ResNet101, DenseNet121, and DenseNet169. The area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity were used to assess the performances of these deep neural networks, and five-fold cross-validation was applied to evaluate the models. Results: The six networks yielded feasible performances, with AUCs of at least 0.78 for classification. The model based on ResNet50 achieved the highest AUC of 0.838 ± 0.062, with an accuracy of 0.757 ± 0.052, a sensitivity of 0.608 ± 0.198, and a specificity of 0.845 ± 0.034. Moreover, the results indicated that the CNN method had a competitive performance compared to the radiomics-based method, which required manual segmentation for feature extraction and further feature selection. Conclusions: MRI-based deep neural networks can noninvasively differentiate ACP from PCP to facilitate the personalized assessment of craniopharyngiomas.
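As a generic illustration of the evaluation protocol in this abstract (five-fold cross-validation with AUC), the two ingredients can be sketched in plain Python; this is not the authors' code, and the function names are illustrative:

```python
import random

def auc_score(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case is scored above a random negative one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def k_fold_indices(n_samples, k=5, seed=0):
    """Shuffle the sample indices and partition them into k disjoint folds;
    each fold serves once as the held-out test set."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]
```

Reporting the mean ± standard deviation of `auc_score` across the five held-out folds yields figures of the form quoted above (e.g. 0.838 ± 0.062).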
(This article belongs to the Special Issue Artificial Intelligence in Radiology: Present and Future Perspectives)

17 pages, 1902 KiB  
Article
Application of nnU-Net for Automatic Segmentation of Lung Lesions on CT Images and Its Implication for Radiomic Models
by Matteo Ferrante, Lisa Rinaldi, Francesca Botta, Xiaobin Hu, Andreas Dolp, Marta Minotti, Francesca De Piano, Gianluigi Funicelli, Stefania Volpe, Federica Bellerba, Paolo De Marco, Sara Raimondi, Stefania Rizzo, Kuangyu Shi, Marta Cremonesi, Barbara A. Jereczek-Fossa, Lorenzo Spaggiari, Filippo De Marinis, Roberto Orecchia and Daniela Origgi
J. Clin. Med. 2022, 11(24), 7334; https://doi.org/10.3390/jcm11247334 - 9 Dec 2022
Cited by 5 | Viewed by 1930
Abstract
Radiomics investigates the predictive role of quantitative parameters calculated from radiological images. In oncology, tumour segmentation constitutes a crucial step of the radiomic workflow. Manual segmentation is time-consuming and prone to inter-observer variability. In this study, a state-of-the-art deep-learning network for automatic segmentation (nnU-Net) was applied to computed tomography images of lung tumour patients, and its impact on the performance of survival radiomic models was assessed. In total, 899 patients were included, from two proprietary datasets and one public dataset. Different network architectures (2D, 3D) were trained and tested on different combinations of the datasets. Automatic segmentations were compared to reference manual segmentations performed by physicians using the DICE similarity coefficient. Subsequently, the accuracy of radiomic models for survival classification based on either manual or automatic segmentations was compared, considering both hand-crafted and deep-learning features. The best agreement between automatic and manual contours (DICE = 0.78 ± 0.12) was achieved by averaging the 2D and 3D predictions and applying customised post-processing. The accuracy of the survival classifier (ranging between 0.65 and 0.78) was not statistically different when using manual versus automatic contours, with both hand-crafted and deep features. These results support the promising role nnU-Net can play in automatic segmentation, accelerating the radiomic workflow without impairing the models' accuracy. Further investigations on different clinical endpoints and populations are encouraged to confirm and generalise these findings.
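The two quantities at the heart of this abstract, the DICE similarity coefficient and the averaging of 2D and 3D predictions, admit a compact sketch (a simplified reading, not the study's actual pipeline; masks are flattened 0/1 voxel sequences and the function names are our own):

```python
def dice(mask_a, mask_b):
    """DICE similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2 * inter / size if size else 1.0

def ensemble_mask(prob_2d, prob_3d, threshold=0.5):
    """Average the 2D- and 3D-network probabilities voxel-wise, then
    binarise -- one simple reading of 'averaging 2D and 3D predictions'."""
    return [int((p + q) / 2 >= threshold) for p, q in zip(prob_2d, prob_3d)]
```

The study's customised post-processing would then operate on the output of `ensemble_mask` before the DICE comparison against the physicians' reference contours.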

17 pages, 1650 KiB  
Article
Detection of Lumbar Spondylolisthesis from X-ray Images Using Deep Learning Network
by Giam Minh Trinh, Hao-Chiang Shao, Kevin Li-Chun Hsieh, Ching-Yu Lee, Hsiao-Wei Liu, Chen-Wei Lai, Sen-Yi Chou, Pei-I Tsai, Kuan-Jen Chen, Fang-Chieh Chang, Meng-Huang Wu and Tsung-Jen Huang
J. Clin. Med. 2022, 11(18), 5450; https://doi.org/10.3390/jcm11185450 - 16 Sep 2022
Cited by 15 | Viewed by 16495
Abstract
Spondylolisthesis refers to the displacement of a vertebral body relative to the vertebra below it, which can cause radicular symptoms, back pain, or leg pain. It usually occurs in the lower lumbar spine, especially in women over the age of 60. The prevalence of spondylolisthesis is expected to rise as the global population ages, requiring prudent action to promptly identify it in clinical settings. The goal of this study was to develop a computer-aided diagnostic (CADx) algorithm, LumbarNet, and to evaluate its efficiency in automatically detecting spondylolisthesis from lumbar X-ray images. Built upon U-Net with a feature fusion module (FFM) and combined with (i) a P-grade, (ii) a piecewise slope detection (PSD) scheme, and (iii) a dynamic shift (DS), LumbarNet was able to analyze complex structural patterns on lumbar X-ray images, including true lateral, flexion, and extension lateral views. Our results showed that the model achieved a mean intersection over union (mIOU) of 0.88 in vertebral region segmentation and an accuracy of 88.83% in vertebral slip detection. We conclude that LumbarNet outperformed U-Net, a commonly used method in medical image segmentation, and could serve as a reliable method to identify spondylolisthesis.
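The segmentation metric quoted in this abstract, mean intersection over union (mIOU), can be stated in a few lines of Python (an illustrative sketch with our own naming, where masks are flattened 0/1 sequences, one pair per vertebral region):

```python
def iou(mask_a, mask_b):
    """Intersection over union of two binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    union = sum(a or b for a, b in zip(mask_a, mask_b))
    return inter / union if union else 1.0

def mean_iou(mask_pairs):
    """Mean IoU over (prediction, reference) mask pairs."""
    return sum(iou(p, r) for p, r in mask_pairs) / len(mask_pairs)
```

A reported mIOU of 0.88 thus means that, averaged over vertebral regions, 88% of the union of predicted and reference pixels is shared by both contours.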

23 pages, 3725 KiB  
Article
Deep Ensemble Learning for the Automatic Detection of Pneumoconiosis in Coal Worker’s Chest X-ray Radiography
by Liton Devnath, Suhuai Luo, Peter Summons, Dadong Wang, Kamran Shaukat, Ibrahim A. Hameed and Fatma S. Alrayes
J. Clin. Med. 2022, 11(18), 5342; https://doi.org/10.3390/jcm11185342 - 12 Sep 2022
Cited by 19 | Viewed by 2159
Abstract
Globally, coal remains one of the natural resources that provide power to the world. Thousands of people are involved in coal collection, processing, and transportation. Particulate coal dust is produced during these processes, which can damage the lung structure of workers and cause pneumoconiosis. Apart from assessment by specialist radiologists, there is no automated system for detecting and monitoring the disease in coal miners. This paper proposes ensemble learning techniques for detecting pneumoconiosis in chest X-ray radiographs (CXRs) using multiple deep learning models. Three ensemble learning techniques (simple averaging, multi-weighted averaging, and majority voting (MVOT)) were proposed, and their performances were investigated using randomised cross-fold and leave-one-out cross-validation datasets. Five statistical measurements were used to compare the outcomes of the three investigations on the proposed integrated approach with state-of-the-art approaches from the literature for the same dataset. In the second investigation, multi-weighted averaging marginally enhanced the statistical measures of the ensemble built on a robust model, CheXNet. In the third investigation, the same model elevated accuracy from 87.80% to 90.20%. The investigated results helped us identify a robust deep learning model and ensemble framework that outperformed the others, achieving an accuracy of 91.50% in the automated detection of pneumoconiosis.
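The three ensemble rules named in this abstract, simple averaging, multi-weighted averaging, and majority voting (MVOT), can be sketched for a single CXR whose per-model outputs are probabilities of pneumoconiosis (an illustrative sketch, not the paper's implementation):

```python
def simple_average(probs):
    """Unweighted mean of the per-model probabilities."""
    return sum(probs) / len(probs)

def weighted_average(probs, weights):
    """Weighted mean; the weights could, e.g., reflect each model's
    validation accuracy (a hypothetical choice for illustration)."""
    return sum(p * w for p, w in zip(probs, weights)) / sum(weights)

def majority_vote(probs, threshold=0.5):
    """MVOT: each model casts a binary vote; the majority label wins."""
    votes = sum(p >= threshold for p in probs)
    return int(2 * votes > len(probs))
```

The averaging rules return a fused probability that is thresholded afterwards, whereas majority voting returns the final binary label directly.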

Review

Jump to: Research, Other

18 pages, 3930 KiB  
Review
How Radiomics Can Improve Breast Cancer Diagnosis and Treatment
by Filippo Pesapane, Paolo De Marco, Anna Rapino, Eleonora Lombardo, Luca Nicosia, Priyan Tantrige, Anna Rotili, Anna Carla Bozzini, Silvia Penco, Valeria Dominelli, Chiara Trentin, Federica Ferrari, Mariagiorgia Farina, Lorenza Meneghetti, Antuono Latronico, Francesca Abbate, Daniela Origgi, Gianpaolo Carrafiello and Enrico Cassano
J. Clin. Med. 2023, 12(4), 1372; https://doi.org/10.3390/jcm12041372 - 9 Feb 2023
Cited by 14 | Viewed by 3356
Abstract
Recent technological advances in the field of artificial intelligence hold promise in addressing medical challenges in breast cancer care, such as early diagnosis, cancer subtype determination and molecular profiling, prediction of lymph node metastases, and prognostication of treatment response and probability of recurrence. Radiomics is a quantitative approach to medical imaging which aims to enhance the existing data available to clinicians by means of advanced mathematical analysis using artificial intelligence. Various published studies from different fields in imaging have highlighted the potential of radiomics to enhance clinical decision making. In this review, we describe the evolution of AI in breast imaging and its frontiers, focusing on handcrafted and deep learning radiomics. We present a typical workflow of a radiomics analysis and a practical “how-to” guide. Finally, we summarize the methodology and implementation of radiomics in breast cancer, based on the most recent scientific literature, to help researchers and clinicians gain fundamental knowledge of this emerging technology. Alongside this, we discuss the current limitations of radiomics and the challenges of integrating it into clinical practice, including conceptual consistency, data curation, technical reproducibility, adequate accuracy, and clinical translation. The incorporation of radiomics with clinical, histopathological, and genomic information will enable physicians to move toward a higher level of personalized management of patients with breast cancer.
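As a hedged illustration of the handcrafted branch of a radiomics workflow, a few typical first-order features can be computed directly from the intensity values inside a segmented region of interest (the feature set and the 8-bin histogram are our own simplifying choices, not the review's recommended pipeline):

```python
import math

def first_order_features(roi_values):
    """Compute a few first-order radiomic features (mean, standard
    deviation, skewness, histogram entropy) from ROI intensity values."""
    n = len(roi_values)
    mean = sum(roi_values) / n
    var = sum((v - mean) ** 2 for v in roi_values) / n
    std = math.sqrt(var)
    skew = (sum((v - mean) ** 3 for v in roi_values) / n) / std ** 3 if std else 0.0
    # Shannon entropy over a coarse 8-bin histogram of the intensities
    lo, hi = min(roi_values), max(roi_values)
    width = (hi - lo) / 8 or 1.0
    counts = [0] * 8
    for v in roi_values:
        counts[min(int((v - lo) / width), 7)] += 1
    entropy = -sum(c / n * math.log2(c / n) for c in counts if c)
    return {"mean": mean, "std": std, "skewness": skew, "entropy": entropy}
```

In a full pipeline, such features extracted from each lesion would feed a feature-selection step and a classifier, which is the part of the workflow that deep-learning radiomics replaces with learned representations.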

Other

Jump to: Research, Review

19 pages, 963 KiB  
Systematic Review
Artificial Intelligence in the Diagnosis of Hepatocellular Carcinoma: A Systematic Review
by Alessandro Martinino, Mohammad Aloulou, Surobhi Chatterjee, Juan Pablo Scarano Pereira, Saurabh Singhal, Tapan Patel, Thomas Paul-Emile Kirchgesner, Salvatore Agnes, Salvatore Annunziata, Giorgio Treglia and Francesco Giovinazzo
J. Clin. Med. 2022, 11(21), 6368; https://doi.org/10.3390/jcm11216368 - 28 Oct 2022
Cited by 7 | Viewed by 3117
Abstract
Hepatocellular carcinoma ranks fifth amongst the most common malignancies and is the third most common cause of cancer-related death globally. Artificial intelligence is a rapidly growing field of interest. Following the PRISMA reporting guidelines, we conducted a systematic review to retrieve articles reporting the application of AI in HCC detection and characterization. A total of 27 articles were included and analyzed with our composite score for the evaluation of the quality of the publications. The contingency table showed a statistically significant, steady improvement of the total quality score over the years (p = 0.004). Different AI methods were adopted in the included articles, with 19 articles studying CT (41.30%), 20 studying US (43.47%), and 7 studying MRI (15.21%). No article discussed the use of artificial intelligence in PET or X-ray technology. Our systematic approach has shown that previous works in HCC detection and characterization have assessed the comparability of conventional interpretation with machine learning using US, CT, and MRI. The distribution of the imaging techniques in our analysis reflects the usefulness and evolution of medical imaging for the diagnosis of HCC. Moreover, our results highlight an imminent need for data sharing in collaborative data repositories to minimize unnecessary repetition and wastage of resources.
