Artificial Intelligence in Radiology 2.0

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Medical Imaging and Theranostics".

Deadline for manuscript submissions: closed (30 November 2023) | Viewed by 30858

Special Issue Editors


Dr. Xuan V. Nguyen
Guest Editor
Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH, USA
Interests: diagnostic radiology; neuroradiology; machine learning; quantitative modeling

Dr. Engin Dikici
Guest Editor
InPhase Solutions AS, Hornebergvegen 7A, 7038 Trondheim, Norway
Interests: computer vision; convolutional neural networks; generative adversarial network; self- and semi-supervised learning strategies

Special Issue Information

Dear Colleagues, 

Advances in computer vision over the past decade have led to growing interest in machine learning and other artificial intelligence (AI) applications in radiology. While a small but growing number of AI software programs have been approved for clinical use, many potential uses of AI in radiology remain areas of active investigation. Among the AI processes relevant to radiologic image interpretation is computer-assisted detection or diagnosis, which uses deep convolutional neural networks and other state-of-the-art AI methodologies to automate computer vision tasks such as image classification, object detection/localization, and image segmentation. Prognostication and clinical decision-making could also be assisted by AI-facilitated assessment of images and/or other clinical data. AI also has potential roles in radiology beyond image interpretation, including clinical decision support, protocol selection, improvements in image acquisition speed or quality, reporting and communication, and other clinical or research workflow processes.

We include discussions of the repertoire of network architectures applicable to radiology, including the deep convolutional neural networks commonly employed for image classification as well as generative adversarial networks and U-Net-based architectures. The aims of this Special Issue are to (1) summarize current research on several broad categories of AI tasks relevant to radiology through a series of multi-disciplinary literature reviews and (2) illustrate AI applications in selected radiology workflows through a diverse set of original research articles.

Dr. Xuan V. Nguyen
Dr. Engin Dikici
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • radiology workflow
  • computer-assisted diagnosis
  • computer-assisted detection
  • machine learning
  • deep learning
  • convolutional neural networks
  • medical diagnosis
  • diagnostic radiology

Published Papers (11 papers)


Research


13 pages, 2234 KiB  
Article
Machine Learning Radiomics Signature for Differentiating Lymphoma versus Benign Splenomegaly on CT
by Jih-An Cheng, Yu-Chun Lin, Yenpo Lin, Ren-Chin Wu, Hsin-Ying Lu, Lan-Yan Yang, Hsin-Ju Chiang, Yu-Hsiang Juan, Ying-Chieh Lai and Gigin Lin
Diagnostics 2023, 13(24), 3632; https://doi.org/10.3390/diagnostics13243632 - 08 Dec 2023
Viewed by 1120
Abstract
Background: We aimed to develop and validate a preoperative CT-based radiomics signature for differentiating lymphoma from benign splenomegaly. Methods: We retrospectively analyzed CT studies from 139 patients (age range 26–93 years, 43% female) between 2011 and 2019 with histopathological diagnosis of the spleen (19 lymphoma, 120 benign) and divided them into developing (n = 79) and testing (n = 60) datasets. Volumetric radiomic features were extracted from manual segmentation of the whole spleen on venous-phase CT imaging using the PyRadiomics package. LASSO regression was applied for feature selection and development of the radiomic signature, which was interrogated with the complete blood cell count and differential count. p values < 0.05 were considered significant. Results: Seven features were selected for the radiomic signature, including first-order statistics (10th percentile and Robust Mean Absolute Deviation), shape-based (Surface Area), and texture features (Correlation, MCC, Small Area Low Gray-level Emphasis and Low Gray-level Zone Emphasis). The radiomic signature achieved an excellent diagnostic accuracy of 97%, sensitivity of 89%, and specificity of 98% in distinguishing lymphoma from benign splenomegaly in the testing dataset. The radiomic signature significantly correlated with the platelet and segmented neutrophil percentage. Conclusions: A CT-based radiomics signature can be useful in distinguishing lymphoma from benign splenomegaly and can reflect changes in underlying blood profiles.
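As a rough illustration of the kind of workflow described above (volumetric feature extraction with PyRadiomics followed by LASSO-based feature selection), the following Python sketch outlines the two steps. The file paths, label coding, and cross-validation settings are illustrative assumptions and are not taken from the study.

"""Sketch of a CT radiomics signature: PyRadiomics feature extraction + LASSO selection.
Paths, labels, and settings are illustrative, not those of the published study."""
import pandas as pd
from radiomics import featureextractor
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def extract_features(cases):
    """cases: list of (ct_path, spleen_mask_path) NIfTI pairs, one per patient."""
    extractor = featureextractor.RadiomicsFeatureExtractor()  # default first-order, shape, and texture classes
    rows = []
    for ct_path, mask_path in cases:
        result = extractor.execute(ct_path, mask_path)
        # keep numeric feature values; drop the diagnostic metadata entries
        rows.append({k: float(v) for k, v in result.items() if not k.startswith("diagnostics_")})
    return pd.DataFrame(rows)


def fit_signature(features, labels):
    """labels: 1 = lymphoma, 0 = benign splenomegaly. Returns the fitted model and retained features."""
    model = make_pipeline(StandardScaler(), LassoCV(cv=5))
    model.fit(features.values, labels)
    retained = features.columns[model.named_steps["lassocv"].coef_ != 0]
    return model, list(retained)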

22 pages, 3342 KiB  
Article
Integrating Artificial Intelligence Tools in the Clinical Research Setting: The Ovarian Cancer Use Case
by Lorena Escudero Sanchez, Thomas Buddenkotte, Mohammad Al Sa’d, Cathal McCague, James Darcy, Leonardo Rundo, Alex Samoshkin, Martin J. Graves, Victoria Hollamby, Paul Browne, Mireia Crispin-Ortuzar, Ramona Woitek, Evis Sala, Carola-Bibiane Schönlieb, Simon J. Doran and Ozan Öktem
Diagnostics 2023, 13(17), 2813; https://doi.org/10.3390/diagnostics13172813 - 30 Aug 2023
Cited by 2 | Viewed by 1822
Abstract
Artificial intelligence (AI) methods applied to healthcare problems have shown enormous potential to alleviate the burden of health services worldwide and to improve the accuracy and reproducibility of predictions. In particular, developments in computer vision are creating a paradigm shift in the analysis of radiological images, where AI tools are already capable of automatically detecting and precisely delineating tumours. However, such tools are generally developed in technical departments that remain siloed from the settings where their use would bring real benefit. Significant effort is still needed to make these advancements available, first in academic clinical research and ultimately in the clinical setting. In this paper, we demonstrate a prototype pipeline, built entirely on free, open-source software, to bridge this gap. It simplifies the integration of tools and models developed within the AI community into the clinical research setting and provides an accessible platform with visualisation applications that allow end-users, such as radiologists, to view and interact with the output of these AI tools.
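For readers unfamiliar with what such a bridge can look like in practice, the sketch below shows one generic step: reading a DICOM series, running a segmentation model, and saving the result in a format that open-source viewers can overlay. It uses SimpleITK only; the model interface is a placeholder, and this is not the authors' pipeline.

"""Generic sketch of handing an AI segmentation back to an imaging research workflow.
The model object and its predict() interface are assumed placeholders."""
import SimpleITK as sitk


def segment_series(dicom_dir, output_path, model):
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
    image = reader.Execute()                        # the reconstructed 3D volume

    array = sitk.GetArrayFromImage(image)           # (z, y, x) voxel array for the model
    mask_array = model.predict(array)               # assumed interface returning a label volume

    mask = sitk.GetImageFromArray(mask_array.astype("uint8"))
    mask.CopyInformation(image)                     # keep spacing/origin so overlays align in a viewer
    sitk.WriteImage(mask, output_path)              # e.g. "tumour_mask.nii.gz"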

11 pages, 1604 KiB  
Article
Automatic Classification of Magnetic Resonance Histology of Peripheral Arterial Chronic Total Occlusions Using a Variational Autoencoder: A Feasibility Study
by Judit Csore, Christof Karmonik, Kayla Wilhoit, Lily Buckner and Trisha L. Roy
Diagnostics 2023, 13(11), 1925; https://doi.org/10.3390/diagnostics13111925 - 31 May 2023
Cited by 1 | Viewed by 1037
Abstract
The novel approach of our study consists of adapting and evaluating a custom-made variational autoencoder (VAE) using two-dimensional (2D) convolutional neural networks (CNNs) on magnetic resonance imaging (MRI) data to differentiate soft vs. hard plaque components in peripheral arterial disease (PAD). Five amputated lower extremities were imaged on a clinical ultra-high-field 7 Tesla MRI system. Ultrashort echo time (UTE), T1-weighted (T1w) and T2-weighted (T2w) datasets were acquired. Multiplanar reconstruction (MPR) images were obtained from one lesion per limb. Images were aligned to each other, and pseudo-color red-green-blue images were created. Four areas in latent space were defined corresponding to the sorted images reconstructed by the VAE. Images were classified by their position in latent space and scored using a tissue score (TS) as follows: (1) lumen patent, TS: 0; (2) partially patent, TS: 1; (3) mostly occluded with soft tissue, TS: 3; (4) mostly occluded with hard tissue, TS: 5. The average TS per lesion was calculated as the sum of the tissue scores of its images divided by the total number of images, together with the relative percentage of each class. In total, 2390 MPR reconstructed images were included in the analysis. The class composition varied from patent only (lesion #1) to the presence of all four classes. Lesions #2, #3 and #5 contained all tissue classes except "mostly occluded with hard tissue", whereas lesion #4 contained all four (ranges (I): 0.2–100%, (II): 46.3–75.9%, (III): 18–33.5%, (IV): 20%). Training the VAE was successful, as images with soft and hard tissue components in PAD lesions were satisfactorily separated in latent space. Using a VAE may assist in the rapid classification of MRI histology images acquired in a clinical setup, facilitating endovascular procedures.
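The scoring rule above can be made concrete with a short sketch: each image is assigned to one of the four latent-space areas and the lesion-level score is the mean of the per-image tissue scores. The nearest-centroid assignment and the encoder interface are assumptions for illustration only, not the authors' exact method.

"""Sketch of latent-space classification and per-lesion tissue-score averaging.
The nearest-centroid rule is an illustrative assumption, not the published method."""
import numpy as np

TISSUE_SCORE = {0: 0, 1: 1, 2: 3, 3: 5}  # patent, partially patent, soft occlusion, hard occlusion


def average_tissue_score(latent_vectors, region_centroids):
    """latent_vectors: (n_images, d) VAE encodings of one lesion's MPR images.
    region_centroids: (4, d) reference points for the four latent-space areas."""
    distances = np.linalg.norm(latent_vectors[:, None, :] - region_centroids[None, :, :], axis=-1)
    classes = distances.argmin(axis=1)                 # nearest latent-space area per image
    scores = np.array([TISSUE_SCORE[c] for c in classes])
    return scores.sum() / len(scores)                  # sum of tissue scores / number of images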

13 pages, 24571 KiB  
Article
Deep-Learning-Based Segmentation of the Shoulder from MRI with Inference Accuracy Prediction
by Hanspeter Hess, Adrian C. Ruckli, Finn Bürki, Nicolas Gerber, Jennifer Menzemer, Jürgen Burger, Michael Schär, Matthias A. Zumstein and Kate Gerber
Diagnostics 2023, 13(10), 1668; https://doi.org/10.3390/diagnostics13101668 - 09 May 2023
Cited by 7 | Viewed by 2043
Abstract
Three-dimensional (3D)-image-based anatomical analysis of rotator cuff tear patients has been proposed as a way to improve repair prognosis analysis and thereby reduce the incidence of postoperative retear. However, for application in clinics, an efficient and robust method for segmenting the anatomy from MRI is required. We present a deep learning network for automatic segmentation of the humerus, scapula, and rotator cuff muscles with integrated automatic result verification. Trained on N = 111 and tested on N = 60 diagnostic T1-weighted MRI scans of 76 rotator cuff tear patients acquired from 19 centers, an nnU-Net segmented the anatomy with an average Dice coefficient of 0.91 ± 0.06. For the automatic identification of inaccurate segmentations during inference, the nnU-Net framework was adapted to allow estimation of label-specific network uncertainty directly from its subnetworks. The average Dice coefficient of segmentation results from the subnetworks identified labels requiring segmentation correction with an average sensitivity of 1.0 and a specificity of 0.94. The presented automatic methods facilitate the use of 3D diagnosis in clinical routine by eliminating the need for time-consuming manual segmentation and slice-by-slice segmentation verification.
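A minimal way to flag labels needing correction from subnetwork disagreement, in the spirit of the verification step described above, is sketched below; the pairwise-Dice formulation and the 0.9 threshold are illustrative assumptions rather than the published parameters.

"""Sketch of label-wise agreement between ensemble subnetwork predictions.
Threshold and formulation are illustrative, not the published method's exact settings."""
from itertools import combinations

import numpy as np


def dice(a, b):
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0


def flag_uncertain_labels(predictions, labels, threshold=0.9):
    """predictions: integer label volumes, one per subnetwork (e.g. per cross-validation fold).
    Returns the labels whose mean pairwise Dice across subnetworks falls below the threshold."""
    flagged = []
    for label in labels:
        masks = [p == label for p in predictions]
        agreement = [dice(a, b) for a, b in combinations(masks, 2)]
        if np.mean(agreement) < threshold:
            flagged.append(label)
    return flagged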

15 pages, 3971 KiB  
Article
Advancing Brain Metastases Detection in T1-Weighted Contrast-Enhanced 3D MRI Using Noisy Student-Based Training
by Engin Dikici, Xuan V. Nguyen, Matthew Bigelow, John L. Ryu and Luciano M. Prevedello
Diagnostics 2022, 12(8), 2023; https://doi.org/10.3390/diagnostics12082023 - 21 Aug 2022
Cited by 2 | Viewed by 1602
Abstract
The detection of brain metastases (BM) in their early stages could have a positive impact on the outcome of cancer patients. The authors previously developed a framework for detecting small BM (with diameters of <15 mm) in T1-weighted contrast-enhanced 3D magnetic resonance images (T1c). This study aimed to advance the framework with a noisy-student-based self-training strategy to make use of a large corpus of unlabeled T1c data. Accordingly, a sensitivity-based noisy-student learning approach was formulated to provide high BM detection sensitivity with a reduced count of false positives. This paper (1) proposes student/teacher convolutional neural network architectures, (2) presents data and model noising mechanisms, and (3) introduces a novel pseudo-labeling strategy factoring in the sensitivity constraint. The evaluation was performed using 217 labeled and 1247 unlabeled exams via two-fold cross-validation. The framework utilizing only the labeled exams produced 9.23 false positives for 90% BM detection sensitivity, whereas the one using the introduced learning strategy led to an ~9% reduction in false detections (i.e., 8.44). Significant reductions in false positives (>10%) were also observed in reduced labeled data scenarios (using 50% and 75% of the labeled data). The results suggest that the introduced strategy could be utilized in existing medical detection applications with access to unlabeled datasets to elevate their performance.
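One way to implement a sensitivity-constrained pseudo-labeling step of this kind is sketched below: a detection-score threshold is chosen on labeled validation data so that the target sensitivity is preserved, and teacher detections on unlabeled exams above that threshold become pseudo-labels. The data structures and the 90% target are illustrative assumptions, not the paper's exact formulation.

"""Sketch of sensitivity-constrained pseudo-labeling for a noisy-student detector.
Data structures and the sensitivity target are illustrative assumptions."""
import numpy as np


def threshold_for_sensitivity(val_scores, val_is_true, target_sensitivity=0.9):
    """val_scores: teacher scores for validation candidates; val_is_true: 1 for true BM, 0 otherwise.
    Returns the highest score threshold that still recalls the target fraction of true detections."""
    true_scores = np.sort(np.asarray(val_scores)[np.asarray(val_is_true) == 1])
    keep = int(np.ceil(target_sensitivity * len(true_scores)))
    return true_scores[len(true_scores) - keep]


def pseudo_label(unlabeled_candidates, threshold):
    """unlabeled_candidates: (exam_id, candidate, score) tuples from the teacher on unlabeled exams."""
    return [(exam, cand) for exam, cand, score in unlabeled_candidates if score >= threshold]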

Review


13 pages, 303 KiB  
Review
A Review of the Clinical Applications of Artificial Intelligence in Abdominal Imaging
by Benjamin M. Mervak, Jessica G. Fried and Ashish P. Wasnik
Diagnostics 2023, 13(18), 2889; https://doi.org/10.3390/diagnostics13182889 - 08 Sep 2023
Viewed by 1142
Abstract
Artificial intelligence (AI) has been a topic of substantial interest for radiologists in recent years. Although many of the first clinical applications were in the neuro, cardiothoracic, and breast imaging subspecialties, the number of investigated and real-world applications in body imaging has been increasing, with more than 30 FDA-approved algorithms now available for applications in the abdomen and pelvis. In this manuscript, we explore some of the fundamentals of artificial intelligence and machine learning, review the major functions that AI algorithms may perform, introduce current and potential future applications of AI in abdominal imaging, provide a basic understanding of the pathways by which AI algorithms can receive FDA approval, and explore some of the challenges of implementing AI in clinical practice.
25 pages, 1450 KiB  
Review
Artificial Intelligence in Neuroradiology: A Review of Current Topics and Competition Challenges
by Daniel T. Wagner, Luke Tilmans, Kevin Peng, Marilyn Niedermeier, Matt Rohl, Sean Ryan, Divya Yadav, Noah Takacs, Krystle Garcia-Fraley, Mensur Koso, Engin Dikici, Luciano M. Prevedello and Xuan V. Nguyen
Diagnostics 2023, 13(16), 2670; https://doi.org/10.3390/diagnostics13162670 - 14 Aug 2023
Cited by 2 | Viewed by 2256
Abstract
There is an expanding body of literature that describes the application of deep learning and other machine learning and artificial intelligence methods with potential relevance to neuroradiology practice. In this article, we performed a literature review to identify recent developments on the topics of artificial intelligence in neuroradiology, with particular emphasis on large datasets and large-scale algorithm assessments, such as those used in imaging AI competition challenges. Numerous applications relevant to ischemic stroke, intracranial hemorrhage, brain tumors, demyelinating disease, and neurodegenerative/neurocognitive disorders were discussed. The potential applications of these methods to spinal fractures, scoliosis grading, head and neck oncology, and vascular imaging were also reviewed. The AI applications examined perform a variety of tasks, including localization, segmentation, longitudinal monitoring, diagnostic classification, and prognostication. While research on this topic is ongoing, several applications have been cleared for clinical use and have the potential to augment the accuracy or efficiency of neuroradiologists.

17 pages, 901 KiB  
Review
Artificial Intelligence Applications in Breast Imaging: Current Status and Future Directions
by Clayton R. Taylor, Natasha Monga, Candise Johnson, Jeffrey R. Hawley and Mitva Patel
Diagnostics 2023, 13(12), 2041; https://doi.org/10.3390/diagnostics13122041 - 13 Jun 2023
Cited by 9 | Viewed by 7314
Abstract
Attempts to use computers to aid in the detection of breast malignancies date back more than 20 years. Despite significant interest and investment, traditional computer-aided detection historically led to minimal or no significant improvement in performance and outcomes. However, recent advances in artificial intelligence and machine learning are now starting to deliver on the promise of improved performance. There are at present more than 20 FDA-approved AI applications for breast imaging, but adoption and utilization are widely variable and low overall. Breast imaging is unique and has aspects that create both opportunities and challenges for AI development and implementation. Breast cancer screening programs worldwide rely on screening mammography to reduce the morbidity and mortality of breast cancer, and many of the most exciting research projects and available AI applications focus on cancer detection for mammography. There are, however, multiple additional potential applications for AI in breast imaging, including decision support, risk assessment, breast density quantitation, workflow and triage, quality evaluation, assessment of response to neoadjuvant chemotherapy, and image enhancement. In this review, the current status, availability, and future directions of investigation of these applications are discussed, as well as the opportunities and barriers to more widespread utilization.

14 pages, 973 KiB  
Review
Artificial Intelligence, Augmented Reality, and Virtual Reality Advances and Applications in Interventional Radiology
by Elizabeth von Ende, Sean Ryan, Matthew A. Crain and Mina S. Makary
Diagnostics 2023, 13(5), 892; https://doi.org/10.3390/diagnostics13050892 - 27 Feb 2023
Cited by 8 | Viewed by 5810
Abstract
Artificial intelligence (AI) uses computer algorithms to process and interpret data as well as perform tasks, while continuously redefining itself. Machine learning, a subset of AI, is based on reverse training in which evaluation and extraction of data occur from exposure to labeled examples. AI is capable of using neural networks to extract more complex, high-level data, even from unlabeled data sets, and to better emulate, or even exceed, the human brain. Advances in AI have revolutionized medicine and will continue to do so, especially in the field of radiology. Compared to interventional radiology, AI innovations in diagnostic radiology are more widely understood and used, although significant potential and growth remain on the horizon. Additionally, AI is closely related to, and often incorporated into, the technology and programming of augmented reality, virtual reality, and radiogenomic innovations, which have the potential to enhance the efficiency and accuracy of radiological diagnoses and treatment planning. Many barriers limit the integration of artificial intelligence into the clinical practice and dynamic procedures of interventional radiology. Despite these barriers to implementation, artificial intelligence in IR continues to advance, and the continued development of machine learning and deep learning places interventional radiology in a unique position for exponential growth. This review describes the current and possible future applications of artificial intelligence, radiogenomics, and augmented and virtual reality in interventional radiology, while also describing the challenges and limitations that must be addressed before these applications can be fully implemented in common clinical practice.

20 pages, 708 KiB  
Review
Artificial Intelligence in Emergency Radiology: Where Are We Going?
by Michaela Cellina, Maurizio Cè, Giovanni Irmici, Velio Ascenti, Elena Caloro, Lorenzo Bianchi, Giuseppe Pellegrino, Natascha D’Amico, Sergio Papa and Gianpaolo Carrafiello
Diagnostics 2022, 12(12), 3223; https://doi.org/10.3390/diagnostics12123223 - 19 Dec 2022
Cited by 14 | Viewed by 3637
Abstract
Emergency Radiology is a unique branch of imaging, as rapidity in the diagnosis and management of different pathologies is essential to saving patients’ lives. Artificial Intelligence (AI) has many potential applications in emergency radiology. Firstly, image acquisition can be facilitated by reducing acquisition times through automatic positioning and by minimizing artifacts with AI-based reconstruction systems to optimize image quality, even in critical patients. Secondly, AI enables an efficient workflow (with algorithms integrated into the RIS–PACS workflow) by analyzing patients’ characteristics and images and detecting high-priority examinations and patients with emergent critical findings. Different machine and deep learning algorithms have been trained for the automated detection of various emergency disorders (e.g., intracranial hemorrhage, bone fractures, pneumonia) to help radiologists detect relevant findings. AI-based smart reporting, which summarizes patients’ clinical data and grades the severity of imaging abnormalities, can provide an objective indicator of disease severity, supporting quick and optimized treatment planning. In this review, we provide an overview of the different AI tools available in emergency radiology, to keep radiologists up to date on the current technological evolution in this field.
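As a schematic example of the triage workflow the review describes, the sketch below reorders a reading worklist so that studies flagged by a detection algorithm for critical findings are read first; the field names and priority rule are illustrative, and real RIS–PACS integrations rely on vendor-specific interfaces.

"""Illustrative sketch of AI-assisted worklist triage. Field names and the priority rule are
assumptions; production systems integrate with RIS-PACS through vendor interfaces."""
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Study:
    accession: str
    acquired_at: datetime
    ai_critical_finding: bool = False   # set by the detection algorithm (e.g. a hemorrhage detector)
    ai_confidence: float = 0.0


def prioritize(worklist):
    """Critical-flagged studies first (highest confidence first), then the rest in acquisition order."""
    return sorted(
        worklist,
        key=lambda s: (not s.ai_critical_finding, -s.ai_confidence, s.acquired_at),
    )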

Other


12 pages, 2249 KiB  
Technical Note
Automatization of CT Annotation: Combining AI Efficiency with Expert Precision
by Edgars Edelmers, Dzintra Kazoka, Katrina Bolocko, Kaspars Sudars and Mara Pilmane
Diagnostics 2024, 14(2), 185; https://doi.org/10.3390/diagnostics14020185 - 15 Jan 2024
Viewed by 758
Abstract
The integration of artificial intelligence (AI), particularly through machine learning (ML) and deep learning (DL) algorithms, marks a transformative progression in medical imaging diagnostics. This technical note elucidates a novel methodology for semantic segmentation of the vertebral column in CT scans, exemplified by a dataset of 250 patients from Riga East Clinical University Hospital. Our approach centers on the accurate identification and labeling of individual vertebrae, ranging from C1 to the sacrum–coccyx complex. Patient selection was meticulously conducted, ensuring demographic balance in age and sex, and excluding scans with significant vertebral abnormalities to reduce confounding variables. This strategic selection bolstered the representativeness of our sample, thereby enhancing the external validity of our findings. Our workflow streamlined the segmentation process by eliminating the need for volume stitching, aligning seamlessly with the methodology we present. By leveraging AI, we have introduced a semi-automated annotation system that enables initial data labeling even by individuals without medical expertise. This phase is complemented by thorough manual validation against established anatomical standards, significantly reducing the time traditionally required for segmentation. This dual approach not only conserves resources but also expedites project timelines. While this method significantly advances radiological data annotation, it is not devoid of challenges, such as the necessity for manual validation by anatomically skilled personnel and reliance on specialized GPU hardware. Nonetheless, our methodology represents a substantial leap forward in medical data semantic segmentation, highlighting the potential of AI-driven approaches to revolutionize clinical and research practices in radiology.
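To give a concrete sense of the kind of check that can precede manual validation in such a semi-automated workflow, the sketch below inspects an AI-proposed vertebral label map and reports which expected labels are present and their volumes; the label numbering convention is assumed for illustration and is not the dataset's actual scheme.

"""Sketch of a consistency check on an AI-proposed vertebral label map before manual review.
The assumed convention (label 1 = C1, counting caudally to the sacrum-coccyx complex) is illustrative."""
import numpy as np
import SimpleITK as sitk

VERTEBRAE = (
    [f"C{i}" for i in range(1, 8)]
    + [f"T{i}" for i in range(1, 13)]
    + [f"L{i}" for i in range(1, 6)]
    + ["sacrum-coccyx"]
)


def check_label_map(path):
    image = sitk.ReadImage(path)
    array = sitk.GetArrayFromImage(image)
    voxel_ml = float(np.prod(image.GetSpacing())) / 1000.0    # mm^3 per voxel -> mL
    report = {}
    for value, name in enumerate(VERTEBRAE, start=1):
        voxels = int((array == value).sum())
        report[name] = round(voxels * voxel_ml, 1) if voxels else None  # None = missing, flag for review
    return report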
