The Potential for Artificial Intelligence in Medical Imaging Diagnosis

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (31 January 2024)

Special Issue Editors


Dr. Mufti Mahmud
Guest Editor
School of Science and Technology, Nottingham Trent University, Clifton, Nottingham NG11 8NS, UK
Interests: brain informatics; data analytics; brain–machine interfacing; Internet of Healthcare Things

Prof. Dr. M. Shamim Kaiser
Guest Editor
Institute of Information Technology, Jahangirnagar University, Dhaka 1342, Bangladesh
Interests: machine learning; multihop networks; sensor networks; healthcare

Dr. Vicky N. Yamamoto
Guest Editor
USC Norris Comprehensive Cancer Center, Los Angeles, CA, USA
Interests: immunohistochemistry; neurobiology; cellular neuroscience; molecular neuroscience; cell culture; immunofluorescence; neurodegenerative diseases; stem cell biology; gene expression

Dr. Tanu Wadhera
Guest Editor
School of Electronics, Indian Institute of Information Technology Una (IIITU), Una, India
Interests: artificial intelligence; autism spectrum disorder; machine learning; cognitive neuroscience; signal processing

Prof. Dr. M Murugappan
Guest Editor
Department of Electronics and Communication Engineering, Kuwait College of Science and Technology, Doha, Kuwait
Interests: affective computing; biomedical signal processing; vision systems; brain–computer interfaces (BCI)

Special Issue Information

Dear Colleagues,

In recent years, various systems have been introduced in the diagnostic arena for the automated, rapid, and accurate diagnosis of disease. Numerous studies in the literature have demonstrated the utility of such systems in supporting doctors and specialists in formulating more precise diagnoses more quickly. Most of these systems, which fall under the category of expert systems, incorporate elements of artificial intelligence (AI). However, given the pervasive use of AI in almost every aspect of our daily lives, fundamental questions remain unresolved: (i) what is the actual potential of AI in automated diagnosis, and (ii) can future AI systems replace human experts and surpass their performance?

This Special Issue solicits original research, review, survey, and case study articles that collate recent developments on the potential of AI systems for automated diagnosis across important areas of medicine, with a focus on medical imaging, including but not limited to the different forms of Magnetic Resonance Imaging (MRI), Computed Tomography (CT), X-ray, Positron Emission Tomography (PET), and Ultrasound.

Dr. Mufti Mahmud
Prof. Dr. M. Shamim Kaiser
Dr. Vicky N. Yamamoto
Dr. Tanu Wadhera
Prof. Dr. M Murugappan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (7 papers)


Research

Article
Artificial Intelligence in Medical Imaging: Analyzing the Performance of ChatGPT and Microsoft Bing in Scoliosis Detection and Cobb Angle Assessment
by Artur Fabijan, Agnieszka Zawadzka-Fabijan, Robert Fabijan, Krzysztof Zakrzewski, Emilia Nowosławska and Bartosz Polis
Diagnostics 2024, 14(7), 773; https://doi.org/10.3390/diagnostics14070773 - 05 Apr 2024
Abstract
Open-source artificial intelligence models (OSAIM) find free applications in various industries, including information technology and medicine. Their clinical potential, especially in supporting diagnosis and therapy, is the subject of increasingly intensive research. Due to the growing interest in artificial intelligence (AI) for diagnostic purposes, we conducted a study evaluating the capabilities of AI models, including ChatGPT and Microsoft Bing, in the diagnosis of single-curve scoliosis based on posturographic radiological images. Two independent neurosurgeons assessed the degree of spinal deformation, selecting 23 cases of severe single-curve scoliosis. Each posturographic image was separately implemented onto each of the mentioned platforms using a set of formulated questions, starting from ‘What do you see in the image?’ and ending with a request to determine the Cobb angle. In the responses, we focused on how these AI models identify and interpret spinal deformations and how accurately they recognize the direction and type of scoliosis as well as vertebral rotation. The Intraclass Correlation Coefficient (ICC) with a ‘two-way’ model was used to assess the consistency of Cobb angle measurements, and its confidence intervals were determined using the F test. Differences in Cobb angle measurements between human assessments and the AI ChatGPT model were analyzed using metrics such as RMSEA, MSE, MPE, MAE, RMSLE, and MAPE, allowing for a comprehensive assessment of AI model performance from various statistical perspectives. The ChatGPT model achieved 100% effectiveness in detecting scoliosis in X-ray images, while the Bing model did not detect any scoliosis. However, ChatGPT had limited effectiveness (43.5%) in assessing Cobb angles, showing significant inaccuracy and discrepancy compared to human assessments. This model also had limited accuracy in determining the direction of spinal curvature, classifying the type of scoliosis, and detecting vertebral rotation. Overall, although ChatGPT demonstrated potential in detecting scoliosis, its abilities in assessing Cobb angles and other parameters were limited and inconsistent with expert assessments. These results underscore the need for comprehensive improvement of AI algorithms, including broader training with diverse X-ray images and advanced image processing techniques, before they can be considered as auxiliary in diagnosing scoliosis by specialists. Full article
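
The error metrics cited in this abstract (MSE, MPE, MAE, RMSLE, MAPE, and related measures) are standard and can be computed for any paired set of angle readings. The minimal Python sketch below is not code from the study; the Cobb angle values in it are placeholders.

```python
import numpy as np

# Placeholder Cobb angles in degrees: expert reading vs. AI estimate.
# These values are illustrative only, not data from the study.
human = np.array([52.0, 61.5, 47.0, 55.0, 70.5, 48.5])
ai = np.array([49.0, 66.0, 40.5, 58.0, 62.0, 51.0])

err = ai - human
mse = np.mean(err ** 2)                          # mean squared error
rmse = np.sqrt(mse)                              # root mean squared error
mae = np.mean(np.abs(err))                       # mean absolute error
mpe = np.mean(err / human) * 100                 # mean percentage error (signed)
mape = np.mean(np.abs(err) / human) * 100        # mean absolute percentage error
rmsle = np.sqrt(np.mean((np.log1p(ai) - np.log1p(human)) ** 2))  # root mean squared log error

print(f"MSE={mse:.2f}  RMSE={rmse:.2f}  MAE={mae:.2f}")
print(f"MPE={mpe:.1f}%  MAPE={mape:.1f}%  RMSLE={rmsle:.3f}")
```

For the two-way ICC, a long-format table of ratings (image, rater, angle) can be passed to an off-the-shelf implementation such as the intraclass_corr function of the pingouin package.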

Article
Evaluation of Pulmonary Nodules by Radiologists vs. Radiomics in Stand-Alone and Complementary CT and MRI
by Eric Tietz, Gustav Müller-Franzes, Markus Zimmermann, Christiane Katharina Kuhl, Sebastian Keil, Sven Nebelung and Daniel Truhn
Diagnostics 2024, 14(5), 483; https://doi.org/10.3390/diagnostics14050483 - 23 Feb 2024
Abstract
Increased attention has been given to MRI in radiation-free screening for malignant nodules in recent years. Our objective was to compare the performance of human readers and radiomic feature analysis based on stand-alone and complementary CT and MRI imaging in classifying pulmonary nodules. This single-center study comprises patients with CT findings of pulmonary nodules who underwent additional lung MRI and whose nodules were classified as benign/malignant by resection. For radiomic features analysis, 2D segmentation was performed for each lung nodule on axial CT, T2-weighted (T2w), and diffusion (DWI) images. The 105 extracted features were reduced by iterative backward selection. The performance of radiomics and human readers was compared by calculating accuracy with Clopper–Pearson confidence intervals. Fifty patients (mean age 63 ± 10 years) with 66 pulmonary nodules (40 malignant) were evaluated. ACC values for radiomic features analysis vs. radiologists based on CT alone (0.68; 95%CI: 0.56, 0.79 vs. 0.59; 95%CI: 0.46, 0.71), T2w alone (0.65; 95%CI: 0.52, 0.77 vs. 0.68; 95%CI: 0.54, 0.78), DWI alone (0.61; 95%CI: 0.48, 0.72 vs. 0.73; 95%CI: 0.60, 0.83), combined T2w/DWI (0.73; 95%CI: 0.60, 0.83 vs. 0.70; 95%CI: 0.57, 0.80), and combined CT/T2w/DWI (0.83; 95%CI: 0.72, 0.91 vs. 0.64; 95%CI: 0.51, 0.75) were calculated. This study is the first to show that by combining quantitative image information from CT, T2w, and DWI datasets, pulmonary nodule assessment through radiomics analysis is superior to using one modality alone, even exceeding human readers’ performance. Full article
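
The Clopper–Pearson (exact binomial) confidence intervals used for the accuracies above can be computed directly from the number of correctly classified nodules. A short sketch follows, using placeholder counts rather than the study's data.

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) two-sided confidence interval for a proportion.

    k correctly classified nodules out of n total.
    """
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# Placeholder counts: 55 of 66 nodules classified correctly.
correct, total = 55, 66
low, high = clopper_pearson(correct, total)
print(f"accuracy = {correct / total:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```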

Article
A Novel Ensemble Framework for Multi-Classification of Brain Tumors Using Magnetic Resonance Imaging
by Yasemin Çetin-Kaya and Mahir Kaya
Diagnostics 2024, 14(4), 383; https://doi.org/10.3390/diagnostics14040383 - 09 Feb 2024
Abstract
Brain tumors can have fatal consequences, affecting many body functions. For this reason, it is essential to detect brain tumor types accurately and at an early stage to start the appropriate treatment process. Although convolutional neural networks (CNNs) are widely used in disease detection from medical images, they face the problem of overfitting in the training phase on limited labeled and insufficiently diverse datasets. The existing studies use transfer learning and ensemble models to overcome these problems. When the existing studies are examined, it is evident that there is a lack of models and weight ratios that will be used with the ensemble technique. With the framework proposed in this study, several CNN models with different architectures are trained with transfer learning and fine-tuning on three brain tumor datasets. A particle swarm optimization-based algorithm determined the optimum weights for combining the five most successful CNN models with the ensemble technique. The results across three datasets are as follows: Dataset 1, 99.35% accuracy and 99.20 F1-score; Dataset 2, 98.77% accuracy and 98.92 F1-score; and Dataset 3, 99.92% accuracy and 99.92 F1-score. We achieved successful performances on three brain tumor datasets, showing that the proposed framework is reliable in classification. As a result, the proposed framework outperforms existing studies, offering clinicians enhanced decision-making support through its high-accuracy classification performance. Full article
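
The key step of the framework described above is combining the softmax outputs of several CNNs with weights chosen by particle swarm optimization (PSO). The sketch below is not the authors' implementation; it illustrates the pattern with randomly generated stand-in predictions and a small hand-rolled PSO.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in softmax outputs of 5 CNNs on a validation set (200 samples, 4 tumor classes).
n_models, n_samples, n_classes = 5, 200, 4
preds = rng.dirichlet(np.ones(n_classes), size=(n_models, n_samples))
labels = rng.integers(0, n_classes, size=n_samples)

def ensemble_error(weights):
    """1 - accuracy of the weighted ensemble (lower is better)."""
    w = np.abs(weights) / (np.abs(weights).sum() + 1e-12)   # normalize to a convex combination
    combined = np.tensordot(w, preds, axes=1)               # shape (n_samples, n_classes)
    return 1.0 - np.mean(combined.argmax(axis=1) == labels)

# Minimal particle swarm optimization over the five ensemble weights.
n_particles, iters = 20, 50
pos = rng.random((n_particles, n_models))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([ensemble_error(p) for p in pos])
gbest = pbest[pbest_cost.argmin()]

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    cost = np.array([ensemble_error(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()]

print("best weights:", np.round(np.abs(gbest) / np.abs(gbest).sum(), 3))
print("ensemble accuracy:", 1.0 - ensemble_error(gbest))
```

In practice the stand-in arrays would be replaced by the validation-set predictions of the five fine-tuned CNNs, with the optimized weights then applied to the test set.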

Article
CoSev: Data-Driven Optimizations for COVID-19 Severity Assessment in Low-Sample Regimes
by Aksh Garg, Shray Alag and Dominique Duncan
Diagnostics 2024, 14(3), 337; https://doi.org/10.3390/diagnostics14030337 - 04 Feb 2024
Abstract
Given the pronounced impact COVID-19 continues to have on society—infecting 700 million reported individuals and causing 6.96 million deaths—many deep learning works have recently focused on the virus’s diagnosis. However, assessing severity has remained an open and challenging problem due to a lack of large datasets, the large dimensionality of images for which to find weights, and the compute limitations of modern graphics processing units (GPUs). In this paper, a new, iterative application of transfer learning is demonstrated on the understudied field of 3D CT scans for COVID-19 severity analysis. This methodology allows for enhanced performance on the MosMed Dataset, which is a small and challenging dataset containing 1130 images of patients for five levels of COVID-19 severity (Zero, Mild, Moderate, Severe, and Critical). Specifically, given the large dimensionality of the input images, we create several custom shallow convolutional neural network (CNN) architectures and iteratively refine and optimize them, paying attention to learning rates, layer types, normalization types, filter sizes, dropout values, and more. After a preliminary architecture design, the models are systematically trained on a simplified version of the dataset-building models for two-class, then three-class, then four-class, and finally five-class classification. The simplified problem structure allows the model to start learning preliminary features, which can then be further modified for more difficult classification tasks. Our final model CoSev boosts classification accuracies from below 60% at first to 81.57% with the optimizations, reaching similar performance to the state-of-the-art on the dataset, with much simpler setup procedures. In addition to COVID-19 severity diagnosis, the explored methodology can be applied to general image-based disease detection. Overall, this work highlights innovative methodologies that advance current computer vision practices for high-dimension, low-sample data as well as the practicality of data-driven machine learning and the importance of feature design for training, which can then be implemented for improvements in clinical practices. Full article
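
The iterative curriculum described above (train on two classes, then reuse the learned convolutional features for progressively harder three-, four-, and five-class problems) can be expressed compactly in Keras. The sketch below only illustrates that pattern; the layer sizes and input shape are assumptions, not the CoSev architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def make_backbone(input_shape=(64, 64, 64, 1)):
    """Small 3D convolutional feature extractor (shapes are assumptions, not CoSev's)."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv3D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling3D(2),
        layers.Conv3D(32, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling3D(),
    ])

def with_head(backbone, n_classes):
    """Attach a fresh classification head for the current stage of the curriculum."""
    return models.Sequential([
        backbone,
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])

backbone = make_backbone()
for n_classes in (2, 3, 4, 5):            # two-class first, then progressively harder problems
    model = with_head(backbone, n_classes)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(x_stage, y_stage, epochs=...)  # train on labels simplified to n_classes groups
    # The backbone's weights carry over to the next stage; only the head is replaced.
```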

Article
Automated Computer-Aided Detection and Classification of Intracranial Hemorrhage Using Ensemble Deep Learning Techniques
by Snekhalatha Umapathy, Murugappan Murugappan, Deepa Bharathi and Mahima Thakur
Diagnostics 2023, 13(18), 2987; https://doi.org/10.3390/diagnostics13182987 - 18 Sep 2023
Abstract
Diagnosing Intracranial Hemorrhage (ICH) at an early stage is difficult since it affects the blood vessels in the brain, often resulting in death. We propose an ensemble of Convolutional Neural Networks (CNNs) combining Squeeze and Excitation–based Residual Networks with the next dimension (SE-ResNeXT) and Long Short-Term Memory (LSTM) Networks in order to address this issue. This research work primarily used data from the Radiological Society of North America (RSNA) brain CT hemorrhage challenge dataset and the CQ500 dataset. Preprocessing and data augmentation are performed using the windowing technique in the proposed work. The ICH is then classified using ensembled CNN techniques after being preprocessed, followed by feature extraction in an automatic manner. ICH is classified into the following five types: epidural, intraventricular, subarachnoid, intra-parenchymal, and subdural. A gradient-weighted Class Activation Mapping method (Grad-CAM) is used for identifying the region of interest in an ICH image. A number of performance measures are used to compare the experimental results with various state-of-the-art algorithms. By achieving 99.79% accuracy with an F-score of 0.97, the proposed model proved its efficacy in detecting ICH compared to other deep learning models. The proposed ensembled model can classify epidural, intraventricular, subarachnoid, intra-parenchymal, and subdural hemorrhages with an accuracy of 99.89%, 99.65%, 98%, 99.75%, and 99.88%. Simulation results indicate that the suggested approach can categorize a variety of intracranial bleeding types. By implementing the ensemble deep learning technique using the SE-ResNeXT and LSTM models, we achieved significant classification accuracy and AUC scores. Full article
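
The windowing preprocessing mentioned above maps raw CT Hounsfield units into a narrow intensity range so that the tissue of interest fills the displayable scale, and stacking several windows is a common way to build the network input. The sketch below is generic; the window centre/width values are typical radiology defaults, not necessarily the settings used in the paper.

```python
import numpy as np

def apply_window(hu_image: np.ndarray, center: float, width: float) -> np.ndarray:
    """Clip a CT slice in Hounsfield units to a window and rescale to [0, 1]."""
    low, high = center - width / 2, center + width / 2
    windowed = np.clip(hu_image, low, high)
    return (windowed - low) / (high - low)

# Windows commonly used when looking for intracranial hemorrhage (illustrative values).
BRAIN_WINDOW = dict(center=40, width=80)
SUBDURAL_WINDOW = dict(center=80, width=200)
BONE_WINDOW = dict(center=600, width=2800)

ct_slice = np.random.randint(-1024, 2000, size=(512, 512)).astype(np.float32)  # stand-in slice
channels = np.stack([apply_window(ct_slice, **w)
                     for w in (BRAIN_WINDOW, SUBDURAL_WINDOW, BONE_WINDOW)], axis=-1)
print(channels.shape)  # (512, 512, 3): three windows stacked as network input channels
```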

Article
Automated Lung Ultrasound Pulmonary Disease Quantification Using an Unsupervised Machine Learning Technique for COVID-19
by Hersh Sagreiya, Michael A. Jacobs and Alireza Akhbardeh
Diagnostics 2023, 13(16), 2692; https://doi.org/10.3390/diagnostics13162692 - 16 Aug 2023
Abstract
COVID-19 is an ongoing global health pandemic. Although COVID-19 can be diagnosed with various tests such as PCR, these tests do not establish pulmonary disease burden. Whereas point-of-care lung ultrasound (POCUS) can directly assess the severity of characteristic pulmonary findings of COVID-19, the advantage of using US is that it is inexpensive, portable, and widely available for use in many clinical settings. For automated assessment of pulmonary findings, we have developed an unsupervised learning technique termed the calculated lung ultrasound (CLU) index. The CLU can quantify various types of lung findings, such as A or B lines, consolidations, and pleural effusions, and it uses these findings to calculate a CLU index score, which is a quantitative measure of pulmonary disease burden. This is accomplished using an unsupervised, patient-specific approach that does not require training on a large dataset. The CLU was tested on 52 lung ultrasound examinations from several institutions. CLU demonstrated excellent concordance with radiologist findings in different pulmonary disease states. Given the global nature of COVID-19, the CLU would be useful for sonographers and physicians in resource-strapped areas with limited ultrasound training and diagnostic capacities for more accurate assessment of pulmonary status. Full article
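
The CLU index itself is the authors' method and is not reproduced here. Purely to illustrate the general idea of an unsupervised, patient-specific score that needs no training set, the toy sketch below clusters the pixel intensities of a single examination and reports the fraction of pixels in the brightest cluster; every detail of it is an assumption made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def toy_lung_score(frames: np.ndarray, n_clusters: int = 3) -> float:
    """Toy unsupervised severity proxy for one ultrasound exam (NOT the CLU index).

    frames: array of shape (n_frames, H, W) with grayscale intensities in [0, 1].
    Clusters the pixel intensities of this patient only (no training set needed) and
    returns the fraction of pixels in the brightest cluster, on the rough intuition
    that B-lines and consolidations appear as bright regions.
    """
    pixels = frames.reshape(-1, 1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    brightest = np.argmax(km.cluster_centers_.ravel())
    return float(np.mean(km.labels_ == brightest))

# Stand-in exam: 20 frames of 128x128 synthetic speckle.
exam = np.clip(np.random.default_rng(1).normal(0.3, 0.15, size=(20, 128, 128)), 0, 1)
print(f"toy severity proxy: {toy_lung_score(exam):.3f}")
```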

Review

Review
The Application of Deep Learning on CBCT in Dentistry
by Wenjie Fan, Jiaqi Zhang, Nan Wang, Jia Li and Li Hu
Diagnostics 2023, 13(12), 2056; https://doi.org/10.3390/diagnostics13122056 - 14 Jun 2023
Abstract
Cone beam computed tomography (CBCT) has become an essential tool in modern dentistry, allowing dentists to analyze the relationship between teeth and the surrounding tissues. However, traditional manual analysis can be time-consuming and its accuracy depends on the user’s proficiency. To address these limitations, deep learning (DL) systems have been integrated into CBCT analysis to improve accuracy and efficiency. Numerous DL models have been developed for tasks such as automatic diagnosis, segmentation, classification of teeth, inferior alveolar nerve, bone, airway, and preoperative planning. All research articles summarized were from Pubmed, IEEE, Google Scholar, and Web of Science up to December 2022. Many studies have demonstrated that the application of deep learning technology in CBCT examination in dentistry has achieved significant progress, and its accuracy in radiology image analysis has reached the level of clinicians. However, in some fields, its accuracy still needs to be improved. Furthermore, ethical issues and CBCT device differences may prohibit its extensive use. DL models have the potential to be used clinically as medical decision-making aids. The combination of DL and CBCT can highly reduce the workload of image reading. This review provides an up-to-date overview of the current applications of DL on CBCT images in dentistry, highlighting its potential and suggesting directions for future research. Full article
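
Most of the segmentation applications surveyed in this review (teeth, bone, inferior alveolar nerve, airway) are evaluated with overlap metrics such as the Dice coefficient. A short, generic implementation follows for reference; it is not tied to any study cited in the review.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks (e.g., predicted vs. annotated tooth)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Stand-in 3D CBCT masks (64^3 voxels), generated only to exercise the function.
rng = np.random.default_rng(42)
gt = rng.random((64, 64, 64)) > 0.7
prediction = np.logical_or(gt, rng.random((64, 64, 64)) > 0.95)  # deliberately imperfect prediction
print(f"Dice = {dice_coefficient(prediction, gt):.3f}")
```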
