Applications of Artificial Intelligence in Thoracic Imaging

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: 30 June 2024 | Viewed by 13738

Special Issue Editor


Dr. Saif Ul Islam
Guest Editor
Department of Computer Science, Institute of Space Technology, Islamabad, Pakistan
Interests: smart healthcare; machine learning; deep learning; intelligent distributed networking systems

Special Issue Information

Dear Colleagues,

This Special Issue provides a platform for researchers, practitioners, and academicians to share their latest research, insights, and developments on the applications of artificial intelligence (AI) in thoracic imaging. It aims to highlight current state-of-the-art research, provide a comprehensive overview of the latest advancements, and identify the most promising directions for future research in this rapidly evolving field. The target audience includes researchers, clinicians, and practitioners in radiology, pulmonology, oncology, medical imaging, and computer science.

The scope of this Special Issue includes but is not limited to the following topics:

  • AI algorithms and models for diagnosing thoracic diseases, including lung cancer, pulmonary fibrosis, emphysema, and interstitial lung disease;
  • AI to improve the accuracy and efficiency of thoracic imaging, including computer-aided detection and diagnosis and quantitative image analysis;
  • AI to facilitate personalized treatment planning and disease management in thoracic imaging, including prediction of disease progression and response to therapy;
  • AI-based imaging techniques for thoracic imaging, including advanced image acquisition, reconstruction, and post-processing;
  • AI integration with other imaging modalities in thoracic imaging, including positron emission tomography (PET), magnetic resonance imaging (MRI), and computed tomography (CT);
  • AI to improve communication and collaboration among healthcare providers, including radiologists, pulmonologists, and oncologists.

Dr. Saif Ul Islam
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (9 papers)


Research

16 pages, 9059 KiB  
Article
Efficient Thorax Disease Classification and Localization Using DCNN and Chest X-ray Images
by Zeeshan Ahmad, Ahmad Kamran Malik, Nafees Qamar and Saif ul Islam
Diagnostics 2023, 13(22), 3462; https://doi.org/10.3390/diagnostics13223462 - 17 Nov 2023
Viewed by 989
Abstract
Thorax disease is a life-threatening condition caused by bacterial infections of the lungs. It can be deadly if not treated in time, so early diagnosis of thoracic diseases is vital. The suggested study can assist radiologists in diagnosing thorax disorders more swiftly and in the rapid screening of patients with a thorax disease, such as pneumonia, at airports. This paper focuses on automatically detecting and localizing thorax disease using chest X-ray images. It provides accurate detection and localization using DenseNet-121, which forms the foundation of our proposed framework, called Z-Net. The proposed framework utilizes the weighted cross-entropy loss function (W-CEL), which manages the class imbalance issue in the ChestX-ray14 dataset and helped achieve the highest performance compared with previous models. The 112,120 images in the ChestX-ray14 dataset (60,412 images are normal, and the rest contain thorax diseases) were preprocessed and then used to train the model for classification and localization. We aim to develop a highly accurate and precise computer-aided diagnosis (CAD) system using a deep learning approach. Our quantitative results show high AUC scores in comparison with the latest research works. The proposed approach achieved the highest mean AUC score of 85.8%, the highest reported in the literature for any related model. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Thoracic Imaging)
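The abstract's exact W-CEL formulation is not reproduced on this page, but a minimal PyTorch-style sketch of the frequency-based positive/negative weighting commonly used with ChestX-ray14 labels looks roughly like this (the function name, epsilon, and batch-level weighting are illustrative assumptions, not the authors' code):

```python
import torch

def weighted_cross_entropy(y_pred, y_true, eps=1e-7):
    """Multi-label weighted cross-entropy sketch for imbalanced chest X-ray labels.

    y_pred: sigmoid probabilities, shape (batch, n_findings)
    y_true: binary ground-truth labels, same shape
    Positive and negative terms are re-weighted by their inverse frequency so
    that rare findings contribute as much to the loss as the dominant ones.
    """
    n_pos = y_true.sum()
    n_neg = (1.0 - y_true).sum()
    total = n_pos + n_neg
    w_pos = total / (n_pos + eps)   # up-weights the scarce positive labels
    w_neg = total / (n_neg + eps)
    loss = -(w_pos * y_true * torch.log(y_pred + eps)
             + w_neg * (1.0 - y_true) * torch.log(1.0 - y_pred + eps))
    return loss.mean()
```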

19 pages, 19734 KiB  
Article
Classification of Parkinson’s Disease in Patch-Based MRI of Substantia Nigra
by Sayyed Shahid Hussain, Xu Degang, Pir Masoom Shah, Saif Ul Islam, Mahmood Alam, Izaz Ahmad Khan, Fuad A. Awwad and Emad A. A. Ismail
Diagnostics 2023, 13(17), 2827; https://doi.org/10.3390/diagnostics13172827 - 31 Aug 2023
Viewed by 1189
Abstract
Parkinson’s disease (PD) is a chronic, progressive neurological disease that primarily affects and compromises the motor system of the human brain. Patients with PD can face resting tremors, loss of balance, bradykinesia, and rigidity. The complex patterns of PD, i.e., its overlap with other neurological diseases and the subtle changes it causes in brain structure, make diagnosing this disease a challenge and lead to a diagnostic error rate of about 25%. The research community utilizes different machine learning techniques for diagnosis using handcrafted features. This paper proposes a computer-aided diagnostic system using a convolutional neural network (CNN) to diagnose PD. A CNN is one of the most suitable models for extracting and learning the essential features of a problem. The dataset is obtained from the Parkinson’s Progression Markers Initiative (PPMI), which provides different datasets (benchmarks), such as T2-weighted MRI for PD patients and healthy controls (HC). The mid slices are collected from each MRI and registered for alignment. Since PD manifests in the substantia nigra (i.e., the midbrain), the midbrain region of each registered T2-weighted MRI slice is selected using the freehand region-of-interest technique with a 33 × 33 window. Several experiments have been carried out to ensure the validity of the CNN. Standard measures, such as accuracy, sensitivity, specificity, and area under the curve, are used to evaluate the proposed system. The evaluation results show that the CNN provides better accuracy than machine learning techniques such as naive Bayes, decision tree, support vector machine, and artificial neural network. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Thoracic Imaging)
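As a rough illustration of the patch-based input described above, the following NumPy sketch crops a 33 × 33 window around a manually chosen region-of-interest centre on a registered mid-slice; the function and variable names are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def extract_midbrain_patch(slice_2d: np.ndarray, center_rc: tuple, size: int = 33) -> np.ndarray:
    """Crop a size x size patch around an ROI centre (row, col) on a 2D MRI slice."""
    half = size // 2
    r, c = center_rc
    return slice_2d[r - half:r + half + 1, c - half:c + half + 1]

# Example: a 33 x 33 patch around a hand-picked substantia nigra location.
patch = extract_midbrain_patch(np.random.rand(256, 256), (128, 120))
```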

33 pages, 8996 KiB  
Article
Deep Learning-Based Classification of Chest Diseases Using X-rays, CT Scans, and Cough Sound Images
by Hassaan Malik, Tayyaba Anees, Ahmad Sami Al-Shamaylehs, Salman Z. Alharthi, Wajeeha Khalil and Adnan Akhunzada
Diagnostics 2023, 13(17), 2772; https://doi.org/10.3390/diagnostics13172772 - 26 Aug 2023
Cited by 1 | Viewed by 1616
Abstract
Chest disease refers to a variety of lung disorders, including lung cancer (LC), COVID-19, pneumonia (PNEU), tuberculosis (TB), and numerous other respiratory disorders. The symptoms (i.e., fever, cough, sore throat, etc.) of these chest diseases are similar, which might mislead radiologists and health experts when classifying chest diseases. Chest X-rays (CXR), cough sounds, and computed tomography (CT) scans are utilized by researchers and doctors to identify chest diseases such as LC, COVID-19, PNEU, and TB. The objective of this work is to identify nine different types of chest conditions, including COVID-19, edema (EDE), LC, PNEU, pneumothorax (PNEUTH), normal, atelectasis (ATE), and lung consolidation (COL). Therefore, we designed a novel deep learning (DL)-based chest disease detection network (DCDD_Net) that uses CXR, CT scan, and cough sound images for the identification of nine different types of chest diseases. The scalogram method is used to convert the cough sounds into an image. Before training the proposed DCDD_Net model, borderline (BL) SMOTE is applied to balance the CXR, CT scan, and cough sound images of the nine chest diseases. The proposed DCDD_Net model is trained and evaluated on 20 publicly available benchmark chest disease datasets of CXR, CT scan, and cough sound images. The classification performance of the DCDD_Net is compared with four baseline models, i.e., InceptionResNet-V2, EfficientNet-B0, DenseNet-201, and Xception, as well as state-of-the-art (SOTA) classifiers. The DCDD_Net achieved an accuracy of 96.67%, a precision of 96.82%, a recall of 95.76%, an F1-score of 95.61%, and an area under the curve (AUC) of 99.43%. The results reveal that DCDD_Net outperformed the other four baseline models on many performance evaluation metrics. Thus, the proposed DCDD_Net model can provide significant assistance to radiologists and medical experts. Additionally, the proposed model was also shown to be resilient by statistical evaluations of the datasets using McNemar and ANOVA tests. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Thoracic Imaging)
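The class-balancing step mentioned above can be illustrated with the BorderlineSMOTE implementation from imbalanced-learn; the toy arrays, image size, and class counts below are placeholders, and flattening each image to a feature vector is an assumption made for the sketch:

```python
import numpy as np
from imblearn.over_sampling import BorderlineSMOTE

# Toy stand-ins for the real scalogram/CXR/CT images and their class labels:
# one majority class and eight under-represented classes.
labels = np.repeat(np.arange(9), [120] + [10] * 8)
images = np.random.rand(labels.size, 64, 64)

X = images.reshape(len(images), -1)                  # flatten each image to a feature vector
X_bal, y_bal = BorderlineSMOTE(random_state=0).fit_resample(X, labels)
images_bal = X_bal.reshape(-1, 64, 64)               # synthetic borderline samples added
```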

21 pages, 4733 KiB  
Article
DR-NASNet: Automated System to Detect and Classify Diabetic Retinopathy Severity Using Improved Pretrained NASNet Model
by Muhammad Zaheer Sajid, Muhammad Fareed Hamid, Ayman Youssef, Javeria Yasmin, Ganeshkumar Perumal, Imran Qureshi, Syed Muhammad Naqi and Qaisar Abbas
Diagnostics 2023, 13(16), 2645; https://doi.org/10.3390/diagnostics13162645 - 10 Aug 2023
Cited by 1 | Viewed by 1176
Abstract
Diabetes is a widespread disease that significantly affects people’s lives. The leading cause is uncontrolled blood glucose levels, which over time produce eye damage, including Diabetic Retinopathy (DR), which results in severe visual loss. DR is considered the primary factor causing blindness in diabetic patients. DR treatment tries to control the disease’s severity, as it is irreversible. The primary goal of this effort is to create a reliable method for automatically detecting the severity of DR. This paper proposes a new automated system (DR-NASNet) to detect and classify DR severity using an improved pretrained NASNet model. To develop the DR-NASNet system, we first utilized a preprocessing technique that takes advantage of Ben Graham and CLAHE to lessen noise, emphasize lesions, and ultimately improve DR classification performance. Taking into account the imbalance between classes in the dataset, data augmentation procedures were conducted to control overfitting. Next, we integrated dense blocks into the NASNet architecture to improve the classification results for five severity levels of DR. In practice, the DR-NASNet model achieves state-of-the-art results with a smaller model size and lower complexity. To test the performance of the DR-NASNet system, a combination of various datasets is used in this paper. To learn effective features from DR images, we used a pretrained model on the dataset. The last step is to assign each image to one of five categories: No DR, Mild, Moderate, Proliferative, or Severe. To carry this out, a classifier layer consisting of a linear SVM with a linear activation function is added. The DR-NASNet system was tested using six different experiments. The system achieves 96.05% accuracy on the challenging DR dataset. The results and comparisons demonstrate that the DR-NASNet system improves the model’s performance and learning ability. As a result, the DR-NASNet system assists ophthalmologists by providing an effective system for classifying early-stage levels of DR. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Thoracic Imaging)
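The Ben Graham background normalisation and CLAHE combination mentioned above can be sketched with OpenCV as follows; the blur sigma, clip limit, and tile size are illustrative assumptions rather than the authors' settings:

```python
import cv2

def preprocess_fundus(img_bgr, sigma=10, clip=2.0):
    """Ben Graham-style background subtraction followed by CLAHE,
    a common way to suppress illumination noise and emphasise lesions."""
    # Ben Graham: subtract a heavily blurred copy to flatten illumination
    blurred = cv2.GaussianBlur(img_bgr, (0, 0), sigma)
    graham = cv2.addWeighted(img_bgr, 4, blurred, -4, 128)
    # CLAHE on the luminance channel to boost local contrast
    lab = cv2.cvtColor(graham, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
    l = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
```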

18 pages, 3371 KiB  
Article
Ejection Fraction Estimation from Echocardiograms Using Optimal Left Ventricle Feature Extraction Based on Clinical Methods
by Samana Batool, Imtiaz Ahmad Taj and Mubeen Ghafoor
Diagnostics 2023, 13(13), 2155; https://doi.org/10.3390/diagnostics13132155 - 24 Jun 2023
Cited by 3 | Viewed by 1060
Abstract
Echocardiography is one of the imaging systems most often utilized for assessing heart anatomy and function. Left ventricle ejection fraction (LVEF) is an important clinical variable assessed from echocardiography via the measurement of left ventricle (LV) parameters. Significant inter-observer and intra-observer variability is seen when LVEF is quantified by cardiologists from large volumes of echocardiography data. Machine learning algorithms have the capability to analyze such extensive datasets and identify intricate patterns of cardiac structure and function that highly skilled observers might overlook, hence paving the way for computer-assisted diagnostics in this field. In this study, LV segmentation is performed on echocardiogram data, followed by feature extraction from the left ventricle based on clinical methods. The extracted features are then subjected to analysis using both neural networks and traditional machine learning algorithms to estimate the LVEF. The results indicate that employing machine learning techniques on the extracted features from the left ventricle leads to higher accuracy than the utilization of Simpson’s method for estimating the LVEF. The evaluations are performed on a publicly available echocardiogram dataset, EchoNet-Dynamic. The best results are obtained when DeepLab, a convolutional neural network architecture, is used for LV segmentation along with Long Short-Term Memory Networks (LSTM) for the regression of LVEF, obtaining a dice similarity coefficient of 0.92 and a mean absolute error of 5.736%. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Thoracic Imaging)
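For reference, the clinical quantity being regressed above follows directly from the end-diastolic and end-systolic LV volumes. A minimal sketch of the ejection-fraction formula, together with a single-plane method-of-discs volume estimate (an assumption about which clinical variant is meant by Simpson's method), is shown below:

```python
import numpy as np

def lv_volume_method_of_discs(diameters_mm, lv_length_mm):
    """Single-plane method of discs: the LV is modelled as a stack of
    cylinders of equal height along its long axis; returns volume in mL."""
    d = np.asarray(diameters_mm, dtype=float)
    disc_height = lv_length_mm / len(d)
    return np.sum(np.pi * (d / 2.0) ** 2 * disc_height) / 1000.0  # mm^3 -> mL

def ejection_fraction(edv_ml, esv_ml):
    """LVEF (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

ef = ejection_fraction(edv_ml=120.0, esv_ml=50.0)   # about 58.3%
```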

11 pages, 4019 KiB  
Article
Estimating the Volume of Nodules and Masses on Serial Chest Radiography Using a Deep-Learning-Based Automatic Detection Algorithm: A Preliminary Study
by Chae Young Lim, Yoon Ki Cha, Myung Jin Chung, Subin Park, Soyoung Park, Jung Han Woo and Jong Hee Kim
Diagnostics 2023, 13(12), 2060; https://doi.org/10.3390/diagnostics13122060 - 14 Jun 2023
Viewed by 878
Abstract
Background: The purpose of this study was to assess the volume of pulmonary nodules and masses on serial chest X-rays (CXRs) using parameters derived from a deep-learning-based automatic detection algorithm (DLAD). Methods: In a retrospective single-institution study, 72 patients who underwent serial CXRs (n = 147) for pulmonary nodules or masses, with corresponding chest CT images as the reference standard, were included. A pre-trained DLAD based on a convolutional neural network was developed to detect and localize nodules using 13,710 radiographs and to calculate a localization map and derived parameters (e.g., the area and mean probability value of pulmonary nodules) for each CXR, including serial follow-ups. For validation, reference 3D CT volumes were measured semi-automatically. Volume prediction models for pulmonary nodules were established through univariable or multivariable, linear or non-linear regression analyses with these parameters. A polynomial regression analysis was performed as the non-linear regression model. Results: Across the 147 CXRs and 208 nodules of the 72 patients, the mean volume of nodules or masses was 9.37 ± 11.69 cm³ (mean ± standard deviation). The area and CT volume demonstrated a linear correlation of moderate strength (R = 0.58, RMSE: 9449.9 mm³ in a linear regression analysis). The area and mean probability values exhibited a strong linear correlation (R = 0.73). The volume prediction performance based on a multivariable regression model was best with the mean probability and unit-adjusted area (RMSE: 7975.6 mm³, the smallest among the variable combinations). Conclusions: The prediction model based on the DLAD-derived area and mean probability provided a rather accurate quantitative estimate of pulmonary nodule or mass volume and of volume change across serial CXRs. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Thoracic Imaging)
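The multivariable, non-linear volume model described above can be approximated with a standard scikit-learn pipeline; the feature names, toy data, and polynomial degree below are assumptions made for illustration only:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Per-nodule DLAD outputs: [area_mm2, mean_probability]; y: reference CT volume in mm^3.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(50, 2000, 208), rng.uniform(0.2, 1.0, 208)])
y = 5.0 * X[:, 0] * X[:, 1] + rng.normal(0, 500, 208)    # toy ground-truth volumes

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)
predicted_volume_mm3 = model.predict(X[:5])
```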

15 pages, 3156 KiB  
Article
CRV-NET: Robust Intensity Recognition of Coronavirus in Lung Computerized Tomography Scan Images
by Uzair Iqbal, Romil Imtiaz, Abdul Khader Jilani Saudagar and Khubaib Amjad Alam
Diagnostics 2023, 13(10), 1783; https://doi.org/10.3390/diagnostics13101783 - 18 May 2023
Cited by 1 | Viewed by 1339
Abstract
The early diagnosis of infectious diseases is demanded by digital healthcare systems. Currently, the detection of the new coronavirus disease (COVID-19) is a major clinical requirement. For COVID-19 detection, deep learning models are used in various studies, but their robustness is still compromised. In recent years, deep learning models have increased in popularity in almost every area, particularly in medical image processing and analysis. The visualization of the human body’s internal structure is critical in medical analysis, and many imaging techniques are in use to perform this job. A computerized tomography (CT) scan is one of them, and it has been generally used for the non-invasive observation of the human body. The development of an automatic segmentation method for lung CT scans showing COVID-19 can save experts time and can reduce human error. In this article, CRV-NET is proposed for the robust detection of COVID-19 in lung CT scan images. A public dataset (the SARS-CoV-2 CT Scan dataset) is used for the experimental work and customized according to the scenario of the proposed model. The proposed modified deep-learning-based U-Net model is trained on a custom dataset with 221 training images and their ground truth, labeled by an expert. The proposed model is tested on 100 test images, and the results show that the model segments COVID-19 with a satisfactory level of accuracy. Moreover, the comparison of the proposed CRV-NET with different state-of-the-art convolutional neural network models (CNNs), including the U-Net model, shows better results in terms of accuracy (96.67%) and robustness (fewer training epochs and the smallest training data size). Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Thoracic Imaging)
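Segmentation quality in studies like this is typically scored with an overlap measure such as the Dice similarity coefficient; a minimal NumPy sketch of that metric (not the authors' evaluation code) is:

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice overlap between a predicted and a ground-truth binary lesion mask."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
```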

15 pages, 2810 KiB  
Article
A Deep Learning Framework for the Prediction and Diagnosis of Ovarian Cancer in Pre- and Post-Menopausal Women
by Blessed Ziyambe, Abid Yahya, Tawanda Mushiri, Muhammad Usman Tariq, Qaisar Abbas, Muhammad Babar, Mubarak Albathan, Muhammad Asim, Ayyaz Hussain and Sohail Jabbar
Diagnostics 2023, 13(10), 1703; https://doi.org/10.3390/diagnostics13101703 - 11 May 2023
Cited by 6 | Viewed by 2793
Abstract
Ovarian cancer ranks as the fifth leading cause of cancer-related mortality in women. Late-stage diagnosis (stages III and IV) is a major challenge due to the often vague and inconsistent initial symptoms. Current diagnostic methods, such as biomarkers, biopsy, and imaging tests, face limitations, including subjectivity, inter-observer variability, and extended testing times. This study proposes a novel convolutional neural network (CNN) algorithm for predicting and diagnosing ovarian cancer, addressing these limitations. In this paper, the CNN was trained on a histopathological image dataset that was divided into training and validation subsets and augmented before training. The model achieved a remarkable accuracy of 94%, with 95.12% of cancerous cases correctly identified and 93.02% of healthy cells accurately classified. The significance of this study lies in overcoming the challenges associated with human expert examination, such as higher misclassification rates, inter-observer variability, and extended analysis times. This study presents a more accurate, efficient, and reliable approach to predicting and diagnosing ovarian cancer. Future research should explore recent advances in this field to further enhance the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Thoracic Imaging)
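The augmentation and train/validation split mentioned above can be sketched with Keras utilities; the directory path, image size, and transform ranges below are illustrative assumptions rather than the authors' settings:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=20,       # mild geometric augmentation of histology tiles
    horizontal_flip=True,
    vertical_flip=True,
    zoom_range=0.1,
    rescale=1.0 / 255,
    validation_split=0.2,    # hold out part of the data for validation
)
# "histopathology/" is a hypothetical directory with one sub-folder per class.
train_flow = augmenter.flow_from_directory(
    "histopathology/", target_size=(224, 224), class_mode="binary", subset="training")
val_flow = augmenter.flow_from_directory(
    "histopathology/", target_size=(224, 224), class_mode="binary", subset="validation")
```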

13 pages, 771 KiB  
Article
An Ensembled Framework for Human Breast Cancer Survivability Prediction Using Deep Learning
by Ehzaz Mustafa, Ehtisham Khan Jadoon, Sardar Khaliq-uz-Zaman, Mohammad Ali Humayun and Mohammed Maray
Diagnostics 2023, 13(10), 1688; https://doi.org/10.3390/diagnostics13101688 - 10 May 2023
Cited by 3 | Viewed by 1471
Abstract
Breast cancer is categorized as an aggressive disease, and it is one of the leading causes of death. Accurate survival predictions for both long-term and short-term survivors, when delivered on time, can help physicians make effective treatment decisions for their patients. Therefore, there is a dire need to design an efficient and rapid computational model for breast cancer prognosis. In this study, we propose an ensemble model for breast cancer survivability prediction (EBCSP) that utilizes multi-modal data and stacks the output of multiple neural networks. Specifically, we design a convolutional neural network (CNN) for clinical modalities, a deep neural network (DNN) for copy number variations (CNV), and a long short-term memory (LSTM) architecture for gene expression modalities to effectively handle multi-dimensional data. The independent models’ outputs are then combined for binary survivability classification (long term, >5 years; short term, <5 years) using the random forest method. In application, the EBCSP model outperforms both models that rely on a single data modality for prediction and existing benchmarks. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Thoracic Imaging)
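The stacking step described above, where the outputs of the modality-specific networks are fed to a random forest for the long-term vs. short-term decision, can be sketched as follows; the probability arrays, sample counts, and the 60-month threshold variable are placeholders, not the authors' data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy per-patient survival scores from the clinical CNN, CNV DNN, and expression LSTM.
rng = np.random.default_rng(1)
p_clinical, p_cnv, p_expression = (rng.uniform(0, 1, 500) for _ in range(3))
survival_months = rng.uniform(0, 120, 500)

meta_X = np.column_stack([p_clinical, p_cnv, p_expression])
meta_y = (survival_months > 60).astype(int)          # long-term (>5 years) vs short-term

stacker = RandomForestClassifier(n_estimators=200, random_state=0)
stacker.fit(meta_X, meta_y)
long_term_probability = stacker.predict_proba(meta_X)[:, 1]
```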
