Deep Learning Models for Medical Imaging Processing

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (30 September 2023) | Viewed by 28584

Special Issue Editor


Guest Editor: Dr. Rajesh K. Tripathy
BITS Pilani, Hyderabad, 500078, India
Interests: healthcare data; machine learning; deep learning; signal processing and image processing

Special Issue Information

Dear Colleagues,

The images recorded from subjects using different medical imaging modalities provide valuable information for diagnosing various ailments. However, manually examining these images to diagnose different pathologies is a cumbersome task for medical professionals. Artificial intelligence (AI)-based automated diagnosis systems are widely used to assist medical professionals in diagnosing diseases from medical images. In recent years, deep learning methods have become popular for medical image processing applications such as the segmentation of medical images, medical image quality assessment, and the automated detection of different pathologies. This Special Issue will help to demonstrate the application of deep learning techniques to the processing of different types of medical images, such as X-ray images, ultrasound images, thermal images, fundus images, optical coherence tomography (OCT) images, echocardiography images, magnetic resonance imaging (MRI), positron emission tomography (PET), etc. This Special Issue welcomes high-quality original research papers and review papers on the application of deep learning to medical image processing.

We expect submissions of articles related but not limited to the following topics:

  1. Deep learning for X-ray image processing;
  2. Deep learning for ultrasound image processing;
  3. Deep learning and graph CNN models for fundus image processing;
  4. MRI and fMRI data processing using deep neural networks;
  5. Deep learning models for thermal-imaging-based biomedical applications;
  6. Deep learning for cardiac echocardiography imaging;
  7. Deep learning for other medical imaging applications.

Dr. Rajesh K. Tripathy
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • medical image processing
  • Graph CNN
  • attention models
  • CNN

Published Papers (9 papers)


Research


20 pages, 4618 KiB  
Article
Deep Learning-Based Denoising of CEST MR Data: A Feasibility Study on Applying Synthetic Phantoms in Medical Imaging
by Karl Ludger Radke, Benedikt Kamp, Vibhu Adriaenssens, Julia Stabinska, Patrik Gallinnis, Hans-Jörg Wittsack, Gerald Antoch and Anja Müller-Lutz
Diagnostics 2023, 13(21), 3326; https://doi.org/10.3390/diagnostics13213326 - 27 Oct 2023
Viewed by 1028
Abstract
Chemical Exchange Saturation Transfer (CEST) magnetic resonance imaging (MRI) provides a novel method for analyzing biomolecule concentrations in tissues without exogenous contrast agents. Despite its potential, achieving a high signal-to-noise ratio (SNR) is imperative for detecting small CEST effects. Traditional metrics such as Magnetization Transfer Ratio Asymmetry (MTRasym) and Lorentzian analyses are vulnerable to image noise, hampering their precision in quantitative concentration estimations. Recent noise-reduction algorithms like principal component analysis (PCA), nonlocal mean filtering (NLM), and block matching combined with 3D filtering (BM3D) have shown promise, and there is a burgeoning interest in the use of neural networks (NNs), particularly autoencoders, for image denoising. This study uses the Bloch–McConnell equations, which allow for the synthetic generation of CEST images, and explores the efficacy of NNs in denoising these images. Using synthetically generated phantoms, autoencoders were created, and their performance was compared with traditional denoising methods on various datasets. The results underscored the superior performance of NNs, notably the ResUNet architectures, in noise identification and abatement compared to analytical approaches across a wide range of noise levels. This superiority was particularly pronounced at elevated noise intensities in the in vitro data. Notably, the neural architectures significantly improved the PSNR values, achieving up to 35.0, while some traditional methods struggled, especially when reducing low levels of noise. However, the application to the in vivo data presented challenges due to varying noise profiles. This study accentuates the potential of NNs as robust denoising tools, but their translation to clinical settings warrants further investigation.
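The MTRasym metric mentioned in this abstract has a standard definition: the difference between the normalized Z-spectrum values at symmetric frequency offsets around water, MTRasym(Δω) = Z(−Δω) − Z(+Δω). As a rough illustration (not the authors' code), a numpy sketch on a synthetic Z-spectrum with a hypothetical CEST dip at +3.5 ppm might look like:

```python
import numpy as np

def mtr_asym(offsets_ppm, z_spectrum, delta_ppm):
    """Magnetization Transfer Ratio asymmetry at +/- delta_ppm.

    MTRasym(dw) = Z(-dw) - Z(+dw), with Z the normalized Z-spectrum
    sampled at the given frequency offsets (ascending order).
    """
    z_neg = np.interp(-delta_ppm, offsets_ppm, z_spectrum)
    z_pos = np.interp(delta_ppm, offsets_ppm, z_spectrum)
    return z_neg - z_pos

# Synthetic Z-spectrum: a direct water-saturation dip at 0 ppm plus a
# small (invented) CEST effect at +3.5 ppm
offsets = np.linspace(-6, 6, 241)
z = 1 - 0.8 * np.exp(-offsets**2 / 0.5) - 0.05 * np.exp(-(offsets - 3.5)**2 / 0.2)

effect = mtr_asym(offsets, z, 3.5)   # recovers the ~0.05 CEST effect
```

In real data this subtraction is exactly where noise hurts: any noise on the two Z values adds directly into the small difference, which is why denoising (analytical or NN-based) matters for CEST quantification.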
(This article belongs to the Special Issue Deep Learning Models for Medical Imaging Processing)

15 pages, 875 KiB  
Article
Fusion of Graph and Tabular Deep Learning Models for Predicting Chronic Kidney Disease
by Patike Kiran Rao, Subarna Chatterjee, K Nagaraju, Surbhi B. Khan, Ahlam Almusharraf and Abdullah I. Alharbi
Diagnostics 2023, 13(12), 1981; https://doi.org/10.3390/diagnostics13121981 - 06 Jun 2023
Cited by 1 | Viewed by 1785
Abstract
Chronic Kidney Disease (CKD) represents a considerable global health challenge, emphasizing the need for precise and prompt prediction of disease progression to enable early intervention and enhance patient outcomes. In this study, we introduce an innovative fusion deep learning model that combines a Graph Neural Network (GNN) and a tabular data model for predicting CKD progression, capitalizing on the strengths of both graph-structured and tabular data representations. The GNN model processes graph-structured data, uncovering intricate relationships between patients and their medical conditions, while the tabular data model adeptly manages patient-specific features within a conventional data format. An extensive comparison of the fusion model, GNN model, tabular data model, and a baseline model was conducted utilizing various evaluation metrics, encompassing accuracy, precision, recall, and F1-score. The fusion model exhibited outstanding performance across all metrics, underlining its augmented capacity for predicting CKD progression. The GNN model's performance closely trailed the fusion model, accentuating the advantages of integrating graph-structured data into the prediction process. Hyperparameter optimization was performed using grid search, ensuring a fair comparison among the models. The fusion model displayed consistent performance across diverse data splits, demonstrating its adaptability to dataset variations and resilience against noise and outliers. In conclusion, the proposed fusion deep learning model, which amalgamates the capabilities of both the GNN model and the tabular data model, substantially surpasses the individual models and the baseline model in predicting CKD progression. This pioneering approach provides a more precise and dependable method for the early detection and management of CKD, highlighting its potential to advance the domain of precision medicine and elevate patient care.
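The fusion idea described in this abstract can take many forms; one common, simple variant is late fusion, where the two models' predicted probabilities are combined with a weighted average. The sketch below is a hypothetical illustration of that variant, not the paper's actual architecture; the probability arrays and the 0.6 weight are invented for the example:

```python
import numpy as np

# Hypothetical per-patient CKD-progression probabilities from two models
p_gnn = np.array([0.91, 0.15, 0.62, 0.08])   # graph-based model output
p_tab = np.array([0.85, 0.25, 0.48, 0.12])   # tabular model output

def late_fusion(p_a, p_b, weight_a=0.5):
    """Weighted average of two models' predicted probabilities."""
    return weight_a * p_a + (1 - weight_a) * p_b

p_fused = late_fusion(p_gnn, p_tab, weight_a=0.6)   # favour the graph model
labels = (p_fused >= 0.5).astype(int)               # binary progression call
```

The fusion weight is itself a hyperparameter, which is consistent with the abstract's note that grid search was used for hyperparameter optimization.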

15 pages, 4071 KiB  
Article
Detection of Gallbladder Disease Types Using Deep Learning: An Informative Medical Method
by Ahmed Mahdi Obaid, Amina Turki, Hatem Bellaaj, Mohamed Ksantini, Abdulla AlTaee and Alaa Alaerjan
Diagnostics 2023, 13(10), 1744; https://doi.org/10.3390/diagnostics13101744 - 15 May 2023
Cited by 4 | Viewed by 3040
Abstract
Nowadays, despite all the conducted research and the efforts invested in advancing the healthcare sector, there is still a strong need to rapidly and efficiently diagnose various diseases. The complexity of some disease mechanisms on one side and the dramatic life-saving potential on the other raise big challenges for the development of tools for the early detection and diagnosis of diseases. Deep learning (DL), an area of artificial intelligence (AI), can be an informative medical tomography method that can aid in the early diagnosis of gallbladder (GB) disease based on ultrasound images (UI). Many researchers have considered the classification of only one disease of the GB. In this work, we successfully managed to apply a deep neural network (DNN)-based classification model to a rich, purpose-built database in order to detect nine diseases at once and to determine the type of disease using UI. In the first step, we built a balanced database composed of 10,692 UI of the GB organ from 1782 patients. These images were carefully collected from three hospitals over roughly three years and then classified by professionals. In the second step, we preprocessed and enhanced the dataset images in preparation for the segmentation step. Finally, we applied and then compared four DNN models to analyze and classify these images in order to detect nine GB disease types. All the models produced good results in detecting GB diseases; the best was the MobileNet model, with an accuracy of 98.35%.

14 pages, 3848 KiB  
Article
Colon Disease Diagnosis with Convolutional Neural Network and Grasshopper Optimization Algorithm
by Amna Ali A. Mohamed, Aybaba Hançerlioğullari, Javad Rahebi, Mayukh K. Ray and Sudipta Roy
Diagnostics 2023, 13(10), 1728; https://doi.org/10.3390/diagnostics13101728 - 12 May 2023
Cited by 9 | Viewed by 1325
Abstract
This paper presents a robust colon cancer diagnosis method based on feature selection. The proposed method for colon disease diagnosis can be divided into three steps. In the first step, the images' features were extracted using convolutional neural networks: SqueezeNet, ResNet-50, AlexNet, and GoogleNet. The number of extracted features is very large and not appropriate for training the system directly. For this reason, a metaheuristic method is used in the second step to reduce the number of features. This research uses the grasshopper optimization algorithm to select the best features from the feature data. Finally, using machine learning methods, colon disease diagnosis was found to be accurate and successful. Two classification methods, the decision tree and the support vector machine, were applied to evaluate the proposed method. The sensitivity, specificity, accuracy, precision, and F1-score were used to evaluate the proposed method. For SqueezeNet based on the support vector machine, we obtained results of 99.34%, 99.41%, 99.12%, 98.91%, and 98.94% for sensitivity, specificity, accuracy, precision, and F1-score, respectively. In the end, we compared the suggested recognition method's performance to that of other methods, including a 9-layer CNN, random forest, a 7-layer CNN, and DropBlock. We demonstrated that our solution outperformed the others.
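The pipeline described in this abstract is wrapper-style feature selection: candidate feature subsets are scored by downstream classification quality, and a metaheuristic searches the subset space. The sketch below is a simplified stand-in, substituting plain random search for the grasshopper optimization algorithm and a nearest-centroid classifier for the SVM/decision tree; the synthetic data and all names are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for CNN features: 200 samples, 20 features, of which
# only the first 3 carry class information
n, d = 200, 20
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :3] += y[:, None] * 2.0          # shift informative features by class

def nearest_centroid_accuracy(X, y):
    """Training-set accuracy of a nearest-centroid classifier (toy fitness)."""
    c0, c1 = X[y == 0].mean(0), X[y == 1].mean(0)
    pred = (np.linalg.norm(X - c1, axis=1) <
            np.linalg.norm(X - c0, axis=1)).astype(int)
    return (pred == y).mean()

def select_features(X, y, n_iter=300, penalty=0.01):
    """Random-search wrapper selection: maximize accuracy minus a small
    penalty per kept feature (a stand-in for the grasshopper optimizer)."""
    best_mask, best_score = None, -np.inf
    for _ in range(n_iter):
        mask = rng.random(X.shape[1]) < 0.3        # random binary mask
        if not mask.any():
            continue
        score = nearest_centroid_accuracy(X[:, mask], y) - penalty * mask.sum()
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask

mask = select_features(X, y)   # keeps a small, informative subset
```

A real metaheuristic like the grasshopper algorithm replaces the blind random masks with a population of candidate masks that move toward the best solutions found so far, but the fitness-driven subset search is the same idea.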

13 pages, 2902 KiB  
Article
Deep Learning Denoising Improves and Homogenizes Patient [18F]FDG PET Image Quality in Digital PET/CT
by Kathleen Weyts, Elske Quak, Idlir Licaj, Renaud Ciappuccini, Charline Lasnon, Aurélien Corroyer-Dulmont, Gauthier Foucras, Stéphane Bardet and Cyril Jaudet
Diagnostics 2023, 13(9), 1626; https://doi.org/10.3390/diagnostics13091626 - 04 May 2023
Cited by 1 | Viewed by 1283
Abstract
Given the constant pressure to increase patient throughput while respecting radiation protection, global body PET image quality (IQ) is not satisfactory in all patients. We first studied the association between IQ and other variables, in particular body habitus, on a digital PET/CT. Second, to improve and homogenize IQ, we evaluated a deep learning PET denoising solution (Subtle PET™) using convolutional neural networks. We retrospectively analysed, in 113 patients, visual IQ (by a 5-point Likert score from two readers) and semi-quantitative IQ (by the coefficient of variation in the liver, CVliv), as well as lesion detection and quantification in native and denoised PET. In native PET, visual and semi-quantitative IQ were lower in patients with larger body habitus (p < 0.0001 for both) and in men vs. women (p ≤ 0.03 for CVliv). After PET denoising, visual IQ scores increased and became more homogeneous between patients (4.8 ± 0.3 in denoised vs. 3.6 ± 0.6 in native PET; p < 0.0001). CVliv was lower in denoised PET than in native PET, 6.9 ± 0.9% vs. 12.2 ± 1.6%; p < 0.0001. The slope calculated by linear regression of CVliv according to weight was significantly lower in denoised than in native PET (p = 0.0002), demonstrating more uniform CVliv. The lesion concordance rate between both PET series was 369/371 (99.5%), with two lesions exclusively detected in native PET. SUVmax and SUVpeak of up to the five most intense native PET lesions per patient were lower in denoised PET (p < 0.001), with an average relative bias of −7.7% and −2.8%, respectively. DL-based PET denoising by Subtle PET™ allowed [18F]FDG PET global image quality to be improved and homogenized, while maintaining satisfactory lesion detection and quantification. DL-based denoising may render body-habitus-adaptive PET protocols unnecessary and pave the way for the improvement and homogenization of PET modalities.
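The semi-quantitative metric used in this study, the coefficient of variation in the liver (CVliv), is simply the standard deviation divided by the mean of the liver ROI values, expressed in percent; lower CVliv means less noise. A minimal sketch, with invented voxel values standing in for real ROIs:

```python
import numpy as np

def cv_percent(roi_values):
    """Coefficient of variation (SD / mean) of an ROI, in percent."""
    roi_values = np.asarray(roi_values, dtype=float)
    return 100.0 * roi_values.std() / roi_values.mean()

# Hypothetical liver-ROI voxel values before and after denoising
native = np.array([2.0, 2.4, 1.7, 2.2, 1.9, 2.5])
denoised = np.array([2.1, 2.2, 2.0, 2.15, 2.05, 2.2])

cv_native = cv_percent(native)       # noisier ROI -> higher CV
cv_denoised = cv_percent(denoised)   # smoother ROI -> lower CV
```

This mirrors the direction of the reported result: denoising lowered CVliv (6.9 ± 0.9% vs. 12.2 ± 1.6% in the study's data).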

26 pages, 5884 KiB  
Article
Hybrid Techniques of X-ray Analysis to Predict Knee Osteoarthritis Grades Based on Fusion Features of CNN and Handcrafted
by Ahmed Khalid, Ebrahim Mohammed Senan, Khalil Al-Wagih, Mamoun Mohammad Ali Al-Azzam and Ziad Mohammad Alkhraisha
Diagnostics 2023, 13(9), 1609; https://doi.org/10.3390/diagnostics13091609 - 02 May 2023
Cited by 7 | Viewed by 1615
Abstract
Knee osteoarthritis (KOA) is a chronic disease that impedes movement, especially in the elderly, affecting more than 5% of people worldwide. KOA progresses through many stages, from the mild grade that can be treated to the severe grade in which the knee must be replaced. Therefore, early diagnosis of KOA is essential to avoid progression to the advanced stages. X-rays are one of the vital techniques for the early detection of knee conditions, but distinguishing Kellgren–Lawrence (KL) grades requires highly experienced doctors and radiologists. Thus, artificial intelligence techniques address the shortcomings of manual diagnosis. This study developed three methodologies for the X-ray analysis of both the Osteoarthritis Initiative (OAI) and Rani Channamma University (RCU) datasets for diagnosing KOA and discriminating between KL grades. In all methodologies, the Principal Component Analysis (PCA) algorithm was applied after the CNN models to delete the unimportant and redundant features and keep the essential features. The first methodology for analyzing X-rays and diagnosing the degree of knee inflammation uses the VGG-19-FFNN and ResNet-101-FFNN systems. The second methodology of X-ray analysis and diagnosis of KOA grade by a Feed Forward Neural Network (FFNN) is based on the combined features of VGG-19 and ResNet-101 before and after PCA. The third methodology for X-ray analysis and diagnosis of KOA grade by FFNN is based on the fusion of VGG-19 features with handcrafted features, and of ResNet-101 features with handcrafted features. For the OAI dataset with the fused VGG-19 and handcrafted features, FFNN obtained an AUC of 99.25%, an accuracy of 99.1%, a sensitivity of 98.81%, a specificity of 100%, and a precision of 98.24%. For the RCU dataset with the fused VGG-19 and handcrafted features, FFNN obtained an AUC of 99.07%, an accuracy of 98.20%, a sensitivity of 98.16%, a specificity of 99.73%, and a precision of 98.08%.
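Applying PCA after a CNN to discard redundant feature dimensions, as all three methodologies here do, can be sketched with a plain numpy SVD: center the feature matrix and project onto the top principal axes. The 512-dimensional random features below are only a stand-in for real VGG-19/ResNet-101 embeddings, not the authors' data:

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project feature vectors onto their top principal components.

    `features` is (n_samples, n_features), e.g. CNN embeddings; the
    output keeps only the n_components directions of maximum variance.
    """
    centered = features - features.mean(axis=0)
    # SVD of the centered data: rows of vt are the principal axes,
    # ordered by decreasing singular value (i.e., explained variance)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(1)
deep_features = rng.normal(size=(50, 512))   # stand-in for VGG-19 outputs
reduced = pca_reduce(deep_features, 32)      # 512-D -> 32-D per sample
```

The reduced vectors then feed the FFNN classifier, with far fewer weights to train than the raw CNN features would require.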

19 pages, 3154 KiB  
Article
Skin Lesion Detection Using Hand-Crafted and DL-Based Features Fusion and LSTM
by Rabbia Mahum and Suliman Aladhadh
Diagnostics 2022, 12(12), 2974; https://doi.org/10.3390/diagnostics12122974 - 28 Nov 2022
Cited by 12 | Viewed by 2277
Abstract
The abnormal growth of cells in the skin causes two types of tumor: benign and malignant. Various methods, such as imaging and biopsies, are used by oncologists to assess the presence of skin cancer, but these are time-consuming and require extra human effort. Some automated methods based on hand-crafted feature extraction from skin images have been developed by researchers; however, these methods may fail to detect skin cancers at an early stage when tested on unseen data. Therefore, in this study, a novel and robust skin cancer detection model based on feature fusion was proposed. First, our proposed model pre-processed the images using a GF filter to remove the noise. Second, features were extracted by employing local binary patterns (LBP) for hand-crafted features and Inception V3 for automatic feature extraction. In addition, an Adam optimizer was utilized to adjust the learning rate. Finally, an LSTM network was applied to the fused features for the classification of skin cancer into malignant and benign. Our proposed system combines the benefits of both ML- and DL-based algorithms. We utilized the skin lesion DermIS dataset, which is available on the Kaggle website and consists of 1000 images, of which 500 belong to the benign class and 500 to the malignant class. The proposed methodology attained 99.4% accuracy, 98.7% precision, 98.66% recall, and a 98% F-score. We compared the performance of our feature-fusion-based method with existing segmentation-based and DL-based techniques. Additionally, we cross-validated the performance of our proposed model using 1000 images from the International Skin Image Collection (ISIC), attaining 98.4% detection accuracy. The results show that our method provides significant results compared to existing techniques and outperforms them.
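The hand-crafted LBP features mentioned in this abstract have a simple core: each pixel is encoded by comparing it with its 8 neighbours, and the image is summarized by a histogram of the resulting 8-bit codes. A minimal numpy sketch of a basic LBP (not necessarily the exact variant or parameters used in the paper):

```python
import numpy as np

def lbp_histogram(gray, bins=256):
    """Normalized histogram of basic 8-neighbour LBP codes of a 2-D image.

    Each interior pixel is compared with its 8 neighbours; every
    neighbour >= the centre contributes one bit to an 8-bit code.
    """
    g = np.asarray(gray, dtype=float)
    c = g[1:-1, 1:-1]                     # interior pixels (the centres)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= ((neigh >= c).astype(np.uint8) << bit)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()

img = np.arange(36).reshape(6, 6)   # toy monotonic "lesion" patch
feat = lbp_histogram(img)           # 256-bin texture descriptor
```

In a fusion pipeline like the one described, this histogram would be concatenated with the deep (Inception V3) features before the LSTM classifier.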

Review


34 pages, 4532 KiB  
Review
Demystifying Supervised Learning in Healthcare 4.0: A New Reality of Transforming Diagnostic Medicine
by Sudipta Roy, Tanushree Meena and Se-Jung Lim
Diagnostics 2022, 12(10), 2549; https://doi.org/10.3390/diagnostics12102549 - 20 Oct 2022
Cited by 64 | Viewed by 4995
Abstract
The global healthcare sector continues to grow rapidly and is regarded as one of the fastest-growing sectors of the fourth industrial revolution (4.0). The majority of the healthcare industry still uses labor-intensive, time-consuming, and error-prone traditional, manual, manpower-based methods. This review addresses the current paradigm, the potential for new scientific discoveries, the technological state of preparation, the prospects for supervised machine learning (SML) in various healthcare sectors, and ethical issues. The effectiveness and potential for innovation of disease diagnosis, personalized medicine, clinical trials, non-invasive image analysis, drug discovery, patient care services, remote patient monitoring, hospital data, and nanotechnology in various learning-based automation in healthcare, along with the requirement for explainable artificial intelligence (AI) in healthcare, are evaluated. In order to understand the potential architecture of non-invasive treatment, a thorough study of medical imaging analysis from a technical point of view is presented. This study also presents new thinking and developments that will push the boundaries and increase the opportunities for healthcare through AI and SML in the near future. SML-based applications in healthcare demand strong data quality awareness, as the field is data-heavy and knowledge management is paramount; their development also requires skills for data-intensive study and a knowledge-centric health management system. As a result, the merits, demerits, and necessary precautions of AI and SML, including ethics and their other effects, need to be taken into consideration. The overall insight in this paper will help researchers in academia and industry to understand and address the future research that needs to be conducted on SML in the healthcare and biomedical sectors.

17 pages, 2280 KiB  
Review
Bone Fracture Detection Using Deep Supervised Learning from Radiological Images: A Paradigm Shift
by Tanushree Meena and Sudipta Roy
Diagnostics 2022, 12(10), 2420; https://doi.org/10.3390/diagnostics12102420 - 07 Oct 2022
Cited by 40 | Viewed by 8463
Abstract
Bone diseases are common and can result in various musculoskeletal conditions (MC). An estimated 1.71 billion patients suffer from musculoskeletal problems worldwide. Femoral neck injuries, knee osteoarthritis, and fractures are very common bone conditions, and their rate is expected to double in the next 30 years. Therefore, proper and timely diagnosis and treatment of patients with fractures are crucial. In contrast, missed fractures are a common diagnostic failure in accident and emergency settings, causing complications and delays in patients' treatment and care. These days, artificial intelligence (AI) and, more specifically, deep learning (DL) are receiving significant attention for assisting radiologists in bone fracture detection. DL can be widely used in medical image analysis. Some studies in traumatology and orthopaedics have shown the use and potential of DL in diagnosing fractures and diseases from radiographs. In this systematic review, we provide an overview of the use of DL in bone imaging to help radiologists detect various abnormalities, particularly fractures. We also discuss the challenges and problems faced by DL-based methods and the future of DL in bone imaging.
