AI/ML-Based Medical Image Processing and Analysis

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (31 March 2023) | Viewed by 45377

Special Issue Editors


Dr. Jaafar M. Alghazo
Guest Editor
Department of Computer Engineering, Virginia Military Institute, Lexington, VA 24450, USA
Interests: machine learning; artificial intelligence; image processing; Internet of Things (IoT)

Dr. Ghazanfar Latif
Guest Editor
1. Deanship of Research & Graduate Studies, Prince Mohammad bin Fahd University, Al Khobar 31952, Saudi Arabia
2. Department of Computer Sciences and Mathematics, Université du Québec à Chicoutimi, 555 boulevard de l’Université, Québec, QC G1V 0A6, Canada
Interests: machine learning; artificial intelligence; image processing; Internet of Things (IoT); robotics

Special Issue Information

Dear Colleagues, 

AI/ML-based medical image processing and analysis is becoming increasingly important as image processing and analysis are applied to the automated, recommendation-based diagnosis of medical conditions. Medical professionals and institutions are not only ready to accept machine learning (ML)- and artificial intelligence (AI)-enabled medical devices, but are eagerly awaiting devices that could ease the load on professional medical personnel, increase the accuracy of diagnosis, and provide a means for early diagnosis and intervention. The U.S. Food and Drug Administration has already approved many AI/ML-enabled medical devices, listed at https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices. Researchers are encouraged to continue working in this field and to develop patentable methods and devices that could potentially be approved for use in medical institutions.

Dr. Jaafar M. Alghazo
Dr. Ghazanfar Latif
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • artificial intelligence
  • medical imaging
  • medical diagnosis
  • medical
  • magnetic resonance imaging (MRI)
  • computed tomography
  • imaging techniques
  • medical conditions

Published Papers (17 papers)


Editorial

4 pages, 205 KiB  
Editorial
AI/ML-Based Medical Image Processing and Analysis
by Jaafar Alghazo and Ghazanfar Latif
Diagnostics 2023, 13(24), 3671; https://doi.org/10.3390/diagnostics13243671 - 14 Dec 2023
Viewed by 1102
Abstract
The medical field is experiencing remarkable advancements, notably with the latest technologies—artificial intelligence (AI), big data, high-performance computing (HPC), and high-throughput computing (HTC)—that are in place to offer groundbreaking solutions to support medical professionals in the diagnostic process [...]

Research

13 pages, 2493 KiB  
Article
An Efficient Combination of Convolutional Neural Network and LightGBM Algorithm for Lung Cancer Histopathology Classification
by Esraa A.-R. Hamed, Mohammed A.-M. Salem, Nagwa L. Badr and Mohamed F. Tolba
Diagnostics 2023, 13(15), 2469; https://doi.org/10.3390/diagnostics13152469 - 25 Jul 2023
Cited by 3 | Viewed by 1462
Abstract
Lung cancer is among the most dangerous diseases of recent decades. The most accurate method of cancer diagnosis, according to research, is through the use of histopathological images acquired by biopsy. Deep learning techniques have achieved success in bioinformatics, particularly medical imaging. In this paper, we present an innovative method for rapidly identifying and classifying histopathology images of lung tissues by combining a newly proposed Convolutional Neural Network (CNN) model with few total parameters and the enhanced Light Gradient Boosting Machine (LightGBM) classifier. After the images have been pre-processed, the proposed CNN technique is used for feature extraction. Then, a multi-threaded LightGBM model is used for lung tissue classification. The simulation results, obtained on the LC25000 dataset, demonstrate that the novel technique successfully classifies lung tissue with 99.6% accuracy and sensitivity. Furthermore, the proposed CNN model requires only one million training parameters, the fewest among the compared models, and completes feature extraction in just one second. Compared with the most recent state-of-the-art approaches, the suggested approach improves both disease classification accuracy and processing time.
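
As an illustration of the pipeline this abstract describes, the sketch below pairs a compact CNN feature extractor with a multi-threaded LightGBM classifier. The architecture, feature dimension, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch: compact CNN features -> LightGBM classifier (assumed setup).
import numpy as np
import torch
import torch.nn as nn
from lightgbm import LGBMClassifier

class SmallCNN(nn.Module):
    """Compact feature extractor; the paper's model has ~1M parameters."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.proj(self.features(x).flatten(1))

extractor = SmallCNN().eval()
images = torch.randn(100, 3, 128, 128)       # placeholder for LC25000 patches
labels = np.random.randint(0, 3, size=100)   # 3 lung-tissue classes in LC25000

with torch.no_grad():
    feats = extractor(images).numpy()        # deep features, no gradients needed

clf = LGBMClassifier(n_estimators=200, n_jobs=-1)  # multi-threaded LightGBM
clf.fit(feats, labels)
print(clf.predict(feats[:5]))
```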

15 pages, 2906 KiB  
Article
Landmark-Assisted Anatomy-Sensitive Retinal Vessel Segmentation Network
by Haifeng Zhang, Yunlong Qiu, Chonghui Song and Jiale Li
Diagnostics 2023, 13(13), 2260; https://doi.org/10.3390/diagnostics13132260 - 04 Jul 2023
Cited by 1 | Viewed by 1158
Abstract
Automatic retinal vessel segmentation is important for assisting clinicians in diagnosing ophthalmic diseases. Existing deep learning methods remain constrained in instance connectivity and thin vessel detection. To this end, we propose a novel anatomy-sensitive retinal vessel segmentation framework to preserve instance connectivity and improve the segmentation accuracy of thin vessels. This framework uses TransUNet as its backbone and utilizes self-supervised extracted landmarks to guide network learning. TransUNet is designed to simultaneously benefit from the advantages of convolutional and multi-head attention mechanisms in extracting local features and modeling global dependencies. In particular, we introduce contrastive-learning-based self-supervised extraction of anatomical landmarks to guide the model to focus on learning the morphological information of retinal vessels. We evaluated the proposed method on three public datasets: DRIVE, CHASE-DB1, and STARE. Our method demonstrates promising results on the DRIVE and CHASE-DB1 datasets, outperforming state-of-the-art methods by improving the F1 scores by 0.36% and 0.31%, respectively. On the STARE dataset, our method achieves results close to the best-performing methods. Visualizations of the results highlight the potential of our method in maintaining topological continuity and identifying thin blood vessels. Furthermore, we conducted a series of ablation experiments to validate the effectiveness of each module in our model and considered the impact of image resolution on the results.
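
The paper reports F1 gains on DRIVE and CHASE-DB1; the minimal sketch below shows how a per-image F1 score (equivalently, the Dice coefficient) is computed for binary vessel masks. The image shape and threshold are illustrative assumptions.

```python
# Hedged sketch: F1/Dice evaluation for binary vessel masks.
import numpy as np

def f1_score_binary(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """F1 (Dice) between a predicted and a ground-truth binary mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return 2 * precision * recall / (precision + recall + eps)

prob_map = np.random.rand(584, 565)            # placeholder network output (DRIVE size)
ground_truth = np.random.rand(584, 565) > 0.9  # placeholder annotation
print(f"F1 = {f1_score_binary(prob_map > 0.5, ground_truth):.4f}")
```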

21 pages, 5130 KiB  
Article
A Hybrid Technique for Diabetic Retinopathy Detection Based on Ensemble-Optimized CNN and Texture Features
by Uzair Ishtiaq, Erma Rahayu Mohd Faizal Abdullah and Zubair Ishtiaque
Diagnostics 2023, 13(10), 1816; https://doi.org/10.3390/diagnostics13101816 - 22 May 2023
Cited by 9 | Viewed by 2308
Abstract
One of the most prevalent chronic conditions that can result in permanent vision loss is diabetic retinopathy (DR). Diabetic retinopathy occurs in five stages: no DR, and mild, moderate, severe, and proliferative DR. The early detection of DR is essential for preventing vision loss in diabetic patients. In this paper, we propose a method for the detection and classification of DR stages to determine whether patients are in any of the non-proliferative stages or in the proliferative stage. A hybrid approach based on image preprocessing and ensemble features is the foundation of the proposed classification method. We created a convolutional neural network (CNN) model from scratch for this study. Combining Local Binary Patterns (LBP) and deep learning features produced the ensemble feature vector, which was then optimized using the Binary Dragonfly Algorithm (BDA) and the Sine Cosine Algorithm (SCA). This optimized feature vector was then fed to machine learning classifiers. The SVM classifier achieved the highest classification accuracy, 98.85%, on a publicly available dataset, Kaggle EyePACS. Rigorous testing and comparisons with state-of-the-art approaches in the literature indicate the effectiveness of the proposed methodology.
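
The ensemble-feature idea can be sketched as below: hand-crafted LBP histograms concatenated with deep CNN features, then an SVM. The metaheuristic feature selection (BDA/SCA) is stood in for by a simple variance filter purely for illustration, and all shapes are placeholders.

```python
# Hedged sketch: LBP + deep features -> (stand-in) selection -> SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.feature_selection import VarianceThreshold
from sklearn.svm import SVC

def lbp_histogram(gray_img, P=8, R=1.0):
    codes = local_binary_pattern(gray_img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
images = rng.random((50, 224, 224))            # placeholder fundus images
deep_feats = rng.random((50, 512))             # placeholder CNN features
lbp_feats = np.stack([lbp_histogram(im) for im in images])

ensemble = np.hstack([deep_feats, lbp_feats])  # the ensemble feature vector
selected = VarianceThreshold(1e-4).fit_transform(ensemble)  # stand-in for BDA/SCA

svm = SVC(kernel="rbf")                        # the paper's best classifier
svm.fit(selected, rng.integers(0, 5, size=50)) # 5 DR stages, placeholder labels
```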

16 pages, 5602 KiB  
Article
OView-AI Supporter for Classifying Pneumonia, Pneumothorax, Tuberculosis, Lung Cancer Chest X-ray Images Using Multi-Stage Superpixels Classification
by Joonho Oh, Chanho Park, Hongchang Lee, Beanbonyka Rim, Younggyu Kim, Min Hong, Jiwon Lyu, Suha Han and Seongjun Choi
Diagnostics 2023, 13(9), 1519; https://doi.org/10.3390/diagnostics13091519 - 23 Apr 2023
Cited by 3 | Viewed by 2333
Abstract
The deep learning approach has recently attracted much attention for its outstanding performance in assisting with clinical diagnostic tasks, notably in computer-aided solutions. Computer-aided solutions are being developed using chest radiography to identify lung diseases. The chest X-ray is one of the most frequently utilized diagnostic imaging modalities in computer-aided solutions since it produces non-invasive, standard-of-care data. However, accurately identifying a specific illness in chest X-ray images still poses a challenge due to their high inter-class similarities and low intra-class variation in abnormalities, especially given the complex nature of radiographs and the complex anatomy of the chest. In this paper, we propose a deep-learning-based solution to classify four lung diseases (pneumonia, pneumothorax, tuberculosis, and lung cancer) and healthy lungs using chest X-ray images. To achieve high performance, the EfficientNet-B7 model with ImageNet weights pre-trained via Noisy Student was used as a backbone, followed by our proposed fine-tuned layers and hyperparameters. Our study achieved an average test accuracy of 97.42%, sensitivity of 95.93%, and specificity of 99.05%. Additionally, our findings were utilized as diagnostic supporting software in the OView-AI system (a computer-aided application). We conducted 910 clinical trials, in which the OView-AI system achieved an AUC of 97.01% (95% CI), sensitivity of 95.68%, and specificity of 99.34%.
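
The backbone setup can be sketched with timm, whose 'tf_efficientnet_b7_ns' weights correspond to the Noisy Student pre-training mentioned above; the classification head, dropout rate, and freezing policy below are assumptions, since the paper's fine-tuned layers are not spelled out here.

```python
# Hedged sketch: EfficientNet-B7 (Noisy Student weights) with an assumed 5-class head.
import timm
import torch.nn as nn

backbone = timm.create_model("tf_efficientnet_b7_ns", pretrained=True, num_classes=0)
head = nn.Sequential(
    nn.Dropout(0.3),                      # assumed regularisation
    nn.Linear(backbone.num_features, 5),  # pneumonia, pneumothorax, TB, cancer, healthy
)

# Freeze the backbone first; later blocks can be unfrozen for gradual fine-tuning.
for p in backbone.parameters():
    p.requires_grad = False

# Usage: logits = head(backbone(images)) with images shaped (B, 3, 600, 600).
```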

17 pages, 2809 KiB  
Article
Recognizing Brain Tumors Using Adaptive Noise Filtering and Statistical Features
by Mehwish Rasheed, Muhammad Waseem Iqbal, Arfan Jaffar, Muhammad Usman Ashraf, Khalid Ali Almarhabi, Ahmed Mohammed Alghamdi and Adel A. Bahaddad
Diagnostics 2023, 13(8), 1451; https://doi.org/10.3390/diagnostics13081451 - 17 Apr 2023
Cited by 4 | Viewed by 1761
Abstract
The human brain is the center of the neurological system. Abnormally positioned cells from immune, vascular, endocrine, glial, axonal, and other tissues can assemble into a brain tumor. Detecting and diagnosing such a tumor by physical examination alone is currently impossible; instead, the tumor can be located and recognized through programmed segmentation of MRI scans, which requires a powerful segmentation technique to produce accurate output. This study examines brain MRI scans and uses a technique to obtain a more precise image of the tumor-affected area. The critical aspects of the proposed method are the utilization of noisy MRI brain images, anisotropic filtering for noise removal, segmentation with an SVM classifier, and isolation of the adjacent region through normal morphological processes. Accurate brain MRI imaging is the primary goal of this strategy. The segmented cancer region is overlaid on the original image, but that is by no means the final step: the tumor is then located by categorizing pixel brightness in the filtered image. According to the test findings, the SVM could partition the data with 98% accuracy.
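
The anisotropic noise-removal step can be illustrated with a classic Perona-Malik diffusion filter, sketched below. The iteration count and conduction coefficient are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: Perona-Malik anisotropic diffusion for MRI denoising.
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.2):
    """Smooths noise while preserving edges (Perona-Malik formulation)."""
    out = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours.
        dN = np.roll(out, -1, axis=0) - out
        dS = np.roll(out, 1, axis=0) - out
        dE = np.roll(out, -1, axis=1) - out
        dW = np.roll(out, 1, axis=1) - out
        # Edge-stopping function: conduction drops across strong edges.
        c = lambda d: np.exp(-((d / kappa) ** 2))
        out += gamma * (c(dN) * dN + c(dS) * dS + c(dE) * dE + c(dW) * dW)
    return out

noisy_slice = np.random.rand(256, 256)   # placeholder noisy MRI slice
denoised = anisotropic_diffusion(noisy_slice)
```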

16 pages, 13633 KiB  
Article
Knee Osteoarthritis Detection and Severity Classification Using Residual Neural Networks on Preprocessed X-ray Images
by Abdul Sami Mohammed, Ahmed Abul Hasanaath, Ghazanfar Latif and Abul Bashar
Diagnostics 2023, 13(8), 1380; https://doi.org/10.3390/diagnostics13081380 - 10 Apr 2023
Cited by 5 | Viewed by 4906
Abstract
One of the most common and challenging medical conditions in old age is knee osteoarthritis (KOA). Manual diagnosis of this disease involves observing X-ray images of the knee area and classifying them under five grades using the Kellgren–Lawrence (KL) system. This requires the physician’s expertise and considerable time, and even then the diagnosis can be prone to errors. Therefore, researchers in the ML/DL domain have employed the capabilities of deep neural network (DNN) models to identify and classify KOA images in an automated, fast, and accurate manner. To this end, we propose the application of six pretrained DNN models, namely VGG16, VGG19, ResNet101, MobileNetV2, InceptionResNetV2, and DenseNet121, for KOA diagnosis using images obtained from the Osteoarthritis Initiative (OAI) dataset. More specifically, we perform two types of classification: a binary classification, which detects the presence or absence of KOA, and a three-class classification of KOA severity. For a comparative analysis, we experiment on three datasets (Dataset I, Dataset II, and Dataset III) with five, two, and three classes of KOA images, respectively. We achieved maximum classification accuracies of 69%, 83%, and 89%, respectively, with the ResNet101 DNN model. Our results show an improved performance over the existing work in the literature.
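
A minimal sketch of the transfer-learning setup for the best-performing backbone (ResNet101) follows; which layers were frozen and all training hyperparameters are assumptions here.

```python
# Hedged sketch: pretrained ResNet101 re-headed for 3 KOA severity classes.
import torch.nn as nn
from torchvision import models

model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 3)  # three-class severity task

# Assumed policy: train only the new classification head.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")
```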

13 pages, 2019 KiB  
Article
Predicting Overall Survival with Deep Learning from 18F-FDG PET-CT Images in Patients with Hepatocellular Carcinoma before Liver Transplantation
by Yung-Chi Lai, Kuo-Chen Wu, Chao-Jen Chang, Yi-Jin Chen, Kuan-Pin Wang, Long-Bin Jeng and Chia-Hung Kao
Diagnostics 2023, 13(5), 981; https://doi.org/10.3390/diagnostics13050981 - 04 Mar 2023
Cited by 1 | Viewed by 1457
Abstract
Positron emission tomography and computed tomography with 18F-fluorodeoxyglucose (18F-FDG PET-CT) have been used to predict outcomes after liver transplantation in patients with hepatocellular carcinoma (HCC). However, few approaches for prediction based on 18F-FDG PET-CT images that leverage automatic liver segmentation and deep learning have been proposed. This study evaluated the performance of deep learning from 18F-FDG PET-CT images in predicting overall survival in HCC patients before liver transplantation (LT). We retrospectively included 304 patients with HCC who underwent 18F-FDG PET-CT before LT between January 2010 and December 2016. The hepatic areas of 273 of the patients were segmented by software, while the other 31 were delineated manually. We analyzed the predictive value of the deep learning model using both FDG PET-CT images and CT images alone. The developed prognostic model achieved an AUC of 0.807 using FDG PET-CT images versus 0.743 using CT images alone. The model based on FDG PET-CT images also achieved somewhat better sensitivity than the model based on CT images alone (0.571 vs. 0.432 SEN). Automatic liver segmentation from 18F-FDG PET-CT images is feasible and can be utilized to train deep-learning models. The proposed predictive tool can effectively determine prognosis (i.e., overall survival) and thereby select optimal LT candidates among patients with HCC.
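
The AUC comparison above reduces to two roc_auc_score calls, as sketched below; the labels and scores are random placeholders, so only the evaluation mechanics reflect the paper.

```python
# Hedged sketch: comparing PET-CT vs. CT-only prognostic models by AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
survival = rng.integers(0, 2, size=304)    # placeholder overall-survival labels
score_petct = np.clip(0.4 * survival + 0.6 * rng.random(304), 0, 1)
score_ct = np.clip(0.25 * survival + 0.75 * rng.random(304), 0, 1)

print("PET-CT AUC :", round(roc_auc_score(survival, score_petct), 3))
print("CT-only AUC:", round(roc_auc_score(survival, score_ct), 3))
```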

19 pages, 3190 KiB  
Article
Alzheimer Disease Classification through Transfer Learning Approach
by Noman Raza, Asma Naseer, Maria Tamoor and Kashif Zafar
Diagnostics 2023, 13(4), 801; https://doi.org/10.3390/diagnostics13040801 - 20 Feb 2023
Cited by 13 | Viewed by 3338
Abstract
Alzheimer’s disease (AD) is a slow neurological disorder that progressively destroys a person’s thought processes and consciousness. It directly affects mental ability and neurocognitive functionality. The number of patients with Alzheimer’s disease is increasing day by day, especially among elderly people above 60 years of age, and the disease gradually becomes a cause of their death. In this research, we discuss the segmentation and classification of magnetic resonance imaging (MRI) scans of Alzheimer’s disease through transfer learning and customization of a convolutional neural network (CNN), specifically using images segmented on the gray matter (GM) of the brain. Instead of training the proposed model from scratch, we used a pre-trained deep learning model as our base and then applied transfer learning. The accuracy of the proposed model was tested over different numbers of epochs: 10, 25, and 50. The overall accuracy of the proposed model was 97.84%.
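
A hedged sketch of the freeze-the-base transfer-learning recipe follows. The base architecture (VGG16 here), the four-class head, and the optimiser are assumptions, since the abstract does not name them.

```python
# Hedged sketch: pretrained base + new head, trained for 10/25/50 epochs.
import torch
import torch.nn as nn
from torchvision import models

def build_model(num_classes=4):                 # assumed number of AD stages
    base = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for p in base.features.parameters():        # keep the pretrained conv filters
        p.requires_grad = False
    base.classifier[6] = nn.Linear(4096, num_classes)
    return base

model = build_model()
optimiser = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
# The paper evaluates accuracy after 10, 25, and 50 training epochs.
```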

10 pages, 1210 KiB  
Article
Deep-Learning-Based Automatic Segmentation of Parotid Gland on Computed Tomography Images
by Merve Önder, Cengiz Evli, Ezgi Türk, Orhan Kazan, İbrahim Şevki Bayrakdar, Özer Çelik, Andre Luiz Ferreira Costa, João Pedro Perez Gomes, Celso Massahiro Ogawa, Rohan Jagtap and Kaan Orhan
Diagnostics 2023, 13(4), 581; https://doi.org/10.3390/diagnostics13040581 - 04 Feb 2023
Cited by 4 | Viewed by 2986
Abstract
This study aims to develop an algorithm for the automatic segmentation of the parotid gland on CT images of the head and neck using a U-Net architecture and to evaluate the model’s performance. In this retrospective study, a total of 30 anonymized CT volumes of the head and neck were sliced into 931 axial images of the parotid glands. Ground truth labeling was performed with the CranioCatch Annotation Tool (CranioCatch, Eskisehir, Turkey) by two oral and maxillofacial radiologists. The images were resized to 512 × 512 and split into training (80%), validation (10%), and testing (10%) subgroups. A deep convolutional neural network model was developed using the U-Net architecture. The automatic segmentation performance was evaluated in terms of the F1-score, precision, sensitivity, and Area Under the Curve (AUC) statistics. The threshold for a successful segmentation was an intersection of over 50% of the pixels with the ground truth. The F1-score, precision, and sensitivity of the AI model in segmenting the parotid glands in the axial CT slices were all found to be 1, and the AUC value was 0.96. This study has shown that it is possible to use AI models based on deep learning to automatically segment the parotid gland on axial CT images.
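
A trimmed U-Net-style encoder-decoder for 512 × 512 slices is sketched below; the depth and channel widths are reduced for brevity and do not match the paper's exact architecture.

```python
# Hedged sketch: tiny U-Net for binary (parotid vs. background) segmentation.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)          # per-pixel logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d)

logits = TinyUNet()(torch.randn(1, 1, 512, 512))  # -> (1, 1, 512, 512)
```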

11 pages, 2864 KiB  
Article
A Prospective Study on Diabetic Retinopathy Detection Based on Modify Convolutional Neural Network Using Fundus Images at Sindh Institute of Ophthalmology & Visual Sciences
by Awais Bajwa, Neelam Nosheen, Khalid Iqbal Talpur and Sheeraz Akram
Diagnostics 2023, 13(3), 393; https://doi.org/10.3390/diagnostics13030393 - 20 Jan 2023
Cited by 12 | Viewed by 2181
Abstract
Diabetic Retinopathy (DR) is the most common complication that arises due to diabetes, and it affects the retina. It is the leading cause of blindness globally, and early detection can protect patients from losing their sight. However, the early detection of Diabetic Retinopathy is a difficult task that needs clinical experts’ interpretation of fundus images. In this study, a deep learning model was trained and validated on a private dataset and tested in real time at the Sindh Institute of Ophthalmology & Visual Sciences (SIOVS). The intelligent model evaluated the quality of the test images and classified them into DR-Positive and DR-Negative ones. Furthermore, the results were reviewed by clinical experts to assess the model’s performance. A total of 398 patients, including 232 male and 166 female patients, were screened over five weeks. The model achieved 93.72% accuracy, 97.30% sensitivity, and 92.90% specificity on the test data as labelled by clinical experts on Diabetic Retinopathy.
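
The reported accuracy, sensitivity, and specificity follow directly from a binary confusion matrix, as the sketch below shows with synthetic counts.

```python
# Hedged sketch: screening metrics from a binary confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.random.randint(0, 2, size=398)   # placeholder expert labels (398 patients)
y_pred = y_true.copy()
y_pred[:25] = 1 - y_pred[:25]                # flip a few to simulate model error

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy   :", (tp + tn) / (tp + tn + fp + fn))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
```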

14 pages, 2046 KiB  
Article
A Novel Medical Image Enhancement Algorithm for Breast Cancer Detection on Mammography Images Using Machine Learning
by Hanife Avcı and Jale Karakaya
Diagnostics 2023, 13(3), 348; https://doi.org/10.3390/diagnostics13030348 - 18 Jan 2023
Cited by 19 | Viewed by 4723
Abstract
Mammography is the most preferred method for breast cancer screening. In this study, computer-aided diagnosis (CAD) systems were used to improve the image quality of mammography images and to detect suspicious areas. The main contribution of this study is to reveal the optimal combination of various pre-processing algorithms to enable better interpretation and classification of mammography images, because pre-processing algorithms significantly affect the accuracy of segmentation and classification methods. The effect of combinations of different preprocessing methods in differentiating benign and malignant breast lesions was investigated. All image processing algorithms used for lesion detection were applied to the mini-MIAS database. In the first step, label information and the pectoral muscle introduced during the acquisition of mammography images were removed. In the second step, the resolution and visibility of the images were increased using different combinations of the median filter (MF), contrast-limited adaptive histogram equalization (CLAHE), and unsharp masking (USM) algorithms. In the third step, suspicious regions were extracted from the mammograms using the k-means clustering technique. Then, features were extracted from the obtained ROIs. Finally, the feature datasets were classified as normal/abnormal and benign/malignant (two-class classifications) using machine learning algorithms, and the test performance measures of the classification methods were examined. In both classifications, lower classification performance values were obtained when the CLAHE algorithm was used alone as a pre-processing method compared with other pre-processing combinations. When the median filter and unsharp masking algorithms were added to the CLAHE algorithm, the performance of the classification methods increased. In terms of classification success, Support Vector Machines, Random Forest, and Neural Networks showed the best performance. Comparing the performances of the classification methods showed that different preprocessing algorithms were effective in detecting the presence of breast lesions and in distinguishing benign from malignant ones.
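
The preprocessing chain (MF, CLAHE, USM) and the k-means ROI step can be sketched with OpenCV as follows; the kernel sizes, CLAHE clip limit, and number of clusters are illustrative assumptions.

```python
# Hedged sketch: median filter -> CLAHE -> unsharp masking -> k-means ROI.
import cv2
import numpy as np

img = np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)  # placeholder mammogram

smoothed = cv2.medianBlur(img, 5)                               # MF
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
equalized = clahe.apply(smoothed)                               # CLAHE
blurred = cv2.GaussianBlur(equalized, (0, 0), sigmaX=3)
sharpened = cv2.addWeighted(equalized, 1.5, blurred, -0.5, 0)   # USM

# k-means on pixel intensities; the brightest cluster is kept as suspicious.
pixels = sharpened.reshape(-1, 1).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, 3, None, criteria, 5,
                                cv2.KMEANS_PP_CENTERS)
roi_mask = (labels.reshape(img.shape) == np.argmax(centers)).astype(np.uint8)
```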

15 pages, 3183 KiB  
Article
The Effect of Magnetic Resonance Imaging Based Radiomics Models in Discriminating stage I–II and III–IVa Nasopharyngeal Carcinoma
by Quanjiang Li, Qiang Yu, Beibei Gong, Youquan Ning, Xinwei Chen, Jinming Gu, Fajin Lv, Juan Peng and Tianyou Luo
Diagnostics 2023, 13(2), 300; https://doi.org/10.3390/diagnostics13020300 - 13 Jan 2023
Cited by 2 | Viewed by 1180
Abstract
Background: Nasopharyngeal carcinoma (NPC) is a common tumor in China, and accurate staging of NPC is crucial for treatment. We therefore aim to develop radiomics models for discriminating early-stage (I–II) and advanced-stage (III–IVa) NPC based on MR images. Methods: A total of 329 NPC patients were enrolled and randomly divided into a training cohort (n = 229) and a validation cohort (n = 100). Features were extracted from axial contrast-enhanced T1-weighted images (CE-T1WI), T1WI, and T2-weighted images (T2WI). The least absolute shrinkage and selection operator (LASSO) was used to build radiomics signatures. Seven radiomics models were constructed with logistic regression. The AUC value was used to assess classification performance, and the DeLong test was used to compare the AUCs of the radiomics models and visual assessment. Results: Models A, B, C, D, E, F, and G were constructed with 13, 9, 7, 9, 10, 7, and 6 features, respectively. All radiomics models showed better classification performance than visual assessment. Model A (CE-T1WI + T1WI + T2WI) showed the best classification performance (AUC: 0.847) in the training cohort, and CE-T1WI showed the greatest significance for staging NPC. Conclusion: Radiomics models can effectively distinguish early-stage from advanced-stage NPC patients, and Model A (CE-T1WI + T1WI + T2WI) showed the best classification performance.
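
A minimal sketch of the LASSO-signature-plus-logistic-regression step follows, using scikit-learn on synthetic features; the cohort sizes match the abstract, and everything else is an assumption.

```python
# Hedged sketch: LASSO feature selection, then logistic regression with AUC.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((329, 1000))            # placeholder radiomics features
y = rng.integers(0, 2, size=329)       # 0 = stage I-II, 1 = stage III-IVa

X_tr, X_va, y_tr, y_va = train_test_split(X, y, train_size=229, random_state=0)

lasso = LassoCV(cv=5).fit(X_tr, y_tr)
keep = np.abs(lasso.coef_) > 1e-6      # features surviving the L1 penalty

model = LogisticRegression(max_iter=1000).fit(X_tr[:, keep], y_tr)
auc = roc_auc_score(y_va, model.predict_proba(X_va[:, keep])[:, 1])
print("validation AUC:", round(auc, 3))
```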

30 pages, 4124 KiB  
Article
CAD-ALZ: A Blockwise Fine-Tuning Strategy on Convolutional Model and Random Forest Classifier for Recognition of Multistage Alzheimer’s Disease
by Qaisar Abbas, Ayyaz Hussain and Abdul Rauf Baig
Diagnostics 2023, 13(1), 167; https://doi.org/10.3390/diagnostics13010167 - 03 Jan 2023
Cited by 3 | Viewed by 3113
Abstract
Mental deterioration or Alzheimer’s (ALZ) disease is progressive and causes both physical and mental dependency. There is a need for a computer-aided diagnosis (CAD) system that can help doctors make an immediate decision. (1) Background: Currently, CAD systems are developed based on hand-crafted features, machine learning (ML), and deep learning (DL) techniques. These CAD systems frequently require domain-expert knowledge and massive datasets for deep feature extraction or model training, which causes problems with class imbalance and overfitting. Additionally, radiologists still resort to manual approaches due to the lack of available datasets and the need to train models with cost-effective computation. Existing works pursue performance improvement while neglecting the problems of limited datasets, high computational complexity, and the unavailability of lightweight, efficient feature descriptors. (2) Methods: To address these issues, a new approach, CAD-ALZ, is developed by extracting deep features through a ConvMixer layer with a blockwise fine-tuning strategy on a very small original dataset. First, we apply data augmentation to the images to increase the size of the datasets. A blockwise fine-tuning strategy is then employed on the ConvMixer model to detect robust features. Afterwards, a random forest (RF) is used to classify the ALZ disease stages. (3) Results: The proposed CAD-ALZ model obtained significant results on six evaluation metrics: the F1-score, Kappa, accuracy, precision, sensitivity, and specificity. The CAD-ALZ model performed with a sensitivity of 99.69% and an F1-score of 99.61%. (4) Conclusions: The suggested CAD-ALZ approach is a potential technique for clinical use, with better computational efficiency than state-of-the-art approaches. The CAD-ALZ model code is freely available on GitHub for the scientific community.
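
The ConvMixer-features-into-random-forest idea can be sketched as below; the mixer block follows the original ConvMixer formulation (depthwise conv, pointwise conv, residual), while the depth, width, and four-stage label set are assumptions.

```python
# Hedged sketch: ConvMixer feature extractor -> random forest classifier.
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class Residual(nn.Module):
    def __init__(self, fn):
        super().__init__()
        self.fn = fn
    def forward(self, x):
        return self.fn(x) + x

def conv_mixer(dim=128, depth=4, kernel=7, patch=4):
    return nn.Sequential(
        nn.Conv2d(1, dim, patch, stride=patch), nn.GELU(), nn.BatchNorm2d(dim),
        *[nn.Sequential(
            Residual(nn.Sequential(                     # depthwise (spatial) mixing
                nn.Conv2d(dim, dim, kernel, groups=dim, padding="same"),
                nn.GELU(), nn.BatchNorm2d(dim))),
            nn.Conv2d(dim, dim, 1), nn.GELU(), nn.BatchNorm2d(dim),  # pointwise
        ) for _ in range(depth)],
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

extractor = conv_mixer().eval()
with torch.no_grad():
    feats = extractor(torch.randn(64, 1, 128, 128)).numpy()  # placeholder MRI slices

rf = RandomForestClassifier(n_estimators=300)
rf.fit(feats, torch.randint(0, 4, (64,)).numpy())  # assumed 4 ALZ stages
```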

18 pages, 2250 KiB  
Article
A Deep CNN Transformer Hybrid Model for Skin Lesion Classification of Dermoscopic Images Using Focal Loss
by Yali Nie, Paolo Sommella, Marco Carratù, Mattias O’Nils and Jan Lundgren
Diagnostics 2023, 13(1), 72; https://doi.org/10.3390/diagnostics13010072 - 27 Dec 2022
Cited by 10 | Viewed by 3676
Abstract
Skin cancers are the most commonly diagnosed cancers worldwide, with an estimated >1.5 million new cases in 2020. The use of computer-aided diagnosis (CAD) systems for the early detection and classification of skin lesions helps reduce skin cancer mortality rates. Inspired by the success of the transformer network in natural language processing (NLP) and the deep convolutional neural network (DCNN) in computer vision, we propose an end-to-end CNN transformer hybrid model with a focal loss (FL) function to classify skin lesion images. First, the CNN extracts low-level, local feature maps from the dermoscopic images. In the second stage, the vision transformer (ViT) globally models these features and extracts abstract, high-level semantic information, which is finally sent to the multi-layer perceptron (MLP) head for classification. Based on an evaluation of three different loss functions, the FL-based algorithm aims to mitigate the extreme class imbalance in the International Skin Imaging Collaboration (ISIC) 2018 dataset. The experimental analysis demonstrates that the hybrid model with the FL strategy achieves impressive skin lesion classification results, with significantly high performance that outperforms the existing work.
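
The focal loss at the heart of the class-imbalance strategy can be written in a few lines; the gamma value below is the common default rather than the paper's tuned setting.

```python
# Hedged sketch: multi-class focal loss for imbalanced lesion classes.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """FL = -alpha_t * (1 - p_t)^gamma * log(p_t) for the true class t."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    loss = -((1.0 - pt) ** gamma) * log_pt
    if alpha is not None:               # optional per-class weights, shape (C,)
        loss = alpha[targets] * loss
    return loss.mean()

logits = torch.randn(8, 7, requires_grad=True)   # 7 ISIC 2018 lesion classes
targets = torch.randint(0, 7, (8,))
focal_loss(logits, targets).backward()
```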

26 pages, 5362 KiB  
Article
Automatic Detection and Classification of Cardiovascular Disorders Using Phonocardiogram and Convolutional Vision Transformers
by Qaisar Abbas, Ayyaz Hussain and Abdul Rauf Baig
Diagnostics 2022, 12(12), 3109; https://doi.org/10.3390/diagnostics12123109 - 09 Dec 2022
Cited by 8 | Viewed by 3100
Abstract
Cardiovascular disorders (CVDs) are the major cause of death worldwide. For a proper diagnosis of CVDs, an inexpensive solution based on phonocardiogram (PCG) signals is proposed. (1) Background: Currently, a few deep learning (DL)-based CVD systems have been developed to recognize different stages of CVD. However, the accuracy of these systems is not up to the mark, and the methods require high computational power and huge training datasets. (2) Methods: To address these issues, we developed a novel attention-based technique (CVT-Trans) on a convolutional vision transformer to recognize and categorize PCG signals into five classes. The continuous wavelet transform-based spectrogram (CWTS) strategy was used to extract representative features from the PCG data. Following that, a new CVT-Trans architecture was created to categorize the CWTS signals into five groups. (3) Results: The dataset derived from our investigation indicated that the CVT-Trans system had an overall average accuracy (ACC) of 100%, sensitivity (SE) of 99.00%, specificity (SP) of 99.5%, and F1-score of 98%, based on 10-fold cross-validation. (4) Conclusions: The CVT-Trans technique outperformed many state-of-the-art methods, and the robustness of the constructed model was confirmed by 10-fold cross-validation. Cardiologists can use this CVT-Trans system to help diagnose heart valve problems in patients.
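
The CWTS front end, which turns a 1-D PCG segment into a 2-D time-frequency image for the transformer, can be sketched with PyWavelets; the wavelet, scales, and sampling rate are illustrative assumptions.

```python
# Hedged sketch: continuous wavelet transform spectrogram (CWTS) of a PCG segment.
import numpy as np
import pywt

fs = 2000                                        # assumed PCG sampling rate (Hz)
t = np.arange(0, 3, 1 / fs)
pcg = np.sin(2 * np.pi * 60 * t) + 0.1 * np.random.randn(t.size)  # synthetic signal

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(pcg, scales, "morl", sampling_period=1 / fs)
spectrogram = np.abs(coeffs)                     # 2-D image fed to the classifier
```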

13 pages, 11162 KiB  
Article
Robust Ulcer Classification: Contrast and Illumination Invariant Approach
by Mousa Alhajlah
Diagnostics 2022, 12(12), 2898; https://doi.org/10.3390/diagnostics12122898 - 22 Nov 2022
Viewed by 1402
Abstract
Gastrointestinal (GI) disease cases are on the rise throughout the world. Ulcers, the most common type of GI disease, if left untreated, can cause internal bleeding resulting in anemia and bloody vomiting. Early detection and classification of different types of ulcers can reduce the death rate and the severity of the disease. Manual detection and classification of ulcers are tedious and error-prone, which calls for automated systems based on computer vision techniques to detect and classify ulcers in image and video data. A major challenge in accurate detection and classification is dealing with the similarity among classes and the poor quality of input images; improper contrast and illumination reduce the anticipated classification accuracy. In this paper, contrast and illumination invariance was achieved by utilizing log transformation and power-law transformation. Optimal values of the parameters for both of these techniques were obtained and combined to produce the fused image dataset. Augmentation was used to handle overfitting, and classification was performed using the lightweight and efficient deep learning model MobileNetV2. Experiments were conducted on the KVASIR dataset to assess the efficacy of the proposed approach. An accuracy of 96.71% was achieved, which is a considerable improvement over the state-of-the-art techniques.
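
The two transformations named above are standard point operations; the sketch below applies a log transform and a power-law (gamma) transform and fuses the results, with the gamma value and fusion weights as illustrative assumptions.

```python
# Hedged sketch: log and power-law transforms fused for illumination invariance.
import numpy as np

def log_transform(img):
    img = img.astype(np.float64)
    c = 255.0 / np.log1p(img.max())
    return c * np.log1p(img)                 # lifts dark regions, compresses bright

def power_law(img, gamma=0.8):
    norm = img.astype(np.float64) / 255.0
    return 255.0 * norm ** gamma             # gamma < 1 brightens mid-tones

frame = np.random.randint(0, 256, (336, 336), dtype=np.uint8)  # placeholder endoscopy frame
fused = (0.5 * log_transform(frame) + 0.5 * power_law(frame)).astype(np.uint8)
```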
