Special Issue "Medical Image Processing and Analysis"

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: 30 June 2023 | Viewed by 12595

Special Issue Editors

Department of Computer Science, HITEC University, Taxila 47040, Pakistan
Interests: medical image analysis
Informatics Building School of Informatics, University of Leicester, Leicester LE1 7RH, UK
Interests: artificial intelligence; deep learning; medical image processing; recognition; transfer learning; medical image analysis

Special Issue Information

Dear Colleagues,

Machine-learning-based approaches are attracting considerable attention due to their wide range of applications in various fields. The last two decades have witnessed increasing interest in computer-aided medical systems for early detection, diagnosis, prognosis, risk assessment, and therapy of diseases. Developing a reliable medical solution is a crucial task because there is no single standard approach covering all the subdomains, including data processing, region-of-interest detection, image segmentation and registration, image fusion, and high-accuracy classification. Computer-aided diagnosis systems therefore remain a highly challenging domain with ample room for improvement. Deep-learning-based methods are currently attracting much attention among researchers in the machine learning community due to their improved segmentation and classification results. Moreover, deep-learning-based methods have lowered the barriers of data preprocessing and reduced the dependence on user intervention. Consequently, the processing burden in medical imaging has shifted from the human to the computer side, allowing more researchers to step into this well-regarded and momentous area and leading to improved performance in terms of both accuracy and decision time.

This Special Issue seeks high-quality research articles generally dealing with methods such as semantic segmentation and deep learning in the field of medical image processing. We are primarily interested in original research articles proposing novel solutions, covering new theories, and describing new implementations for medical image analytics.

Dr. Muhammad Attique Khan
Prof. Dr. Yu-Dong Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep-learning-based segmentation
  • federated learning
  • semantic segmentation for medical infection diagnosis
  • wireless capsule endoscopy (WCE) imaging technology using deep learning
  • magnetic resonance imaging (MRI)
  • semantic techniques for MRI images
  • FPGA with deep learning for medical imaging
  • mammogram imaging modality using deep learning
  • ultrasound imaging modality detection using deep learning
  • X-ray computed tomography (CT)
  • deep-learning-based CAD systems
  • transfer learning in deep learning for medical imaging
  • cancer classification using deep learning
  • autoencoder-based feature selection using deep learning in the medical field
  • fusion of convolutional layers in deep learning for recognition
  • optimal deep learning feature selection for recognition
  • fusion of image modality using deep learning
  • deep-learning-based medical imaging retrieval

Published Papers (13 papers)


Research

Article
Machine Learning Assisting the Prediction of Clinical Outcomes following Nucleoplasty for Lumbar Degenerative Disc Disease
Diagnostics 2023, 13(11), 1863; https://doi.org/10.3390/diagnostics13111863 - 26 May 2023
Viewed by 358
Abstract
Background: Lumbar degenerative disc disease (LDDD) is a leading cause of chronic lower back pain; however, a lack of clear diagnostic criteria and solid LDDD interventional therapies have made predicting the benefits of therapeutic strategies challenging. Our goal is to develop machine learning (ML)–based radiomic models based on pre-treatment imaging for predicting the outcomes of lumbar nucleoplasty (LNP), which is one of the interventional therapies for LDDD. Methods: The input data included general patient characteristics, perioperative medical and surgical details, and pre-operative magnetic resonance imaging (MRI) results from 181 LDDD patients receiving lumbar nucleoplasty. Post-treatment pain improvements were categorized as clinically significant (defined as a ≥80% decrease in the visual analog scale) or non-significant. To develop the ML models, T2-weighted MRI images were subjected to radiomic feature extraction, which was combined with physiological clinical parameters. After data processing, we developed five ML models: support vector machine, light gradient boosting machine, extreme gradient boosting, extreme gradient boosting random forest, and improved random forest. Model performance was measured by evaluating indicators, such as the confusion matrix, accuracy, sensitivity, specificity, F1 score, and area under the receiver operating characteristic curve (AUC), which were acquired using an 8:2 allocation of training to testing data. Results: Among the five ML models, the improved random forest algorithm had the best performance, with an accuracy of 0.76, a sensitivity of 0.69, a specificity of 0.83, an F1 score of 0.73, and an AUC of 0.77. The most influential clinical features included in the ML models were pre-operative VAS and age. The most influential radiomic features were the correlation coefficient and gray-level co-occurrence matrix features. Conclusions: We developed an ML-based model for predicting pain improvement after LNP for patients with LDDD. We hope this tool will provide both doctors and patients with better information for therapeutic planning and decision-making. Full article
(This article belongs to the Special Issue Medical Image Processing and Analysis)
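
As a rough illustration of the pipeline described above (not the authors' code), the following Python sketch combines placeholder radiomic and clinical feature matrices, applies the 8:2 train/test allocation, and scores a random-forest classifier with the same indicators; the feature arrays, labels, and scikit-learn estimator are assumptions for demonstration only.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score, f1_score, confusion_matrix

    # Placeholder inputs: one row per patient (181 in the study).
    radiomic = np.random.rand(181, 50)      # e.g., texture features from T2-weighted MRI
    clinical = np.random.rand(181, 5)       # e.g., age, pre-operative VAS
    X = np.hstack([radiomic, clinical])
    y = np.random.randint(0, 2, 181)        # 1 = clinically significant pain improvement

    # 8:2 allocation of training to testing data, as in the abstract.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    prob = clf.predict_proba(X_te)[:, 1]
    pred = (prob >= 0.5).astype(int)

    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print("accuracy:", (tp + tn) / len(y_te))
    print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
    print("F1:", f1_score(y_te, pred), "AUC:", roc_auc_score(y_te, prob))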

Article
Hybrid Models for Endoscopy Image Analysis for Early Detection of Gastrointestinal Diseases Based on Fused Features
Diagnostics 2023, 13(10), 1758; https://doi.org/10.3390/diagnostics13101758 - 16 May 2023
Viewed by 390
Abstract
The gastrointestinal system contains the upper and lower gastrointestinal tracts. The main tasks of the gastrointestinal system are to break down food and convert it into essential elements that the body can benefit from and expel waste in the form of feces. If any organ is affected, it does not work well, which affects the body. Many gastrointestinal diseases, such as infections, ulcers, and benign and malignant tumors, threaten human life. Endoscopy techniques are the gold standard for detecting infected parts within the organs of the gastrointestinal tract. Endoscopy techniques produce videos that are converted into thousands of frames that show the disease’s characteristics in only some frames. Therefore, this represents a challenge for doctors because it is a tedious task that requires time, effort, and experience. Computer-assisted automated diagnostic techniques help achieve effective diagnosis to help doctors identify the disease and give the patient the appropriate treatment. In this study, many efficient methodologies for analyzing endoscopy images for diagnosing gastrointestinal diseases were developed for the Kvasir dataset. The Kvasir dataset was classified by three pre-trained models: GoogLeNet, MobileNet, and DenseNet121. The images were optimized, and the gradient vector flow (GVF) algorithm was applied to segment the regions of interest (ROIs), isolating them from healthy regions and saving the endoscopy images as Kvasir-ROI. The Kvasir-ROI dataset was classified by the three pre-trained GoogLeNet, MobileNet, and DenseNet121 models. Hybrid methodologies (CNN–FFNN and CNN–XGBoost) were developed based on the GVF algorithm and achieved promising results for diagnosing disease based on endoscopy images of gastroenterology. The last methodology is based on fused CNN models and their classification by FFNN and XGBoost networks. The hybrid methodology based on the fused CNN features, called GoogLeNet–MobileNet–DenseNet121–XGBoost, achieved an AUC of 97.54%, accuracy of 97.25%, sensitivity of 96.86%, precision of 97.25%, and specificity of 99.48%. Full article
(This article belongs to the Special Issue Medical Image Processing and Analysis)
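
A minimal sketch of the hybrid CNN–XGBoost idea mentioned above, using a single MobileNetV2 backbone as a fixed feature extractor and an XGBoost classifier; the GVF segmentation and the fused GoogLeNet–MobileNet–DenseNet121 features are not reproduced, and the tensors and labels are placeholders (torchvision >= 0.13 and xgboost assumed installed).

    import numpy as np
    import torch
    import torch.nn as nn
    from torchvision import models
    from xgboost import XGBClassifier

    # Pre-trained backbone with its classifier head removed, used as a fixed feature extractor.
    backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
    backbone.classifier = nn.Identity()
    backbone.eval()

    def deep_features(batch):                      # batch: (N, 3, 224, 224) normalized tensor
        with torch.no_grad():
            return backbone(batch).cpu().numpy()   # (N, 1280) MobileNetV2 embeddings

    # Placeholder tensors standing in for preprocessed Kvasir (or Kvasir-ROI) frames.
    images = torch.rand(32, 3, 224, 224)
    labels = np.random.randint(0, 8, 32)           # Kvasir has eight classes

    feats = deep_features(images)
    clf = XGBClassifier(n_estimators=200, max_depth=4)
    clf.fit(feats, labels)
    print(clf.predict(feats[:5]))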

Article
Classification of Monkeypox Images Using LIME-Enabled Investigation of Deep Convolutional Neural Network
Diagnostics 2023, 13(9), 1639; https://doi.org/10.3390/diagnostics13091639 - 05 May 2023
Viewed by 746
Abstract
In this research, we demonstrate a Deep Convolutional Neural Network-based classification model for the detection of monkeypox. Monkeypox can be difficult to diagnose clinically in its early stages since it resembles both chickenpox and measles in symptoms. The early diagnosis of monkeypox helps doctors treat it more quickly. Therefore, pre-trained models are frequently used in the diagnosis of monkeypox, because the manual analysis of a large number of images is labor-intensive and prone to inaccuracy. Consequently, detecting the monkeypox virus requires an automated process. The large layer count of convolutional neural network (CNN) architectures enables them to successfully learn the features on their own, thereby contributing to better performance in image classification. The scientific community has recently devoted significant attention to employing artificial intelligence (AI) to diagnose monkeypox from digital skin images, due primarily to AI’s success in COVID-19 identification. The VGG16, VGG19, ResNet50, ResNet101, DenseNet201, and AlexNet models were used in our proposed method to classify patients with monkeypox symptoms against other conditions of a similar appearance (chickenpox, measles, and normal skin). The majority of images in our research are collected from publicly available datasets. This study suggests an adaptive k-means clustering image segmentation technique that delivers precise segmentation results with straightforward operation. Our preliminary computational findings reveal that the proposed model could accurately detect patients with monkeypox. The best overall accuracy achieved by ResNet101 is 94.25%, with an AUC of 98.59%. Additionally, we describe the categorization of our model utilizing feature extraction using Local Interpretable Model-Agnostic Explanations (LIME), which provides a more in-depth understanding of particular properties that distinguish the monkeypox virus. Full article
(This article belongs to the Special Issue Medical Image Processing and Analysis)
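
The LIME step described above can be sketched as follows; the classifier here is a random stand-in for the trained CNN (e.g., ResNet101) and the image is synthetic, so the code only shows how the lime package is typically driven (lime and scikit-image assumed installed).

    import numpy as np
    from lime import lime_image
    from skimage.segmentation import mark_boundaries

    def predict_fn(images):
        # Stand-in for the trained CNN: returns class probabilities for a batch of
        # H x W x 3 images (monkeypox, chickenpox, measles, normal).
        rng = np.random.default_rng(0)
        p = rng.random((len(images), 4))
        return p / p.sum(axis=1, keepdims=True)

    image = np.random.rand(224, 224, 3)            # placeholder skin image in [0, 1]

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(image, predict_fn, top_labels=2,
                                             hide_color=0, num_samples=200)
    temp, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                                positive_only=True, num_features=5,
                                                hide_rest=False)
    overlay = mark_boundaries(temp, mask)          # regions LIME credits for the prediction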

Article
BRMI-Net: Deep Learning Features and Flower Pollination-Controlled Regula Falsi-Based Feature Selection Framework for Breast Cancer Recognition in Mammography Images
Diagnostics 2023, 13(9), 1618; https://doi.org/10.3390/diagnostics13091618 - 03 May 2023
Viewed by 580
Abstract
The early detection of breast cancer using mammogram images is critical for lowering women’s mortality rates and allowing for proper treatment. Deep learning techniques are commonly used for feature extraction and have demonstrated significant performance in the literature. However, these features do not perform well in several cases due to redundant and irrelevant information. We created a new framework for diagnosing breast cancer from mammogram images using entropy-controlled deep learning and flower pollination optimization. In the proposed framework, a filter fusion-based method for contrast enhancement is developed. The pre-trained ResNet-50 model is then improved and trained using transfer learning on both the original and enhanced datasets. Deep features are extracted and combined into a single vector in the following phase using a serial technique known as serial mid-value features. The top features are then classified using neural networks and machine learning classifiers in the following stage. To accomplish this, a technique for flower pollination optimization with entropy control has been developed. The experiments used three publicly available datasets: CBIS-DDSM, INbreast, and MIAS. On these selected datasets, the proposed framework achieved 93.8, 99.5, and 99.8% accuracy, respectively. Compared with current methods, the framework achieves higher accuracy and lower computational time. Full article
(This article belongs to the Special Issue Medical Image Processing and Analysis)
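
As a simplified stand-in for the serial fusion and entropy-controlled selection steps above (the paper's serial mid-value fusion and flower-pollination optimizer are not reproduced), the sketch below concatenates two placeholder deep-feature matrices, ranks features by Shannon entropy, and trains a small neural network on the selected subset.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def shannon_entropy(column, bins=32):
        hist, _ = np.histogram(column, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    # Placeholder deep features extracted from the original and contrast-enhanced mammograms.
    f_original = np.random.rand(500, 2048)
    f_enhanced = np.random.rand(500, 2048)
    labels = np.random.randint(0, 2, 500)

    fused = np.hstack([f_original, f_enhanced])     # serial (concatenation) fusion

    scores = np.apply_along_axis(shannon_entropy, 0, fused)
    top = np.argsort(scores)[::-1][:512]            # keep the 512 highest-entropy features
    selected = fused[:, top]

    clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300).fit(selected, labels)
    print("training accuracy:", clf.score(selected, labels))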

Article
A Framework of Faster CRNN and VGG16-Enhanced Region Proposal Network for Detection and Grade Classification of Knee RA
Diagnostics 2023, 13(8), 1385; https://doi.org/10.3390/diagnostics13081385 - 10 Apr 2023
Viewed by 641
Abstract
We developed a framework to detect and grade knee RA using digital X-radiation images and used it to demonstrate the ability of deep learning approaches to detect knee RA using a consensus-based decision (CBD) grading system. The study aimed to evaluate the efficiency with which a deep learning approach based on artificial intelligence (AI) can find and determine the severity of knee RA in digital X-radiation images. The study comprised people over 50 years with RA symptoms, such as knee joint pain, stiffness, crepitus, and functional impairments. The digitized X-radiation images of the people were obtained from the BioGPS database repository. We used 3172 digital X-radiation images of the knee joint from an anterior–posterior perspective. The trained Faster-CRNN architecture was used to identify the knee joint space narrowing (JSN) area in digital X-radiation images and extract the features using ResNet-101 with domain adaptation. In addition, we employed another well-trained model (VGG16 with domain adaptation) for knee RA severity classification. Medical experts graded the X-radiation images of the knee joint using a consensus-based decision score. We trained the enhanced-region proposal network (ERPN) using this manually extracted knee area as the test dataset image. An X-radiation image was fed into the final model, and a consensus decision was used to grade the outcome. The presented model correctly identified the marginal knee JSN region with 98.97% of accuracy, with a total knee RA intensity classification accuracy of 99.10%, with a sensitivity of 97.3%, a specificity of 98.2%, a precision of 98.1%, and a dice score of 90.1% compared with other conventional models. Full article
(This article belongs to the Special Issue Medical Image Processing and Analysis)
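
The detection stage above can be approximated with an off-the-shelf detector: the sketch below runs torchvision's Faster R-CNN on a placeholder radiograph and keeps high-confidence boxes as candidate joint-space regions. It is not the paper's Faster-CRNN/ERPN with ResNet-101 domain adaptation, and in practice the detector would be fine-tuned on annotated knee joint-space-narrowing regions (torchvision >= 0.13 assumed).

    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    xray = [torch.rand(3, 512, 512)]               # placeholder digitized knee X-ray
    with torch.no_grad():
        detections = model(xray)[0]

    # Keep high-confidence detections as candidate joint space narrowing (JSN) regions.
    boxes = detections["boxes"][detections["scores"] > 0.5]
    print(boxes)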

Article
An Improved Skin Lesion Boundary Estimation for Enhanced-Intensity Images Using Hybrid Metaheuristics
Diagnostics 2023, 13(7), 1285; https://doi.org/10.3390/diagnostics13071285 - 28 Mar 2023
Viewed by 671
Abstract
The demand for the accurate and timely identification of melanoma as a major skin cancer type is increasing daily. Due to the advent of modern tools and computer vision techniques, it has become easier to perform analysis. Skin cancer classification and segmentation techniques require clear lesions segregated from the background for efficient results. Many studies resolve the matter partly. However, there exists plenty of room for new research in this field. Recently, many algorithms have been presented to preprocess skin lesions, aiding the segmentation algorithms to generate efficient outcomes. Nature-inspired algorithms and metaheuristics help to estimate the optimal parameter set in the search space. This research article proposes a hybrid metaheuristic preprocessor, BA-ABC, to improve the quality of images by enhancing their contrast and preserving the brightness. The statistical transformation function, which helps to improve the contrast, is based on a parameter set estimated through the proposed hybrid metaheuristic model for every image in the dataset. For experimentation purposes, we have utilised three publicly available datasets, ISIC-2016, 2017 and 2018. The efficacy of the presented model is validated through some state-of-the-art segmentation algorithms. The visual outcomes of the boundary estimation algorithms and performance matrix validate that the proposed model performs well. The proposed model improves the dice coefficient to 94.6% in the results. Full article
(This article belongs to the Special Issue Medical Image Processing and Analysis)
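
A toy version of the parameter-search idea above: a simple contrast transform whose two parameters are chosen by plain random search to maximize histogram entropy while penalizing brightness drift. The transform, the fitness function, and the random search stand in for the paper's statistical transformation and hybrid bat/artificial-bee-colony (BA-ABC) optimizer.

    import numpy as np

    def enhance(img, alpha, beta):
        # Simple linear contrast stretch around the mean; the paper's transform differs.
        return np.clip((img - img.mean()) * alpha + beta, 0.0, 1.0)

    def fitness(orig, out):
        hist, _ = np.histogram(out, bins=64, range=(0, 1))
        p = hist / hist.sum()
        p = p[p > 0]
        entropy = -(p * np.log2(p)).sum()                     # reward richer contrast
        return entropy - 5.0 * abs(out.mean() - orig.mean())  # penalize brightness drift

    image = np.random.rand(256, 256)                          # placeholder dermoscopy image in [0, 1]

    rng = np.random.default_rng(0)
    candidates = rng.uniform([0.5, 0.2], [3.0, 0.8], size=(200, 2))
    best = max(candidates, key=lambda ab: fitness(image, enhance(image, ab[0], ab[1])))
    enhanced = enhance(image, best[0], best[1])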

Article
BC2NetRF: Breast Cancer Classification from Mammogram Images Using Enhanced Deep Learning Features and Equilibrium-Jaya Controlled Regula Falsi-Based Features Selection
Diagnostics 2023, 13(7), 1238; https://doi.org/10.3390/diagnostics13071238 - 25 Mar 2023
Cited by 3 | Viewed by 995
Abstract
One of the most frequent cancers in women is breast cancer, and in the year 2022, approximately 287,850 new cases have been diagnosed. From them, 43,250 women died from this cancer. An early diagnosis of this cancer can help to overcome the mortality rate. However, the manual diagnosis of this cancer using mammogram images is not an easy process and always requires an expert person. Several AI-based techniques have been suggested in the literature. However, still, they are facing several challenges, such as similarities between cancer and non-cancer regions, irrelevant feature extraction, and weak training models. In this work, we proposed a new automated computerized framework for breast cancer classification. The proposed framework improves the contrast using a novel enhancement technique called haze-reduced local-global. The enhanced images are later employed for the dataset augmentation. This step aimed at increasing the diversity of the dataset and improving the training capability of the selected deep learning model. After that, a pre-trained model named EfficientNet-b0 was employed and fine-tuned to add a few new layers. The fine-tuned model was trained separately on original and enhanced images using deep transfer learning concepts with static hyperparameters’ initialization. Deep features were extracted from the average pooling layer in the next step and fused using a new serial-based approach. The fused features were later optimized using a feature selection algorithm known as Equilibrium-Jaya controlled Regula Falsi. The Regula Falsi was employed as a termination function in this algorithm. The selected features were finally classified using several machine learning classifiers. The experimental process was conducted on two publicly available datasets—CBIS-DDSM and INbreast. For these datasets, the achieved average accuracy is 95.4% and 99.7%. A comparison with state-of-the-art (SOTA) technology shows that the obtained proposed framework improved the accuracy. Moreover, the confidence interval-based analysis shows consistent results of the proposed framework. Full article
(This article belongs to the Special Issue Medical Image Processing and Analysis)
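
A minimal transfer-learning sketch for the fine-tuning and deep-feature-extraction steps above, using torchvision's EfficientNet-B0 with a two-class head; the batch, labels, learning rate, and class count are placeholders, and the haze-reduced enhancement, serial fusion, and Equilibrium-Jaya selection are not reproduced (torchvision >= 0.13 assumed).

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
    model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)   # benign vs. malignant

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    images = torch.rand(8, 3, 224, 224)            # placeholder mammogram batch
    labels = torch.randint(0, 2, (8,))

    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

    # Deep features for later fusion/selection can be read from the average-pooling output.
    feature_extractor = nn.Sequential(model.features, model.avgpool, nn.Flatten())
    print(feature_extractor(images).shape)          # torch.Size([8, 1280])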

Article
Blockchain-Based Deep CNN for Brain Tumor Prediction Using MRI Scans
Diagnostics 2023, 13(7), 1229; https://doi.org/10.3390/diagnostics13071229 - 24 Mar 2023
Viewed by 1057
Abstract
Brain tumors are nonlinear and present with variations in their size, form, and texture; this might make it difficult to diagnose them and perform surgical excision using magnetic resonance imaging (MRI) scans. The procedures that are currently available are conducted by radiologists, brain surgeons, and clinical specialists. Studying brain MRIs is laborious, error-prone, and time-consuming, but they nonetheless show high positional accuracy in the case of brain cells. The proposed convolutional neural network model is combined with an existing blockchain-based method to secure the network for the precise prediction of brain tumors, such as pituitary tumors, meningioma tumors, and glioma tumors. MRI scans of the brain are first put into pre-trained deep models after being normalized in a fixed dimension. These structures are altered at each layer, increasing their security and safety. To guard against potential layer deletions, modification attacks, and tampering, each layer has an additional block that stores specific information. Multiple blocks are used to store information, including blocks related to each layer, cloud ledger blocks kept in cloud storage, and ledger blocks connected to the network. Later, the features are retrieved, merged, and optimized utilizing a Genetic Algorithm, attaining competitive performance compared with the state-of-the-art (SOTA) methods using different ML classifiers. Full article
(This article belongs to the Special Issue Medical Image Processing and Analysis)
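
The per-layer integrity idea above can be illustrated with a toy hash chain over a network's parameters: each block stores a fingerprint of one layer's weights plus the previous block's hash, so deleting or modifying a layer breaks the chain. This is a conceptual sketch only; the paper's cloud-ledger design and Genetic Algorithm feature optimization are not reproduced, and the small model here is never trained or executed.

    import hashlib
    import json
    import torch.nn as nn

    def layer_fingerprint(layer):
        # SHA-256 over the layer's raw parameter bytes; any tampering changes the digest.
        data = b"".join(p.detach().numpy().tobytes() for p in layer.parameters())
        return hashlib.sha256(data).hexdigest()

    def build_ledger(model):
        ledger, prev_hash = [], "0" * 64
        for name, layer in model.named_children():
            block = {"layer": name,
                     "fingerprint": layer_fingerprint(layer),
                     "prev_hash": prev_hash}
            prev_hash = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
            block["hash"] = prev_hash
            ledger.append(block)
        return ledger

    model = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU(), nn.Conv2d(8, 16, 3),
                          nn.Flatten(), nn.Linear(16, 4))
    for block in build_ledger(model):
        print(block["layer"], block["hash"][:16])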

Article
Accurate Detection of Alzheimer’s Disease Using Lightweight Deep Learning Model on MRI Data
Diagnostics 2023, 13(7), 1216; https://doi.org/10.3390/diagnostics13071216 - 23 Mar 2023
Viewed by 950
Abstract
Alzheimer’s disease (AD) is a neurodegenerative disorder characterized by cognitive impairment and aberrant protein deposition in the brain. Therefore, the early detection of AD is crucial for the development of effective treatments and interventions, as the disease is more responsive to treatment in its early stages. It is worth mentioning that deep learning techniques have been successfully applied in recent years to a wide range of medical imaging tasks, including the detection of AD. These techniques have the ability to automatically learn and extract features from large datasets, making them well suited for the analysis of complex medical images. In this paper, we propose an improved lightweight deep learning model for the accurate detection of AD from magnetic resonance imaging (MRI) images. Our proposed model achieves high detection performance without the need for deeper layers and eliminates the use of traditional methods such as feature extraction and classification by combining them all into one stage. Furthermore, our proposed method consists of only seven layers, making the system less complex than other previous deep models and less time-consuming to process. We evaluate our proposed model using a publicly available Kaggle dataset, which contains a large number of records in a small dataset size of only 36 Megabytes. Our model achieved an overall accuracy of 99.22% for binary classification and 95.93% for multi-classification tasks, which outperformed other previous models. Our study is the first to combine all methods used in the publicly available Kaggle dataset for AD detection, enabling researchers to work on a dataset with new challenges. Our findings show the effectiveness of our lightweight deep learning framework to achieve high accuracy in the classification of AD. Full article
(This article belongs to the Special Issue Medical Image Processing and Analysis)
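
A small end-to-end CNN in the spirit of the lightweight model described above; the exact layer composition, channel sizes, input resolution, and the four output classes are assumptions for illustration, not the authors' architecture.

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 32 * 32, 4),     # e.g., non-/very mild/mild/moderate dementia
    )

    mri = torch.rand(8, 1, 128, 128)    # placeholder grayscale MRI slices
    print(model(mri).shape)             # torch.Size([8, 4])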

Article
End-to-End Deep Learning Method for Detection of Invasive Parkinson’s Disease
Diagnostics 2023, 13(6), 1088; https://doi.org/10.3390/diagnostics13061088 - 13 Mar 2023
Viewed by 579
Abstract
Parkinson’s disease directly affects the nervous system and causes changes in voice, lower efficiency in daily routine tasks, failure of organs, and death. As an estimate, nearly ten million people are suffering from Parkinson’s disease worldwide, and this number is increasing day by day. The main cause of an increase in Parkinson’s disease patients is the unavailability of reliable procedures for diagnosing Parkinson’s disease. In the literature, we observed different methods for diagnosing Parkinson’s disease such as gait movement, voice signals, and handwriting tests. The detection of Parkinson’s disease is a difficult task because the important features that can help in detecting Parkinson’s disease are unknown. Our aim in this study is to extract those essential voice features which play a vital role in detecting Parkinson’s disease and develop a reliable model which can diagnose Parkinson’s disease at its early stages. Early diagnostic systems for the detection of Parkinson’s disease are needed to diagnose Parkinson’s disease early so that it can be controlled at the initial stages, but existing models have limitations that can lead to the misdiagnosing of the disease. Our proposed model can assist practitioners in continuously monitoring the Parkinson’s disease rating scale, known as the Total Unified Parkinson’s Disease Rating Scale, which can help practitioners in treating their patients. The proposed model can detect Parkinson’s disease with an error of 0.10 RMSE, which is lower than that of existing models. The proposed model has the capability to extract vital voice features which can help detect Parkinson’s disease in its early stages. Full article
(This article belongs to the Special Issue Medical Image Processing and Analysis)
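
The rating-scale prediction and RMSE evaluation mentioned above can be sketched with a classical regressor on placeholder acoustic features; the paper proposes an end-to-end deep model, so the estimator, feature set, and target range below are stand-ins for illustration only.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    # Placeholder acoustic features (e.g., jitter, shimmer, HNR) and Total-UPDRS targets.
    X = np.random.rand(600, 16)
    y = np.random.rand(600) * 50

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    reg = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

    rmse = np.sqrt(mean_squared_error(y_te, reg.predict(X_te)))
    print("RMSE:", rmse)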

Article
Diabetic Retinopathy and Diabetic Macular Edema Detection Using Ensemble Based Convolutional Neural Networks
Diagnostics 2023, 13(5), 1001; https://doi.org/10.3390/diagnostics13051001 - 06 Mar 2023
Viewed by 864
Abstract
Diabetic retinopathy (DR) and diabetic macular edema (DME) are forms of eye illness caused by diabetes that affects the blood vessels in the eyes, with the area occupied by lesions of varied extent determining the disease burden. These are among the most common causes of visual impairment in the working population. Various factors have been discovered to play an important role in a person’s development of this condition. Among the essential elements at the top of the list are anxiety and long-term diabetes. If not detected early, this illness might result in permanent eyesight loss. The damage can be reduced or avoided if it is recognized ahead of time. Unfortunately, due to the time and arduous nature of the diagnosing process, it is harder to identify the prevalence of this condition. Skilled doctors manually review digital color images to look for damage produced by vascular anomalies, the most common complication of diabetic retinopathy. Even though this procedure is reasonably accurate, it is quite costly. The delays highlight the necessity for diagnosis to be automated, which will have a considerable positive impact on the health sector. The use of AI in diagnosing the disease has yielded promising and dependable findings in recent years, which is the impetus for this publication. This article used an ensemble convolutional neural network (ECNN) to diagnose DR and DME automatically, with an accuracy of 99 percent. This result was achieved using preprocessing, blood vessel segmentation, feature extraction, and classification. For contrast enhancement, the Harris hawks optimization (HHO) technique is presented. Finally, the experiments were conducted on two datasets, IDRiR and Messidor, and evaluated for accuracy, precision, recall, F-score, computational time, and error rate. Full article
(This article belongs to the Special Issue Medical Image Processing and Analysis)
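
The ensemble step above reduces to soft voting over the member networks' class probabilities; the sketch below averages placeholder outputs from three models, with the class set and probabilities invented for illustration.

    import numpy as np

    def ensemble_predict(prob_list):
        # Soft voting: average per-model class probabilities, then take the argmax.
        stacked = np.stack(prob_list)              # (n_models, n_samples, n_classes)
        return stacked.mean(axis=0).argmax(axis=1)

    # Placeholder outputs from three CNNs on five fundus images over three
    # illustrative classes (healthy, DR, DME).
    rng = np.random.default_rng(0)
    probs = [rng.dirichlet(np.ones(3), size=5) for _ in range(3)]
    print(ensemble_predict(probs))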

Article
Hybrid Multilevel Thresholding Image Segmentation Approach for Brain MRI
Diagnostics 2023, 13(5), 925; https://doi.org/10.3390/diagnostics13050925 - 01 Mar 2023
Cited by 1 | Viewed by 1041
Abstract
A brain tumor is an abnormal growth of tissues inside the skull that can interfere with the normal functioning of the neurological system and the body, and it is responsible for the deaths of many individuals every year. Magnetic Resonance Imaging (MRI) techniques are widely used for detection of brain cancers. Segmentation of brain MRI is a foundational process with numerous clinical applications in neurology, including quantitative analysis, operational planning, and functional imaging. The segmentation process classifies the pixel values of the image into different groups based on the intensity levels of the pixels and a selected threshold value. The quality of the medical image segmentation extensively depends on the method which selects the threshold values of the image for the segmentation process. The traditional multilevel thresholding methods are computationally expensive since these methods thoroughly search for the best threshold values to maximize the accuracy of the segmentation process. Metaheuristic optimization algorithms are widely used for solving such problems. However, these algorithms suffer from the problem of local optima stagnation and slow convergence speed. In this work, the original Bald Eagle Search (BES) algorithm problems are resolved in the proposed Dynamic Opposite Bald Eagle Search (DOBES) algorithm by employing Dynamic Opposition Learning (DOL) at the initial, as well as exploitation, phases. Using the DOBES algorithm, a hybrid multilevel thresholding image segmentation approach has been developed for MRI image segmentation. The hybrid approach is divided into two phases. In the first phase, the proposed DOBES optimization algorithm is used for the multilevel thresholding. After the selection of the thresholds for the image segmentation, the morphological operations have been utilized in the second phase to remove the unwanted area present in the segmented image. The performance efficiency of the proposed DOBES based multilevel thresholding algorithm with respect to BES has been verified using the five benchmark images. The proposed DOBES based multilevel thresholding algorithm attains higher Peak Signal-to-Noise ratio (PSNR) and Structured Similarity Index Measure (SSIM) value in comparison to the BES algorithm for the benchmark images. Additionally, the proposed hybrid multilevel thresholding segmentation approach has been compared with the existing segmentation algorithms to validate its significance. The results show that the proposed algorithm performs better for tumor segmentation in MRI images as the SSIM value attained using the proposed hybrid segmentation approach is nearer to 1 when compared with ground truth images. Full article
(This article belongs to the Special Issue Medical Image Processing and Analysis)
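
A simplified version of the two-phase approach above, using multi-Otsu thresholding in place of the DOBES-optimized thresholds and a morphological opening for clean-up; the image is synthetic and scikit-image is assumed installed.

    import numpy as np
    from skimage.filters import threshold_multiotsu
    from skimage.morphology import binary_opening, disk

    image = np.random.rand(256, 256)               # placeholder brain MRI slice in [0, 1]

    # Phase 1: two thresholds -> three intensity classes (multi-Otsu stands in for DOBES).
    thresholds = threshold_multiotsu(image, classes=3)
    regions = np.digitize(image, bins=thresholds)

    # Phase 2: morphological clean-up of the brightest class (candidate tumor region).
    candidate = regions == 2
    cleaned = binary_opening(candidate, disk(2))
    print(cleaned.sum(), "pixels retained")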

Article
D2BOF-COVIDNet: A Framework of Deep Bayesian Optimization and Fusion-Assisted Optimal Deep Features for COVID-19 Classification Using Chest X-ray and MRI Scans
Diagnostics 2023, 13(1), 101; https://doi.org/10.3390/diagnostics13010101 - 29 Dec 2022
Cited by 4 | Viewed by 2636
Abstract
Background and Objective: In 2019, a coronavirus disease (COVID-19) was detected in China that affected millions of people around the world. On 11 March 2020, the WHO declared this disease a pandemic. Currently, more than 200 countries in the world have been affected by this disease. The manual diagnosis of this disease using chest X-ray (CXR) images and magnetic resonance imaging (MRI) is time-consuming and always requires an expert person; therefore, researchers introduced several computerized techniques using computer vision methods. The recent computerized techniques face some challenges, such as low-contrast CXR images, the manual initialization of hyperparameters, and redundant features that mislead the classification accuracy. Methods: In this paper, we proposed a novel framework for COVID-19 classification using deep Bayesian optimization and improved canonical correlation analysis (ICCA). In this proposed framework, we initially performed data augmentation for better training of the selected deep models. After that, two pre-trained deep models were employed (ResNet50 and InceptionV3) and trained using transfer learning. The hyperparameters of both models were initialized through Bayesian optimization. Both trained models were utilized for feature extraction and fused using an ICCA-based approach. The fused features were further optimized using an improved tree growth optimization algorithm and finally classified using a neural network classifier. Results: The experimental process was conducted on five publicly available datasets and achieved an accuracy of 99.6, 98.5, 99.9, 99.5, and 100%. Conclusion: The comparison with recent methods and t-test-based analysis showed the significance of this proposed framework. Full article
(This article belongs to the Special Issue Medical Image Processing and Analysis)
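
The ICCA fusion step above can be approximated with plain canonical correlation analysis: project two placeholder deep-feature matrices into a shared space and concatenate the projections before classification. The Bayesian hyperparameter optimization and improved tree-growth selection are not reproduced, and all arrays, dimensions, and the MLP classifier below are assumptions.

    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.neural_network import MLPClassifier

    # Placeholder deep features from two backbones (e.g., ResNet50 and InceptionV3).
    feats_a = np.random.rand(300, 256)
    feats_b = np.random.rand(300, 256)
    labels = np.random.randint(0, 2, 300)          # COVID-19 vs. normal

    cca = CCA(n_components=32)
    a_c, b_c = cca.fit_transform(feats_a, feats_b) # shared 32-dimensional projections
    fused = np.hstack([a_c, b_c])                  # simple CCA-based fusion

    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(fused, labels)
    print("training accuracy:", clf.score(fused, labels))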
