Review

Classification and Segmentation of Diabetic Retinopathy: A Systemic Review

1
Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47040, Pakistan
2
Department of Computer Science, University of Wah, Wah Cantt 47040, Pakistan
3
Department of Computer Science, University of Education, Jauharabad Campus, Khushāb 41200, Pakistan
4
Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
5
Artificial Intelligence Research Center (AIRC), Ajman University, Ajman 346, United Arab Emirates
6
Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 13-5053, Lebanon
7
University of Zilina, Univerzitna 1, 010 26 Zilina, Slovakia
*
Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(5), 3108; https://doi.org/10.3390/app13053108
Submission received: 29 January 2023 / Revised: 14 February 2023 / Accepted: 16 February 2023 / Published: 28 February 2023

Abstract

Diabetic retinopathy (DR) is a major cause of blindness worldwide. Ophthalmologists manually analyze morphological alterations in the retinal vasculature and lesions in fundus images, which is a time-consuming, costly, and challenging procedure. This task can be made easier with the assistance of computer-aided diagnostic systems (CADs), which are utilized for the diagnosis of DR lesions. Artificial intelligence (AI)-based machine/deep learning methods play a vital role in increasing the performance of the detection process, especially in the context of analyzing medical fundus images. In this paper, several current approaches to preprocessing, segmentation, feature extraction/selection, and classification are discussed for the detection of DR lesions. This survey also includes a detailed description of the DR datasets that are accessible to researchers for the identification of DR lesions. The limitations and challenges of existing methods are also addressed, which will assist novice researchers in starting their work in this domain.

1. Introduction

Diabetic retinopathy (DR) is a severe eye condition that results in vision loss. Unfortunately, the illness remains silent in its initial stages and is detected through routine eye checkups [1]. DR has become more common as diabetic patients’ life expectancy has increased. Untreated and serious cases of DR can result in blindness, so regular retinal screening is necessary for DR patients to avoid becoming visually impaired [2]. DR is a leading cause of blindness in people under the age of 50. According to some experts, 90% of diabetic people who receive an early diagnosis may be saved from the disease [3]. According to the WHO, it is predicted that about 600 million people will have diabetes by 2040, and one-third of them will have DR [4]. People are becoming more prone to DR every day; the number has been estimated at 191.0 million by 2030. In the early stage, no symptoms appear; hence, the detection of DR is a difficult task [5]. The thin layer at the back of the eye is known as the retina; it manages the light-sensing process, converts light into signals, and sends them to the brain [6]. The optic disc (OD) is a disc-like region on the retina formed by the axons of retinal ganglion cells, which transmit signals from the eye’s photoreceptors toward the optic nerve; it supports vision.
All layers of the retina are supplied with blood by tiny capillaries, which are vulnerable to damage when blood sugar levels are elevated. When the glucose level in the blood increases, the vessels start to disintegrate because the cells do not receive enough oxygen [7]. Blockage of retinal vessels can cause serious eye injury. The metabolic rate then decreases, allowing DR to develop through structural anomalies in the vessels [8]. This can lead to blindness at advanced stages. Worldwide, DR is the main cause of 2.6% of visual deterioration [9]. DR is identified through the presence of several types of lesions, such as microaneurysms (MAs), hemorrhages (HMs), hard exudate (HE), and soft exudate (SE), as well as the appearance of the OD and blood vessels in the retina, as shown in Figure 1.
  • MAs are the initial indication of DR and appear as microscopic red circular marks on the retina caused by the breakdown of vessel walls. The dots on retinal fundus images have sharp margins and a size of less than 125 μm [11].
  • Hemorrhages (HMs) appear as large patches on the retina with irregular edges greater than 125 μm. They occur when blood leaking from blocked retinal vessels impairs vision. HMs are further classified into two categories: flame (superficial HMs) and blot (deeper HMs) [12].
  • Hard exudate (HE) appears as waxy yellow patches on the retina due to plasma leakage. HE is caused by lipoproteins that leak from MAs and accumulate in the retina.
  • Soft exudate (SE) appears as white fluffy patches on the retina with indistinct edges, caused by the swelling of nerve fibers [13].
Dark red lesions on the retina indicate the presence of MAs and HMs, while bright lesions indicate the presence of HE and SE. The DR process comprises two distinct stages, nonproliferative DR (NPDR) and proliferative DR (PDR), as listed in Table 1. NPDR develops when diabetes damages the retina’s blood vessels, causing blood to seep onto the retina’s surface. The leakage of blood reduces the sensitivity of the retina; the retina therefore becomes swollen and wet [14]. The different lesions of DR, such as MAs, HMs, HE, and SE, occur at this stage [14]. PDR arises when pre-existing micro blood vessels in numerous parts of the retina produce new abnormal blood vessels. Depending on the presence of these lesions, NPDR is further classified into three phases: mild NPDR (MAs only), moderate NPDR (MAs and HE), and severe NPDR (intra-retinal HMs and intra-retinal microvascular abnormalities) [15].
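The lesion-based grading criteria above can be expressed as a simple rule table. The following sketch is a hypothetical helper, not taken from any of the surveyed papers; the lesion labels and function name are assumptions chosen for illustration:

```python
def grade_npdr(lesions):
    """Map a set of detected lesion labels to an NPDR phase.

    Illustrative rules only, following the criteria above:
    MAs only -> mild; MAs + HE -> moderate;
    intra-retinal HMs or microvascular abnormalities (IRMA) -> severe.
    """
    lesions = set(lesions)
    if {"HM", "IRMA"} & lesions:
        return "severe NPDR"
    if lesions == {"MA", "HE"}:
        return "moderate NPDR"
    if lesions == {"MA"}:
        return "mild NPDR"
    return "no NPDR / unclassified"

print(grade_npdr({"MA"}))         # mild NPDR
print(grade_npdr({"MA", "HE"}))   # moderate NPDR
print(grade_npdr({"MA", "HM"}))   # severe NPDR
```

Real grading additionally weighs lesion counts and locations, which a rule table this small cannot capture.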
DR severity increases from normal to moderate and then from moderate to critical PDR, which is potentially vision-threatening. Highly skilled ophthalmologists are required for the manual detection of DR, which is an inefficient and difficult task. Implementing accurate machine learning methods to detect DR automatically can therefore prevent such shortcomings. Automatic techniques and screening systems for DR detection are time-saving, cost-saving, and efficient compared to manual diagnosis methods. CADs depend on machine learning methods and are utilized in DR screening to distinguish retinas with suspected DR from healthy retinas [15]. Figure 2 presents the overall framework of the survey, while Table 2 shows a comparison of this study with other, already existing surveys.
The graph in Figure 3 shows an overview of research strategies in terms of preprocessing, segmentation, and features for the classification of DR lesions.

2. Methods for Identification of DR

DR identification at an initial stage is crucial; with early treatment and diagnosis, the disease’s progression can be slowed. Due to two significant vision-threatening conditions, PDR and diabetic macular edema (DME), DR develops at varying rates in different people. Consequently, researchers nowadays provide a wide range of techniques and methods for DR detection. Figure 4 depicts the process of the early identification of DR.

3. Preprocessing

Preprocessing methods are utilized to convert raw data into meaningful information. Preprocessing is used to correct low-contrast illumination, remove noise and blurriness, and enhance the image, and it improves the performance of the model. Some preprocessing techniques applied to the Messidor dataset are shown in Figure 5. CLAHE is applied to increase picture contrast and features by emphasizing anomalies in the Messidor and Kaggle datasets. It is a variant of histogram equalization used to reduce noise distortion in an image. This technique was applied to two benchmark datasets and achieved 98.50% and 98% accuracy on the Messidor and Kaggle datasets, respectively [22]. The APTOS dataset contains images of different sizes and background space. First, all images are resized to equal sizes; then a deformable registration method based on B-splines is applied to remove the image background so that the retina occupies the entire image area. This preprocessing approach was utilized on the APTOS dataset for the classification of DR lesions and achieved 85.25% accuracy [23]. To balance the highly imbalanced Kaggle dataset, different augmentation operations such as shearing, cropping, translating, flipping, zooming, rotating, GST, and Krizhevsky augmentation are used; then the Gaussian blur filter and the NLMD approach are employed to improve image quality for the detection of NPDR and PDR lesions. This preprocessing technique gives the best accuracy of 97.10% for the classification of DR lesions [24,25,26,27,28,29,30,31,32,33,34,35,36,37]. The morphological gradient (MG) technique is used to sharpen the edges of the image by applying dilation and erosion functions for better detection of DR lesions, and it obtained an accuracy of 99.81% on the Kaggle dataset [38]. Grayscale conversion and shade correction techniques are used to discriminate between DR and no DR.
The approach was applied to the DRIVE dataset and gives an accuracy of 95.42% [39,40]. Two preprocessing operations were performed for OD and retinal blood vessel segmentation on the DIARETDB0 and DIARETDB1 datasets: first, conversion of the RGB image into grayscale; then a median filter is applied to minimize the effect of noise while maintaining sharp edges in the retinal image [41]. Data augmentation approaches such as flipping, cropping, and rotation are applied to each image of the Messidor-2 dataset for early DR detection. This technique improves the classification model’s accuracy to 99.2% [42]. The bounding-box method is used to eliminate extra background from the fundus images. This method was performed on the publicly accessible Messidor-1 and Kaggle datasets and obtained 72.33% and 82.18% accuracy, respectively [43]. CLAHE is used to improve the image’s visual appeal and enhance its quality by eliminating noise for the detection of mild NPDR [44]. The RGB image is transformed into grayscale, then adaptive histogram equalization is employed to adjust image contrast and eliminate noise; CLAHE is applied for exudate detection. These methods were applied to the DIARETDB0 and DIARETDB1 datasets and improved model accuracy to 87.20% on DIARETDB0 and 85.80% on DIARETDB1 [45]. After eliminating the image background and resizing the fundus images of the Kaggle dataset, Gaussian blur is employed to remove noise for early detection of NPDR and PDR lesions, obtaining an accuracy of 90% [46]. A Gaussian filter is applied to eliminate noise from digital fundus images; and since some parts of the retinal images contain no information, cropping is applied to remove these regions, which enhances the DR lesions in the Kaggle and APTOS datasets [47].
To increase image robustness, a Gaussian filter and CLAHE are utilized to strengthen the contrast of retinal fundus images, because MAs, HMs, and blood vessels appear indistinguishable in color space [48]. For better image contrast, cumulative histogram equalization and CLAHE were used: cumulative histogram equalization alters the histogram intensity distribution to improve contrast, while CLAHE enhances the appearance of MAs in retinal images [11,49]. The retinal capillaries are eliminated from the image using morphological techniques, after which CLAHE is applied. After applying this preprocessing technique to the IDRiD dataset, the model gives an accuracy of 83.84% [50]. Gabor filters are used to extract textural information from fundus images to identify MAs [51]. High-pass and top-hat filters are used with morphological operators to create binary images for blood vessel segmentation [52]. After resizing, a channel-splitting approach is implemented to split image patches into red, blue, and green channels. This technique was applied to the IDRiD dataset for the detection of exudates and obtained an accuracy of 96.95% [53].
The results of the preprocessing methods are visualized in Figure 5, in which different preprocessing techniques such as CLAHE, grayscale conversion, Gaussian smoothing, Gabor filtering, and cumulative histogram equalization are applied to Messidor images [54]. The retinal images are transformed into Lab color space, and a histogram equalization approach is then applied to improve the brightness of the scans. An adaptive filter is employed to improve the segmentation of blood vessels [55]. A Gaussian blur mask is applied to the publicly accessible APTOS-2019 dataset for noise reduction; an average color local filter is then employed to enhance the retinal image, obtaining 90% accuracy [56]. The retinal images were resized and transformed into grayscale; the green channel was then utilized for preprocessing. The top-hat transform was utilized to enhance low-severity regions such as MAs, HMs, and blood vessels. The applied technique achieved 87% sensitivity and 93% specificity on the DIARETDB0 dataset. Non-local mean filter (NLMF), CLAHE, 2D Gaussian, and top-hat transform are applied to improve image quality, which helps in the recognition of dark retinal lesions such as MAs and HMs. These preprocessing techniques were implemented on three publicly available datasets and give accuracies of 96.95% on e-Ophtha, 97.95% on DIARETDB0, and 97.35% on DIARETDB1 [57]. An average filter is utilized to eliminate the micro blood vessels from retinal images. This technique was applied to the publicly available Kaggle dataset [58]. The wavelet transform approach is employed to strengthen and enhance HE images [59]. An overview of the preprocessing techniques is given in Table 3.
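The green-channel and contrast-enhancement steps that recur in these pipelines can be sketched with plain NumPy. The snippet below uses global histogram equalization as a simplified stand-in for CLAHE (which additionally operates on local tiles and clips the histogram); the function name is illustrative, not from the cited works:

```python
import numpy as np

def equalize_green_channel(rgb):
    """Extract the green channel of an RGB fundus image and apply
    global histogram equalization -- a simplified stand-in for the
    CLAHE preprocessing step described in the survey."""
    green = rgb[..., 1].astype(np.uint8)
    hist = np.bincount(green.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                  # first nonzero CDF value
    # Map each gray level through the normalized CDF.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[green]

# Toy 2x2 "image": a low-contrast patch gets stretched to full range.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 1] = [[100, 110], [120, 130]]
print(equalize_green_channel(img))  # [[0 85] [170 255]]
```

The green channel is commonly chosen because vessels and red lesions show the strongest contrast against the background there.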

4. Segmentation

Segmentation is a crucial process for fundus images because it helps identify regions of interest that are frequently difficult to diagnose, and it greatly aids DR detection. The retinal images are divided into several pixel groups or regions, each representing a different anatomical feature, such as the fovea, OD, micro blood vessels, and multiclass DR lesions including HMs, MAs, SE, and HE [61,62]. This allows the ophthalmologist to perform an eye-screening examination designed for the early detection of DR [63]. An ophthalmologist can manually identify DR by looking at retinal fundus images and analyzing the morphological and macular changes in retinal blood vessels, HMs, HE, and MAs. This is a challenging, expensive, and time-consuming task. It can be easily carried out by an automated system using artificial intelligence, particularly when testing for early DR [19]. Computerized techniques based on DL and other methods have aided early DR detection. Figure 6 shows the segmentation process for DR, in which the images are taken from the IDRiD dataset.
Retinal blood vessel and OD segmentation is performed with the U-Net model. The approach was employed on open-access datasets and obtains an accuracy of 96.60% on EyePACS-1, 93.90% on Messidor-2, and 92.20% on DIARETDB0 [64]. To segment the OD and retinal blood vessels, morphological operations and the 2D discrete wavelet transform are used. The methodologies were evaluated on the DIARETDB1 dataset and obtain 87.56% specificity [65]. A CNN-based U-Net model is utilized to segment HMs; the experiment obtained an accuracy of 98% [66]. For OD segmentation, the watershed transform and an adaptive active contour are used. These methodologies were evaluated on the IDRiD dataset and obtain 60% accuracy [67]. The MSRNet model is proposed for the segmentation of MAs. The model was evaluated on the e_ophtha_MAs dataset and obtains a sensitivity of 71.50% [68]. The CNN-based EAD-Net architecture was proposed to segment multiclass DR lesions such as MAs, HMs, HE, and SE. The experiment was tested on the e_ophtha_EX, IDRiD, and local datasets [69]. A fusing U-Net was employed for OD segmentation [70]. Another approach uses thresholding, edge-based, and region-based segmentation to segment retinal blood vessels [71]. An approach based on GA and FCM is presented for the segmentation of DR lesions; the model was tested on 224 retinal images with a sensitivity of 78% [72]. Mathematical morphology operations are utilized to extract blood vessels for segmentation of the OD using the watershed transform. The experiment was tested on 130 fundus images taken from the DIARETDB0 and DIARETDB1 datasets and performs well, with a sensitivity of 87% and specificity of 93% [60]. A sliding band filter with adaptive thresholding and a region-growing approach is utilized for MA segmentation. The experiment was performed on the e_ophtha_MAs and SCREEN-DR datasets with sensitivities of 64% and 81%, respectively [73].
A residual U-Net model that uses the pre-trained ResNet34 as the encoder is presented to segment DR lesions such as MAs and HE. The network was tested on the e_ophtha_EX and IDRiD datasets and obtains 99.88% accuracy [74]. To segment HE, k-means clustering based on automatic region-growing segmentation is used. The experiment was conducted on the Messidor-2 and RF datasets and obtains 98.83% accuracy [75]. Otsu thresholding and a region-growing technique are employed for OD segmentation; the proposed method achieves a model accuracy of 99% [76]. A modified deep convolutional neural network (DCNN) based on SegNet is employed on the IDRiD dataset [77]. For the segmentation of DR lesions, the local-global U-Nets model is used; it was tested on the ISBI 2018 dataset [78]. A CNN-based residual network is designed for exudate segmentation. The model performs well on the e_ophtha and DIARETDB1 datasets and achieves 98% accuracy [79,80]. To segment DR lesions such as HMs, MAs, HE, and SE, the semantic segmentation model HEDNet is used. The proposed segmentation model achieved a precision of 84.05% [81]. A deep CNN model containing an encoder-decoder is proposed for segmentation of the OD, MAs, HMs, HE, and SE. The model was tested on the Drishti-GS and IDRiD datasets and obtained a Jaccard index (IoU) of 85.72% [82]. A U-Net architecture is designed to segment the OD region. This model was employed on freely accessible datasets and achieved a Dice coefficient of 95.80% [83]. DL models have been developed for MA and exudate segmentation using image patches; for the experiments and performance analysis, e-Ophtha was used as the dataset, achieving 95% accuracy [84]. A methodology using dynamic decision thresholding is proposed to segment exudates; the segmentation approach improves model accuracy to 93.46% [85]. The bat algorithm [86] and a threshold method are utilized for OD segmentation [87].
Adaptive thresholding and mathematical morphological operations have been utilized for exudate segmentation. The experiment was conducted on the Messidor, DIARETDB1, E-Ophtha, and local datasets and gives an accuracy of up to 100% [88]. The circular Hough transform combined with morphological operations is utilized for OD segmentation, and edge-based and morphological approaches are also utilized for OD segmentation [89]. A region-growing technique is proposed to segment the light and dark lesions of DR and assess the effectiveness of the model. The methodology was evaluated on a local dataset and yielded 95% accuracy [90]. Table 4 summarizes the segmentation techniques.
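Otsu thresholding, mentioned above for OD segmentation, selects the gray level that maximizes the between-class variance of the resulting foreground/background split. A minimal pure-NumPy sketch (not the implementation from any of the cited papers):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance
    for a uint8 image (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                    # gray-level probabilities
    omega = np.cumsum(p)                     # class-0 probability at each t
    mu = np.cumsum(p * np.arange(256))       # cumulative mean
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)         # zero out undefined endpoints
    return int(np.argmax(sigma_b))

# Toy bimodal image: dark background (20) with a bright disc (200).
img = np.full((10, 10), 20, dtype=np.uint8)
img[3:7, 3:7] = 200
t = otsu_threshold(img)
mask = img > t                               # OD candidate mask
print(t, mask.sum())
```

On real fundus images the histogram is rarely this cleanly bimodal, which is why the cited works combine Otsu with region growing or morphology.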

5. Hand-Crafted Feature Extraction

Feature extraction methods are used to extract information from an image; this is the process of reducing a huge amount of raw data into smaller, relevant data [92,93]. Trained and handcrafted features can be combined to obtain useful information. Based on their characteristics, features fall into two types: deep features and handcrafted features. For handcrafted features, methods such as LBP, LTP, SIFT, SURF, HOG, and several others are utilized for DR classification [94,95]. Three handcrafted textural features, GLRLM, GLDM, and GLCM, are utilized for the statistical texture analysis of retinal images [27,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129]. MA feature extraction is performed with GLCM for MA diagnosis through fundus images. The method was evaluated on the DIARETDB0 dataset and performs best with 99.90% accuracy [130]. For the characterization of DR lesions, encoded LBP (ULBPEZ) features are extracted from preprocessed images of the Messidor-2 and EyePACS datasets, obtaining accuracies of 97.31% and 93.86%, respectively [131]. GLCM features are utilized for the classification of DR on the DIARETDB1 dataset, obtaining an accuracy of 77.30% [132]. FOS, HOG, and HOS features are also utilized: in a gray-level image, HOG features are extracted from the OD region, while HOS and FOS features are extracted from the RGB channels to identify DR [133]. GLCM, GLRLM, and CRT are utilized to extract high-level texture features from retinal images for the classification of DR lesions on the DIARETDB1 and Kaggle datasets, with accuracies of 97.05% and 91%, respectively [134,135]. HOG and GLCM texture features are extracted from green-channel images for the classification of glaucoma. This methodology was employed on the ODIR dataset with 99.39% accuracy [136].
For the detection of multiclass DR lesions, the HOG descriptor is utilized to represent each DR image [137]. Four types of features, LBP, LTP, HOG, and DSIFT, are extracted to characterize the extracted region of interest [138]. SURF and spatial LBP are utilized to effectively represent DR lesions for the automated grading of DR [139]. HOG and Canny edge detectors are utilized on the Messidor-2 and EyePACS datasets for DR lesion recognition and obtained accuracies of 97.88% and 97.01%, respectively [140,141]. For the detection of multiclass DR lesions, three different types of handcrafted features, LBP, entropy-based, and texture energy measurement (TEM) features, are extracted from retinal images. The approach was performed on the DIARETDB1 dataset and achieved an accuracy of 94.30% [135,142]. To capture information on DR lesions such as MAs, HE, and HMs for efficient classification, SURF, HOG, and LBP are utilized on the Kaggle dataset, obtaining 97% accuracy [143]. Texture, shape, and transfer-learning-based features such as HOG, LBP, GLCM, GLRLM, morphology, Tamura, and seven CNN-based architectures are utilized for glaucoma classification. GLCM with CNN performs best for the detection of glaucoma, with an accuracy of 93.16% [144]. For the classification of glaucoma, texture-based feature extraction is done with HOG, LBP, GLCM, and GLDM, and transform-domain feature extraction is done with the wavelet and shearlet transforms on retinal images. The method was performed on a local dataset of 60 images, of which 30 have no DR and the other 30 are glaucomatous, and obtains an accuracy of 93.61% [145]. The handcrafted features SURF and LoG are used for the classification of DR lesions. SURF is utilized for interest point detection and localization on retinal fundus images; the second-order Gaussian kernel is approximated using LoG and a box filter [146].
For the classification of DR lesions, features are extracted from the retinal image with the SURF descriptor. The technique was performed on the Messidor dataset with an accuracy of 94% [147]. HOG and LBP features are extracted from grayscale and UWF images for the classification of DR lesions [148]. A summary of handcrafted features is shown in Table 5.
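As an illustration of the handcrafted descriptors above, the basic 8-neighbor LBP code of a pixel is obtained by thresholding its neighbors against the center value and packing the results into a byte. This is a minimal sketch of the plain LBP operator, not the specific encoded variants (e.g., ULBPEZ) used in the cited works:

```python
import numpy as np

def lbp_code(patch):
    """Basic LBP code of the center pixel of a 3x3 patch:
    each of the 8 neighbors contributes one bit (1 if neighbor
    >= center), read clockwise from the top-left corner."""
    c = patch[1, 1]
    # Neighbor coordinates, clockwise from top-left.
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [int(patch[r, cc] >= c) for r, cc in order]
    return sum(b << i for i, b in enumerate(bits))

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 8, 3]])
print(lbp_code(patch))  # 42
```

A texture descriptor for a whole image is then the histogram of these per-pixel codes, which is what classifiers such as SVMs consume.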

6. Automated Classification of DR Lesions by Using Deep Features

Deep learning (DL) models enhance learning by extracting high-level features that might be missed by hand-crafted methods [149]. DL-based classification performs very well and efficiently for the early detection of DR [150]. A DL-based DenseNet-264 with the chimp optimization algorithm is utilized for feature extraction; these features are then passed as input to an SNN for the classification of DR stages. This methodology was applied to the Messidor dataset and obtained 99.73% accuracy [151]. The DRNet is applied for classification with an SVM [152]. For the categorization of DR lesions, a DAG network based on multi-feature fusion is proposed. The method was evaluated on a local dataset and DIARETDB1 and achieves accuracies of 98.70% and 98.50%, respectively [153]. Features are retrieved using the firefly optimization (FFO) technique, and optimization is done by iGWO to classify DR lesions [154]. The DRNet is applied for classification with an SVM [155]. The EyeNet-DenseNet (E-DenseNet) model is presented for DR classification [156]. VGGNet is employed to extract deep features from fundus images, and a transfer learning strategy is applied to increase classification performance [157]. Faster R-CNN with DenseNet-65 is utilized for the localization and categorization of multiclass DR lesions [48,158]. For better DR lesion classification results, features are first extracted from modified deep networks such as VGG19, ResNet101, and InceptionV3; a subset is then selected with filter-based feature selection methods, namely MRMR, ReliefF, and the F-test, and the selected features are passed to an SVM classifier [159]. The DFTSA-Net model is presented for the detection of DR lesions, in which four pretrained deep networks, GoogLeNet, SqueezeNet, ResNet-50, and Inception-v3, are utilized as feature extractors [160]. A ConvNet model is presented for deep feature extraction from retinal images for DR lesion identification.
The proposed approach was tested on the APTOS 2019 dataset with an accuracy of 97.41% [161,162,163,164]. The DL approach Faster R-CNN is employed to retrieve features from fundus scans, and classification of DR lesions is done by Softmax. The methodology was applied to two public datasets, DIARETDB1 and Messidor, and achieves 95% accuracy [165]. A proposed CNN model utilized three pre-trained models, namely VGG-16, SqueezeNet, and AlexNet, as classifiers for DR lesions. The model was assessed on the Messidor dataset and achieved an accuracy of 98.15% [166]. A methodology comprising five deep CNN models is utilized for the identification of DR lesions [167]. AlexNet is used as a feature extractor, feature reduction is done by PCA and BoW, and finally the features are classified by an SVM [168]. A DNN model is presented for the detection of DR in which AlexNet is utilized to extract features from retinal images, feature selection is done by PCA and LDA, and the features are then passed to an SVM classifier. The model was evaluated on the Kaggle dataset with an accuracy of 97.93% [169]. A residual network is applied to retrieve deep features, which are passed to a decision tree model to classify multiclass DR lesions [170]. A major goal of feature selection is to minimize the size of the feature space through dimensionality reduction while preserving important information by choosing meaningful features. Better feature selection produces good classification results, because DR classification performance relies on the selected features [159]. In the literature, much work has been done on selecting prominent features and eliminating noisy features using PCA, the firefly algorithm [171], LDA [169], GA [172], PSO [173], wrapper-based methods [174], GWO [175], and FSAE [176]. An overview of deep feature techniques is given in Table 6.
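PCA-based feature reduction, used before the SVM stage in several of the works above, can be sketched with plain NumPy. This is an illustrative reduction on random stand-in "deep features", not the cited pipelines:

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors X (n_samples x n_features) onto
    their top-k principal components via SVD of the centered data."""
    Xc = X - X.mean(axis=0)                 # center each feature
    # Rows of Vt are the principal directions, ordered by variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
# 100 stand-in "deep feature" vectors of length 512, reduced to 32 dims.
feats = rng.normal(size=(100, 512))
reduced = pca_reduce(feats, 32)
print(reduced.shape)  # (100, 32)
```

The reduced vectors would then be fed to a downstream classifier such as an SVM, trading some information for a much smaller, less noisy feature space.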

7. Benchmark Datasets

For many years, fundus images have been used to diagnose many retinal illnesses, including DR. The performance of a detection system can be evaluated to a large extent with a good, varied dataset. Researchers are advised to perform experiments on benchmark datasets to obtain satisfactory outcomes. Publicly available DR datasets, essential for DR detection, can be accessed through several websites. Since researchers collect images of the affected area using scanners, cameras, and other local resources, it is also possible for them to build their own datasets. Researchers utilize these datasets for training, testing, and validating their systems.
Messidor is a publicly accessible dataset containing 1200 images, taken with a 3CCD camera. It is utilized for the detection of exudates, MAs, HMs, and blood vessels [55]. The Messidor-2 dataset provides 1748 images, taken with a Topcon digital camera, and is used for the diagnosis of DR lesions [177]. The openly accessible E_ophtha EX dataset contains 47 exudate images and 35 healthy images, while the E_ophtha MAs dataset consists of 233 images with no lesions and 148 images with minor HMs and MAs [178]. The 400 images in the STARE dataset were taken with a TopCon TRV camera. It is also publicly available and utilized for the detection of MAs, HMs, and irregular blood vessels [179]. The DRIVE dataset provides 40 color images with a resolution of 786 × 584; 20 of them are utilized for testing, while the other 20 are used for training. These images were taken with a 3CCD camera with a 45° FoV [180]. Kaggle provides 88,702 images with resolutions ranging from 433 × 289 to 5184 × 3456 pixels; the dataset was gathered by EyePACS [181]. The total number of color fundus images in the DIARETDB0 dataset is 130, of which 20 images have no DR and the other 110 contain DR lesions such as HE, SE, MAs, HMs, and neovascularization. The images were taken using a digital camera with unknown settings and a 50° FoV [182]. DIARETDB1 consists of 89 images in total, of which 84 contain MAs and 5 are healthy. The resolution of the images is 1500 × 1152, captured by a digital camera at 45° FoV. Researchers utilize this dataset to identify damaged blood vessels, MAs, HMs, and exudates [10,183]. The two databases DR1 and DR2 were introduced by a federal Brazilian university to assist researchers in DR detection. The DR1 dataset consists of 234 images and the DR2 dataset of 520 images; both were employed for HE detection [184].
CHASE DB1 is a freely accessible database for segmenting retinal blood vessels. It has 28 images, each measuring 1280 × 960 pixels with a 45° FoV [185]. There are 100 publicly accessible retinal images in ROC, shot at a 45° FoV, with sizes varying from 768 × 576 to 1389 × 1383 pixels. The images were annotated to identify MAs; this dataset contains ground truth only for training [186]. The publicly accessible HRF dataset enables the segmentation of blood vessels. It contains 45 images in total, measuring 3504 × 2336 pixels, of which 15 are glaucomatous, 15 normal, and 15 DR images [187]. The publicly available HEI-MED dataset consists of 54 healthy images and 115 abnormal images, acquired with a Zeiss VISUCAM PRO camera; it is utilized for the detection of exudates [188]. The DRiDB dataset was assembled by the University of Zagreb to assist researchers in DR lesion identification and consists of 50 retinal images [189]. Table 7 shows a comparison of the datasets.

8. Performance Evaluation

Various performance metrics are used to estimate the performance of DL algorithms for DR diagnosis [190]. Visual examination is not very efficient, and there is no way to verify that the decision is correct. Nowadays, automated systems can replace visual examination to reduce the likelihood of errors, and several metrics can be used to check a system's performance. The measures commonly used in DL are accuracy (ACC) [191], specificity (SP) [191], sensitivity (SE) [191], precision [192], true-positive rate (TPR) [193], false-positive rate (FPR) [194], false-negative rate (FNR) [194], F-score [195], and G-mean [196,197].
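All of these measures derive from the four confusion-matrix counts (TP, FP, FN, TN). A minimal sketch with hypothetical counts (not taken from any cited study):

```python
import math

def metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)           # recall / true-positive rate (TPR)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": sensitivity,
        "specificity": specificity,
        "precision":   precision,
        "fpr":         fp / (fp + tn),     # false-positive rate
        "fnr":         fn / (fn + tp),     # false-negative rate
        "f_score":     2 * precision * sensitivity / (precision + sensitivity),
        "g_mean":      math.sqrt(sensitivity * specificity),
    }

# Hypothetical DR-vs-healthy screen: 90 TP, 10 FP, 5 FN, 95 TN.
scores = metrics(tp=90, fp=10, fn=5, tn=95)
print(round(scores["accuracy"], 3))   # 0.925
```

Note that the G-mean balances sensitivity and specificity, which makes it informative on the imbalanced datasets common in DR screening.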

9. Challenges and Discussion

The analysis of DR diagnosis methods shows that DL has improved DR identification procedures and advanced the methodologies, but DR detection remains an open problem that requires further study. Table 8 describes the limitations of existing approaches.
From the complete survey, we observed that imbalanced and small datasets lead to overfitting and poor generalization [152]. In several studies, data augmentation has been employed to address the issue of class imbalance. The availability of only a few manually annotated images limits the reliability of supervised learning systems; Generative Adversarial Network (GAN) models offer one solution to this issue [199]. The large amount of data needed to train DL systems is one of the drawbacks of their use in the medical field [200]. Adding a preprocessing stage to a DL model improves the identification of DR lesions [201]. Transfer learning makes building and designing a new model considerably simpler and faster than creating a new CNN architecture from scratch [190].
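As noted above, data augmentation is a common remedy for small, imbalanced fundus datasets. A minimal sketch of geometric augmentation with NumPy (flips and right-angle rotations preserve lesion appearance; real pipelines typically add brightness/contrast jitter, cropping, or GAN-generated samples as well):

```python
import numpy as np

def augment(image):
    """Yield flipped/rotated copies of one H x W x C image array."""
    yield np.fliplr(image)          # horizontal flip
    yield np.flipud(image)          # vertical flip
    for k in (1, 2, 3):             # 90/180/270-degree rotations
        yield np.rot90(image, k)

# Placeholder minority-class "fundus images" (zeros stand in for pixels).
minority = [np.zeros((64, 64, 3), dtype=np.uint8)]
augmented = [aug for img in minority for aug in augment(img)]
print(len(minority), "->", len(minority) + len(augmented))  # 1 -> 6
```

Applying this only to the under-represented class grows it six-fold without collecting or annotating new images.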

10. Future Directions

In medical image processing, a significant amount of research has been done on the automated detection of DR, yet several areas could still be improved. Detecting the OD boundary is difficult because of blurred edges, and segmenting MAs is also challenging because these lesions are often mistaken for normal regions. Color and shape are significant factors in the DR lesion-detection process because the OD and bright lesions appear similar in both. Consequently, no single method can address all of these challenges, and more effective techniques are needed to identify the retinal changes and structures associated with DR. Since manual diagnosis by ophthalmologists is a difficult process, efficient DL approaches that can be trained on small retinal datasets are required. Preprocessing plays a crucial role in model performance, but new preprocessing techniques are still needed to achieve good accuracy. It is also necessary to develop innovative data augmentation methods that generate new samples from existing ones while accurately reflecting real data. Transfer learning approaches are used to train CNN models and to overcome the overfitting problem.
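The transfer-learning idea mentioned above can be sketched in miniature: keep a pretrained feature extractor frozen and train only a small classifier head on its outputs. In this toy sketch a fixed random projection stands in for a real pretrained CNN backbone (an assumption purely for illustration), and the head is a logistic-regression classifier fitted by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 "images" flattened to 32 values, with a two-class label.
X = rng.normal(size=(200, 32))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# "Pretrained backbone": a fixed projection + ReLU that is never updated.
W_frozen = rng.normal(size=(32, 16))
F = np.maximum(X @ W_frozen / np.sqrt(X.shape[1]), 0.0)   # frozen features

# Trainable head: logistic regression fitted by gradient descent.
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))    # sigmoid probabilities
    w -= 0.5 * F.T @ (p - y) / len(y)          # only the head is updated
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((F @ w + b) > 0) == (y == 1))   # training accuracy of the head
```

Because only the small head is trained, far fewer labeled retinal images are needed than for training a full CNN from scratch, which is exactly why transfer learning helps on tiny datasets.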

11. Conclusions

This review covered recent literature on the identification of DR, focusing mainly on CAD systems based on classical/machine learning and deep learning methods. CAD systems analyze DR lesions in fundus images through four major steps: preprocessing, segmentation, feature extraction/selection, and classification. Within CAD systems, preprocessing methods enhance the sharpness of fundus images, which helps in the accurate detection of DR lesions. In this work, recent preprocessing methods on benchmark datasets were discussed for the detection of the MA, HE, SE, and HM DR lesions. Classical segmentation methods, such as thresholding and region growing, as well as machine/DL methods based on convolutional neural networks, were discussed along with their challenges and limitations. For the categorization of DR lesions, feature extraction/selection approaches were described in terms of hand-crafted and DL strategies. Furthermore, publicly available DR detection datasets of fundus imaging were detailed, together with common performance metrics for the analysis of DR lesions. Finally, the gaps, limitations, advantages, and challenges of existing methods for DR detection were discussed. This research provides a thorough overview of existing methods for DR identification that will assist researchers in further work in this domain.

Author Contributions

N.S.: writing original draft and conceptualization; J.A.: basic design of the model; M.I.S. (Muhammad Imran Sharif): result validation and writing the conclusion of the paper; M.I.S. (Muhammad Irfan Sharif): formatting the paper per journal requirements and presenting data in pictorial form; S.K.: data curation, investigation, literature review, resources, project administration; L.S.: data curation, investigation, literature review, resources, funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported by the project of Operational Programme Integrated Infrastructure: Independent research and development of technological kits based on wearable electronics products, as tools for raising hygienic standards in a society exposed to the virus causing the COVID-19 disease, ITMS2014+ code 313011ASK8.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Elsharkawy, M.; Sharafeldeen, A.; Soliman, A.; Khalifa, F.; Ghazal, M.; El-Daydamony, E.; Atwan, A.; Sandhu, H.S.; El-Baz, A. A novel computer-aided diagnostic system for early detection of diabetic retinopathy using 3D-OCT higher-order spatial appearance model. Diagnostics 2022, 12, 461. [Google Scholar] [CrossRef]
  2. Amin, J.; Sharif, M.; Yasmin, M. A review on recent developments for detection of diabetic retinopathy. Scientifica 2016, 2016, 6838976. [Google Scholar] [CrossRef] [Green Version]
  3. Kharroubi, A.T.; Darwish, H.M. Diabetes mellitus: The epidemic of the century. World J. Diabetes 2015, 6, 850. [Google Scholar] [CrossRef]
  4. Dai, L.; Wu, L.; Li, H.; Cai, C.; Wu, Q.; Kong, H.; Liu, R.; Wang, X.; Hou, X.; Liu, Y.; et al. A deep learning system for detecting diabetic retinopathy across the disease spectrum. Nat. Commun. 2021, 12, 3242. [Google Scholar] [CrossRef]
  5. Oh, K.; Kang, H.M.; Leem, D.; Lee, H.; Seo, K.Y.; Yoon, S. Early detection of diabetic retinopathy based on deep learning and ultra-wide-field fundus images. Sci. Rep. 2021, 11, 1897. [Google Scholar] [CrossRef]
  6. Goel, S.; Gupta, S.; Panwar, A.; Kumar, S.; Verma, M.; Bourouis, S.; Ullah, M.A. Deep learning approach for stages of severity classification in diabetic retinopathy using color fundus retinal images. Math. Probl. Eng. 2021, 2021, 7627566. [Google Scholar] [CrossRef]
  7. Leontidis, G.; Al-Diri, B.; Hunter, A. Diabetic retinopathy: Current and future methods for early screening from a retinal hemodynamic and geometric approach. Expert Rev. Ophthalmol. 2014, 9, 431–442. [Google Scholar] [CrossRef] [Green Version]
  8. Kayal, D.; Banerjee, S. A new dynamic thresholding based technique for detection of hard exudates in digital retinal fundus image. In Proceedings of the 2014 International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 20–21 February 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 141–144. [Google Scholar]
  9. Mamtora, S.; Wong, Y.; Bell, D.; Sandinha, T. Bilateral birdshot retinochoroiditis and retinal astrocytoma. Case Rep. Ophthalmol. Med. 2017, 2017, 6586157. [Google Scholar] [CrossRef]
  10. Porwal, P.; Pachade, S.; Kamble, R.; Kokare, M.; Deshmukh, G.; Sahasrabuddhe, V.; Meriaudeau, F. Indian diabetic retinopathy image dataset (IDRiD): A database for diabetic retinopathy screening research. Data 2018, 3, 25. [Google Scholar] [CrossRef] [Green Version]
  11. Qomariah, D.; Nopember, I.T.S.; Tjandrasa, H.; Fatichah, C. Segmentation of microaneurysms for early detection of diabetic retinopathy using MResUNet. Int. J. Intell. Eng. Syst. 2021, 14, 359–373. [Google Scholar] [CrossRef]
  12. Mumtaz, R.; Hussain, M.; Sarwar, S.; Khan, K.; Mumtaz, S.; Mumtaz, M. Automatic detection of retinal hemorrhages by exploiting image processing techniques for screening retinal diseases in diabetic patients. Int. J. Diabetes Dev. Ctries. 2017, 38, 80–87. [Google Scholar] [CrossRef]
  13. Qiao, L.; Zhu, Y.; Zhou, H. Diabetic retinopathy detection using prognosis of microaneurysm and early diagnosis system for non-proliferative diabetic retinopathy based on deep learning algorithms. IEEE Access 2020, 8, 104292–104302. [Google Scholar] [CrossRef]
  14. Mishra, A.; Singh, L.; Pandey, M.; Lakra, S. Image based early detection of diabetic retinopathy: A systematic review on Artificial Intelligence (AI) based recent trends and approaches. J. Intell. Fuzzy Syst. 2022, 43, 6709–6741. [Google Scholar] [CrossRef]
  15. Wong, T.Y.; Sun, J.; Kawasaki, R.; Ruamviboonsuk, P.; Gupta, N.; Lansingh, V.C.; Maia, M.; Mathenge, W.; Moreker, S.; Muqit, M.M.; et al. Guidelines on diabetic eye care: The international council of ophthalmology recommendations for screening, follow-up, referral, and treatment based on resource settings. Ophthalmology 2018, 125, 1608–1622. [Google Scholar] [CrossRef] [Green Version]
  16. Atwany, M.Z.; Sahyoun, A.H.; Yaqub, M. Deep learning techniques for diabetic retinopathy classification: A survey. IEEE Access 2022, 10, 28642–28655. [Google Scholar] [CrossRef]
  17. Elsharkawy, M.; Elrazzaz, M.; Sharafeldeen, A.; Alhalabi, M.; Khalifa, F.; Soliman, A.; Elnakib, A.; Mahmoud, A.; Ghazal, M.; El-Daydamony, E.; et al. The role of different retinal imaging modalities in predicting progression of diabetic retinopathy: A survey. Sensors 2022, 22, 3490. [Google Scholar] [CrossRef]
  18. Nage, P.; Shitole, S. A survey on automatic diabetic retinopathy screening. SN Comput. Sci. 2021, 2, 439. [Google Scholar] [CrossRef]
  19. Bilal, A.; Sun, G.; Mazhar, S. Survey on recent developments in automatic detection of diabetic retinopathy. J. Français D’ophtalmologie 2021, 44, 420–440. [Google Scholar] [CrossRef]
  20. Stolte, S.; Fang, R. A survey on medical image analysis in diabetic retinopathy. Med. Image Anal. 2020, 64, 101742. [Google Scholar] [CrossRef]
  21. Amin, J.; Anjum, M.A.; Malik, M. Fused information of DeepLabv3+ and transfer learning model for semantic segmentation and rich features selection using equilibrium optimizer (EO) for classification of NPDR lesions. Knowl.-Based Syst. 2022, 249, 108881. [Google Scholar] [CrossRef]
  22. Nneji, G.U.; Cai, J.; Deng, J.; Monday, H.N.; Hossin, A.; Nahar, S. Identification of diabetic retinopathy using weighted fusion deep learning based on dual-channel fundus scans. Diagnostics 2022, 12, 540. [Google Scholar] [CrossRef] [PubMed]
  23. Oulhadj, M.; Riffi, J.; Chaimae, K.; Mahraz, A.M.; Ahmed, B.; Yahyaouy, A.; Fouad, C.; Meriem, A.; Idriss, B.A.; Tairi, H. Diabetic retinopathy prediction based on deep learning and deformable registration. Multimed. Tools Appl. 2022, 81, 28709–28727. [Google Scholar] [CrossRef]
  24. Jabbar, M.K.; Yan, J.; Xu, H.; Rehman, Z.U.; Jabbar, A. Transfer learning-based model for diabetic retinopathy diagnosis using retinal images. Brain Sci. 2022, 12, 535. [Google Scholar] [CrossRef] [PubMed]
  25. Amin, J.; Sharif, M.; Mallah, G.A.; Fernandes, S.L. An optimized features selection approach based on manta ray foraging optimization (MRFO) method for parasite malaria classification. Front. Public Health 2022, 10, 2846. [Google Scholar] [CrossRef]
  26. Amin, J.; Sharif, M.; Anjum, M.A.; Siddiqa, A.; Kadry, S.; Nam, Y.; Raza, M. 3D semantic deep learning networks for leukemia detection. Comput. Mater. Contin. 2021, 69, 785–799. [Google Scholar] [CrossRef]
  27. Malik, S.; Amin, J.; Sharif, M.; Yasmin, M.; Kadry, S.; Anjum, S. Fractured elbow classification using hand-crafted and deep feature fusion and selection based on whale optimization approach. Mathematics 2022, 10, 3291. [Google Scholar] [CrossRef]
  28. Shaukat, N.; Amin, J.; Sharif, M.; Azam, F.; Kadry, S.; Krishnamoorthy, S. Three-dimensional semantic segmentation of diabetic retinopathy lesions and grading using transfer learning. J. Pers. Med. 2022, 12, 1454. [Google Scholar] [CrossRef]
  29. Saleem, S.; Amin, J.; Sharif, M.; Mallah, G.A.; Kadry, S.; Gandomi, A.H. Leukemia segmentation and classification: A comprehensive survey. Comput. Biol. Med. 2022, 150, 106028. [Google Scholar] [CrossRef]
  30. Amin, J. Segmentation and classification of diabetic retinopathy. Univ. Wah J. Comput. Sci. 2019, 2. Available online: http://uwjcs.org.pk/index.php/ojs/article/view/14 (accessed on 10 January 2023).
  31. ul haq, I.; Amin, J.; Sharif, M.; Almas Anjum, M. Skin lesion detection using recent machine learning approaches. In Prognostic Models in Healthcare: AI and Statistical Approaches; Springer: Singapore, 2022; pp. 193–211. [Google Scholar]
  32. Amin, J.; Anjum, M.A.; Sharif, M.; Jabeen, S.; Kadry, S.; Ger, P.M. A new model for brain tumor detection using ensemble transfer learning and quantum variational classifier. Comput. Intell. Neurosci. 2022, 2022, 1–13. [Google Scholar] [CrossRef]
  33. Yunus, U.; Amin, J.; Sharif, M.; Yasmin, M.; Kadry, S.; Krishnamoorthy, S. Recognition of knee osteoarthritis (KOA) using YOLOv2 and classification based on convolutional neural network. Life 2022, 12, 1126. [Google Scholar] [CrossRef] [PubMed]
  34. Amin, J.; Anjum, M.A.; Sharif, A.; Sharif, M.I. A modified classical-quantum model for diabetic foot ulcer classification. Intell. Decis. Technol. 2022, 16, 23–28. [Google Scholar] [CrossRef]
  35. Sadaf, D.; Amin, J.; Sharif, M.; Yasmin, M. Detection of diabetic foot ulcer using machine/deep learning. In Advances in Deep Learning for Medical Image Analysis; CRC Press: Boca Raton, FL, USA, 2000; pp. 101–123. [Google Scholar]
  36. Amin, J.; Anjum, M.A.; Gul, N.; Sharif, M. A secure two-qubit quantum model for segmentation and classification of brain tumor using MRI images based on blockchain. Neural Comput. Appl. 2022, 34, 17315–17328. [Google Scholar] [CrossRef]
  37. Toğaçar, M. Detection of retinopathy disease using morphological gradient and segmentation approaches in fundus images. Comput. Methods Programs Biomed. 2022, 214, 106579. [Google Scholar] [CrossRef] [PubMed]
  38. Vinayaki, V.D.; Kalaiselvi, R. Multithreshold image segmentation technique using remora optimization algorithm for diabetic retinopathy detection from fundus images. Neural Process. Lett. 2022, 54, 2363–2384. [Google Scholar] [CrossRef] [PubMed]
  39. Dayana, A.M.; Emmanuel, W.R.S. An enhanced swarm optimization-based deep neural network for diabetic retinopathy classification in fundus images. Multimed. Tools Appl. 2022, 81, 20611–20642. [Google Scholar] [CrossRef]
  40. Vasireddi, H.K.; GNV, R.R. Deep feed forward neural network–based screening system for diabetic retinopathy severity classification using the lion optimization algorithm. Graefe’s Arch. Clin. Exp. Ophthalmol. 2022, 260, 1245–1263. [Google Scholar] [CrossRef]
  41. Li, F.; Wang, Y.; Xu, T.; Dong, L.; Yan, L.; Jiang, M.; Zhang, X.; Jiang, H.; Wu, Z.; Zou, H. Deep learning-based automated detection for diabetic retinopathy and diabetic macular oedema in retinal fundus photographs. Eye 2022, 36, 1433–1441. [Google Scholar] [CrossRef]
  42. Gangwar, A.K.; Ravi, V. Diabetic retinopathy detection using transfer learning and deep learning. In Evolution in Computational Intelligence; Springer: Singapore, 2021; pp. 679–689. [Google Scholar]
  43. Sarki, R.; Ahmed, K.; Wang, H.; Zhang, Y.; Ma, J.; Wang, K. Image preprocessing in classification and identification of diabetic eye diseases. Data Sci. Eng. 2021, 6, 455–471. [Google Scholar] [CrossRef]
  44. Sharma, A.; Shinde, S.; Shaikh, I.I.; Vyas, M.; Rani, S. Machine learning approach for detection of diabetic retinopathy with improved pre-processing. In Proceedings of the 2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), Greater Noida, India, 19–20 February 2021; pp. 517–522. [Google Scholar]
  45. Mushtaq, G.; Siddiqui, F. Detection of diabetic retinopathy using deep learning methodology. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1070, 012049. [Google Scholar] [CrossRef]
  46. Albahli, S.; Nazir, T.; Irtaza, A.; Javed, A. Recognition and detection of diabetic retinopathy using densenet-65 based faster-RCNN. Comput. Mater. Contin. 2021, 67, 1333–1351. [Google Scholar] [CrossRef]
  47. Xu, Y.; Zhou, Z.; Li, X.; Zhang, N.; Zhang, M.; Wei, P. Ffu-net: Feature fusion u-net for lesion segmentation of diabetic retinopathy. BioMed Res. Int. 2021, 2021, 1–12. [Google Scholar] [CrossRef] [PubMed]
  48. Al-hazaimeh, O.M.; Abu-Ein, A.A.; Tahat, N.M.; Al-Smadi, M.M.A.; Al-Nawashi, M.M. Combining artificial intelligence and image processing for diagnosing diabetic retinopathy in retinal fundus images. Int. J. Online Biomed. Eng. 2022, 18, 131–151. [Google Scholar] [CrossRef]
  49. Valizadeh, A.; Ghoushchi, S.J.; Ranjbarzadeh, R.; Pourasad, Y. Presentation of a segmentation method for a diabetic retinopathy patient’s fundus region detection using a convolutional neural network. Comput. Intell. Neurosci. 2021, 2021, 7714351. [Google Scholar] [CrossRef] [PubMed]
  50. Jadhav, M.L.; Shaikh, M.Z.; Sardar, V.M. Automated microaneurysms detection in fundus images for early diagnosis of diabetic retinopathy. In Data Engineering and Intelligent Computing; Springer: Singapore, 2021; pp. 87–95. [Google Scholar]
  51. Nair, A.T.; Muthuvel, K. Automated screening of diabetic retinopathy with optimized deep convolutional neural network: Enhanced moth flame model. J. Mech. Med. Biol. 2021, 21, 2150005. [Google Scholar] [CrossRef]
  52. Rathore, S.; Aswal, A.; Saranya, P. Bright lesion detection in retinal fundus images for diabetic retinopathy detection using machine learning approach. Ann. Rom. Soc. Cell Biol. 2021, 25, 4360–4367. [Google Scholar]
  53. Decencière, E.; Zhang, X.; Cazuguel, G.; Lay, B.; Cochener, B.; Trone, C.; Gain, P.; Ordonez, R.; Massin, P.; Erginay, A.; et al. Feedback on a publicly distributed image database: The Messidor database. Image Anal. Ster. 2014, 33, 231–234. [Google Scholar] [CrossRef] [Green Version]
  54. Roshini, T.; Ravi, R.V.; Mathew, A.R.; Kadan, A.B.; Subbian, P.S. Automatic diagnosis of diabetic retinopathy with the aid of adaptive average filtering with optimized deep convolutional neural network. Int. J. Imaging Syst. Technol. 2020, 30, 1173–1193. [Google Scholar] [CrossRef]
  55. Pham, H.N.; Tan, R.J.; Cai, Y.T.; Mustafa, S.; Yeo, N.C.; Lim, H.J.; Do, T.T.T.; Nguyen, B.P.; Chua, M.C.H. Automated grading in diabetic retinopathy using image processing and modified efficientnet. In Computational Collective Intelligence; Springer International Publishing: Cham, Switzerland, 2020; pp. 505–515. [Google Scholar]
  56. Mohan, N.J.; Murugan, R.; Goel, T.; Roy, P. An improved accuracy rate in microaneurysms detection in retinal fundus images using non-local mean filter. In Machine Learning, Image Processing, Network Security and Data Sciences; Springer: Singapore, 2020; pp. 183–193. [Google Scholar]
  57. Li, Y.-H.; Yeh, N.-N.; Chen, S.-J.; Chung, Y.-C. Computer-assisted diagnosis for diabetic retinopathy based on fundus images using deep convolutional neural network. Mob. Inf. Syst. 2019, 2019, 6142839. [Google Scholar] [CrossRef]
  58. Benzamin, A.; Chakraborty, C. Detection of hard exudates in retinal fundus images using deep learning. In Proceedings of the 2018 Joint 7th International Conference on Informatics, Electronics & Vision (ICIEV) and 2018 2nd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), Kitakyushu, Japan, 25–29 June 2018; pp. 465–469. [Google Scholar]
  59. Kumar, S.; Adarsh, A.; Kumar, B.; Singh, A.K. An automated early diabetic retinopathy detection through improved blood vessel and optic disc segmentation. Opt. Laser Technol. 2020, 121, 105815. [Google Scholar] [CrossRef]
  60. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  61. David, D.S. Retinal image classification system for diagnosis of diabetic retinopathy using morphological edge detection and feature extraction techniques. Artech J. Eff. Res. Eng. Technol. 2020, 1, 28–33. [Google Scholar]
  62. Bilal, A.; Sun, G.; Mazhar, S.; Imran, A.; Latif, J. A Transfer learning and U-Net-based automatic detection of diabetic retinopathy from fundus images. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2022, 10, 663–674. [Google Scholar] [CrossRef]
  63. Kaur, J.; Kaur, P. Automated computer-aided diagnosis of diabetic retinopathy based on segmentation and classification using k-nearest neighbor algorithm in retinal images. Comput. J. 2022, bxac059. [Google Scholar] [CrossRef]
  64. Skouta, A.; Elmoufidi, A.; Jai-Andaloussi, S.; Ouchetto, O. Hemorrhage semantic segmentation in fundus images for the diagnosis of diabetic retinopathy by using a convolutional neural network. J. Big Data 2022, 9, 78. [Google Scholar] [CrossRef]
  65. Sau, P.C.; Bansal, A. A novel diabetic retinopathy grading using modified deep neural network with segmentation of blood vessels and retinal abnormalities. Multimed. Tools Appl. 2022, 81, 39605–39633. [Google Scholar] [CrossRef]
  66. Xia, H.; Lan, Y.; Song, S.; Li, H. A multi-scale segmentation-to-classification network for tiny microaneurysm detection in fundus images. Knowl.-Based Syst. 2021, 226, 107140. [Google Scholar] [CrossRef]
  67. Wan, C.; Chen, Y.; Li, H.; Zheng, B.; Chen, N.; Yang, W.; Wang, C.; Li, Y. EAD-Net: A novel lesion segmentation method in diabetic retinopathy using neural networks. Dis. Markers 2021, 2021, 6482665. [Google Scholar] [CrossRef]
  68. Fu, Y.; Chen, J.; Li, J.; Pan, D.; Yue, X.; Zhu, Y. Optic disc segmentation by U-net and probability bubble in abnormal fundus images. Pattern Recognit. 2021, 117, 107971. [Google Scholar] [CrossRef]
  69. Das, S.; Kharbanda, K.; Suchetha, M.; Raman, R.; Dhas, E. Deep learning architecture based on segmented fundus image features for classification of diabetic retinopathy. Biomed. Signal Process. Control. 2021, 68, 102600. [Google Scholar] [CrossRef]
  70. Ghoushchi, S.J.; Ranjbarzadeh, R.; Dadkhah, A.H.; Pourasad, Y.; Bendechache, M. An extended approach to predict retinopathy in diabetic patients using the genetic algorithm and fuzzy C-means. BioMed Res. Int. 2021, 2021, 5597222. [Google Scholar] [CrossRef]
  71. Melo, T.; Mendonça, A.M.; Campilho, A. Microaneurysm detection in color eye fundus images for diabetic retinopathy screening. Comput. Biol. Med. 2020, 126, 103995. [Google Scholar] [CrossRef] [PubMed]
  72. Sambyal, N.; Saini, P.; Syal, R.; Gupta, V. Modified U-Net architecture for semantic segmentation of diabetic retinopathy images. Biocybern. Biomed. Eng. 2020, 40, 1094–1109. [Google Scholar] [CrossRef]
  73. Ali, A.; Qadri, S.; Mashwani, W.K.; Kumam, W.; Kumam, P.; Naeem, S.; Goktas, A.; Jamal, F.; Chesneau, C.; Anam, S.; et al. Machine learning based automated segmentation and hybrid feature analysis for diabetic retinopathy classification using fundus image. Entropy 2020, 22, 567. [Google Scholar] [CrossRef] [PubMed]
  74. Khan, T.M.; Mehmood, M.; Naqvi, S.S.; Butt, M.F.U. A region growing and local adaptive thresholding-based optic disc detection. PLoS ONE 2020, 15, e0227566. [Google Scholar] [CrossRef] [Green Version]
  75. Furtado, P.; Baptista, C.; Paiva, I. Segmentation of diabetic retinopathy lesions by deep learning: Achievements and limitations. Bioimaging 2020, 95–101. [Google Scholar]
  76. Yan, Z.; Han, X.; Wang, C.; Qiu, Y.; Xiong, Z.; Cui, S. Learning mutually local-global U-Nets for high-resolution retinal lesion segmentation in fundus images. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 597–600. [Google Scholar]
  77. Khojasteh, P.; Júnior, L.A.P.; Carvalho, T.; Rezende, E.; Aliahmad, B.; Papa, J.P.; Kumar, D.K. Exudate detection in fundus images using deeply-learnable features. Comput. Biol. Med. 2019, 104, 62–69. [Google Scholar] [CrossRef]
  78. Chowdhury, A.R.; Chatterjee, T.; Banerjee, S. A random forest classifier-based approach in the detection of abnormalities in the retina. Med. Biol. Eng. Comput. 2019, 57, 193–203. [Google Scholar] [CrossRef]
  79. Xiao, Q.; Zou, J.; Yang, M.; Gaudio, A.; Kitani, K.; Smailagic, A.; Costa, P.; Xu, M. Improving lesion segmentation for diabetic retinopathy using adversarial learning. In Image Analysis and Recognition; Springer International Publishing: Cham, Switzerland, 2019; pp. 333–344. [Google Scholar]
  80. Saha, O.; Sathish, R.; Sheet, D. Fully convolutional neural network for semantic segmentation of anatomical structure and pathologies in colour fundus images associated with diabetic retinopathy. arXiv 2019, arXiv:1902.03122. [Google Scholar]
  81. Mohan, D.; Kumar, J.R.H.; Seelamantula, C.S. High-performance optic disc segmentation using convolutional neural networks. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 4038–4042. [Google Scholar]
  82. Lam, C.; Yu, C.; Huang, L.; Rubin, D. Retinal lesion detection with deep learning using image patches. Investig. Opthalmo. Vis. Sci. 2018, 59, 590–596. [Google Scholar] [CrossRef]
  83. Kaur, J.; Mittal, D. A generalized method for the segmentation of exudates from pathological retinal fundus images. Biocybern. Biomed. Eng. 2018, 38, 27–53. [Google Scholar] [CrossRef]
  84. Yang, X.-S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74. [Google Scholar]
  85. Abdullah, A.S.; Özok, Y.E.; Rahebi, J. A novel method for retinal optic disc detection using bat meta-heuristic algorithm. Med. Biol. Eng. Comput. 2018, 56, 2015–2024. [Google Scholar] [CrossRef] [PubMed]
  86. Amin, J.; Sharif, M.; Rehman, A.; Raza, M.; Mufti, M.R. Diabetic retinopathy detection and classification using hybrid feature set. Microsc. Res. Tech. 2018, 81, 990–996. [Google Scholar] [CrossRef] [PubMed]
  87. Chalakkal, R.J.; Abdulla, W.H.; Thulaseedharan, S.S. Automatic detection and segmentation of optic disc and fovea in retinal images. IET Image Process. 2018, 12, 2100–2110. [Google Scholar] [CrossRef]
  88. Jaya, T.; Dheeba, J.; Singh, N.A. Detection of hard exudates in colour fundus images using fuzzy support vector machine-based expert system. J. Digit. Imaging 2015, 28, 761–768. [Google Scholar] [CrossRef] [Green Version]
  89. Köse, C.; Şevik, U.; Ikibaş, C.; Erdöl, H. Simple methods for segmentation and measurement of diabetic retinopathy lesions in retinal fundus images. Comput. Methods Programs Biomed. 2012, 107, 274–293. [Google Scholar] [CrossRef]
  90. Wahid, M.F.; Shahriar, M.F.; Sobuj, M.S.I. A classical approach to handcrafted feature extraction techniques for bangla handwritten digit recognition. In Proceedings of the 2021 International Conference on Electronics, Communications and Information Technology (ICECIT), Khulna, Bangladesh, 14–16 September 2021; pp. 1–4. [Google Scholar]
  91. Zafar, M.; Sharif, M.I.; Sharif, M.I.; Kadry, S.; Bukhari, S.A.C.; Rauf, H.T. Skin lesion analysis and cancer detection based on machine/deep learning techniques: A comprehensive survey. Life 2023, 13, 146. [Google Scholar] [CrossRef]
  92. Nanni, L.; Ghidoni, S.; Brahnam, S. Handcrafted vs. non-handcrafted features for computer vision classification. Pattern Recognit. 2017, 71, 158–172. [Google Scholar] [CrossRef]
  93. Sivapriya, G.; Praveen, V.; Gowri, P.; Saranya, S.; Sweetha, S.; Shekar, K. Segmentation of hard exudates for the detection of diabetic retinopathy with RNN based sematic features using fundus images. Mater. Today Proc. 2022, 64, 693–701. [Google Scholar] [CrossRef]
  94. Barges, E.; Thabet, E. GLDM and Tamura features based KNN and particle swarm optimization for automatic diabetic retinopathy recognition system. Multimed. Tools Appl. 2022, 82, 271–295. [Google Scholar] [CrossRef]
  95. Amin, J.; Sharif, M.; Yasmin, M.; Fernandes, S.L. A distinctive approach in brain tumor detection and classification using MRI. Pattern Recognit. Lett. 2020, 139, 118–127. [Google Scholar] [CrossRef]
  96. Amin, J.; Sharif, M.; Yasmin, M.; Ali, H.; Fernandes, S.L. A method for the detection and classification of diabetic retinopathy using structural predictors of bright lesions. J. Comput. Sci. 2017, 19, 153–164. [Google Scholar] [CrossRef]
  97. Sharif, M.I.; Li, J.P.; Amin, J.; Sharif, A. An improved framework for brain tumor analysis using MRI based on YOLOv2 and convolutional neural network. Complex Intell. Syst. 2021, 7, 2023–2036. [Google Scholar] [CrossRef]
  98. Saba, T.; Mohamed, A.S.; El-Affendi, M.; Amin, J.; Sharif, M. Brain tumor detection using fusion of hand crafted and deep learning features. Cogn. Syst. Res. 2020, 59, 221–230. [Google Scholar] [CrossRef]
  99. Amin, J.; Sharif, M.; Raza, M.; Saba, T.; Anjum, M.A. Brain tumor detection using statistical and machine learning method. Comput. Methods Programs Biomed. 2019, 177, 69–79. [Google Scholar] [CrossRef]
  100. Amin, J.; Sharif, M.; Raza, M.; Yasmin, M. Detection of brain tumor based on features fusion and machine learning. J. Ambient. Intell. Humaniz. Comput. 2018, 1–17. [Google Scholar] [CrossRef]
  101. Amin, J.; Sharif, M.; Gul, N.; Yasmin, M.; Shad, S.A. Brain tumor classification based on DWT fusion of MRI sequences using convolutional neural network. Pattern Recognit. Lett. 2020, 129, 115–122. [Google Scholar] [CrossRef]
  102. Sharif, M.; Amin, J.; Raza, M.; Yasmin, M.; Satapathy, S.C. An integrated design of particle swarm optimization (PSO) with fusion of features for detection of brain tumor. Pattern Recognit. Lett. 2020, 129, 150–157. [Google Scholar] [CrossRef]
  103. Amin, J.; Sharif, M.; Yasmin, M.; Saba, T.; Anjum, M.A.; Fernandes, S.L. A new approach for brain tumor segmentation and classification based on score level fusion using transfer learning. J. Med. Syst. 2019, 43, 1–16. [Google Scholar] [CrossRef]
  104. Amin, J.; Sharif, M.; Raza, M.; Saba, T.; Sial, R.; Shad, S.A. Brain tumor detection: A long short-term memory (LSTM)-based learning model. Neural Comput. Appl. 2020, 32, 15965–15973. [Google Scholar] [CrossRef]
  105. Amin, J.; Sharif, M.; Raza, M.; Saba, T.; Rehman, A. Brain tumor classification: Feature fusion. In Proceedings of the 2019 International Conference on Computer and Information Sciences (ICCIS), Sakaka, Saudi Arabia, 3–4 April 2019; IEEE: Piscataway, NJ, USA; pp. 1–6. [Google Scholar]
  106. Amin, J.; Sharif, M.; Yasmin, M.; Saba, T.; Raza, M. Use of machine intelligence to conduct analysis of human brain data for detection of abnormalities in its cognitive functions. Multimed. Tools Appl. 2020, 79, 10955–10973. [Google Scholar] [CrossRef]
  107. Amin, J.; Sharif, A.; Gul, N.; Anjum, M.A.; Nisar, M.W.; Azam, F.; Bukhari, S.A.C. Integrated design of deep features fusion for localization and classification of skin cancer. Pattern Recognit. Lett. 2020, 131, 63–70. [Google Scholar] [CrossRef]
  108. Amin, J.; Sharif, M.; Gul, N.; Raza, M.; Anjum, M.A.; Nisar, M.W.; Bukhari, S.A.C. Brain tumor detection by using stacked autoencoders in deep learning. J. Med. Syst. 2020, 44, 1–12. [Google Scholar] [CrossRef] [PubMed]
  109. Sharif, M.; Amin, J.; Raza, M.; Anjum, M.A.; Afzal, H.; Shad, S.A. Brain tumor detection based on extreme learning. Neural Comput. Appl. 2020, 32, 15975–15987. [Google Scholar] [CrossRef]
  110. Amin, J.; Sharif, M.; Anjum, M.A.; Raza, M.; Bukhari, S.A.C. Convolutional neural network with batch normalization for glioma and stroke lesion detection using MRI. Cogn. Syst. Res. 2020, 59, 304–311. [Google Scholar] [CrossRef]
  111. Muhammad, N.; Sharif, M.; Amin, J.; Mehboob, R.; Gilani, S.A.; Bibi, N.; Javed, H.; Ahmed, N. Neurochemical Alterations in sudden unexplained perinatal deaths—A review. Front. Pediatr. 2018, 6, 6. [Google Scholar] [CrossRef] [Green Version]
  112. Sharif, M.; Amin, J.; Nisar, M.W.; Anjum, M.A.; Muhammad, N.; Shad, S. A unified patch based method for brain tumor detection using features fusion. Cogn. Syst. Res. 2020, 59, 273–286. [Google Scholar] [CrossRef]
  113. Sharif, M.; Amin, J.; Siddiqa, A.; Khan, H.U.; Malik, M.S.A.; Anjum, M.A.; Kadry, S. Recognition of different types of leukocytes using YOLOv2 and optimized bag-of-features. IEEE Access 2020, 8, 167448–167459. [Google Scholar] [CrossRef]
  114. Anjum, M.A.; Amin, J.; Sharif, M.; Khan, H.U.; Malik, M.S.A.; Kadry, S. Deep semantic segmentation and multi-class skin lesion classification based on convolutional neural network. IEEE Access 2020, 8, 129668–129678. [Google Scholar] [CrossRef]
  115. Sharif, M.; Amin, J.; Yasmin, M.; Rehman, A. Efficient hybrid approach to segment and classify exudates for DR prediction. Multimed. Tools Appl. 2020, 79, 11107–11123. [Google Scholar] [CrossRef]
  116. Amin, J.; Sharif, M.; Anjum, M.A.; Khan, H.U.; Malik, M.S.A.; Kadry, S. An integrated design for classification and localization of diabetic foot ulcer based on CNN and YOLOv2-DFU models. IEEE Access 2020, 8, 228586–228597. [Google Scholar] [CrossRef]
  117. Amin, J.; Sharif, M.; Yasmin, M. Segmentation and classification of lung cancer: A review. Immunol. Endocr. Metab. Agents Med. Chem. (Former. Curr. Med. Chem. Immunol. Endocr. Metab. Agents) 2016, 16, 82–99. [Google Scholar] [CrossRef]
  118. Amin, J.; Sharif, M.; Anjum, M.A.; Nam, Y.; Kadry, S.; Taniar, D. Diagnosis of COVID-19 infection using three-dimensional semantic segmentation and classification of computed tomography images. Comput. Mater. Contin. 2021, 68, 2451–2467. [Google Scholar] [CrossRef]
  119. Amin, J.; Sharif, M.; Gul, E.; Nayak, R.S. 3D-semantic segmentation and classification of stomach infections using uncertainty aware deep neural networks. Complex Intell. Syst. 2021, 8, 3041–3057. [Google Scholar] [CrossRef]
  120. Amin, J.; Anjum, M.A.; Sharif, M.; Saba, T.; Tariq, U. An intelligence design for detection and classification of COVID19 using fusion of classical and convolutional neural network and improved microscopic features selection approach. Microsc. Res. Tech. 2021, 84, 2254–2267. [Google Scholar] [CrossRef]
  121. Amin, J.; Anjum, M.A.; Sharif, M.; Kadry, S.; Nam, Y.; Wang, S. Convolutional Bi-LSTM based human gait recognition using video sequences. CMC-Comput. Mater. Contin. 2021, 68, 2693–2709. [Google Scholar] [CrossRef]
  122. Amin, J.; Anjum, M.A.; Sharif, M.; Rehman, A.; Saba, T.; Zahra, R. Microscopic segmentation and classification of COVID-19 infection with ensemble convolutional neural network. Microsc. Res. Tech. 2021, 85, 385–397. [Google Scholar] [CrossRef]
  123. Saleem, S.; Amin, J.; Sharif, M.; Anjum, M.A.; Iqbal, M.; Wang, S.-H. A deep network designed for segmentation and classification of leukemia using fusion of the transfer learning models. Complex Intell. Syst. 2021, 8, 3105–3120. [Google Scholar] [CrossRef]
  124. Umer, M.J.; Amin, J.; Sharif, M.; Anjum, M.A.; Azam, F.; Shah, J.H. An integrated framework for COVID-19 classification based on classical and quantum transfer learning from a chest radiograph. Concurr. Comput. Pract. Exp. 2021, 34, e6434. [Google Scholar] [CrossRef]
  125. Amin, J.; Anjum, M.A.; Sharif, M.; Kadry, S.; Nam, Y. Fruits and vegetable diseases recognition using convolutional neural networks. Comput. Mater. Contin. 2021, 70, 619–635. [Google Scholar] [CrossRef]
  126. Linsky, T.W.; Vergara, R.; Codina, N.; Nelson, J.W.; Walker, M.J.; Su, W.; Barnes, C.O.; Hsiang, T.Y.; Esser-Nobis, K.; Yu, K.; et al. De novo design of potent and resilient hACE2 decoys to neutralize SARS-CoV-2. Science 2020, 370, 1208–1214. [Google Scholar] [CrossRef]
  127. Bhimavarapu, U.; Battineni, G. Automatic microaneurysms detection for early diagnosis of diabetic retinopathy using improved discrete particle swarm optimization. J. Pers. Med. 2022, 12, 317. [Google Scholar] [CrossRef] [PubMed]
  128. Berbar, M.A. Features extraction using encoded local binary pattern for detection and grading diabetic retinopathy. Health Inf. Sci. Syst. 2022, 10, 14. [Google Scholar] [CrossRef] [PubMed]
  129. Hardas, M.; Mathur, S.; Bhaskar, A.; Kalla, M. Retinal fundus image classification for diabetic retinopathy using SVM predictions. Phys. Eng. Sci. Med. 2022, 45, 1–11. [Google Scholar] [CrossRef] [PubMed]
  130. Tamim, N.; Elshrkawey, M.; Nassar, H. Accurate diagnosis of diabetic retinopathy and glaucoma using retinal fundus images based on hybrid features and genetic algorithm. Appl. Sci. 2021, 11, 6178. [Google Scholar] [CrossRef]
  131. Ramasamy, L.K.; Padinjappurathu, S.G.; Kadry, S.; Damaševičius, R. Detection of diabetic retinopathy using a fusion of textural and ridgelet features of retinal images and sequential minimal optimization classifier. PeerJ Comput. Sci. 2021, 7, e456. [Google Scholar] [CrossRef] [PubMed]
  132. Gharaibeh, N.; Al-Hazaimeh, O.M.; Al-Naami, B.; Nahar, K.M. An effective image processing method for detection of diabetic retinopathy diseases from retinal fundus images. Int. J. Signal Imaging Syst. Eng. 2018, 11, 206–216. [Google Scholar] [CrossRef]
  133. Orfao, J.; van der Haar, D. A comparison of computer vision methods for the combined detection of glaucoma, diabetic retinopathy and cataracts. In Medical Image Understanding and Analysis; Springer International Publishing: Cham, Switzerland, 2021; pp. 30–42. [Google Scholar]
  134. Hatua, A.; Subudhi, B.N.; Veerakumar, T.; Ghosh, A. Early detection of diabetic retinopathy from big data in hadoop framework. Displays 2021, 70, 102061. [Google Scholar] [CrossRef]
  135. Bibi, I.; Mir, J.; Raja, G. Automated detection of diabetic retinopathy in fundus images using fused features. Phys. Eng. Sci. Med. 2020, 43, 1253–1264. [Google Scholar] [CrossRef]
  136. Deepa, V.; Kumar, C.S.; Andrews, S.S. Automated Grading of Diabetic Retinopathy using Local-Spatial Descriptors. In Proceedings of the 2020 IEEE International Conference on Computing, Power and Communication Technologies (GUCON), Greater Noida, India, 2–4 October 2020; pp. 660–664. [Google Scholar]
  137. Yaqoob, M.K.; Ali, S.F.; Kareem, I.; Fraz, M.M. Feature-based optimized deep residual network architecture for diabetic retinopathy detection. In Proceedings of the 2020 IEEE 23rd International Multitopic Conference (INMIC), Bahawalpur, Pakistan, 5–7 November 2020; pp. 1–6. [Google Scholar]
  138. Srivastava, V.; Purwar, R.K. Classification of eye-fundus images with diabetic retinopathy using shape based features integrated into a convolutional neural network. J. Inf. Optim. Sci. 2020, 41, 217–227. [Google Scholar] [CrossRef]
  139. Jadhav, A.S.; Patil, P.B.; Biradar, S. Analysis on diagnosing diabetic retinopathy by segmenting blood vessels, optic disc and retinal abnormalities. J. Med. Eng. Technol. 2020, 44, 299–316. [Google Scholar] [CrossRef]
140. Honnungar, S.; Mehra, S.; Joseph, S. Diabetic retinopathy identification and severity classification. Fall 2016. [Google Scholar]
  141. Claro, M.; Veras, R.; Santana, A.; Araújo, F.; Silva, R.; Almeida, J.; Leite, D. An hybrid feature space from texture information and transfer learning for glaucoma classification. J. Vis. Commun. Image Represent. 2019, 64, 102597. [Google Scholar] [CrossRef]
  142. Anupama, B.C.; Rao, S.N. Performance analysis of learning algorithms for automated detection of glaucoma. Int. Res. J. Eng. Technol. 2019, 6, 1992–1998. [Google Scholar]
  143. Leeza, M.; Farooq, H. Detection of severity level of diabetic retinopathy using Bag of features model. IET Comput. Vis. 2019, 13, 523–530. [Google Scholar] [CrossRef]
  144. Kamil, R.; Al-Saedi, K.; Al-Azawi, R. An accurate system to measure the diabetic retinopathy using svm classifier. Ciência Técnica Vitivinícola 2018, 33, 135–139. [Google Scholar]
  145. Levenkova, A.; Sowmya, A.; Kalloniatis, M.; Ly, A.; Ho, A. Automatic detection of diabetic retinopathy features in ultra-wide field retinal images. In Proceedings of the Medical Imaging 2017: Computer-Aided Diagnosis, Orlando, FL, USA, 11–16 February 2017; SPIE, 2017; Volume 10134, pp. 409–416. [Google Scholar]
  146. Sungheetha, A.; Sharma, R. Design an early detection and classification for diabetic retinopathy by deep feature extraction based convolution neural network. J. Trends Comput. Sci. Smart Technol. 2021, 3, 81–94. [Google Scholar] [CrossRef]
  147. Ragab, M.; Aljedaibi, W.H.; Nahhas, A.F.; Alzahrani, I.R. Computer aided diagnosis of diabetic retinopathy grading using spiking neural network. Comput. Electr. Eng. 2022, 101, 108014. [Google Scholar] [CrossRef]
  148. Murugappan, M.; Prakash, N.; Jeya, R.; Mohanarathinam, A.; Hemalakshmi, G.; Mahmud, M. A novel few-shot classification framework for diabetic retinopathy detection and grading. Measurement 2022, 200, 111485. [Google Scholar] [CrossRef]
  149. Fang, L.; Qiao, H. Diabetic retinopathy classification using a novel DAG network based on multi-feature of fundus images. Biomed. Signal Process. Control. 2022, 77, 103810. [Google Scholar] [CrossRef]
  150. Vijayalakshmi, P.S.; Kumar, M.J. An improved grey wolf optimization algorithm (IGWO) for the detection of diabetic retinopathy using convnets and region based segmentation techniques. Int. J. Health Sci. 2022, 13100–13118. [Google Scholar] [CrossRef]
  151. Gundluru, N.; Rajput, D.S.; Lakshmanna, K.; Kaluri, R.; Shorfuzzaman, M.; Uddin, M.; Khan, M.A.R. Enhancement of detection of diabetic retinopathy using Harris Hawks optimization with deep learning model. Comput. Intell. Neurosci. 2022, 2022, 8512469. [Google Scholar] [CrossRef]
  152. AbdelMaksoud, E.; Barakat, S.; Elmogy, M. A computer-aided diagnosis system for detecting various diabetic retinopathy grades based on a hybrid deep learning technique. Med. Biol. Eng. Comput. 2022, 60, 2015–2038. [Google Scholar] [CrossRef]
  153. Yaqoob, M.; Ali, S.; Bilal, M.; Hanif, M.; Al-Saggaf, U. ResNet based deep features and random forest classifier for diabetic retinopathy detection. Sensors 2021, 21, 3883. [Google Scholar] [CrossRef]
  154. Mohan, N.J.; Murugan, R.; Goel, T.; Mirjalili, S.; Roy, P. A novel four-step feature selection technique for diabetic retinopathy grading. Phys. Eng. Sci. Med. 2021, 44, 1351–1366. [Google Scholar] [CrossRef]
  155. Atteia, G.; Samee, N.A.; Hassan, H.Z. DFTSA-Net: Deep feature transfer-based stacked autoencoder network for DME diagnosis. Entropy 2021, 23, 1251. [Google Scholar] [CrossRef]
  156. Bodapati, J.D.; Naralasetti, V.; Shareef, S.N.; Hakak, S.; Bilal, M.; Maddikunta, P.K.R.; Jo, O. Blended multi-modal deep convnet features for diabetic retinopathy severity prediction. Electronics 2020, 9, 914. [Google Scholar] [CrossRef]
  157. Gharaibeh, N.; Al-Hazaimeh, O.M.; Abu-Ein, A.; Nahar, K.M. A hybrid svm naïve-bayes classifier for bright lesions recognition in eye fundus images. Int. J. Electr. Eng. Informatics 2021, 13, 530–545. [Google Scholar] [CrossRef]
  158. Mateen, M.; Wen, J.; Nasrullah, N.; Sun, S.; Hayat, S. Exudate detection for diabetic retinopathy using pretrained convolutional neural networks. Complexity 2020, 2020, 5801870. [Google Scholar] [CrossRef] [Green Version]
  159. Paradisa, R.H.; Sarwinda, D.; Bustamam, A.; Argyadiva, T. Classification of diabetic retinopathy through deep feature extraction and classic machine learning approach. In Proceedings of the 2020 3rd International Conference on Information and Communications Technology (ICOIACT), Yogyakarta, Indonesia, 24–25 November 2020; pp. 377–381. [Google Scholar]
  160. Nazir, T.; Irtaza, A.; Rashid, J.; Nawaz, M.; Mehmood, T. Diabetic retinopathy lesions detection using faster-RCNN from retinal images. In Proceedings of the 2020 First International Conference of Smart Systems and Emerging Technologies (SMARTTECH), Riyadh, Saudi Arabia, 3–5 November 2020; pp. 38–42. [Google Scholar]
  161. Rehman, M.U.; Khan, S.H.; Abbas, Z.; Rizvi, S.D. Classification of diabetic retinopathy images based on customised CNN architecture. In Proceedings of the 2019 Amity International Conference on Artificial Intelligence (AICAI), Dubai, United Arab Emirates, 4–6 February 2019; pp. 244–248. [Google Scholar]
  162. Qummar, S.; Khan, F.G.; Shah, S.; Khan, A.; Shamshirband, S.; Rehman, Z.U.; Khan, I.A.; Jadoon, W. A deep learning ensemble approach for diabetic retinopathy detection. IEEE Access 2019, 7, 150530–150539. [Google Scholar] [CrossRef]
  163. Chan, G.C.Y.; Shah, S.A.A.; Tang, T.B.; Lu, C.-K.; Muller, H.; Meriaudeau, F. Deep features and data reduction for classification of SD-OCT images: Application to diabetic macular edema. In Proceedings of the 2018 International Conference on Intelligent and Advanced System (ICIAS), Kuala Lumpur, Malaysia, 13–14 August 2018; pp. 1–4. [Google Scholar]
  164. Mansour, R.F. Deep-learning-based automatic computer-aided diagnosis system for diabetic retinopathy. Biomed. Eng. Lett. 2018, 8, 41–57. [Google Scholar] [CrossRef]
  165. Gargeya, R.; Leng, T. Automated identification of diabetic retinopathy using deep learning. Ophthalmology 2017, 124, 962–969. [Google Scholar] [CrossRef]
  166. Gadekallu, T.R.; Khare, N.; Bhattacharya, S.; Singh, S.; Maddikunta, P.K.R.; Ra, I.-H.; Alazab, M. Early detection of diabetic retinopathy using PCA-firefly based deep learning model. Electronics 2020, 9, 274. [Google Scholar] [CrossRef] [Green Version]
  167. Welikala, R.; Fraz, M.; Dehmeshki, J.; Hoppe, A.; Tah, V.; Mann, S.; Williamson, T.; Barman, S. Genetic algorithm based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy. Comput. Med. Imaging Graph. 2015, 43, 64–77. [Google Scholar] [CrossRef] [Green Version]
  168. Herliana, A.; Arifin, T.; Susanti, S.; Hikmah, A.B. Feature selection of diabetic retinopathy disease using particle swarm optimization and neural network. In Proceedings of the 2018 6th International Conference on Cyber and IT Service Management (CITSM), Parapat, Indonesia, 7–9 August 2018; pp. 1–4. [Google Scholar]
  169. Le, T.M.; Vo, T.M.; Pham, T.N.; Dao, S.V.T. A novel wrapper–Based feature selection for early diabetes prediction enhanced with a metaheuristic. IEEE Access 2021, 9, 7869–7884. [Google Scholar] [CrossRef]
  170. Bilal, A.; Sun, G.; Mazhar, S.; Imran, A. Improved Grey Wolf optimization-based feature selection and classification using CNN for diabetic retinopathy detection. In Evolutionary Computing and Mobile Sustainable Networks; Springer: Singapore, 2022; pp. 1–14. [Google Scholar]
  171. Lohrmann, C.; Luukka, P. Fuzzy similarity and entropy (FSAE) feature selection revisited by using intra-class entropy and a normalized scaling factor. In Intelligent Systems and Applications in Business and Finance; Luukka, P., Stoklasa, J., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 61–92. [Google Scholar]
  172. Abràmoff, M.D.; Lou, Y.; Erginay, A.; Clarida, W.; Amelon, R.; Folk, J.C.; Niemeijer, M. Improved automated detection of diabetic retinopathy on a publicly available dataset through integration of deep learning. Investig. Ophthalmol. Vis. Sci. 2016, 57, 5200–5206. [Google Scholar] [CrossRef] [Green Version]
  173. Decencière, E.; Cazuguel, G.; Zhang, X.; Thibault, G.; Klein, J.-C.; Meyer, F.; Marcotegui, B.; Quellec, G.; Lamard, M.; Danno, R.; et al. TeleOphta: Machine learning and image processing methods for teleophthalmology. IRBM 2013, 34, 196–203. [Google Scholar] [CrossRef]
  174. Hoover, A.D.; Kouznetsova, V.; Goldbaum, M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 2000, 19, 203–210. [Google Scholar] [CrossRef] [Green Version]
  175. Staal, J.; Abramoff, M.D.; Niemeijer, M.; Viergever, M.A.; Van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509. [Google Scholar] [CrossRef]
  176. Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J.; et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016, 316, 2402–2410. [Google Scholar] [CrossRef]
  177. Kauppi, T.; Kalesnykiene, V.; Kamarainen, J.K.; Lensu, L.; Sorri, I.; Uusitalo, H.; Kälviäinen, H.; Pietilä, J. DIARETDB0: Evaluation database and methodology for diabetic retinopathy algorithms. Mach. Vis. Pattern Recognit. Res. Group Lappeenranta Univ. Technol. Finl. 2006, 73, 1–17. [Google Scholar]
  178. Kauppi, T.; Kalesnykiene, V.; Kamarainen, J.-K.; Lensu, L.; Sorri, I.; Raninen, A.; Voutilainen, R.; Uusitalo, H.; Kalviainen, H.; Pietila, J. The diaretdb1 diabetic retinopathy database and evaluation protocol. In Proceedings of the British Machine Vision Conference 2007, Coventry, UK, 10–13 September 2007; Volume 1, p. 10. [Google Scholar]
  179. Naqvi, S.A.G.; Zafar, M.F.; Haq, I.U. Referral system for hard exudates in eye fundus. Comput. Biol. Med. 2015, 64, 217–235. [Google Scholar] [CrossRef] [PubMed]
  180. Fraz, M.M.; Remagnino, P.; Hoppe, A.; Uyyanonvara, B.; Rudnicka, A.R.; Owen, C.G.; Barman, S.A. An Ensemble classification-based approach applied to retinal blood vessel segmentation. IEEE Trans. Biomed. Eng. 2012, 59, 2538–2548. [Google Scholar] [CrossRef] [PubMed]
  181. Niemeijer, M.; van Ginneken, B.; Cree, M.J.; Mizutani, A.; Quellec, G.; Sanchez, C.I.; Zhang, B.; Hornero, R.; Lamard, M.; Muramatsu, C.; et al. Retinopathy online challenge: Automatic detection of microaneurysms in digital color fundus photographs. IEEE Trans. Med. Imaging 2010, 29, 185–195. [Google Scholar] [CrossRef] [PubMed]
  182. Budai, A.; Bock, R.; Maier, A.; Hornegger, J.; Michelson, G. Robust vessel segmentation in fundus images. Int. J. Biomed. Imaging 2013, 2013, 154860. [Google Scholar] [CrossRef] [Green Version]
  183. Rokade, P.M.; Manza, R.R.J. Automatic detection of hard exudates in retinal images using haar wavelet transform. Eye 2015, 4, 402–410. [Google Scholar]
  184. Prentašić, P.; Lončarić, S.; Vatavuk, Z.; Benčić, G.; Subašić, M.; Petković, T.; Dujmović, L.; Malenica-Ravlić, M.; Budimlija, N.; Tadić, R. Diabetic retinopathy image database(DRiDB): A new database for diabetic retinopathy screening programs research. In Proceedings of the 2013 8th International Symposium on Image and Signal Processing and Analysis (ISPA), Trieste, Italy, 4–6 September 2013; pp. 711–716. [Google Scholar]
  185. Alyoubi, W.L.; Shalash, W.M.; Abulkhair, M.F. Diabetic retinopathy detection through deep learning techniques: A review. Inform. Med. Unlocked 2020, 20, 100377. [Google Scholar] [CrossRef]
  186. Bandara, A.; Giragama, P. A retinal image enhancement technique for blood vessel segmentation algorithm. In Proceedings of the 2017 IEEE International Conference on Industrial and Information Systems (ICIIS), Peradeniya, Sri Lanka, 15–16 December 2017; pp. 1–5. [Google Scholar]
  187. Gondal, W.M.; Köhler, J.M.; Grzeszick, R.; Fink, G.A.; Hirsch, M. Weakly-supervised localization of diabetic retinopathy lesions in retinal fundus images. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 2069–2073. [Google Scholar]
  188. Jeena, R.; Kumar, A.S.; Mahadevan, K. Stroke diagnosis from retinal fundus images using multi texture analysis. J. Intell. Fuzzy Syst. 2019, 36, 2025–2032. [Google Scholar] [CrossRef]
  189. Wang, X.; Jiang, X.; Ren, J. Blood vessel segmentation from fundus image by a cascade classification framework. Pattern Recognit. 2019, 88, 331–341. [Google Scholar] [CrossRef]
  190. Li, X.; Pang, T.; Xiong, B.; Liu, W.; Liang, P.; Wang, T. Convolutional neural networks based transfer learning for diabetic retinopathy fundus image classification. In Proceedings of the 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Shanghai, China, 14–16 October 2017; pp. 1–11. [Google Scholar]
  191. Kwasigroch, A.; Jarzembinski, B.; Grochowski, M. Deep CNN based decision support system for detection and assessing the stage of diabetic retinopathy. In Proceedings of the 2018 International Interdisciplinary PhD Workshop (IIPhDW), Świnoujście, Poland, 9–12 May 2018; pp. 111–116. [Google Scholar]
  192. Khojasteh, P.; Aliahmad, B.; Arjunan, S.P.; Kumar, D.K. Introducing a novel layer in convolutional neural network for automatic identification of diabetic retinopathy. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 5938–5941. [Google Scholar]
  193. Hasan, M.K.; Alam, M.A.; Elahi, M.T.; Roy, S.; Martí, R. DRNet: Segmentation and localization of optic disc and Fovea from diabetic retinopathy image. Artif. Intell. Med. 2021, 111, 102001. [Google Scholar] [CrossRef]
  194. Bengani, S. Automatic segmentation of optic disc in retinal fundus images using semi-supervised deep learning. Multimed. Tools Appl. 2021, 80, 3443–3468. [Google Scholar] [CrossRef]
  195. Le, D.; Alam, M.; Yao, C.K.; Lim, J.I.; Hsieh, Y.-T.; Chan, R.V.P.; Toslak, D.; Yao, X. Transfer learning for automated OCTA Detection of diabetic retinopathy. Transl. Vis. Sci. Technol. 2020, 9, 35. [Google Scholar] [CrossRef] [PubMed]
  196. Dietter, J.; Haq, W.; Ivanov, I.V.; Norrenberg, L.A.; Völker, M.; Dynowski, M.; Röck, D.; Ziemssen, F.; Leitritz, M.A.; Ueffing, M. Optic disc detection in the presence of strong technical artifacts. Biomed. Signal Process. Control. 2019, 53, 101535. [Google Scholar] [CrossRef]
  197. Li, F.; Liu, Z.; Chen, H.; Jiang, M.; Zhang, X.; Wu, Z. Automatic detection of diabetic retinopathy in retinal fundus photographs based on deep learning algorithm. Transl. Vis. Sci. Technol. 2019, 8, 4. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  198. Costa, P.; Galdran, A.; Smailagic, A.; Campilho, A. A weakly-supervised framework for interpretable diabetic retinopathy detection on retinal images. IEEE Access 2018, 6, 18747–18758. [Google Scholar] [CrossRef]
  199. Abbas, Q.; Fondon, I.; Sarmiento, A.; Jiménez, S.; Alemany, P. Automatic recognition of severity level for diagnosis of diabetic retinopathy using deep visual features. Med. Biol. Eng. Comput. 2017, 55, 1959–1974. [Google Scholar] [CrossRef] [PubMed]
  200. van Grinsven, M.J.J.P.; van Ginneken, B.; Hoyng, C.B.; Theelen, T.; Sanchez, C.I. Fast convolutional neural network training using selective data sampling: Application to hemorrhage detection in color fundus images. IEEE Trans. Med. Imaging 2016, 35, 1273–1284. [Google Scholar] [CrossRef]
  201. Chen, X.; Xu, Y.; Wong, D.W.K.; Wong, T.Y.; Liu, J. Glaucoma detection based on deep convolutional neural network. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 715–718. [Google Scholar]
  202. Worrall, D.E.; Wilson, C.M.; Brostow, G.J. Automated retinopathy of prematurity case detection with convolutional neural networks. In Deep Learning and Data Labeling for Medical Applications; Springer International Publishing: Cham, Switzerland, 2016; pp. 68–76. [Google Scholar]
Figure 1. (a) NPDR lesions, (b) optic disc and blood vessel [10].
Figure 2. Framework of survey.
Figure 3. Overview of research strategies/methods for DR detection.
Figure 4. DR identification process [21].
Figure 5. Image preprocessing techniques applied on Messidor dataset [54].
Figure 6. Segmentation process for DR [10].
Table 1. Levels of DR.
Types of DR | Lesions
Normal | No DR lesions
Mild NPDR | MAs only
Moderate NPDR | An increased number of MAs, HE, SE, and HMs in the retina
Severe NPDR | Abnormal features visible in all four retinal quadrants
PDR | Irregular small blood vessels present in the retina
Table 2. Comparison between this study and other surveys.
Contents compared across the present study and the surveys [16,17,18,19,20]:
- Methods for Identification of DR
- Preprocessing
- Segmentation
- Hand-Crafted Feature Extraction
- Automated Classification of DR Lesions by Using Deep Features
- Benchmark Datasets
- Performance Evaluation
- Challenges and Discussion
Table 3. Overview of reported preprocessing techniques for retinal images.
Ref # | Year | Methodology | Preprocessing Methods | Datasets | Results
[22] | 2022 | CLAHE, Contrast-Enhanced Canny Edge Detection (CECED) | CLAHE | Messidor, Kaggle | ACC = 98.50%, SF = 98.90%, SP = 98.00%; ACC = 98.00%, SF = 98.70%, SP = 97.80%
[23] | 2022 | Deformable Transformation, B-Spline Registration, Xception, Inception-V3, DenseNet-121, ResNet-50 | Deformable Registration | APTOS | ACC = 85.28%
[24] | 2022 | Gaussian Scale Space (GST), Krizhevsky Augmentation, Weighted Gaussian Blur, NLMD | Weighted Gaussian Blur | Kaggle | ACC = 97.10%
[38] | 2022 | Morphological Gradient, Atom Search Optimization | Morphological Gradient | Kaggle | ACC = 99.81%
[39] | 2022 | MTRO, WGA, Grayscale Conversion, Shade Correction | Grayscale Conversion, Shade Correction | DRIVE | ACC = 95.42%, SF = 93.10%, SP = 93.20%
[40] | 2022 | U-Net, Hybrid Entropy Model, Gabor Filter, Median Filter | Median Filter | DIARETDB0, DIARETDB1 | ACC = 95.90%; ACC = 95.48%
[41] | 2022 | Adaptive Histogram Equalization Filter, CLAHE, Gamma Correction, Morphological Reconstruction, K-Means Clustering | Adaptive Histogram Equalization Filter, CLAHE | Messidor | ACC = 97.60%, SF = 98.40%, SP = 90.70%
[42] | 2022 | Data Augmentation, Cropping, Flipping, Rotation, Multi-Inception-V4, Stochastic Gradient Descent (SGD) | Data Augmentation, Cropping, Flipping, Rotation | Messidor-2 | ACC = 99.20%, SF = 92.50%, SP = 96.10%
[43] | 2021 | Blurring, Bounding Box, Inception-ResNet | Bounding Box | Messidor, APTOS | ACC = 72.33%; ACC = 82.18%
[44] | 2021 | CLAHE, Green Channel, Erosion, Dilation, Otsu Thresholding | CLAHE | Messidor, Messidor-2, DRISHTI-GS | SF = 100%; SF = 94.44%; SF = 100%
[45] | 2021 | Grayscale Conversion, Binarization, Adaptive Histogram Equalization, CLAHE, Canny Edge Detection, Green Channel, Dilation, Erosion | Adaptive Histogram Equalization, CLAHE | DIARETDB0, DIARETDB1 | ACC = 87.20%; ACC = 85.80%
[46] | 2021 | Gaussian Blur, Data Augmentation, Global Average Pooling 2D, Adam Optimization | Gaussian Blur | Kaggle | ACC = 90%
[47] | 2021 | Annotation Bounding Box, Region of Interest, Gaussian Filter, Cropping, Contrast Variations | Gaussian Filter, Cropping | Kaggle, APTOS | ACC = 97.20%
[48] | 2021 | U-Net, Otsu Thresholding, Region of Interest, Gaussian Filter, CLAHE | Gaussian Filter, CLAHE | IDRiD | SF = 87.55%
[11] | 2021 | U-Net, MResUNet, CLAHE, Cropping, Patching, Cumulative Histogram Equalization, Weighted Cross-Entropy Loss, Mathematical Morphology | CLAHE, Cumulative Histogram Equalization, Mathematical Morphology | IDRiD, DIARETDB1 | SF = 61.96%; SF = 85.87%
[50] | 2021 | Green Channel, CLAHE, Morphological Operation, Thresholding | CLAHE, Morphological Operation | IDRiD | ACC = 83.84%
[51] | 2021 | Gabor Filter, SVM, Candidate Region | Gabor Filter | IDRiD | ACC = 80.80%, SF = 76.75%
[52] | 2021 | High-Pass Filter, Morphological Operations, Top-Hat Filter, Gaussian Mixture Model (GMM) | High-Pass Filter, Morphological Operations, Top-Hat Filter | DIARETDB0, DIARETDB1, IDRiD | ACC = 94.19%; ACC = 97.43%; ACC = 93.18%
[53] | 2021 | Channel Splitting, Blue Channel, Hue Saturation Value (HSV), Patch Segmentation, Grayscale Conversion, SVM | Grayscale Conversion, Hue Saturation Value (HSV) | IDRiD | ACC = 96.95%, SF = 89%, SP = 96%
[55] | 2020 | Thresholding, Contrast Enhancement, Adaptive Average Filter, Meta-Heuristic Algorithm (FP-CSO), Deep CNN, RGB-to-Lab Conversion, Histogram Equalization, SIFT | RGB-to-Lab Conversion, Histogram Equalization | High-Resolution Fundus (HRF) | ACC = 93.30%
[56] | 2020 | EfficientNet-B5, Batch Normalization, Rectified Adam Optimizer, Group Normalization, Gaussian Blur Mask, CLAHE, Local Average Color Filter | Local Average Color Filter, Gaussian Blur Mask, CLAHE | APTOS | ACC = 90%
[60] | 2020 | Grayscale Conversion, Morphological Operations, Regional Minima (RMIN) Operator, CLAHE, Marker-Controlled Watershed Segmentation, Morphological Gradient (MG), Top-Hat Transform | Top-Hat Transform, Grayscale Conversion, Morphological Operations, CLAHE, Morphological Gradient (MG) | DIARETDB0, DIARETDB1 | SF = 87%, SP = 93%
[57] | 2020 | Non-Local Means Filter (NLMF), CLAHE, 2D Gaussian Low-Pass Filter, Top-Hat Transform, Green Channel | Non-Local Means Filter (NLMF), CLAHE, 2D Gaussian Low-Pass Filter | e-Ophtha, DIARETDB0, DIARETDB1 | ACC = 96.95%; ACC = 97.95%; ACC = 97.35%
[58] | 2019 | Local Average Filter, Clipping, Fractional, SVM, TLBO, Max-Pooling | Local Average Filter, Clipping | Kaggle | ACC = 86.17%
[59] | 2018 | Image Resize, Wavelet Transform, Max-Pool Operation, Batch Normalization, Dropout, Adam Optimizer | Image Resize | IDRiD | ACC = 98.60%
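Two of the preprocessing operations recurring throughout Table 3, green-channel extraction and histogram-based contrast enhancement, are simple to prototype. The sketch below is a minimal NumPy illustration that equalizes the green channel of a synthetic low-contrast image; it applies plain global histogram equalization rather than the CLAHE variant most of the surveyed papers use, so treat it as a simplified stand-in, not any cited pipeline.

```python
import numpy as np

def preprocess_fundus(rgb):
    """Extract the green channel (highest vessel/lesion contrast in fundus
    images) and apply global histogram equalization as a contrast enhancer."""
    green = rgb[..., 1].astype(np.uint8)
    hist = np.bincount(green.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map intensities so the equalized histogram is approximately flat.
    scale = max(cdf[-1] - cdf_min, 1)
    lut = np.clip(np.round((cdf - cdf_min) / scale * 255), 0, 255).astype(np.uint8)
    return lut[green]

rng = np.random.default_rng(0)
# Synthetic low-contrast "fundus" image: intensities squeezed into [60, 120).
img = rng.integers(60, 120, size=(64, 64, 3), dtype=np.uint8)
out = preprocess_fundus(img)  # contrast stretched to the full [0, 255] range
```

CLAHE differs in that it equalizes per tile with a clip limit on histogram bins, which avoids the noise amplification global equalization can cause in the dark borders of fundus images.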
Table 4. Overview of reported techniques for DR lesion segmentation.
Table 4. Overview of reported techniques for DR lesion segmentation.
| Ref # | Year | Methodology | Segmentation Techniques | Datasets | Results |
|---|---|---|---|---|---|
| [64] | 2022 | U-Net, VGG-Net, Image Resize, Green Channel | U-Net | EyePACS-1, Messidor-2, DIARETDB0 | ACC = 96.60%, ACC = 93.95%, ACC = 92.25% |
| [65] | 2022 | Morphological Operation, 2D Discrete Wavelet, K-Nearest Neighbor | 2D Discrete Wavelet, Morphological Operation | DIARETDB1 | ACC = 95%, SP = 87.56%, SF = 92.60% |
| [66] | 2022 | CNN U-Net, AlexNet, VGGNet, Green Channel, Adam Optimizer | CNN U-Net | IDRiD, DIARETDB1 | ACC = 98.68%, Dice Score = 86.51% |
| [67] | 2022 | Adaptive Active Contour, Otsu Thresholding, Morphological Operation, Median Filtering, Open-Close Watershed Transform, GLCM, ROI, LTP | Adaptive Active Contour, Watershed Transform, Otsu Thresholding | IDRiD | ACC = 60% |
| [68] | 2021 | MSRNet, MS-EfficientNet, U-Net, Adam Optimizer | MSRNet, U-Net | e_ophtha_MAs | SF = 71.50% |
| [69] | 2021 | EAD-Net, U-Net, CAM, PAM | EAD-Net, U-Net | e_ophtha_EX, IDRiD, local dataset | ACC = 97%, ACC = 78%, ACC = 84.86% |
| [70] | 2021 | U-Net, Model-Driven Bubble Approach, Hough Transform, IRHSF Illumination Correction, Logarithmic Transformation | U-Net | Messidor | ACC = 91% |
| [72] | 2021 | Region Growing, Genetic Algorithm (GA), FCM, Clustering Method, K-Means | Region Growing | Local Dataset | SF = 78% |
| [60] | 2020 | Watershed Transform, Mathematical Morphology Operation, CLAHE, RBF-NN, Regional Minima | Watershed Transform | DIARETDB0, DIARETDB1 | SF = 87%, SP = 93% |
| [73] | 2020 | Local Convergence Filters (LCFs), Sliding Band Filter, De-Noising Techniques, Image-Adapted Thresholds, Region Growing, Non-Maximum Suppression (NMS) | Image-Adapted Threshold, Region Growing | e_ophtha_MAs, SCREEN-DR | SF = 64%, SF = 81% |
| [74] | 2020 | U-Net, ResNet34, Initialized to Convolution NN Resize (ICNR) | U-Net | IDRiD, e_ophtha_MAs, e_ophtha_HE | ACC = 99.88%, ACC = 99.98%, ACC = 99.98% |
| [75] | 2020 | Region Growing, Gaussian and Gabor Filters, Histogram Equalization, Grayscale Conversion, K-Means, Wavelet (W), COM, Histogram (H), RLM, LMT, SLg, Multi-Layer Perceptron (MLP) | K-Means Clustering, Region Growing Segmentation | 2D RF | ACC = 99.73% |
| [76] | 2020 | Region Growing, Ellipse Fitting, Green Channel, Morphological Dilation Operation, Otsu Thresholding, Morphological Operation | Otsu Thresholding, Morphological Operation, Region Growing | Messidor, DIARETDB1, ONHSD, DRIONS, DRISHTI, RIM-ONE | ACC = 99% |
| [77] | 2020 | Deep CNN, DeepLabV3, SegNet, Conditional Random Field (CRF) | DeepLabV3, SegNet | IDRiD | ACC = 88% |
| [78] | 2019 | U-Nets, LocalNet, GlobalNet, Fusion Module, Data Augmentation, Concatenate, Global Supervision, Local Supervision | U-Nets | ISBI 2018 | ACC = 89% |
| [79] | 2019 | CNN, ResNet-50, Discriminative Restricted Boltzmann Machines, OPF, KNN, SVM | CNN | DIARETDB1, e_ophtha | ACC = 90.60%, ACC = 89.10% |
| [80] | 2019 | Random Forest Classifier, K-Means, Naïve Bayes, Morphological Operation, Grayscale Conversion, Gamma Correction, Region-Based Features | K-Means, Morphological Operation | DIARETDB0, DIARETDB1 | ACC = 93.58%, ACC = 83.63% |
| [81] | 2019 | U-Net, HEDNet, HEDNet+cGAN, Conditional Generative Adversarial Network (cGAN), PatchGAN, VGG16, Weighted Binary Cross-Entropy Loss, CLAHE, Bilateral Filter | U-Net, HEDNet, HEDNet+cGAN | IDRiD | Precision = 84.05% |
| [82] | 2019 | Deep CNN, Binary Cross-Entropy, VGG16 | Deep CNN | IDRiD, Drishti-GS | Jaccard Index (IoU) = 85.72% |
| [83] | 2018 | CNN-Based U-Net, Bootstrapped Cross-Entropy, Instance Normalization, Atrous Convolutions | CNN-Based U-Net | Messidor, DRIONS-DB, DRISHTI-GS | Dice = 95.70%, Dice = 95.50%, Dice = 96.40% |
| [84] | 2018 | CNN, GoogLeNet, Inception-V3, VGG16, ResNet, AlexNet, Sliding Windows | CNN | Kaggle, e_ophtha | ACC = 98%, ACC = 95% |
| [85] | 2018 | Dynamic Decision Thresholding, Adaptive Contrast Enhancement, Canny Edge Detection, Circular Hough Transform, Morphological Filling | Dynamic Decision Thresholding | Messidor, DIARETDB1, STARE, e_ophtha_EX | ACC = 93.40%, ACC = 93.40%, ACC = 93.40%, ACC = 93.40% |
| [87] | 2018 | Bat Meta-Heuristic Algorithm, Optimum Thresholding, Grayscale Conversion, Morphological Operations, Ellipse Fitting | Bat Meta-Heuristic Algorithm, Optimum Thresholding | Messidor, DIARETDB1 | ACC = 99%, ACC = 97% |
| [88] | 2018 | Adaptive Threshold, Local Contrast Enhancement, Mathematical Morphology, Grayscale Conversion, Gaussian Smoothing, Histogram Equalization, ANN, KNN, Geometric, Tree-Based, and Probabilistic Classifiers | Adaptive Threshold, Mathematical Morphology, Gaussian Smoothing, Histogram Equalization | DIARETDB1 | ACC = 100% |
| [91] | 2018 | Circular Hough Transform, Morphological Operations, Average Histogram, Contrast Enhancement, CCA | Circular Hough Transform | Messidor, DRIVE, DIARETDB1, IDRiD, Local Dataset | SF = 96.80% |
| [89] | 2015 | FSVM, Morphological Operations, Circular Hough Transform | Morphological Operations, Circular Hough Transform | Local Dataset | SF = 94.10%, SP = 90% |
| [90] | 2012 | Naïve Bayes, Region Growing, Background Correction | Adaptive Region Growing | Local Dataset | ACC = 95% |
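Several of the classical pipelines in Table 4 ([67,76,85,87]) share the same skeleton: take the green channel of the fundus image (where lesions and vessels have the highest contrast), binarize with Otsu's threshold, and clean up the mask with morphological operations. The following is a minimal NumPy-only sketch of that skeleton under simplifying assumptions (no specific paper's implementation; a real pipeline would use library routines such as OpenCV's `cv2.threshold` with `THRESH_OTSU`):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for a uint8 image (maximizes between-class variance)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0, sum0 = 0.0, 0.0
    for t in range(256):
        w0 += hist[t]                      # weight of the "background" class
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bright lesions (e.g., exudates) stand out against the darker retinal background
# in the green channel; a toy 4x4 "green channel" with a few bright pixels:
green = np.array([[10, 12, 11, 200],
                  [12, 10, 210, 205],
                  [11, 215, 10, 12],
                  [220, 11, 12, 10]], dtype=np.uint8)
mask = green > otsu_threshold(green)       # candidate-lesion mask
```

Morphological opening/closing would then remove isolated false positives from `mask` before region-level analysis.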
Table 5. Hand-crafted feature techniques used for DR detection.
| Ref # | Year | Methodology | Hand-Crafted Feature Extraction Techniques | Datasets | Results |
|---|---|---|---|---|---|
| [95] | 2022 | RNN, Binary Image Extraction, Histogram Equalization, Pseudo-Color Preprocessing, GLCM | GLCM | Messidor | ACC = 97%, SP = 99%, SF = 95% |
| [96] | 2022 | KNN, SVM, DA, GLCM, GLDM, GLRLM, PSO | GLCM, GLDM, GLRLM | DRIVE | ACC = 100% |
| [130] | 2022 | PBPSO Clustering, GLCM, PSO Algorithm, ANN, Fuzzy Logic (FL), Neuro-Fuzzy, Fuzzy Contrast Enhancement | GLCM | DIARETDB0 | ACC = 99.90% |
| [131] | 2022 | SVM, CNN, Histogram Matching, Green Channel, CLAHE, Unsharp Filter, Median Filter, Run-Length Encoding, LBF (ULBPEZ) | LBF (ULBPEZ) | Messidor-2, EyePACS | ACC = 97.31%, ACC = 93.86% |
| [132] | 2022 | GMM, K-Means, GLCM, PCA, MAP, Grayscale Conversion, Morphological Operations, Average Filter, Adaptive Equalization, Histogram Equalization | GLCM | DIARETDB1 | ACC = 77.30% |
| [133] | 2021 | FOS, HOS, HOG, Decision Tree (DT), Naïve Bayes, KNN, Genetic Algorithm (GA) | FOS, HOS, HOG | High-Resolution Fundus (HRF) | ACC = 96.67% |
| [134] | 2021 | Sequential Minimal Optimization (SMO), GLCM, GLRLM, CRT, Image Conversion, Morphological Operations | GLCM, GLRLM, CRT | DIARETDB1, Kaggle | ACC = 97.05%, ACC = 91% |
| [136] | 2021 | HOG, GLCM, Green Channel, Grayscale Conversion, Inception-V3, SVM, SqueezeNet, Xception, DenseNet-201, ResNet-50 v2 | HOG, GLCM | ODIR | ACC = 99.39% |
| [137] | 2021 | HOG, PCA, KNN, Hadoop DFS | HOG | DIARETDB0, Messidor-2 | SP = 80.77%, SP = 96.42% |
| [138] | 2020 | LBP, LTP, HOG, DSIFT, SVM, Grayscale Conversion, PCA, CLAHE | LBP, LTP, HOG, DSIFT | Local Dataset | SF = 96.40%, SP = 96.90% |
| [139] | 2020 | SURF, Spatial LBP, CLAHE, ANN, ELM, KNN | SURF, Spatial LBP | Local Dataset, Kaggle, DIARETDB0, DIARETDB1 | ACC = 89.89% |
| [140] | 2020 | ResNet-50, Inception-V3, Canny Edge Detector, HOG, Stochastic Gradient Descent (SGD) | HOG | Messidor-2, EyePACS | ACC = 97.01%, ACC = 97.88% |
| [141] | 2020 | CNN, Median Filter, Adaptive Histogram Equalization, Otsu Method, Radial Length (RL), Discrete Fourier Transformation (DFT), HOG | HOG | Local Dataset 1, Local Dataset 2 | Precision = 100%, Precision = 95.16% |
| [142] | 2020 | Green Channel, CLAHE, Watershed Transform, Thresholding Method, Top-Hat Transformation, Gabor Filtering, LBP, TEM, Entropy, DBN, NN | LBP, TEM, Entropy-Based | DIARETDB1 | ACC = 94.30% |
| [143] | 2019 | SURF, LOG, BoF, Box Filters, K-Means Clustering, ANN, SVM | SURF, LOG | Messidor | SF = 95.92%, SP = 98.90% |
| [144] | 2019 | CNN-Vgg-s, CNN-Vgg-m, CNN-Vgg-f, CNN-CaffeNet, GLRLM, GLCM, HOG, LBP, Morphology, SVM, MLP, Random Forest | GLCM, LBP, HOG | HRF, JSIEC, ACRIMA | ACC = 95.30%, ACC = 98.10%, ACC = 99.10% |
| [145] | 2019 | SVM, KNN, Green Channel, CLAHE, Wavelet Transform, Shearlet Transform, HOG, LBP, GLCM, GLDM | HOG, LBP, GLCM, GLDM | Local Dataset | ACC = 93.61% |
| [147] | 2018 | Bag-of-Words (BoW), SVM, SURF, Radial Basis Function (RBF) | SURF | Messidor | ACC = 94%, SF = 91%, SP = 93% |
| [148] | 2017 | HOG, LBP, Decision Tree (DT), Random Forest (RF), SVM | HOG, LBP | Local Dataset | ACC = 95.31% |
| [142] | 2016 | SURF, LBP, HOG, SVM, CNN, Logistic Regression, Random Forest, Crop and Resize, Green Channel, CLAHE, Median Filter | SURF, LBP, HOG | Kaggle | ACC = 97% |
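GLCM texture descriptors are the most frequently used hand-crafted features in Table 5. The idea is to count how often pairs of grey levels co-occur at a fixed pixel offset, then summarize the normalized co-occurrence matrix with scalar statistics such as contrast, homogeneity, and energy. A minimal sketch, assuming 8-bit input quantized to a few grey levels and a single horizontal offset (production code would use `skimage.feature.graycomatrix`, which supports multiple offsets and angles):

```python
import numpy as np

def glcm(gray, levels=8, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one (dx, dy) offset, normalized to sum to 1."""
    q = gray.astype(int) * levels // 256          # quantize 0..255 into `levels` bins
    h, w = q.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1    # count co-occurring grey-level pairs
    return m / m.sum()

def glcm_features(m):
    """Haralick-style scalar summaries of a normalized GLCM."""
    i, j = np.indices(m.shape)
    return {
        "contrast": float((m * (i - j) ** 2).sum()),
        "homogeneity": float((m / (1.0 + np.abs(i - j))).sum()),
        "energy": float((m ** 2).sum()),
    }

# A checkerboard has maximal local grey-level change, so its contrast is high.
checker = (np.indices((8, 8)).sum(axis=0) % 2 * 255).astype(np.uint8)
feats = glcm_features(glcm(checker))
```

These scalar features (often concatenated over several offsets) are what the KNN/SVM/ANN classifiers in the table consume.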
Table 6. Reported deep feature and classification techniques used in various reviewed studies.
| Ref # | Year | Methodology | Deep Feature Extraction Method | Classifiers | Datasets | Results |
|---|---|---|---|---|---|---|
| [151] | 2022 | Kapur's Entropy, COA-DN, SNN, Image Rescale, Clipping | COA-DN | SNN | Messidor | ACC = 99.73% |
| [152] | 2022 | AlexNet, VGG16, ResNet, Inception-V3, SVM, DRNet, Few-Shot Learning (FSL), GCAMs | DRNet | SVM | APTOS 2019 | ACC = 99.73%, SF = 99.82%, SP = 99.63% |
| [153] | 2022 | DAG, Softmax, ReLU, Convolution, Contrast Enhancement, CLAHE, Binarization Threshold, Fuzzy Clustering | DAG Network | Softmax | DIARETDB1, Local Dataset | ACC = 98.70%, ACC = 98.70% |
| [154] | 2022 | CLAHE, Median Filter, Gaussian Filter, Min–Max Normalization, RBT, iGWO, FFO, CNN, IGWO-FFO | IGWO-FFO | CNN Softmax | APTOS 2019 | ACC = 94.11% |
| [155] | 2022 | KNN, XGBoost, SVM, PCA, HHO, DT, DNN-PCA-HHO | DNN-PCA-HHO | KNN, XGBoost, SVM | UCI | ACC = 97% |
| [156] | 2022 | CNN, EyeNet, DenseNet, E-DenseNet, Global Average Pooling (GAP) | E-DenseNet | Softmax | IDRiD, Messidor, EyePACS, APTOS 2019 | ACC = 93%, ACC = 91.60%, ACC = 96.80%, ACC = 84% |
| [157] | 2022 | CLAHE, Weighted Gaussian Blur, Average Pooling, Augmentation, VGGNet | VGGNet | Average Pooling | EyePACS | ACC = 97.10% |
| [48] | 2021 | Faster-RCNN, DenseNet-65, Gaussian Filter, VGG, AlexNet, ResNet | Faster-RCNN | DenseNet-65 | Kaggle, APTOS | ACC = 97.20% |
| [158] | 2021 | Random Forest, ResNet-50, MobileNet, VGG16, VGG-19, Xception, Inception-V3 | ResNet-50 | Random Forest | Messidor-2, EyePACS | ACC = 96%, ACC = 75.09% |
| [159] | 2021 | Inception-V3, ResNet101, VGG-19, Naïve Bayes, KNN, SVM | Inception-V3, ResNet101, VGG-19 | SVM | Kaggle, Messidor-2, IDRiD | ACC = 97.78%, SF = 97.60%, SP = 99.30% |
| [160] | 2021 | CNN, SqueezeNet, ResNet-50, Inception-V3, DFTSA-Net, CLAHE | DFTSA-Net | Softmax | IDRiD | ACC = 96.80%, SF = 97.50%, SP = 95.50% |
| [161] | 2020 | DNN, KNN, SVM, MLP, VGG16, Xception, ResNetV2, NASNet | VGG16, Xception, ResNetV2, NASNet | DNN, KNN, SVM, Naïve Bayes, Decision Tree, Logistic Regression, MLP | APTOS 2019 | ACC = 97.41% |
| [163] | 2020 | CNN, Inception-V3, Softmax, GMM, ALR | Inception-V3 | Softmax | e_ophtha, DIARETDB1 | ACC = 98.43%, ACC = 98.91% |
| [164] | 2020 | CNN, CLAHE, ResNet-50, SVM, KNN, Random Forest, XGBoost | ResNet-50 | SVM, KNN, Random Forest, XGBoost | DIARETDB1 | ACC = 99% |
| [165] | 2020 | RCNN, Morphological Operation, RPN, Softmax, Bounding Box | Faster-RCNN | Softmax | Messidor | ACC = 96.80% |
| [166] | 2019 | Cropping, Resizing, Histogram Equalization, CNN, VGG-16, SqueezeNet, AlexNet | Convolution Layers | VGG-16, SqueezeNet, AlexNet | Messidor | ACC = 98.15% |
| [167] | 2019 | CNN, Deep CNN, Inception-V3, Dense-169, ResNet-50, Xception, Dense-121 | Inception-V3, Dense-169, Dense-121, Xception, ResNet-50 | Binary Classification, Multi-Class Classification | Kaggle | SP = 99% |
| [168] | 2018 | PCA, Bag of Words (BoW), CNN, AlexNet | AlexNet | SVM | SD-OCT | ACC = 96.80%, SF = 93.75%, SP = 100% |
| [169] | 2018 | CNN, AlexNet DNN, SVM, PCA, LDA, SIFT, Histogram Equalization, GMM | AlexNet DNN | SVM | Kaggle | ACC = 97.93% |
| [170] | 2017 | Augmentation, Image Transformation, Contrast Enhancement, Decision Tree | Residual Network | Decision Tree | Messidor-2, e_ophtha, EyePACS | SP = 87%, SP = 94%, SP = 98% |
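The dominant pattern in Table 6 is a two-stage pipeline: a pretrained CNN produces an activation volume, a pooling layer collapses it to a fixed-length descriptor, and a conventional classifier (SVM, KNN, Random Forest) makes the final decision. The pattern can be sketched without a deep-learning framework; below, the activation volumes are synthetic stand-ins for CNN outputs and a nearest-centroid rule stands in for the SVM stage (both are assumptions for illustration, not any reviewed paper's setup):

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse a (C, H, W) activation volume into a C-dim descriptor, as a GAP layer does."""
    return feature_maps.mean(axis=(1, 2))

class NearestCentroidClassifier:
    """Toy stand-in for the SVM/KNN stage: assign the class with the closest mean descriptor."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        dists = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[dists.argmin(axis=1)]

# Synthetic "deep features": 6 images, 4 channels, 8x8 spatial maps.
rng = np.random.default_rng(42)
volumes = rng.normal(size=(6, 4, 8, 8))
volumes[3:] += 2.0                       # shift class-1 activations so the classes separate
X = np.stack([global_average_pool(v) for v in volumes])
y = np.array([0, 0, 0, 1, 1, 1])
clf = NearestCentroidClassifier().fit(X, y)
```

In the reviewed studies the same shape holds with real networks: replace `volumes` with, e.g., ResNet-50 penultimate-layer activations and the centroid rule with an SVM.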
Table 7. Description of DR datasets.
| Ref # | Dataset | Image Resolution | Image Acquisition | Availability | No. of Images | Use |
|---|---|---|---|---|---|---|
| [177] | Messidor-2 | 1440 × 960, 2240 × 1488, 2304 × 1536 | Topcon digital camera with 45° FoV | Online/Free | 1748 | MA, HM, and retinal vessel detection |
| [178] | e_ophtha_EX and e_ophtha_MAs | 2048 × 1360 | Captured by OPHDIAT | Online/Free | 463 | MA and EX detection |
| [179] | STARE | 605 × 700 | Topcon TRV-50 with 35° FoV | Online/Free | 400 | Irregular blood vessel, HM, EX, and MA detection |
| [181] | Kaggle | 433 × 289 to 5184 × 3456 | Different digital cameras | Online/Free | 88,702 | Exudate, MA, HM, and blood vessel detection |
| [182] | DIARETDB0 | 1500 × 1152 | Digital camera with 50° FoV | Online/Free | 130 | HE, SE, MA, HM, and neovascularization detection |
| [183] | DIARETDB1 | 1500 × 1152 | Digital camera with 45° FoV | Online/Free | 89 | Irregular blood vessel, MA, HM, and EX detection |
| [10] | IDRiD | 4288 × 2848 | Digital camera with 45° FoV | Online/Free | 516 | Exudate, MA, HM, and blood vessel detection |
| [184] | DR1 and DR2 | 857 × 569 | Digital camera with 50° FoV | Online/Free | 234 (DR1) and 520 (DR2) | HE, SE, MA, HM, and neovascularization detection |
| [185] | CHASE_DB1 | 1280 × 960 | Digital camera with 30° FoV | Online/Free | 28 | Segmentation of retinal blood vessels |
| [186] | ROC | 768 × 576 to 1389 × 1383 | Digital camera with 45° FoV | Online/Free | 100 | MA detection |
| [187] | HRF | 3504 × 2336 | Canon CR-1 camera | Online/Free | 45 | Retinal blood vessel segmentation |
| [188] | HEI-MED | 2196 × 1958 | Zeiss VISUCAM camera with 45° FoV | Online/Free | 169 | Exudate detection |
| [189] | DRiDB | 720 × 576 | Zeiss VISUCAM camera with 45° FoV | Online/Free | 50 | Exudate detection |
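Resolutions in Table 7 range from 433 × 289 (Kaggle) up to 5184 × 3456, so pipelines that mix these datasets must first resample every image to a common input size. As a sketch of what that harmonization step does, here is a nearest-neighbour resize in plain NumPy (in practice one would use a library routine such as OpenCV's `cv2.resize`, which also offers better interpolation):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize for an (H, W) or (H, W, C) image array."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h      # source row index for each output row
    cols = np.arange(out_w) * w // out_w      # source column index for each output column
    return img[rows[:, None], cols]

# Harmonize a batch of mixed-resolution fundus images to a common 224 x 224 input.
small = np.zeros((289, 433, 3), dtype=np.uint8)
large = np.zeros((3456, 5184, 3), dtype=np.uint8)
batch = np.stack([resize_nearest(im, 224, 224) for im in (small, large)])
```

The 224 × 224 target here is only an example; each reviewed method picks the input size its network expects.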
Table 8. Limitations in existing methods.
| Ref # | Year | Methods | Datasets | Results | Limitations |
|---|---|---|---|---|---|
| [43] | 2022 | Inception-V4, Image Flipping, Image Rotation, SGD | Messidor-2 | 96.10% SP | Training on high-resolution, high-quality images increases the performance rate. |
| [152] | 2022 | DRNet, ResNeXt, GAP, FC, FSL | APTOS2019 | 8.18% ACC | An imbalanced and small dataset leads to overfitting and poor approximation. |
| [198] | 2021 | DRNet, CNN, Regression, Image Augmentation, Image Resize, Gaussian Distribution, Euclidean Distance | IDRiD, DRIVE, DRISHTI-GS, RIMONE | 84.50% ACC, 92.10% ACC, 93.30% ACC, 90.10% ACC | In some cases, DRNet fails to produce accurate outcomes for OD localization and segmentation; low contrast and blurred edges make OD segmentation challenging. |
| [199] | 2021 | CAE, Image Resize, Data Augmentation, ReLU, Skip Connections | DRISHTI-GS, RIM-ONE | 96.70% Dice score, 90.20% Dice score | The availability of only a few manually annotated images limits the reliability of supervised learning systems. |
| [200] | 2020 | CNN, VGG-16, Softmax, FC Layer, ReLU, Transfer Learning | OCTA | 90.82% SP, 83.76% SF | Large datasets and transfer learning are needed when training the CNN model to overcome overfitting. |
| [201] | 2019 | Vessel Tree Structure, Circular Hough Transform, Sliding Windows, Weighted Colour Channels, Image Augmentation | Local Dataset | 88.80% ACC | Dust particles, reflections, and flash artifacts on the camera lens lead to inaccurate OD detection results. |
| [202] | 2019 | Inception-V3, CNN, CLAHE, Image Resize, Cropping, Padding | Messidor-2 | 93.49% ACC | Multiclass classification is challenging when the patient dataset contains a variety of retinal disorders. |
| [190] | 2018 | CNN, SURF, Encoding, Max-Pooling, ILT, BLT, SVM | Messidor, DR1, DR2 | 90% ACC, 93% ACC, 96% ACC | DL approaches built on deep CNN architectures require a large volume of annotated images. |
| [46] | 2017 | DLNN, SLDR, GLOH, DColor-SIFT, DFV | DIARETDB1 | 92.18% SF, 94.50% SP | Speech recognition, 3D object recognition, dimensionality reduction, and deep color visual features play a great role in the categorization of DR. |
| [125] | 2016 | SeS CNN, NSeS CNN, Circular Template Matching, Image Resize, Image Augmentation, Gaussian Filter | Kaggle, Messidor | 89.40% ACC, 97.20% ACC | Innovative data augmentation methods are needed that generate new samples which accurately reflect real ones. |
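Several of the limitations above trace back to small, imbalanced datasets ([152,199]) and to the need for better data augmentation ([43,125]). The simplest remedy the reviewed methods use is geometric augmentation: flips and rotations that preserve lesion content while multiplying the effective sample count. A minimal sketch of that idea (real pipelines would add brightness jitter, cropping, and elastic deformations):

```python
import numpy as np

def augment(img, rng):
    """Return a randomly flipped and/or rotated copy of an (H, W, C) image."""
    if rng.random() < 0.5:
        img = np.flip(img, axis=1)          # horizontal flip
    k = int(rng.integers(0, 4))             # rotate by 0, 90, 180, or 270 degrees
    return np.rot90(img, k, axes=(0, 1)).copy()

# Expand one image into several augmented training samples.
rng = np.random.default_rng(0)
image = np.arange(27, dtype=np.uint8).reshape(3, 3, 3)
samples = [augment(image, rng) for _ in range(8)]
```

Because every transform is a pixel permutation, labels carry over unchanged, which is exactly why these augmentations are safe for fundus images (unlike, say, aggressive colour shifts that could mimic or hide lesions).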
Share and Cite

Shaukat, N.; Amin, J.; Sharif, M.I.; Sharif, M.I.; Kadry, S.; Sevcik, L. Classification and Segmentation of Diabetic Retinopathy: A Systemic Review. Appl. Sci. 2023, 13, 3108. https://doi.org/10.3390/app13053108