Review

A Survey on Deep-Learning-Based Diabetic Retinopathy Classification

Department of Computer Science and Engineering, Qatar University, Doha P.O. Box 2713, Qatar
* Author to whom correspondence should be addressed.
Diagnostics 2023, 13(3), 345; https://doi.org/10.3390/diagnostics13030345
Submission received: 17 November 2022 / Revised: 21 December 2022 / Accepted: 22 December 2022 / Published: 18 January 2023
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract

The number of people suffering from diabetes worldwide has increased considerably in recent years, and the disease affects people of all ages. People who have had diabetes for a long time can develop a condition called Diabetic Retinopathy (DR), which damages the eyes. Automatic early detection using new technologies can help avoid complications such as the loss of vision. With the development of Artificial Intelligence (AI) techniques, especially Deep Learning (DL), DL-based methods are now widely preferred for developing DR detection systems. For this purpose, this study surveys the existing literature on diabetic retinopathy diagnosis from fundus images using deep learning and provides a brief description of the current DL techniques that are used by researchers in this field. After that, this study lists some of the commonly used datasets. This is followed by a performance comparison of the reviewed methods with respect to some commonly used metrics in computer vision tasks.

1. Introduction

During the past two decades, the number of people affected by diabetes has increased alarmingly. According to the IDF Diabetes Atlas [1], almost half a billion people of all ages have been diagnosed with diabetes across the globe, and this number is expected to reach seven hundred million by 2045, making diabetes a global health concern. The IDF Diabetes Atlas also warns that, by 2040, one in three diabetes patients will develop Diabetic Retinopathy (DR). DR is a condition that can be identified by the presence of damaged blood vessels in the retina. It may result in serious complications such as the loss of vision when it goes undetected for a long time, hence the importance of addressing this issue. At present, doctors manually examine fundus images of the eye to assess the severity of DR. This consumes much time, and there is a shortage of available medical professionals with respect to the actual number of patients. Due to these reasons, many patients do not receive medical care in a timely manner. Even though patients suffering from diabetes are advised by physicians to receive regular medical screenings of their fundus, many cases are left undetected until the disease becomes severe [2]. Hence, it is desirable to have an automated system to help in the detection of diabetic retinopathy.
Most studies in this field use fundus images, which provide visual records that document the present ophthalmic appearance of a person's retina. The DR symptoms present in these fundus images can be used to classify the disease through several steps such as retinal blood vessel segmentation, lesion segmentation, and DR detection [3]. DR and its current stage can be detected by investigating the presence or absence of several lesions. These lesions include microaneurysms (MAs), superficial retinal hemorrhages (SRHs), exudates (Exs), both soft exudates (SEs) and hard exudates (HEs), intraretinal hemorrhages (IHEs), and cotton wool spots (CWSs). Figure 1 shows a comparison between a healthy retina and an unhealthy retina.
With the development of AI techniques, including machine learning and deep learning, high-performance detection and grading of DR, including the detection and segmentation of the affected parts of the retina, have become possible. Machine learning approaches are widely used for DR classification and grading. Nazir et al. [4] used a new representation of fundus images called the "tetragonal local octa pattern (T-LOP) features"; classification was then performed using an extreme learning machine. Three ML classifiers, support vector machine (SVM), random forest, and J48, were used by the authors in [5]. The Gabor wavelet method followed by the AdaBoost classifier was used by the authors in [6] to grade DR. Recently, many deep learning techniques have been utilized by researchers to perform these tasks. This study provides a review of the present literature in this area with a focus on how DL is being used for DR detection and grading from fundus images. DL is a branch of AI that makes use of artificial neural networks with multiple processing layers to gradually extract high-level features from the data. In this paper, we also summarize the DL architectures that have been used by the different reviewed studies.
Significant research in this field is also being carried out using optical coherence tomography (OCT) images, which have a higher resolution than fundus images [7,8,9]. OCT images are more suitable than fundus images for developing systems that require micrometer resolution and a penetration depth of millimeters, which is why they are used by researchers for DR diagnosis, especially at the early stages [7].
The paper is organized as follows. The related works on DR detection and DR grading are presented in Section 2. Section 3 describes some of the preprocessing techniques that are used. Section 4 describes the datasets used. A comparison and discussion of the experiments are provided in Section 5. Some future directions are provided in Section 6. The conclusion is presented in Section 7.

2. Literature Review

The diagnosis of diabetic retinopathy can be performed using two techniques: detection and grading. The detection of diabetic retinopathy is performed using binary classification (DR or normal retina), while diabetic retinopathy grading consists of detecting and annotating the affected parts and assigning a severity level: mild, moderate, or severe. Figure 2 summarizes these two different types of DR studies. This section describes these studies by categorizing them into diabetic-retinopathy-detection-based studies and DR-grading-based studies. All these studies are summarized in Table 1.

2.1. DR-Detection-Based Studies

The diabetic retinopathy detection studies perform binary classification of the input images as healthy or DR. Here, we focus on deep-learning-based methods, which are the most effective approaches compared with other machine-learning-based or traditional techniques. For example, Kazakh-British et al. [10] proposed a simple convolutional neural network (CNN) to automatically classify DR. They used the original images and images filtered with an anisotropic diffusion filter in their experiments and found that the anisotropic diffusion filter improved the performance. In the same context, the authors in [11,12,13,14] used CNN architectures to perform binary classification to identify the presence of diabetic retinopathy. After applying a Wiener filter to the fundus images and using Otsu thresholding for segmentation, the authors of [15] proposed a deep CNN for multi-class classification of fundus images into normal images and images showing several vision-threatening diseases such as DR.
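As a concrete illustration of this family of methods, the following is a minimal sketch (in PyTorch, not the architecture of any specific reviewed study) of a small CNN that maps a 224 x 224 RGB fundus image to two logits (healthy vs. DR); all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SimpleDRNet(nn.Module):
    """Minimal CNN for binary DR detection (healthy vs. DR); purely illustrative."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 2),  # two logits: healthy / DR
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SimpleDRNet()
images = torch.randn(4, 3, 224, 224)          # stand-in for a batch of fundus images
labels = torch.tensor([0, 1, 0, 1])           # 0 = healthy, 1 = DR
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
```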
Instead of using simple convolutional neural networks, some authors have used pre-trained models (backbones) for transfer learning or for feature extraction to implement their methods. These are shown in Figure 3. For example, InceptionV3 was used by the authors in [16] to classify DR based on RGB and texture features. Umapathy et al. [17] used a pre-trained InceptionV3 to perform DR classification. A binary CNN (BCNN) was proposed by the authors in [18] for DR classification to reduce memory consumption and improve runtime. Both binomial classification and multinomial classification of fundus images were performed by the authors in [19] using the MobileNetV2 architecture, since this architecture requires less training time and can be used in mobile systems. Saranya et al. [20] used the DenseNet-121 model to detect DR from fundus images, while transfer learning using EfficientNet-B0, EfficientNet-B4, and EfficientNet-B7 was exploited to detect DR in [21]. The same backbones were used in [22] to classify DR into referable/vision-threatening DR. The EfficientNet-B3 backbone initialized with ImageNet weights and fully connected layers initialized with He initialization were used for training by the author in [23]. In their experiments, the EfficientNet model gave good results compared with the ground truth.
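The transfer-learning recipe shared by many of these studies can be sketched as follows; this is a minimal, assumed example using a torchvision ResNet-50, while the reviewed works use various backbones (VGG, Inception, MobileNet, EfficientNet, etc.) and their own training pipelines.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and replace its classification head.
backbone = models.resnet50(weights="DEFAULT")
for p in backbone.parameters():
    p.requires_grad = False                                # freeze for pure feature extraction
backbone.fc = nn.Linear(backbone.fc.in_features, 2)       # new head: healthy / DR

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(2, 3, 224, 224)                       # stand-in for preprocessed fundus images
labels = torch.tensor([0, 1])
loss = criterion(backbone(images), labels)
loss.backward()                                            # only the new head receives gradients
optimizer.step()
```

Fine-tuning (unfreezing some or all backbone layers with a smaller learning rate) follows the same pattern and is the other common variant in the reviewed studies.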
Another backbone was used by Sudarmadji et al. [24] for diabetic retinopathy detection. The proposed method used the VGG network for feature extraction to implement the proposed CNN-based model. Boral and Thorat [25] used a transfer learning approach using InceptionV3 followed by SVM to perform DR classification. In another paper, five transfer learning models, Xception, InceptionResNetV2, MobileNetV2, DenseNet-121, and NASNetMobile, were used by the authors in [26] to perform binary classification of DR. DenseNet-121 was used as the transfer-learning-based method by the authors in [27] to identify MAs, Exs, and hemorrhages from the input images to detect DR. Furthermore, transfer learning with VGG, AlexNet, Inception, GoogleNet, DenseNet, and ResNet was used by the authors in [28]. Another study [29] compared three types of deep-learning-based architectures, Transformer-based networks, CNNs, and multi-layered perceptrons (MLPs), for DR classification. The models included in the study were EfficientNet, ResNet, Swin-Transformer, Vision-Transformer (ViT), and MLP-Mixer. The models based on the transformer architecture were found to have the best accuracy among these. An ensemble model consisting of three CNN models, based on stacked generalization, was used by the authors in [30] for DR classification; ResNet-50 and VGG-16 were also used. Four vital aspects of using CNNs for DR classification (different CNN architectures, preprocessing techniques, class imbalance, and fine-tuning) were evaluated by the authors in [31]. AlexNet, ResNet-50, and VGG-16 were employed for this purpose. The performance of twenty-eight deep hybrid architectures for binary classification of DR into referable DR and non-referable DR was empirically evaluated by the authors of [32] and compared with end-to-end deep learning (DL) architectures. A hybrid architecture using the SVM classifier and MobileNetV2 for feature extraction was found to be the best-performing among these. A three-class classification of fundus images into normal, glaucomatous, and diabetic retinopathy eyes was performed by the authors in [33]. Multiple CNN models (MobileNetV2, DenseNet-121, InceptionV3, InceptionResNetV2, ResNet-50, and VGG-16) were used for DR classification.
A model based on ResNet with gradient-weighted class activation mapping (Grad-CAM) was used by the authors of [34] for lesion detection and DR classification. The lesions included MAs, HEs, hemorrhages, and CWSs. Quellec et al. [35] found that, when a ConvNet was trained for image-level classification, it also became capable of performing lesion detection. The training was performed with a simplification of the back-propagation method. The images were classified into non-referable DR and referable DR. A new neural network called the lesion-guided network (LGN) was proposed by Tang et al. [36] to diagnose DR. For lesion detection, the backbone was RetinaNet with ResNet-50. A lesion-aware module (LAM) was also used to improve the rough lesion maps. Enhanced DR detection was performed by using the Harris hawks optimization (HHO) algorithm along with a DCNN by the authors of [37]. Gunasekaran et al. [38] used a deep RNN (DRNN) to perform early detection of DR. A CNN-based method was proposed in [39] to detect DR. A very recent work [40] used seven different CNNs for DR diagnosis. The experiments in this study included single-modality and joint fusion strategies.
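Grad-CAM, as used in [34] and in other explainability-oriented studies, can be reproduced in a few lines of PyTorch; the sketch below is a generic, assumed implementation on a torchvision ResNet-50 rather than the exact pipeline of any reviewed paper.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights="DEFAULT").eval()
activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional stage to capture its feature maps and their gradients.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

image = torch.randn(1, 3, 224, 224)            # stand-in for a preprocessed fundus image
logits = model(image)
logits[0, logits.argmax()].backward()          # gradient of the top-scoring class

# Grad-CAM: weight each channel by its average gradient, then ReLU and normalise.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # heatmap in [0, 1]
```

The resulting heatmap is overlaid on the fundus image so that the regions driving the DR prediction (often lesion areas) can be inspected visually.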

2.2. DR-Grading-Based Studies

As per the International Clinical Diabetic Retinopathy (ICDR) [41] scale, diabetic retinopathy can be graded into five grades: no apparent retinopathy, mild non-proliferative diabetic retinopathy (NPDR), moderate NPDR, severe NPDR, and proliferative diabetic retinopathy (PDR). An example of each grade is presented in Figure 4. Many studies have been proposed for multi-class classification and grading of fundus images into the above-mentioned five stages.
A simple CNN model was used by the authors in [42] after applying a green channel filter to assess the stage of DR from fundus images. A CNN, which combined multi-view fundus images, was used along with attention mechanisms by the authors in [43]. It was called MVDRNet and used VGG-16 as the basic network. A locally collected dataset containing multi-view fundus images was employed for this. Another study that used a locally collected dataset from the University Hospital Saint Joan, Tarragona, Spain, is [44]. The CNN model used had batch normalization followed by the ReLU function. This was followed by a linear classifier and a softmax function. Two datasets—a balanced dataset with no augmentation and another one with augmentation—were used by the authors in [45]. A CNN was used to demonstrate the improvement in accuracy in DR grading due to the augmentation. Agustin and Sunyoto [46] performed a comparison of different regularization methods regarding how they reduce the overfitting of CNNs when used for DR severity grading. Dropout regularization was found to reduce overfitting and to increase accuracy.
Table 1. Retinopathy-grading-based studies during the period 2017–2022.
Study | Year | Method | Dataset(s)
Li et al. [47] | 2017 | CNN-based transfer learning, SVM | DR1 and MESSIDOR
Ardiyanto et al. [48] | 2017 | Deep-DR-Net | FINDeRS
Kwasigroch et al. [49] | 2018 | Transfer learning and VGG | Kaggle EyePACS
Wang et al. [50] | 2018 | AlexNet, VGG-16, and InceptionV3 | Kaggle EyePACS
Zhou et al. [51] | 2018 | Inception-ResNet-v2, BaseNet | Kaggle EyePACS
Shrivastava and Joshi [52] | 2018 | InceptionV3, SVM | Kaggle EyePACS
Arora and Pandey [53] | 2019 | AlexNet, VGG-16, and InceptionV3 | Kaggle EyePACS
Kassani et al. [54] | 2019 | InceptionV3, MobileNet, and ResNet-50 | Kaggle APTOS
Hathwar and Srinivasa [55] | 2019 | Inception-ResNet-V2 and Xception | Kaggle EyePACS, IDRiD
Bellemo et al. [56] | 2019 | Ensemble of Adapted VGG and ResNet | Kitwe Central Hospital, Zambia
Kumar [57] | 2019 | Ensemble of GoogleNet, AlexNet, and ResNet-50 | Kaggle EyePACS
Thota and Reddy [58] | 2020 | Pre-trained VGG-16 | Kaggle EyePACS
Nguyen et al. [59] | 2020 | VGG-16 and VGG-19 | Kaggle EyePACS
Lavanya et al. [60] | 2020 | ImageNet | Kaggle DR
Elzennary et al. [61] | 2020 | DenseNet-121 | Kaggle APTOS
Barhate et al. [62] | 2020 | Autoencoder and VGG | Kaggle EyePACS
Wang et al. [63] | 2020 | Multichannel-based semisupervised GAN | MESSIDOR
Khaled et al. [64] | 2020 | VGG-16 | Kaggle EyePACS
Islam et al. [65] | 2020 | Transfer learning and VGG-16 | Kaggle APTOS
Wang et al. [66] | 2020 | Hierarchical multi-task deep learning framework | Shenzhen, Guangdong, China
AbdelMaksoud et al. [67] | 2020 | E-DenseNet | Kaggle EyePACS, Kaggle APTOS
Yaqoob et al. [68] | 2020 | ResNet-50 | MESSIDOR-2 and Kaggle EyePACS
Taufiqurrahman et al. [69] | 2020 | MobileNetV2, SVM | Kaggle APTOS
Vaishnavi et al. [70] | 2020 | AlexNet | Kaggle EyePACS
Shankar et al. [71] | 2020 | Synergic deep learning (SDL) model | MESSIDOR
Karki and Kulkarni [72] | 2021 | EfficientNet | Kaggle APTOS
Qian et al. [73] | 2021 | Res2Net and DenseNet | Kaggle EyePACS
Shorfuzzaman et al. [74] | 2021 | CNN | Kaggle APTOS, MESSIDOR, IDRiD
Sugeno et al. [75] | 2021 | EfficientNet-B3 | Kaggle APTOS, DIARETDB1
Lee and Ke [76] | 2021 | VGG-16 and ResNet-50 | IDRiD
Nazir et al. [77] | 2021 | DenseNet-100, CenterNet | Kaggle APTOS, IDRiD
Xiao et al. [78] | 2021 | SE-MIDNet | Kaggle EyePACS
Li et al. [79] | 2021 | SAGN, GCNN | Kaggle APTOS, Kaggle EyePACS
Martinez-Murcia et al. [80] | 2021 | ResNet-18 and ResNet-50 | MESSIDOR
Rajkumar et al. [81] | 2021 | ResNet-50 | Kaggle EyePACS
Swedhaasri et al. [82] | 2021 | SE-ResNet-50, EfficientNet | Kaggle APTOS
Reguant et al. [83] | 2021 | InceptionV3, ResNet-50, and Xception | Kaggle EyePACS, DIARETDB1
Hari et al. [84] | 2021 | Xception, InceptionV3, and DenseNet-169 | Kaggle EyePACS
Saeed et al. [85] | 2021 | VGG-19, ResNet, and DPN107 | Kaggle EyePACS, MESSIDOR
Jabbar et al. [86] | 2022 | VGG | Kaggle EyePACS
Shaik and Cherukuri [87] | 2022 | HA-Net | Kaggle APTOS, IDRiD
Chandrasekaran and Loganathan [88] | 2022 | ResNet and AlexNet | Kaggle EyePACS
Oulhadj et al. [89] | 2022 | DenseNet, InceptionV3, and ResNet-50 | Kaggle APTOS
Nair et al. [90] | 2022 | VGG-16, ResNet-50, and EfficientNet-B5 | Kaggle APTOS
Deepa et al. [91] | 2022 | Xception, InceptionV3, and ResNet-50 | Kaggle DR, DIARETDB, STARE
Farag et al. [92] | 2022 | DenseNet-169 with CBAM | Kaggle APTOS
Canayaz [93] | 2022 | EfficientNet-B0, DenseNet-121 | Kaggle APTOS
Bilal et al. [94] | 2022 | U-Net | Kaggle EyePACS, MESSIDOR-2
Murugappan et al. [95] | 2022 | DRNet | Kaggle APTOS
Chen and Chang [96] | 2022 | InceptionV3 and EfficientNet | Kaggle APTOS
Butt et al. [97] | 2022 | ResNet-18 and GoogleNet | Kaggle APTOS
Elwin et al. [98] | 2022 | DCNN, ShCNN | IDRiD, DDR
Deepa et al. [99] | 2022 | DCNN | Kaggle EyePACS, DIARETDB, STARE
A deep CNN model called DR|GRADUATE was presented by the authors in [100]. It was a new DL approach for DR grading, which could give a pathologically explainable description to support its judgment. It also provided an assessment of the ambiguity of its prediction. Feature extraction using a multipath CNN was used by the authors in [5]. After this, DR was graded using three different ML classifiers, SVM, random forest, and J48. Sugeno et al. [75] used the EfficientNet model to grade DR after using morphological operations and image processing for lesion detection. A multi-task model with EfficientNet-B5 was used by the authors of [101] for DR grading. Feature extraction performed with the EfficientNet backbone was fed to the dropout layer, which was followed by an ordinal regression section and a classification section. Shankar et al. [71] proposed a deep CNN model called the synergic deep learning (SDL) model to grade DR. Histogram-based segmentation was performed before this.
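The multi-task design described for [101] (a shared backbone feeding both a classification head and an ordinal regression head) can be sketched as follows; the backbone, dropout rate, and head sizes here are assumptions for illustration, not the configuration reported in that study.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskGrader(nn.Module):
    """Shared backbone feeding a 5-way classification head and an ordinal head."""
    def __init__(self, num_grades=5):
        super().__init__()
        base = models.resnet18(weights="DEFAULT")                    # stand-in backbone
        self.backbone = nn.Sequential(*list(base.children())[:-1])   # drop the final fc layer
        feat_dim = base.fc.in_features
        self.dropout = nn.Dropout(0.3)
        self.cls_head = nn.Linear(feat_dim, num_grades)              # softmax classification
        self.ord_head = nn.Linear(feat_dim, num_grades - 1)          # ordinal: P(grade > k)

    def forward(self, x):
        f = self.dropout(self.backbone(x).flatten(1))
        return self.cls_head(f), torch.sigmoid(self.ord_head(f))

model = MultiTaskGrader()
cls_logits, ord_probs = model(torch.randn(2, 3, 224, 224))
# Ordinal targets for grade g set the first g entries to 1, e.g. grade 3 -> [1, 1, 1, 0].
```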
A pre-trained VGG-16 was used by the authors of [58] to train their proposed CNN to improve the accuracy of DR grading. VGG-16 and VGG-19 were used by the authors of [59] to grade DR. They mirrored and rotated the images to augment the dataset. The VGG-16 and ResNet-50 models were modified and used by the authors in [76] to grade DR with the help of the dropout concept. A cascaded model consisting of two VGG-16 models was used by the authors of [64]. The first model outputs “yes” or “no” to detect DR, and the second model classifies the fundus images into four different DR stages. Shaik and Cherukuri [87] used a model named “Hinge Attention Network (HA-Net)” which has multiple attention stages for DR severity grading. Initial spatial representations from the input images were extracted using a pre-trained VGG-16 base.
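The cascaded detect-then-grade idea from [64] can be expressed as two classifiers run in sequence; the following is a rough sketch assuming two VGG-16 heads (one binary, one four-way), with untrained stand-in weights rather than the models actually trained in that study.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_vgg_head(num_outputs):
    """VGG-16 with its last layer replaced; used here as a generic stage classifier."""
    net = models.vgg16(weights="DEFAULT")
    net.classifier[6] = nn.Linear(net.classifier[6].in_features, num_outputs)
    return net.eval()

detector = build_vgg_head(2)    # stage 1: no DR vs. DR
grader = build_vgg_head(4)      # stage 2: four DR severity stages

def cascade_predict(images):
    with torch.no_grad():
        has_dr = detector(images).argmax(dim=1)          # 0 = healthy, 1 = DR
        stages = grader(images).argmax(dim=1) + 1        # stages 1..4
    # Healthy images receive grade 0; only DR-positive images get a severity stage.
    return torch.where(has_dr == 1, stages, torch.zeros_like(stages))

preds = cascade_predict(torch.randn(3, 3, 224, 224))
```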
An automated DR detection system using a Raspberry Pi was developed by the authors of [60]. They used ImageNet for DR grading. Elzennary et al. [61] used the DenseNet-121 neural network architecture with the aid of transfer learning to determine the severity of DR. Both of these studies used the Python framework called Flask to create interfaces that can be used by doctors to detect DR. A custom CenterNet with DenseNet-100 support was used by the authors of [77] to detect eye diseases from retinal images. This study graded the severity of DR by separating the fundus images according to the lesions present.
Another classification network for DR, SE-MIDNet, was introduced by the authors of [78]. It was built using an enhanced Inception module along with the squeeze-and-excitation (SE) module for grading. The SE module captures global information from the feature map on each channel. Feature extraction using InceptionV3 was performed using a hierarchical approach by the authors in [52]. The first layer was for binary classification into DR/no DR. The next one was to grade DR into the five DR stages. SVM with the radial basis function (RBF) kernel was utilized for classification. Wang et al. [63] used a multichannel-based semi-supervised GAN (SSGAN) for DR grading, which was capable of using labeled and unlabeled data as the training data. They used feature extraction to reduce the noise of the input images and to extract the features of lesions. They also graded the lesions into three levels.
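For reference, a squeeze-and-excitation block of the kind used in SE-MIDNet and SE-ResNet-style models can be written as below; this is a generic SE block (reduction ratio assumed to be 16), not the exact module of [78].

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global average pooling followed by channel re-weighting."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))            # squeeze: one global value per channel
        w = self.fc(w).view(b, c, 1, 1)   # excitation: per-channel weights in (0, 1)
        return x * w                      # re-scale the feature map channel-wise

se = SEBlock(64)
out = se(torch.randn(2, 64, 28, 28))      # output has the same shape as the input
```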
A new DL algorithm called Deep-DR-Net capable of being fit onto a small embedded board was introduced by the authors of [48] to grade DR. For this, they arranged a cascaded encoder–classifier network with a residual style to ensure that it was small in size. Li et al. [79] proposed a semi-supervised auto-encoder graph network (SAGN) to diagnose DR. In this, an autoencoder was used for feature learning. After this, the RBF was used to calculate neighbor correlations. Finally, a graph CNN (GCNN) was used to grade DR. A graph neural network (GNN), which extracts lesion ROI sub-images to emphasize only lesions in fundus images, was proposed by Sakaguchi et al. [102]. A graph is constructed from these sub-images for DR classification.
Transfer learning and the VGG architecture were used by Kwasigroch et al. [49]. For this, the ImageNet dataset was used to pre-train the VGG architecture. Another DL model that used transfer learning, VGG-16, was used along with a new color version preprocessing method by Islam et al. [65] for DR grading. ResNet-18 and ResNet-50 were used along with residual transfer learning by Martinez-Murcia et al. [80] for the same purpose. Another transfer learning approach, the ResNet-50 architecture trained on the ImageNet dataset, was used for DR classification and grading by the authors in [81]. Another study that used transfer learning by fine-tuning the Inception-ResNet-V2 and Xception models trained on the well-annotated ImageNet dataset was given in [55]. The latter was found to have better performance. CNN-based transfer learning followed by SVM was used by the authors of [47]. AlexNet and VGG were pre-trained using the ImageNet dataset. Features extracted with the help of transfer learning were provided to SVM for DR grading. An ensemble model consisting of SE-ResNeXt50, EfficientNet-B4, and EfficientNet-B5 along with transfer learning was used by the authors of [82] for DR grading. The InceptionV3, ResNet-50, InceptionResNet50, and Xception models were used for DR grading by the authors in [83]. The parameters were initialized using transfer learning. They created visualization maps to investigate the clinical significance of the decisions made by the CNN models. Wang et al. [50] used AlexNet, VGG-16, and InceptionV3 along with transfer learning for DR grading. InceptionV3 was found to provide the best accuracy in their study. Jabbar et al. [86] used a transfer-learning-based VGG architecture for DR grading. Various data augmentation techniques were used to balance the classes in the training data.
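The pattern used in [47] and several other studies, extracting deep features with a pre-trained CNN and handing them to a classical classifier, can be sketched as follows; the VGG-16 feature extractor, RBF-kernel SVM, and random stand-in data are assumptions for illustration.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Truncate an ImageNet-pretrained VGG-16 so it outputs one feature vector per image.
vgg = models.vgg16(weights="DEFAULT").eval()
feature_extractor = nn.Sequential(vgg.features, vgg.avgpool, nn.Flatten())

def extract_features(images):
    with torch.no_grad():
        return feature_extractor(images).numpy()

# Stand-in data; in practice these are preprocessed fundus images and their DR grades.
train_imgs, train_labels = torch.randn(20, 3, 224, 224), np.random.randint(0, 5, 20)
test_imgs = torch.randn(5, 3, 224, 224)

svm = SVC(kernel="rbf")                                  # RBF-kernel SVM on the deep features
svm.fit(extract_features(train_imgs), train_labels)
pred_grades = svm.predict(extract_features(test_imgs))
```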
Experiments using several deep neural networks (DNNs) were carried out to yield algorithms that grade DR conforming to the ICDR standards by the authors in [103]. The network was also trained to make several other binary classifications. Synchronized diagnosis of DR severity, DR features, and referable DR was conducted by the authors of [66]. A hierarchical multi-task DL framework with a skip connection was utilized for automatically merging the DR-related feature output with DR severity analysis. An ensemble of two CNN architectures, a modified VGG and ResNet, was utilized for grading DR by the authors in [56]. Apart from the grading of DR as per the ICDR scale, the images were classified into referable DR/vision-threatening DR. Xception, InceptionV3, and DenseNet-169 were used by the authors of [84] for DR grading. They used the Kaggle DR dataset and created two versions of it: balanced and imbalanced. The Xception model, which was trained using the imbalanced version of the dataset, was found to have the best performance. VGG-19, ResNet-152, and DPN107 were used with two-stage transfer learning by the authors in [85] for grading DR. The initial layers of the pre-trained models were adjusted so that the preceding layers could understand the lesions and also the normal areas. Zhou et al. [51] used a multi-cell architecture, which could increase the depth of the DNN, as well as the resolution of the input image. A three-layer architecture that used Inception-ResNet-v2 and BaseNet to grade DR was proposed. AlexNet, VGG-16, and InceptionV3 were used by the authors in [53] for DR stage classification. Image augmentation techniques were used before training. The DR grading performance of three models (a shallow CNN, ResNet with soft attention, and AlexNet) using a new hyper-analytic wavelet (HW) phase activation function was compared by the authors in [88]. AlexNet was found to show the maximum improvement in performance in their experiments. Oulhadj et al. [89] applied a deformable registration to the retina and graded DR using four CNN models, DenseNet-121, Xception, InceptionV3, and ResNet-50. Three pre-trained models, VGG-16, ResNet-50, and EfficientNet-B5, were used for DR grading by the authors in [90]. ResNet-50 was found to perform best among the three. The performance of three pre-trained models, Xception, InceptionV3, and ResNet-50, in DR grading was compared by the authors of [91]. Their simulation results found the Xception model to perform better.
ResNet was used by the authors in [104] for feature extraction. After this, they used SVM, as well as a neural network (NN) pixelwise classifier to grade DR. AD2Net—a new CNN model having the qualities of Res2Net and DenseNet—was used by the authors of [73] for DR grading. An attention mechanism was used to make the network concentrate on understanding useful information from the images. A deep supervision of inception-residual network (DSIRNet) was used by [105], which was based on the network design ideas of GoogleNet and ResNet for feature extraction to grade DR. They also used a deep monitoring method to enhance the thermal classification effect of the training network. Yaqoob et al. [68] trained an optimized ResNet-50 having features from a canny edge detector and histogram of gradients to perform the grading of DR using two public datasets. An ensemble made of GoogleNet, AlexNet, and ResNet-50 was utilized by the authors of [57]. The images were preprocessed and fed to this ensemble model for DR grading. A CNN-based DL ensemble framework in which weights from distinct models were merged to make a solo model, which can extract prominent features from many lesions in the input images, was used by Shorfuzzaman et al. [74] to grade DR. Some CNN models that were pre-trained with the ImageNet dataset—the ResNet-50, DenseNet-121, Xception, and Inception models—were used for this. After preprocessing with CLAHE for segmentation, Vaishnavi et al. [70] used the AlexNet architecture for feature extraction. Finally, a softmax layer was utilized to grade the images according to DR severity.
An ensemble of five models from the EfficientNet family was used for DR grading by the authors in [72] after pre-training on ImageNet. These models were also used independently for the same task, and EfficientNet-B3 performed better than the ensemble model and the other four models. A hybrid and effective model, MobileNetV2-SVM, was used by the authors of [69] to grade DR images. A stack of bottleneck residual blocks was used to construct the MobileNetV2 model. Jiang et al. [106] used three models, InceptionV3, ResNet-152, and Inception-ResNet-V2, to grade DR. An ensemble of these models, built using the AdaBoost algorithm, was also used. Another study used an embedded model consisting of five deep CNNs: ResNet-50, Xception, InceptionV3, DenseNet-121, and DenseNet-169 [107]. Stacked individual channels of the image were taken as the input. The forecasts from the separate models were averaged and used to fix the final target label. The green channel was found to give the best performance in grading DR. A novel hybrid DL model known as E-DenseNet was proposed by the authors of [67] to grade DR. It was a hybrid between a customized EyeNet and DenseNet based on DenseNet-121. The Xception deep feature extractor was used by the authors of [54] to advance the capability of the typical Xception architecture in classifying DR. They also used transfer learning along with hyper-parameter tuning.
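Probability averaging over independently trained models, the basic mechanism behind several of the ensembles above, can be sketched as follows; the two torchvision members, the five-grade heads, and the simple unweighted average are illustrative assumptions rather than the setup of any single reviewed study.

```python
import torch
import torch.nn as nn
from torchvision import models

def resnet_member(num_classes=5):
    m = models.resnet50(weights="DEFAULT")
    m.fc = nn.Linear(m.fc.in_features, num_classes)             # head for 5 DR grades
    return m.eval()

def densenet_member(num_classes=5):
    m = models.densenet121(weights="DEFAULT")
    m.classifier = nn.Linear(m.classifier.in_features, num_classes)
    return m.eval()

members = [resnet_member(), densenet_member()]                   # in practice, each is fine-tuned

def ensemble_predict(images):
    with torch.no_grad():
        probs = [torch.softmax(m(images), dim=1) for m in members]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)          # average probabilities, take argmax

preds = ensemble_predict(torch.randn(2, 3, 224, 224))
```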
A novel CNN model based on the DenseNet-169 architecture combined with a convolutional block attention module (CBAM) was used by the authors of [92] for DR severity classification. The ResNet-101 model was used for DR grading and to analyze the risk of macular edema by the authors in [108], and it was found to perform better than the ResNet-50 model. A heuristically constructed deep neural network was used by the authors of [109] to determine the severity levels of DR. An architecture consisting of an autoencoder along with a VGG network was used by the authors of [62] to reduce overfitting during DR detection. The network was pre-trained in a self-supervised manner.
The binary bat algorithm (BBA), equilibrium optimizer (EO), gravitational search algorithm (GSA), and gray wolf optimizer (GWO) were used as wrapper methods to select the best features obtained from the EfficientNet-B0 and DenseNet-121 models for DR grading by the authors in [93]. Transfer-learning-based InceptionV3 was used by the authors of [94] for DR grading. They used two separate U-Net models for optic disc (OD) and blood vessel segmentation. Six DL models, DenseNet-121, InceptionV3, ResNet-153, VGG-16, MobileNet, and InceptionResNet, were used with transfer learning for DR grading by the authors of [110]. Out of these, the VGG-16 model was found to provide the highest accuracy in their experiments. Deepa et al. [99] used a pre-trained Xception model along with hierarchical clustering of image patches by a Siamese network to grade DR fundus images. A boosting-based ensemble learning method followed by a CNN was used by the authors of [111] for DR grading. A novel few-shot classification framework called DRNet was used by the authors of [95] for DR detection and grading. Episodic training was used to train the model on few-shot classification tasks. Both DR detection and DR grading were performed by the authors of [112] using a Bayesian neural network (BNN). Experiments using nine BNNs were performed to utilize their capability of uncertainty estimation in classifying DR. Chen and Chang [96] used the InceptionV3 and EfficientNet models to grade fundus images according to DR severity. A novel hybrid model called E-DenseNet was used by the authors of [113] for DR grading. It was a combination of the EyeNet and DenseNet models based on transfer learning. Another study by the authors of [97] used a similar hybrid model based on transfer learning for the detection and grading of DR. The model consisted of ResNet-18 and GoogleNet. Ar-HGSO, an autoregressive Henry gas sailfish-optimization-enabled deep learning model, was used by the authors of [98]. The DCNN was used for DR detection, and the Shepard CNN (ShCNN) was used for severity classification. Rajavel et al. [114] introduced a cloud-enabled DR grading system that used an optimized deep belief network (O-DBN) classifier model. Dimensionality reduction and noise removal were performed using the stochastic neighbor embedding (SNE) feature extraction approach. LeNet-5 was used by the authors in [115] for DR grading. A spiking neural network (SNN) was used for DR grading by the authors in [116]. They used the chimp optimization algorithm with DenseNet (COA-DN) for feature extraction.
Table 1 summarizes the studies that were presented in this section.

3. Preprocessing Techniques Used to Grade DR Fundus Images

Image enhancement is performed in most DR studies with the help of several preprocessing techniques. Preprocessing can consist of several steps such as image variation attenuation, intensity conversion, denoising, and contrast enhancement [117]. The attenuation of fundus images is required since there will be a wide variation in the color of the retina of different patients. Intensity conversion is used to make the features clearly visible in an image. Denoising of fundus images is required since much noise may be introduced into these images during the image acquisition process. Finally, contrast enhancement is essential since retinal images captured with the help of a fundus camera will have maximum contrast at the image center, which gradually reduces when moving away from the center. Other common preprocessing steps include image resizing and performing several image augmentations using techniques such as rotation, flipping, and zooming.
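A typical (assumed) preprocessing pipeline combining several of the steps above, green-channel extraction, CLAHE contrast enhancement, resizing, and simple augmentation, is sketched below with OpenCV and NumPy; the parameter values are illustrative rather than those of any specific reviewed study.

```python
import cv2
import numpy as np

def preprocess_fundus(path, size=512):
    """Resize, extract the green channel, and apply CLAHE contrast enhancement."""
    img = cv2.imread(path)                                    # BGR image read from disk
    img = cv2.resize(img, (size, size))
    green = img[:, :, 1]                                      # green channel shows lesions with the most contrast
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)                             # local contrast enhancement
    return enhanced.astype(np.float32) / 255.0                # normalise to [0, 1]

def augment(img):
    """Simple augmentations: random horizontal flip and rotation by a multiple of 90 degrees."""
    if np.random.rand() < 0.5:
        img = np.fliplr(img)
    return np.rot90(img, k=np.random.randint(4)).copy()
```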

4. DR Datasets

The success of all these DL studies relies greatly upon the datasets that are used. The quality of the dataset used and the precision of the annotations will have a huge impact on the results that will be obtained by these methods. Hence, we created a list of some commonly used fundus image datasets for DR diagnosis. Table 2 presents this list.
A few of the commonly used publicly available datasets in these studies are STARE, IDRiD, MESSIDOR, DIARETDB1, the Kaggle APTOS dataset, and the Kaggle EyePACS dataset. Out of these, Kaggle's EyePACS and APTOS datasets are the most widely used for DR detection/grading. However, these contain fundus images taken with different cameras and settings. The largest among them is the Kaggle EyePACS dataset, with more than 88,000 fundus images, whereas some datasets, such as DIARETDB1, HRF, and DRiDB, have fewer than 100 fundus images.
Almost all of them are annotated for DR detection, while some datasets, such as MESSIDOR and Kaggle EyePACS, have also been annotated for DR grading. Most of the studies used different datasets or combinations of datasets for training and validation purposes since most of the datasets are small in size. However, some studies have used their own locally collected datasets for their experiments [43,44].

5. Discussion

In order to evaluate the diabetic retinopathy detection and grading methods on different datasets, a set of metrics is used, including accuracy, sensitivity, specificity, and the AUC. These are among the most commonly used metrics for detection and segmentation in computer vision tasks. In this section, we present the results obtained per dataset by the cited detection and grading methods. These results are reported in tables and figures in order to highlight the best-performing techniques using different architectures.
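For reference, these metrics can be computed directly from model outputs; the snippet below is a small, self-contained example with made-up predictions (the arrays are illustrative, not results from any reviewed study) using scikit-learn.

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

# Made-up binary predictions: 1 = DR, 0 = healthy.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_prob = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.2, 0.9, 0.6])   # model scores for the DR class
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = accuracy_score(y_true, y_pred)
sensitivity = tp / (tp + fn)            # recall on the DR class
specificity = tn / (tn + fp)            # recall on the healthy class
auc = roc_auc_score(y_true, y_prob)     # area under the ROC curve

print(f"acc={accuracy:.2f} sens={sensitivity:.2f} spec={specificity:.2f} auc={auc:.2f}")
```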
Table 3 and Table 4 and Figure 5 and Figure 6 show a comparison of the results obtained by some of the studies that have been reviewed. Studies that have used the same publicly available datasets have been grouped for comparison. Kaggle APTOS and Kaggle EyePACS are the largest datasets that have enabled these researchers to perform their experiments.

5.1. Diabetic Retinopathy Detection

Diabetic retinopathy detection methods are evaluated on two-class datasets containing images with and without diabetic retinopathy. Table 3 compares some DR-detection-based studies. Most studies used Kaggle's APTOS and EyePACS datasets because of their size, which is large compared with the others. The proposed methods perform binary classification to detect the fundus images that contain DR lesions and, thus, the presence of DR. All the methods classify diabetic retinopathy with good accuracy, although the sensitivity and specificity values were not reported in some of the studies. From the results obtained on the Kaggle APTOS dataset, the authors in [11] achieved the best accuracy of 94%, which is 4% better than the accuracy obtained in [40] and more than 8% better than the other methods. In terms of sensitivity and specificity, the method in [22] achieved the best results. On the MESSIDOR and MESSIDOR-2 datasets, the methods in [21,24], respectively, achieved the best accuracy. However, the accuracies on MESSIDOR-2 were lower than those on MESSIDOR: MESSIDOR-2 is larger than MESSIDOR, which can explain the difference between the accuracy of 91% on MESSIDOR-2 and 99% on MESSIDOR. The same observation is made for Kaggle EyePACS, which is a large-scale dataset; the accuracies were generally less than 91%, except for [18,24,25], which achieved accuracies of up to 97%. For all the datasets, including STARE, HRF, and IDRiD, the performance of the proposed methods still needs improvement, given the importance of the topic and the impact of errors if these techniques are used in real-world diagnostics.

5.2. Diabetic Retinopathy Grading

Diabetic-retinopathy-grading-based studies comprise another classification category for diabetic retinopathy analysis. The proposed methods for diabetic retinopathy grading are based on deep learning using different CNN architectures, and transfer learning has been widely used in the reviewed studies because of the effectiveness of well-known backbones for image classification tasks. These include deep learning architectures/models such as encoder-decoder networks, VGG, DenseNet, Inception, Xception, EfficientNet, graph neural networks, etc. In addition, preprocessing techniques were also used in different studies to improve performance, as mentioned in Section 3. Grayscale conversion, resizing, CLAHE, and green channel extraction are some commonly preferred preprocessing techniques. These techniques aid the feature extraction process by removing unnecessary noise from the images.
In this section, we present the grading-based methods on popular DR datasets. The evaluation used a set of metrics including accuracy, sensitivity, and specificity. Table 4 presents a comparison of the results obtained by studies that used the Kaggle EyePACS, MESSIDOR-2, DDR, and IDRiD datasets. Figure 5 and Figure 6 illustrate the experimental results of the proposed methods on the Kaggle APTOS and MESSIDOR datasets. From Table 4, we can see that the proposed methods achieved high accuracies on MESSIDOR-2, DDR, and IDRiD, reaching up to 97%. The same observation holds for the other metrics, including sensitivity and specificity. On Kaggle EyePACS, the method proposed in [5] achieved the best accuracy, as well as the best specificity, while the majority of the methods achieved an accuracy of less than 90%. This is due to the complexity and size of the dataset. On Kaggle APTOS, from the results represented in Figure 5, most methods that used accuracy as an evaluation metric achieved an accuracy of less than 97%, while only the method in [71] achieved an accuracy of 99%. For the MESSIDOR dataset, the proposed methods used the accuracy, sensitivity, and specificity metrics to evaluate their results. The obtained results are presented in Figure 6, which shows that several methods, including [5,50,79,116], achieved an accuracy of up to 99%, while the others achieved accuracies of up to 92%.
From the presented results on different datasets, we can conclude that some methods, such as [5], succeeded in classifying diabetic retinopathy with both grading-based and detection-based approaches with high accuracy, while other methods performed well on some datasets and less well on others. This makes diabetic retinopathy classification a challenging task, even with the improvements achieved during the last ten years using different deep learning techniques.
Table 4. Performance comparison of diabetic-retinopathy-grading-based studies that used the Kaggle EyePACS, MESSIDOR-2, DDR, and IDRiD datasets. The bold and underlined fonts, respectively, represent first and second place.
Dataset | Method | Accuracy | Sensitivity | Specificity
MESSIDOR-2 | Yaqoob et al. [68] | 0.970 | - | -
MESSIDOR-2 | Bilal et al. [94] | 0.946 | 0.948 | 0.944
DDR | Rahhal et al. [110] | 1.00 | - | -
DDR | Elwin et al. [98] | 0.914 | 0.925 | 0.905
IDRiD | Shorfuzzaman et al. [74] | 0.923 | 0.980 | -
IDRiD | Elswah et al. [104] | 0.866 | - | -
IDRiD | Sakaguchi et al. [102] | 0.793 | - | -
IDRiD | Gayathri et al. [5] | 0.990 | - | 0.997
IDRiD | Lee and Ke [76] | 0.972 | 0.702 | 0.921
IDRiD | Nazir et al. [77] | 0.981 | - | -
IDRiD | Shaik and Cherukuri [87] | 0.664 | - | -
IDRiD | Nithiyasri et al. [108] | 0.977 | 0.978 | 0.989
IDRiD | AbdelMaksoud et al. [113] | 0.930 | 0.967 | 0.720
IDRiD | Elwin et al. [98] | 0.914 | 0.925 | 0.905
IDRiD | Sri et al. [115] | 0.970 | - | -
Kaggle EyePACS | Vaishnavi et al. [70] | 0.958 | 0.920 | 0.978
Kaggle EyePACS | Thota and Reddy [58] | 0.740 | 0.800 | 0.650
Kaggle EyePACS | Barhate et al. [62] | 0.762 | - | -
Kaggle EyePACS | Kwasigroch et al. [49] | 0.508 | - | -
Kaggle EyePACS | Wang et al. [50] | 0.632 | - | -
Kaggle EyePACS | Zhou et al. [51] | 0.632 | - | -
Kaggle EyePACS | Shrivastava and Joshi [52] | 0.818 | - | -
Kaggle EyePACS | Arora and Pandey [53] | 0.744 | - | -
Kaggle EyePACS | Kumar [57] | 0.699 | - | -
Kaggle EyePACS | Maistry et al. [45] | 0.870 | - | -
Kaggle EyePACS | Nguyen et al. [59] | 0.820 | 0.800 | 0.820
Kaggle EyePACS | Khaled et al. [64] | 0.631 | - | -
Kaggle EyePACS | Harihanth and Karthikeyan [107] | 0.819 | - | -
Kaggle EyePACS | AbdelMaksoud et al. [113] | 0.968 | 0.983 | 0.72
Kaggle EyePACS | Yaqoob et al. [68] | 0.979 | - | -
Kaggle EyePACS | Qian et al. [73] | 0.832 | - | -
Kaggle EyePACS | Gayathri et al. [5] | 0.999 | - | 1.00
Kaggle EyePACS | Xiao et al. [78] | 0.882 | 0.994 | 0.976
Kaggle EyePACS | Li et al. [79] | 0.944 | 0.840 | 0.822
Kaggle EyePACS | Rajkumar et al. [81] | 0.894 | 0.987 | 0.999
Kaggle EyePACS | Reguant et al. [83] | 0.950 | 0.860 | 0.960
Kaggle EyePACS | Hari et al. [84] | 0.830 | - | -
Kaggle EyePACS | Saeed et al. [85] | 0.997 | 0.960 | 0.998
Kaggle EyePACS | Jabbar et al. [86] | 0.966 | - | -
Kaggle EyePACS | Chandrasekaran and Loganathan [88] | 0.980 | 0.990 | -
Kaggle EyePACS | Bilal et al. [94] | 0.979 | 0.969 | 0.969
Kaggle EyePACS | Deepa et al. [99] | 0.960 | - | -

6. Future Directions

Finally, we would like to provide some future research directions that were identified during this study. The latest trends, such as interpretable AI and cloud-enabled systems, are also being adopted by some researchers in this field, as in medical imaging analysis more broadly [118,119,120,121]. Since interpretable predictions will be preferred by doctors when diagnosing DR, more studies on explainable AI, such as those by Shorfuzzaman et al. [74] and Chetoui and Akhloufi [22], may appear in the future. Such DR-diagnosing systems will help doctors rely on them with more confidence. The use of cloud-enabled systems for computer-aided DR detection, such as the one by Rajavel et al. [114], will improve scalability. This will enable the development of large-scale systems for DR diagnosis.
Furthermore, developing low-cost standalone DR detection systems such as the one developed by the authors in [60] using a Raspberry Pi will be useful for deployment at health centers at a lower cost. Similar low-cost systems can also be created by developing DR diagnosis systems using smartphone-based retinal imaging systems such as the one by the authors in [122].
Another possible research direction is to develop automated systems that are capable of diagnosing more than one eye condition, for example DR together with other conditions such as glaucoma and diabetic macular edema, such as the system proposed by the authors in [123].

7. Conclusions

In this work, we reviewed recent deep-learning-based approaches for diabetic retinopathy detection/diagnosis performed on fundus images. We classified the studies in this field into two categories: DR-detection-based studies and DR-severity-grading-based studies. Most studies graded fundus images into the severity levels suggested by the ICDR.
Almost all of the latest DL networks have been used effectively by different studies for DR detection and grading. It was also noticed that there has been a considerable increase in the number of studies in this field recently. A list of the commonly used retinal fundus image datasets for DR detection and grading was also compiled in this study. Similar studies from each of the two categories of DR studies were compared according to their performance using the commonly used metrics of accuracy, sensitivity, and specificity. In future work, we will conduct a similar survey of the latest DR segmentation and lesion detection studies that have used DL.

Author Contributions

Conceptualization, A.S., O.E. and S.A.-M.; data curation, A.S.; formal analysis, A.S.; methodology, A.S., O.E. and S.A.-M.; project administration, A.S. and N.A.; supervision, S.A.-M. and N.A.; validation, A.S., O.E., S.A.-M. and N.A.; visualization, A.S. and O.E.; writing—original draft, A.S.; writing—review and editing, A.S., O.E., S.A.-M. and N.A. All authors have read and agreed to the published version of the manuscript.

Funding

Open Access funding provided by the Qatar National Library.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This publication was supported by Qatar University Internal Grant QUHI-CENG-22/23-548. The findings achieved herein are solely the responsibility of the authors.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DR    Diabetic retinopathy
DL    Deep learning
AI    Artificial intelligence
CNN   Convolutional neural network

References

  1. IDF Diabetes Atlas 9th Edition. Available online: https://diabetesatlas.org/atlas/ninth-edition/ (accessed on 1 August 2022).
  2. Nijalingappa, P.; Sandeep, B. Machine learning approach for the identification of diabetes retinopathy and its stages. In Proceedings of the 2015 International Conference on Applied and Theoretical Computing and Communication Technology (iCATccT), Davangere, Karnataka, India, 29–31 October 2015; pp. 653–658. [Google Scholar]
  3. Raja, C.; Balaji, L. An automatic detection of blood vessel in retinal images using convolution neural network for diabetic retinopathy detection. Pattern Recognit. Image Anal. 2019, 29, 533–545. [Google Scholar] [CrossRef]
  4. Nazir, T.; Irtaza, A.; Shabbir, Z.; Javed, A.; Akram, U.; Mahmood, M.T. Diabetic retinopathy detection through novel tetragonal local octa patterns and extreme learning machines. Artif. Intell. Med. 2019, 99, 101695. [Google Scholar] [CrossRef] [PubMed]
  5. Gayathri, S.; Gopi, V.P.; Palanisamy, P. Diabetic retinopathy classification based on multipath CNN and machine learning classifiers. Phys. Eng. Sci. Med. 2021, 44, 639–653. [Google Scholar] [CrossRef] [PubMed]
  6. Washburn, P.S. Investigation of severity level of diabetic retinopathy using adaboost classifier algorithm. Mater. Today Proc. 2020, 33, 3037–3042. [Google Scholar] [CrossRef]
  7. Li, X.; Shen, L.; Shen, M.; Tan, F.; Qiu, C.S. Deep learning based early stage diabetic retinopathy detection using optical coherence tomography. Neurocomputing 2019, 369, 134–144. [Google Scholar] [CrossRef]
  8. Islam, K.T.; Wijewickrema, S.; O’Leary, S. Identifying diabetic retinopathy from oct images using deep transfer learning with artificial neural networks. In Proceedings of the 2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS), Cordoba, Spain, 5–7 June 2019; pp. 281–286. [Google Scholar]
  9. Wahid, M.F.; Hossain, A.A. Classification of Diabetic Retinopathy from OCT Images using Deep Convolutional Neural Network with BiLSTM and SVM. In Proceedings of the 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT), Kharagpur, India, 6–8 July 2021; pp. 1–5. [Google Scholar]
  10. Kazakh-British, N.P.; Pak, A.; Abdullina, D. Automatic detection of blood vessels and classification in retinal images for diabetic retinopathy diagnosis with application of convolution neural network. In Proceedings of the 2018 International Conference on Sensors, Signal and Image Processing, Prague, Czech Republic, 12–14 October 2018; pp. 60–63. [Google Scholar]
  11. Anoop, B. Binary Classification of DR-Diabetic Retinopathy using CNN with Fundus Colour Images. Mater. Today Proc. 2022, 58, 212–216. [Google Scholar]
  12. Nasir, N.; Oswald, P.; Alshaltone, O.; Barneih, F.; Al Shabi, M.; Al-Shammaa, A. Deep DR: Detection of Diabetic Retinopathy using a Convolutional Neural Network. In Proceedings of the 2022 Advances in Science and Engineering Technology International Conferences (ASET), Dubai, United Arab Emirates, 21–24 February 2022; pp. 1–5. [Google Scholar]
  13. Chakrabarty, N. A deep learning method for the detection of diabetic retinopathy. In Proceedings of the 2018 5th IEEE Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON), Gorakhpur, India, 2–4 November 2018; pp. 1–5. [Google Scholar]
  14. Jiang, Y.; Wu, H.; Dong, J. Automatic screening of diabetic retinopathy images with convolution neural network based on caffe framework. In Proceedings of the 1st International Conference on Medical and Health Informatics 2017, Taichung city, Taiwan, 20–22 May 2017; pp. 90–94. [Google Scholar]
  15. Niranjana, R.; Narayanan, K.L.; Rani, E.F.I.; Agalya, A.; Chandraleka, C.; Indhumathi, K. Resourceful Retinal Vessel segmentation for Early Exposure of Vision Threatening Diseases. In Proceedings of the 2022 International Conference on Advanced Computing Technologies and Applications (ICACTA), Coimbatore, India, 4–5 March 2022; pp. 1–6. [Google Scholar]
  16. Rêgo, S.; Dutra-Medeiros, M.; Soares, F.; Monteiro-Soares, M. Screening for Diabetic Retinopathy Using an Automated Diagnostic System Based on Deep Learning: Diagnostic Accuracy Assessment. Ophthalmologica 2021, 244, 250–257. [Google Scholar] [CrossRef]
  17. Umapathy, A.; Sreenivasan, A.; Nairy, D.S.; Natarajan, S.; Rao, B.N. Image processing, textural feature extraction and transfer learning based detection of diabetic retinopathy. In Proceedings of the 2019 9th International Conference on Bioscience, Biochemistry and Bioinformatics, Singapore, 7–9 January 2019; pp. 17–21. [Google Scholar]
  18. Kolla, M.; Venugopal, T. Efficient Classification of Diabetic Retinopathy using Binary CNN. In Proceedings of the 2021 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE), Dubai, United Arab Emirates, 17–18 March 2021; pp. 244–247. [Google Scholar]
  19. Pamadi, A.M.; Ravishankar, A.; Nithya, P.A.; Jahnavi, G.; Kathavate, S. Diabetic Retinopathy Detection using MobileNetV2 Architecture. In Proceedings of the 2022 International Conference on Smart Technologies and Systems for Next Generation Computing (ICSTSN), Villupuram, India, 25–26 March 2022; pp. 1–5. [Google Scholar]
  20. Saranya, P.; Devi, S.K.; Bharanidharan, B. Detection of Diabetic Retinopathy in Retinal Fundus Images using DenseNet based Deep Learning Model. In Proceedings of the 2022 International Mobile and Embedded Technology Conference (MECON), Noida, India, 10–11 March 2022; pp. 268–272. [Google Scholar]
  21. Mudaser, W.; Padungweang, P.; Mongkolnam, P.; Lavangnananda, P. Diabetic Retinopathy Classification with pre-trained Image Enhancement Model. In Proceedings of the 2021 IEEE 12th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), New York, NY, USA, 1–4 December 2021; pp. 629–632. [Google Scholar]
  22. Chetoui, M.; Akhloufi, M.A. Explainable diabetic retinopathy using EfficientNET. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 1966–1969. [Google Scholar]
  23. Zhang, Z. Deep-learning-based early detection of diabetic retinopathy on fundus photography using efficientnet. In Proceedings of the 2020 the 4th International Conference on Innovation in Artificial Intelligence, Shenzhen, China, 7–9 August 2020; pp. 70–74. [Google Scholar]
  24. Sudarmadji, P.W.; Pakan, P.D.; Dillak, R.Y. Diabetic Retinopathy Stages Classification using Improved Deep Learning. In Proceedings of the 2020 International Conference on Informatics, Multimedia, Cyber and Information System (ICIMCIS), Jakarta, Indonesia, 19–20 November 2020; pp. 104–109. [Google Scholar]
  25. Boral, Y.S.; Thorat, S.S. Classification of Diabetic Retinopathy based on Hybrid Neural Network. In Proceedings of the 2021 5th International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 8–10 April 2021; pp. 1354–1358. [Google Scholar]
  26. Sanjana, S.; Shadin, N.S.; Farzana, M. Automated Diabetic Retinopathy Detection Using Transfer Learning Models. In Proceedings of the 2021 5th International Conference on Electrical Engineering and Information & Communication Technology (ICEEICT), Mirpur, Dhaka, 18–20 November 2021; pp. 1–6. [Google Scholar]
  27. Hossen, M.S.; Reza, A.A.; Mishu, M.C. An automated model using deep convolutional neural network for retinal image classification to detect diabetic retinopathy. In Proceedings of the International Conference on Computing Advancements, Colombo, Sri Lanka, 10–11 December 2020; pp. 1–8. [Google Scholar]
  28. Qomariah, D.U.N.; Tjandrasa, H.; Fatichah, C. Classification of diabetic retinopathy and normal retinal images using CNN and SVM. In Proceedings of the 2019 12th International Conference on Information & Communication Technology and System (ICTS), Surabaya, Indonesia, 18 July 2019; pp. 152–157. [Google Scholar]
  29. Kumar, N.S.; Karthikeyan, B.R. Diabetic Retinopathy Detection using CNN, Transformer and MLP based Architectures. In Proceedings of the 2021 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Hualien City, Taiwan, 16–19 November 2021; pp. 1–2. [Google Scholar]
  30. Kaushik, H.; Singh, D.; Kaur, M.; Alshazly, H.; Zaguia, A.; Hamam, H. Diabetic retinopathy diagnosis from fundus images using stacked generalization of deep models. IEEE Access 2021, 9, 108276–108292. [Google Scholar] [CrossRef]
  31. Lian, C.; Liang, Y.; Kang, R.; Xiang, Y. Deep convolutional neural networks for diabetic retinopathy classification. In Proceedings of the 2nd International Conference on Advances in Image Processing, Chengdu, China, 6–18 June 2018; pp. 68–72. [Google Scholar]
  32. Lahmar, C.; Idri, A. Deep hybrid architectures for diabetic retinopathy classification. In Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization; Taylor & Francis: Oxfordshire, UK, 2022; pp. 1–19. [Google Scholar]
  33. Dwivedi, S.A.; Kalin, Y.A.; Roy, G.K.; Singla, K. Real-Time Detection for Normal, Glaucomatous and Diabetic Retinopathy Eyes for Ophthalmoscopy using Deep Learning. In Proceedings of the 2022 9th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 23–25 March 2022; pp. 598–603. [Google Scholar]
  34. Jiang, H.; Xu, J.; Shi, R.; Yang, K.; Zhang, D.; Gao, M.; Ma, H.; Qian, W. A multi-label deep learning model with interpretable Grad-CAM for diabetic retinopathy classification. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 1560–1563. [Google Scholar]
  35. Quellec, G.; Charrière, K.; Boudi, Y.; Cochener, B.; Lamard, M. Deep image mining for diabetic retinopathy screening. Med. Image Anal. 2017, 39, 178–193. [Google Scholar] [CrossRef] [Green Version]
  36. Tang, X.; Huang, Y.; Lin, L.; Li, M.; Yuan, J. Automated Diabetic Retinopathy Identification via Lesion Guided Network. In Proceedings of the The Fourth International Symposium on Image Computing and Digital Medicine, Shenyang, China, 5–8 December 2020; pp. 141–144. [Google Scholar]
  37. Gundluru, N.; Rajput, D.S.; Lakshmanna, K.; Kaluri, R.; Shorfuzzaman, M.; Uddin, M.; Rahman Khan, M.A. Enhancement of Detection of Diabetic Retinopathy Using Harris Hawks Optimization with Deep Learning Model. Comput. Intell. Neurosci. 2022, 2022, 8512469. [Google Scholar] [CrossRef]
  38. Gunasekaran, K.; Pitchai, R.; Chaitanya, G.K.; Selvaraj, D.; Annie Sheryl, S.; Almoallim, H.S.; Alharbi, S.A.; Raghavan, S.; Tesemma, B.G. A Deep Learning Framework for Earlier Prediction of Diabetic Retinopathy from Fundus Photographs. BioMed Res. Int. 2022, 2022, 3163496. [Google Scholar] [CrossRef] [PubMed]
  39. Baba, S.M.; Bala, I. Detection of Diabetic Retinopathy with Retinal Images using CNN. In Proceedings of the 2022 6th International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 25–27 May 2022; pp. 1074–1080. [Google Scholar]
  40. El-Ateif, S.; Idri, A. Single-modality and joint fusion deep learning for diabetic retinopathy diagnosis. Sci. Afr. 2022, 17, e01280. [Google Scholar] [CrossRef]
  41. Wilkinson, C.; Ferris III, F.L.; Klein, R.E.; Lee, P.P.; Agardh, C.D.; Davis, M.; Dills, D.; Kampik, A.; Pararajasegaram, R.; Verdaguer, J.T.; et al. Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology 2003, 110, 1677–1682. [Google Scholar] [CrossRef] [PubMed]
  42. Harshitha, C.; Asha, A.; Pushkala, J.L.S.; Anogini, R.N.S.; Karthikeyan, C. Predicting the Stages of Diabetic Retinopathy using Deep Learning. In Proceedings of the 2021 6th International Conference on Inventive Computation Technologies (ICICT), Coimbatore, India, 20–22 January 2021; pp. 1–6. [Google Scholar]
  43. Luo, X.; Pu, Z.; Xu, Y.; Wong, W.K.; Su, J.; Dou, X.; Ye, B.; Hu, J.; Mou, L. MVDRNet: Multi-view diabetic retinopathy detection by combining DCNNs and attention mechanisms. Pattern Recognit. 2021, 120, 108104. [Google Scholar] [CrossRef]
  44. Baget-Bernaldiz, M.; Pedro, R.A.; Santos-Blanco, E.; Navarro-Gil, R.; Valls, A.; Moreno, A.; Rashwan, H.A.; Puig, D. Testing a Deep Learning Algorithm for Detection of Diabetic Retinopathy in a Spanish Diabetic Population and with MESSIDOR Database. Diagnostics 2021, 11, 1385. [Google Scholar] [CrossRef] [PubMed]
  45. Maistry, A.; Pillay, A.; Jembere, E. Improving the accuracy of diabetes retinopathy image classification using augmentation. In Proceedings of the Conference of the South African Institute of Computer Scientists and Information Technologists 2020, Cape Town, South Africa, 14–16 September 2020; pp. 134–140. [Google Scholar]
  46. Agustin, T.; Sunyoto, A. Optimization Convolutional Neural Network for Classification Diabetic Retinopathy Severity. In Proceedings of the 2020 3rd International Conference on Information and Communications Technology (ICOIACT), Yogyakarta, Indonesia, 24–25 November 2020; pp. 66–71. [Google Scholar]
  47. Li, X.; Pang, T.; Xiong, B.; Liu, W.; Liang, P.; Wang, T. Convolutional neural networks based transfer learning for diabetic retinopathy fundus image classification. In Proceedings of the 2017 10th International Congress on Image and Signal Processing, Biomedical Engineering and Informatics (CISP-BMEI), Shanghai, China, 14–16 October 2017; pp. 1–11. [Google Scholar]
  48. Ardiyanto, I.; Nugroho, H.A.; Buana, R.L.B. Deep learning-based diabetic retinopathy assessment on embedded system. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju Island, South Korea, 11–15 July 2017; pp. 1760–1763. [Google Scholar]
  49. Kwasigroch, A.; Jarzembinski, B.; Grochowski, M. Deep CNN based decision support system for detection and assessing the stage of diabetic retinopathy. In Proceedings of the 2018 International Interdisciplinary PhD Workshop (IIPhDW), Swinoujscie, Poland, 9–12 May 2018; pp. 111–116. [Google Scholar]
  50. Wang, X.; Lu, Y.; Wang, Y.; Chen, W.B. Diabetic retinopathy stage classification using convolutional neural networks. In Proceedings of the 2018 IEEE International Conference on Information Reuse and Integration (IRI), Salt Lake City, UT, USA, 7–9 July 2018; pp. 465–471. [Google Scholar]
  51. Zhou, K.; Gu, Z.; Liu, W.; Luo, W.; Cheng, J.; Gao, S.; Liu, J. Multi-cell multi-task convolutional neural networks for diabetic retinopathy grading. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 2724–2727. [Google Scholar]
  52. Shrivastava, U.; Joshi, M. Automated Multiclass Diagnosis of Diabetic Retinopathy using Hierarchical Learning. In Proceedings of the 11th Indian Conference on Computer Vision, Graphics and Image Processing, Hyderabad, India, 18–22 December 2018; pp. 1–7. [Google Scholar]
  53. Arora, M.; Pandey, M. Deep neural network for diabetic retinopathy detection. In Proceedings of the 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon), Faridabad, India, 14–16 February 2019; pp. 189–193. [Google Scholar]
  54. Kassani, S.H.; Kassani, P.H.; Khazaeinezhad, R.; Wesolowski, M.J.; Schneider, K.A.; Deters, R. Diabetic retinopathy classification using a modified xception architecture. In Proceedings of the 2019 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Ajman, United Arab Emirates, 10–12 December 2019; pp. 1–6. [Google Scholar]
  55. Hathwar, S.B.; Srinivasa, G. Automated grading of diabetic retinopathy in retinal fundus images using deep learning. In Proceedings of the 2019 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), Kuala Lumpur, Malaysia, 17–19 September 2019; pp. 73–77. [Google Scholar]
  56. Bellemo, V.; Lim, Z.W.; Lim, G.; Nguyen, Q.D.; Xie, Y.; Yip, M.Y.; Hamzah, H.; Ho, J.; Lee, X.Q.; Hsu, W.; et al. Artificial intelligence using deep learning to screen for referable and vision-threatening diabetic retinopathy in Africa: A clinical validation study. Lancet Digit. Health 2019, 1, e35–e44. [Google Scholar] [CrossRef] [Green Version]
  57. Kumar, S. Diabetic Retinopathy Diagnosis with Ensemble Deep-Learning. In Proceedings of the 3rd International Conference on Vision, Image and Signal Processing, Vancouver, BC, Canada, 26–28 August 2019; pp. 1–5. [Google Scholar]
  58. Thota, N.B.; Reddy, D.U. Improving the accuracy of diabetic retinopathy severity classification with transfer learning. In Proceedings of the IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS), Springfield, MA, USA, 9–12 August 2020; pp. 1003–1006. [Google Scholar]
  59. Nguyen, Q.H.; Muthuraman, R.; Singh, L.; Sen, G.; Tran, A.C.; Nguyen, B.P.; Chua, M. Diabetic retinopathy detection using deep learning. In Proceedings of the 4th International Conference on Machine Learning and Soft Computing, Haiphong City, Vietnam, 17–19 January 2020; pp. 103–107. [Google Scholar]
  60. Lavanya, R.V.; Sumesh, E.; Jayakumari, C.; Isaac, R. Detection and Classification of Diabetic Retinopathy using Raspberry PI. In Proceedings of the 2020 4th International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 5–7 November 2020; pp. 1688–1691. [Google Scholar]
  61. Elzennary, A.; Soliman, M.; Ibrahim, M. Early Deep Detection for Diabetic Retinopathy. In Proceedings of the 2020 International Symposium on Advanced Electrical and Communication Technologies (ISAECT), Kenitra, Morocco, 25–27 November 2020; pp. 1–5. [Google Scholar]
  62. Barhate, N.; Bhave, S.; Bhise, R.; Sutar, R.G.; Karia, D.C. Reducing Overfitting in Diabetic Retinopathy Detection using Transfer Learning. In Proceedings of the 2020 IEEE 5th International Conference on Computing Communication and Automation (ICCCA), Greater Noida, India, 30–31 October 2020; pp. 298–301. [Google Scholar]
  63. Wang, S.; Wang, X.; Hu, Y.; Shen, Y.; Yang, Z.; Gan, M.; Lei, B. Diabetic retinopathy diagnosis using multichannel generative adversarial network with semisupervision. IEEE Trans. Autom. Sci. Eng. 2020, 18, 574–585. [Google Scholar] [CrossRef]
  64. Khaled, O.; El-Sahhar, M.; El-Dine, M.A.; Talaat, Y.; Hassan, Y.M.; Hamdy, A. Cascaded architecture for classifying the preliminary stages of diabetic retinopathy. In Proceedings of the 2020 9th International Conference on Software and Information Engineering (ICSIE), Cairo, Egypt, 11–13 November 2020; pp. 108–112. [Google Scholar]
  65. Islam, M.R.; Hasan, M.A.M.; Sayeed, A. Transfer learning based diabetic retinopathy detection with a novel preprocessed layer. In Proceedings of the 2020 IEEE Region 10 Symposium (TENSYMP), Dhaka, Bangladesh, 5–7 June 2020; pp. 888–891. [Google Scholar]
  66. Wang, J.; Bai, Y.; Xia, B. Simultaneous diagnosis of severity and features of diabetic retinopathy in fundus photography using deep learning. IEEE J. Biomed. Health Inform. 2020, 24, 3397–3407. [Google Scholar] [CrossRef]
  67. AbdelMaksoud, E.; Barakat, S.; Elmogy, M. Diabetic Retinopathy Grading Based on a Hybrid Deep Learning Model. In Proceedings of the 2020 International Conference on Data Analytics for Business and Industry: Way Towards a Sustainable Economy (ICDABI), Sakheer, Bahrain, 26–27 October 2020; pp. 1–6. [Google Scholar]
  68. Yaqoob, M.K.; Ali, S.F.; Kareem, I.; Fraz, M.M. Feature-based optimized deep residual network architecture for diabetic retinopathy detection. In Proceedings of the 2020 IEEE 23rd International Multitopic Conference (INMIC), Bahawalpur, Pakistan, 5–7 November 2020; pp. 1–6. [Google Scholar]
  69. Taufiqurrahman, S.; Handayani, A.; Hermanto, B.R.; Mengko, T.L.E.R. Diabetic retinopathy classification using a hybrid and efficient MobileNetV2-SVM model. In Proceedings of the 2020 IEEE REGION 10 CONFERENCE (TENCON), Osaka, Japan, 16–19 November 2020; pp. 235–240. [Google Scholar]
  70. Vaishnavi, J.; Ravi, S.; Anbarasi, A. An efficient adaptive histogram based segmentation and extraction model for the classification of severities on diabetic retinopathy. Multimed. Tools Appl. 2020, 79, 30439–30452. [Google Scholar] [CrossRef]
  71. Shankar, K.; Sait, A.R.W.; Gupta, D.; Lakshmanaprabu, S.; Khanna, A.; Pandey, H.M. Automated detection and classification of fundus diabetic retinopathy images using synergic deep learning model. Pattern Recognit. Lett. 2020, 133, 210–216. [Google Scholar] [CrossRef]
  72. Karki, S.S.; Kulkarni, P. Diabetic Retinopathy Classification using a Combination of EfficientNets. In Proceedings of the 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), Pune, India, 5–7 March 2021; pp. 68–72. [Google Scholar]
  73. Qian, Z.; Wu, C.; Chen, H.; Chen, M. Diabetic retinopathy grading using attention based convolution neural network. In Proceedings of the 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 12–14 March 2021; Volume 5, pp. 2652–2655. [Google Scholar]
  74. Shorfuzzaman, M.; Hossain, M.S.; El Saddik, A. An Explainable Deep Learning Ensemble Model for Robust Diagnosis of Diabetic Retinopathy Grading. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2021, 17, 1–24. [Google Scholar] [CrossRef]
  75. Sugeno, A.; Ishikawa, Y.; Ohshima, T.; Muramatsu, R. Simple methods for the lesion detection and severity grading of diabetic retinopathy by image processing and transfer learning. Comput. Biol. Med. 2021, 137, 104795. [Google Scholar] [CrossRef]
  76. Lee, C.H.; Ke, Y.H. Fundus images classification for Diabetic Retinopathy using Deep Learning. In Proceedings of the 2021 13th International Conference on Computer Modeling and Simulation, Melbourne, Australia, 25–27 June 2021; pp. 264–270. [Google Scholar]
  77. Nazir, T.; Nawaz, M.; Rashid, J.; Mahum, R.; Masood, M.; Mehmood, A.; Ali, F.; Kim, J.; Kwon, H.Y.; Hussain, A. Detection of diabetic eye disease from retinal images using a deep learning based CenterNet model. Sensors 2021, 21, 5283. [Google Scholar] [CrossRef] [PubMed]
  78. Xiao, Z.; Zhang, Y.; Wu, J.; Zhang, X. SE-MIDNet Based on Deep Learning for Diabetic Retinopathy Classification. In Proceedings of the 2021 7th International Conference on Computing and Artificial Intelligence, Tianjin, China, 23–26 April 2021; pp. 92–98. [Google Scholar]
  79. Li, Y.; Song, Z.; Kang, S.; Jung, S.; Kang, W. Semi-Supervised Auto-Encoder Graph Network for Diabetic Retinopathy Grading. IEEE Access 2021, 9, 140759–140767. [Google Scholar] [CrossRef]
  80. Martinez-Murcia, F.J.; Ortiz, A.; Ramírez, J.; Górriz, J.M.; Cruz, R. Deep residual transfer learning for automatic diagnosis and grading of diabetic retinopathy. Neurocomputing 2021, 452, 424–434. [Google Scholar] [CrossRef]
  81. Rajkumar, R.; Jagathishkumar, T.; Ragul, D.; Selvarani, A.G. Transfer learning approach for diabetic retinopathy detection using residual network. In Proceedings of the 2021 6th International Conference on Inventive Computation Technologies (ICICT), Coimbatore, India, 20–22 January 2021; pp. 1189–1193. [Google Scholar]
  82. Swedhaasri, M.; Parekh, T.; Sharma, A. A Multi-Stage Deep Transfer Learning Method for Classification of Diabetic Retinopathy in Retinal images. In Proceedings of the 2021 Second International Conference on Electronics and Sustainable Communication Systems (ICESC), Coimbatore, India, 4–6 August 2021; pp. 1143–1149. [Google Scholar]
  83. Reguant, R.; Brunak, S.; Saha, S. Understanding inherent image features in CNN-based assessment of diabetic retinopathy. Sci. Rep. 2021, 11, 1–12. [Google Scholar] [CrossRef]
  84. Hari, K.N.; Karthikeyan, B.; Reddy, M.R.; Seethalakshmi, R. Diabetic Retinopathy Detection with Feature Enhancement and Deep Learning. In Proceedings of the 2021 International Conference on System, Computation, Automation and Networking (ICSCAN), Puducherry, India, 30–31 July 2021; pp. 1–5. [Google Scholar]
  85. Saeed, F.; Hussain, M.; Aboalsamh, H.A. Automatic diabetic retinopathy diagnosis using adaptive fine-tuned convolutional neural network. IEEE Access 2021, 9, 41344–41359. [Google Scholar] [CrossRef]
  86. Jabbar, M.K.; Yan, J.; Xu, H.; Ur Rehman, Z.; Jabbar, A. Transfer Learning-Based Model for Diabetic Retinopathy Diagnosis Using Retinal Images. Brain Sci. 2022, 12, 535. [Google Scholar] [CrossRef]
  87. Shaik, N.S.; Cherukuri, T.K. Hinge attention network: A joint model for diabetic retinopathy severity grading. Appl. Intell. 2022, 52, 15105–15121. [Google Scholar] [CrossRef]
  88. Chandrasekaran, R.; Loganathan, B. Retinopathy grading with deep learning and wavelet hyper-analytic activations. Vis. Comput. 2022, 1–16. [Google Scholar] [CrossRef]
  89. Oulhadj, M.; Riffi, J.; Chaimae, K.; Mahraz, A.M.; Ahmed, B.; Yahyaouy, A.; Fouad, C.; Meriem, A.; Idriss, B.A.; Tairi, H. Diabetic retinopathy prediction based on deep learning and deformable registration. Multimed. Tools Appl. 2022, 81, 28709–28727. [Google Scholar] [CrossRef]
  90. Nair, A.T.; Anitha, M.; Kumar, A. Disease Grading of Diabetic Retinopathy using Deep Learning Techniques. In Proceedings of the 2022 6th International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 29–31 March 2022; pp. 1019–1024. [Google Scholar]
  91. Deepa, V.; Kumar, C.S.; Cherian, T. Pre-Trained Convolutional Neural Network for Automated Grading of Diabetic Retinopathy. In Proceedings of the 2022 First International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT), Tiruchirappalli, India, 16–18 February 2022; pp. 1–5. [Google Scholar]
  92. Farag, M.M.; Fouad, M.; Abdel-Hamid, A.T. Automatic Severity Classification of Diabetic Retinopathy Based on DenseNet and Convolutional Block Attention Module. IEEE Access 2022, 10, 38299–38308. [Google Scholar] [CrossRef]
  93. Canayaz, M. Classification of diabetic retinopathy with feature selection over deep features using nature-inspired wrapper methods. Appl. Soft Comput. 2022, 128, 109462. [Google Scholar] [CrossRef]
  94. Bilal, A.; Zhu, L.; Deng, A.; Lu, H.; Wu, N. AI-Based Automatic Detection and Classification of Diabetic Retinopathy Using U-Net and Deep Learning. Symmetry 2022, 14, 1427. [Google Scholar] [CrossRef]
  95. Murugappan, M.; Prakash, N.; Jeya, R.; Mohanarathinam, A.; Hemalakshmi, G.; Mahmud, M. A novel few-shot classification framework for diabetic retinopathy detection and grading. Measurement 2022, 200, 111485. [Google Scholar] [CrossRef]
  96. Chen, C.Y.; Chang, M.C. Using Deep Neural Networks to Classify the Severity of Diabetic Retinopathy. In Proceedings of the 2022 IEEE International Conference on Consumer Electronics-Taiwan, Taipei, Taiwan, 6–8 July 2022; pp. 241–242. [Google Scholar]
  97. Butt, M.M.; Iskandar, D.; Abdelhamid, S.E.; Latif, G.; Alghazo, R. Diabetic Retinopathy Detection from Fundus Images of the Eye Using Hybrid Deep Learning Features. Diagnostics 2022, 12, 1607. [Google Scholar] [CrossRef]
  98. Elwin, J.G.R.; Mandala, J.; Maram, B.; Kumar, R.R. Ar-HGSO: Autoregressive-Henry Gas Sailfish Optimization enabled deep learning model for diabetic retinopathy detection and severity level classification. Biomed. Signal Process. Control 2022, 77, 103712. [Google Scholar] [CrossRef]
  99. Deepa, V.; Sathish Kumar, C.; Cherian, T. Automated grading of diabetic retinopathy using CNN with hierarchical clustering of image patches by siamese network. Phys. Eng. Sci. Med. 2022, 45, 623–635. [Google Scholar] [CrossRef]
  100. Araújo, T.; Aresta, G.; Mendonça, L.; Penas, S.; Maia, C.; Carneiro, Â.; Mendonça, A.M.; Campilho, A. DR|GRADUATE: Uncertainty-aware deep-learning-based diabetic retinopathy grading in eye fundus images. Med. Image Anal. 2020, 63, 101715. [Google Scholar] [CrossRef]
  101. Bhawarkar, Y.; Bhure, K.; Chaudhary, V.; Alte, B. Diabetic Retinopathy Detection From Fundus Images Using Multi-Tasking Model With EfficientNet B5. In Proceedings of the ITM Web of Conferences; EDP Sciences: Navi Mumbai, India, 2022; Volume 44, p. 03027. [Google Scholar]
  102. Sakaguchi, A.; Wu, R.; Kamata, S.i. Fundus image classification for diabetic retinopathy using disease severity grading. In Proceedings of the 2019 9th International Conference on Biomedical Engineering and Technology, Tokyo, Japan, 28–30 March 2019; pp. 190–196. [Google Scholar]
  103. Gulshan, V.; Rajan, R.P.; Widner, K.; Wu, D.; Wubbels, P.; Rhodes, T.; Whitehouse, K.; Coram, M.; Corrado, G.; Ramasamy, K.; et al. Performance of a deep-learning algorithm vs manual grading for detecting diabetic retinopathy in India. JAMA Ophthalmol. 2019, 137, 987–993. [Google Scholar] [CrossRef] [Green Version]
  104. Elswah, D.K.; Elnakib, A.A.; Moustafa, H.E.d. Automated diabetic retinopathy grading using resnet. In Proceedings of the 2020 37th National Radio Science Conference (NRSC), Cairo, Egypt, 8–10 September 2020; pp. 248–254. [Google Scholar]
  105. Dong, C.W.; Xia, D.; Jin, J.; Yang, Z. Classification of diabetic retinopathy based on DSIRNet. In Proceedings of the 2019 14th International Conference on Computer Science & Education (ICCSE), Toronto, ON, Canada, 19–21 August 2019; pp. 692–696. [Google Scholar]
  106. Jiang, H.; Yang, K.; Gao, M.; Zhang, D.; Ma, H.; Qian, W. An interpretable ensemble deep learning model for diabetic retinopathy disease classification. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 2045–2048. [Google Scholar]
  107. Harihanth, K.; Karthikeyan, B. Diabetic Retinopathy Detection using ensemble deep Learning and Individual Channel Training. In Proceedings of the 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS), Thoothukudi, India, 3–5 December 2020; pp. 1042–1049. [Google Scholar]
  108. Nithiyasri, M.; Ananthi, G.; Thiruvengadam, S. Improved Classification of Stages in Diabetic Retinopathy Disease using Deep Learning Algorithm. In Proceedings of the 2022 International Conference on Wireless Communications Signal Processing and Networking (WiSPNET), Chennai, India, 24–26 March 2022; pp. 143–147. [Google Scholar]
  109. Kassimi, A.E.B.; Madiafi, M.; Kammour, A.; Bouroumi, A. A Deep Neural Network for Detecting the Severity Level of Diabetic Retinopathy from Retinography Images. In Proceedings of the 2022 2nd International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET), Meknes, Morocco, 3–4 March 2022; pp. 1–7. [Google Scholar]
  110. Rahhal, D.; Alhamouri, R.; Albataineh, I.; Duwairi, R. Detection and Classification of Diabetic Retinopathy Using Artificial Intelligence Algorithms. In Proceedings of the 2022 13th International Conference on Information and Communication Systems (ICICS), Irbid, Jordan, 21–23 June 2022; pp. 15–21. [Google Scholar]
  111. Meenakshi, G.; Thailambal, G. Categorisation and Prognostication of Diabetic Retinopathy using Ensemble Learning and CNN. In Proceedings of the 2022 6th International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 28–30 April 2022; pp. 1145–1152. [Google Scholar]
  112. Jaskari, J.; Sahlsten, J.; Damoulas, T.; Knoblauch, J.; Särkkä, S.; Kärkkäinen, L.; Hietala, K.; Kaski, K.K. Uncertainty-aware deep learning methods for robust diabetic retinopathy classification. IEEE Access 2022, 10, 76669–76681. [Google Scholar] [CrossRef]
  113. AbdelMaksoud, E.; Barakat, S.; Elmogy, M. A computer-aided diagnosis system for detecting various diabetic retinopathy grades based on a hybrid deep learning technique. Med. Biol. Eng. Comput. 2022, 60, 2015–2038. [Google Scholar] [CrossRef]
  114. Rajavel, R.; Sundaramoorthy, B.; GR, K.; Ravichandran, S.K.; Leelasankar, K. Cloud-enabled Diabetic Retinopathy Prediction System using optimized deep Belief Network Classifier. J. Ambient. Intell. Humaniz. Comput. 2022, 1–9. [Google Scholar] [CrossRef]
  115. Sri, K.S.; Priya, G.K.; Kumar, B.P.; Sravya, S.D.; Priya, M.B. Diabetic Retinopathy Classification using Deep Learning Technique. In Proceedings of the 2022 6th International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 28–30 April 2022; pp. 1492–1496. [Google Scholar]
  116. Ragab, M.; Aljedaibi, W.H.; Nahhas, A.F.; Alzahrani, I.R. Computer aided diagnosis of diabetic retinopathy grading using spiking neural network. Comput. Electr. Eng. 2022, 101, 108014. [Google Scholar] [CrossRef]
  117. Bhardwaj, C.; Jain, S.; Sood, M. Appraisal of preprocessing techniques for automated detection of diabetic retinopathy. In Proceedings of the 2018 Fifth International Conference on Parallel, Distributed and Grid Computing (PDGC), Solan, India, 20–22 December 2018; pp. 734–739. [Google Scholar]
  118. Al-Mohannadi, A.; Al-Maadeed, S.; Elharrouss, O.; Sadasivuni, K.K. Encoder-decoder architecture for ultrasound IMC segmentation and cIMT measurement. Sensors 2021, 21, 6839. [Google Scholar] [CrossRef]
  119. Elharrouss, O.; Akbari, Y.; Almaadeed, N.; Al-Maadeed, S. Backbones-review: Feature extraction networks for deep learning and deep reinforcement learning approaches. arXiv 2022, arXiv:2206.08016. [Google Scholar]
  120. Riahi, A.; Elharrouss, O.; Al-Maadeed, S. BEMD-3DCNN-based method for COVID-19 detection. Comput. Biol. Med. 2022, 142, 105188. [Google Scholar] [CrossRef]
  121. Elharrouss, O.; Subramanian, N.; Al-Maadeed, S. An encoder–decoder-based method for segmentation of COVID-19 lung infection in CT images. SN Comput. Sci. 2022, 3, 1–12. [Google Scholar] [CrossRef]
  122. Hacisoftaoglu, R.E.; Karakaya, M.; Sallam, A.B. Deep learning frameworks for diabetic retinopathy detection with smartphone-based retinal imaging systems. Pattern Recognit. Lett. 2020, 135, 409–417. [Google Scholar] [CrossRef]
  123. Keel, S.; Wu, J.; Lee, P.Y.; Scheetz, J.; He, M. Visualizing deep learning models for the detection of referable diabetic retinopathy and glaucoma. JAMA Ophthalmol. 2019, 137, 288–292. [Google Scholar] [CrossRef]
Figure 1. Visualization of a healthy retina and an unhealthy retina (https://neoretina.com/blog/diabetic-retinopathy-can-it-be-reversed/, accessed on 1 August 2022).
Figure 2. Types of diabetic retinopathy studies.
Figure 3. Backbones used for diabetic retinopathy detection studies.
Figure 4. The five types of diabetic retinopathy.
Figure 5. Performance comparison of diabetic-retinopathy-grading-based studies that used the Kaggle APTOS dataset.
Figure 6. Performance comparison of diabetic-retinopathy-grading-based studies that used the MESSIDOR dataset.
Table 2. Diabetic retinopathy datasets.

| Dataset           | No. of Images | Image Size      |
|-------------------|---------------|-----------------|
| STARE             | 400           | 700 × 605       |
| IDRiD             | 516           | 4288 × 2848     |
| MESSIDOR          | 1200          | Different sizes |
| HRF               | 45            | 3504 × 2336     |
| Kaggle EyePACS    | 88,702        | Different sizes |
| Kaggle APTOS 2019 | 5590          | Different sizes |
| MESSIDOR 2        | 1748          | Different sizes |
| DDR               | 13,673        | Different sizes |
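Because several of these datasets contain fundus images of different sizes, deep-learning pipelines typically resize and normalize the images to a fixed resolution before passing them to a backbone network. The following minimal PyTorch/torchvision sketch illustrates one common way to do this; the folder name, the 224 × 224 input size, and the ImageNet normalization statistics are illustrative assumptions for pretrained backbones, not values prescribed by any of the datasets above.

```python
import torch
from torchvision import datasets, transforms

# Hypothetical folder layout: fundus_images/<grade>/<image>.png
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),          # unify the varying image sizes
    transforms.ToTensor(),                  # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # common ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("fundus_images", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

for images, labels in loader:
    print(images.shape, labels.shape)  # e.g., torch.Size([32, 3, 224, 224])
    break
```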
Table 3. Performance comparison of diabetic-retinopathy-detection-based studies. The bold and underlined fonts, respectively, represent first and second place.

| Dataset        | Method                       | Accuracy | Sensitivity | Specificity |
|----------------|------------------------------|----------|-------------|-------------|
| Kaggle APTOS   | Anoop et al. [11]            | 0.946    | 0.860       | 0.960       |
|                | Pamadi et al. [19]           | 0.780    | –           | –           |
|                | Saranya et al. [20]          | 0.830    | –           | –           |
|                | Chetoui and Akhloufi [22]    | –        | 0.991       | 0.972       |
|                | Sanjana et al. [26]          | 0.861    | 0.854       | 0.875       |
|                | Kumar and Karthikeyan [29]   | 0.864    | –           | –           |
|                | Lahmar and Idri [32]         | 0.890    | –           | –           |
|                | El-Ateif and Idri [40]       | 0.907    | 0.928       | 0.893       |
| MESSIDOR       | Rego et al. [16]             | –        | 0.808       | 0.973       |
|                | Umapathy et al. [17]         | 0.944    | –           | –           |
|                | Sudarmadji et al. [24]       | 0.997    | 0.990       | 0.980       |
|                | Hossen et al. [27]           | 0.949    | 0.926       | 0.971       |
|                | Qomariah et al. [28]         | 0.958    | –           | –           |
| MESSIDOR 2     | Mudaser et al. [21]          | 0.910    | –           | –           |
|                | Sanjana et al. [26]          | 0.861    | 0.854       | 0.875       |
|                | Lahmar and Idri [32]         | 0.841    | –           | –           |
|                | El-Ateif and Idri [40]       | 0.777    | 0.310       | 0.938       |
| Kaggle EyePACS | Saranya et al. [20]          | 0.830    | –           | –           |
|                | Jiang et al. [14]            | 0.757    | –           | –           |
|                | Kaushik et al. [30]          | 0.979    | –           | –           |
|                | Boral and Thorat [25]        | 0.988    | 0.977       | 1.00        |
|                | Rego et al. [16]             | –        | 0.808       | 0.973       |
|                | Kolla and Venugopal [18]     | 0.910    | –           | –           |
|                | Chetoui and Akhloufi [22]    | –        | 0.981       | 0.989       |
|                | Sudarmadji et al. [24]       | 0.984    | 0.980       | 0.970       |
|                | Lian et al. [31]             | 0.790    | –           | –           |
|                | Lahmar and Idri [32]         | 0.840    | –           | –           |
|                | Quellec et al. [35]          | 0.954    | –           | –           |
| STARE          | Kazakh-British et al. [10]   | 0.600    | –           | –           |
|                | Umapathy et al. [17]         | 0.944    | –           | –           |
| HRF            | Chakrabarty [13]             | 1.00     | 1.00        | –           |
|                | Umapathy et al. [17]         | 0.944    | –           | –           |
| IDRiD          | Nasir et al. [12]            | 0.960    | 0.829       | –           |
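For reference, the accuracy, sensitivity, and specificity values compared above follow the standard confusion-matrix definitions. The short Python sketch below shows how they are typically computed for a binary DR-detection task; the function name, variable names, and example labels are purely illustrative and not taken from any of the surveyed works.

```python
import numpy as np

def dr_detection_metrics(y_true, y_pred):
    """Compute accuracy, sensitivity, and specificity for binary DR detection.

    y_true, y_pred: arrays of 0 (no DR) and 1 (DR) labels.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)

    tp = np.sum((y_pred == 1) & (y_true == 1))  # DR correctly detected
    tn = np.sum((y_pred == 0) & (y_true == 0))  # healthy correctly rejected
    fp = np.sum((y_pred == 1) & (y_true == 0))  # healthy flagged as DR
    fn = np.sum((y_pred == 0) & (y_true == 1))  # DR missed

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)  # recall on the DR class
    specificity = tn / (tn + fp)  # recall on the healthy class
    return accuracy, sensitivity, specificity

# Hypothetical example: 6 test images, 4 of which show DR
acc, sen, spe = dr_detection_metrics([1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 0, 1])
print(f"Accuracy {acc:.3f}, Sensitivity {sen:.3f}, Specificity {spe:.3f}")
```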