Article

A Lightweight Diabetic Retinopathy Detection Model Using a Deep-Learning Technique

by
Abdul Rahaman Wahab Sait
Department of Documents and Archive, Center of Documents and Administrative Communication, King Faisal University, P.O. Box 400, Hofuf 31982, Al-Ahsa, Saudi Arabia
Diagnostics 2023, 13(19), 3120; https://doi.org/10.3390/diagnostics13193120
Submission received: 14 September 2023 / Revised: 1 October 2023 / Accepted: 2 October 2023 / Published: 3 October 2023
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease—3rd Edition)

Abstract

Diabetic retinopathy (DR) is a severe complication of diabetes. It affects a large portion of the population of the Kingdom of Saudi Arabia. Existing systems assist clinicians in treating DR patients. However, these systems entail significantly high computational costs. In addition, dataset imbalances may lead existing DR detection systems to produce false positive outcomes. Therefore, the author intended to develop a lightweight deep-learning (DL)-based DR-severity grading system that could be used with limited computational resources. The proposed model followed an image pre-processing approach to overcome the noise and artifacts found in fundus images. A feature extraction process using the You Only Look Once (Yolo) V7 technique was suggested. It was used to provide feature sets. The author employed a tailored quantum marine predator algorithm (QMPA) for selecting appropriate features. A hyperparameter-optimized MobileNet V3 model was utilized for predicting severity levels using images. The author generalized the proposed model using the APTOS and EyePacs datasets. The APTOS dataset contained 5590 fundus images, whereas the EyePacs dataset included 35,100 images. The outcome of the comparative analysis revealed that the proposed model achieved an accuracy of 98.0% and 98.4% and an F1-Score of 93.7 and 93.1 on the APTOS and EyePacs datasets, respectively. In terms of computational complexity, the proposed DR model required fewer parameters, fewer floating-point operations (FLOPs), a lower learning rate, and less training time to learn the key patterns of the fundus images. The lightweight nature of the proposed model can allow healthcare centers to serve patients in remote locations. The proposed model can be implemented as a mobile application to support clinicians in treating DR patients. In the future, the author will focus on improving the proposed model’s efficiency to detect DR from low-quality fundus images.

1. Introduction

DR is a retinal complication of diabetes [1]. It impairs or completely degrades an individual’s vision. Uncontrolled diabetes over an extended period increases the risk of visual impairment due to diabetic maculopathy [2,3,4]. Retinal capillaries are susceptible to damage from high blood sugar. Over time, this deterioration leaves blood vessels more vulnerable to further damage or even rupture [5]. The risk of DR depends on diabetes duration, blood sugar management, genetic susceptibility, hypertension, and lipid abnormalities [6,7,8]. DR is more likely to develop in type 1 and type 2 diabetics with poor blood sugar management [9]. DR is a leading cause of irreversible blindness in individuals across the world [10]. In addition, DR contributes to serious disorders like proliferative DR, the most prevalent microvascular complication [11]. Early diagnosis is one of the crucial factors for reducing the severity of DR.
The field of ophthalmology relies heavily on analyzing blood vessel structures in retinal fundus images. Permanent vision loss can occur due to age-related macular degeneration and diabetic macular edema [11]. Optical coherence tomography (OCT) is a crucial tool for ophthalmologists in diagnosing DR [12]. Ophthalmologists must devote a considerable amount of time to detecting abnormalities, which is crucial to prevent or mitigate DR-related visual impairment. Minimally invasive methods and robotic-assisted surgery have become increasingly prevalent in ophthalmology, especially for treatments such as cataract surgery and glaucoma therapy [11]. These advancements have demonstrated the ability to reduce patient discomfort and expedite healing. To increase the efficacy of ocular drugs, new drug delivery devices such as sustained-release implants and punctal plugs have been developed [12]. Teleophthalmology facilitates both the remote evaluation of retinal images and consultations with patients [12], and it has demonstrably enhanced access to DR screening. Emerging imaging techniques like hyperspectral and multispectral imaging have shown potential for early disease identification and tissue characterization [12]. Medical diagnosis and therapy have greatly benefited from developments in 3D and 4D medical imaging, which have enhanced the visualization of anatomical structures [12]. Portable and handheld imaging technologies have also surged in popularity, as they allow healthcare practitioners to perform imaging at the point of care and deliver effective treatments.
In the Kingdom of Saudi Arabia (KSA), an estimated 13.4% of individuals are affected by diabetes mellitus, making it an extremely serious medical condition [11,12]. An automated and affordable screening system is required to serve DR patients across Saudi Arabia [12]. Medical and surgical procedures for these individuals are costly, and their unfavorable prognoses impose a financial strain on them and the healthcare system. The Saudi National Diabetes Center was recently founded to address the prevalence and severity of diabetes [12]. The center has spearheaded a strategic plan to significantly enhance diabetes treatment in the Saudi population over the upcoming years. There is a demand for an automated detection model to identify DR in its earlier stages.
In contrast to more traditional procedures, such as the dilatation of the eye pupil, automated retinal image processing has greatly facilitated the diagnosis of retinal diseases [13]. In recent years, artificial intelligence (AI) and machine learning (ML) algorithms have made significant advancements in the automated identification and assessment of DR using retinal images. The primary objective of these systems is to optimize the early detection of medical conditions and improve the overall care and treatment of patients [14]. AI systems can examine retinal images and scans to identify the earliest stages of DR, and these algorithms can detect and categorize DR severities [14]. The computerized screening procedure aids in making a timely diagnosis, which is essential for effective therapy. AI applications can help prioritize patients according to the severity of their conditions [15]. They can also be used to evaluate large datasets of retinal images and patient information to improve DR detection strategies.
Fundus images, along with OCT scans and ultrasonography, are widely applied to DR detection. These images cover the blood vessels, the macula, and the interior part of the retina. The fundus camera provides high-quality retinal images, which are used in deep-learning (DL) models for detecting abnormalities [16].
A convolutional neural network (CNN) is a class of artificial neural network architecture primarily used in processing videos and images [16]. In recent years, CNNs have played a pivotal role in advancing computer vision by helping to resolve various visual recognition challenges [17]. CNNs are built from numerous distinct convolutional layers, each of which must learn and identify certain image characteristics or patterns. Convolutional operations compute feature maps by shifting small filters across the input image [18]. Pre-trained CNN models can be fine-tuned for use in multiple applications. These models have been exposed to extensive data and have gained an enormous amount of knowledge across various feature domains. Transfer learning (TL) approaches yield exceptional outcomes on smaller datasets [19]. To enhance feature extraction and prediction for DR detection, and to overcome the limitation of unbalanced and noisy fundus image data, existing studies have employed many data-augmentation approaches, sampling techniques, cost-sensitive algorithms, and hybrid and ensemble architectures [20].
Large datasets, including MESSIDOR, EyePacs, and APTOS, provide fundus images [21]. Ophthalmologists were involved in gathering the ground-truth images. Researchers employ these datasets to generalize their DR-detection models [21]. CNN-based DR-detection models are widely used to detect and grade DR severity. However, these models demand a large amount of computational resources to produce an outcome, and there is a lack of lightweight CNN models for detecting DR severity. This motivated the author to develop a lightweight model for grading the DR-severity level using fundus images. In addition, an effective mobile-based DR image classifier is required to provide services to individuals in the remote locations of the KSA.
The contributions of this study are as follows:
i. A feature-extraction technique to improve the accuracy of the DR-detection model.
ii. A DR-severity grading model that demands fewer parameters, FLOPs, and convolutional layers.
iii. An evaluation of the proposed model using benchmark datasets and evaluation metrics.
The remainder of this study is structured as follows: Section 2 presents the existing DR-severity literature. Section 3 outlines the proposed methodology for classifying the DR-severity levels. The findings are presented in Section 4. Section 5 discusses this study’s contribution to the DR-detection literature. Finally, Section 6 concludes this study.

2. Literature Review

Medical professionals can benefit greatly from deep-learning-based systems that automate the interpretation of retinal images and provide objective and consistent assessments of the severity of DR [21]. DL-based screening methods can test a large diabetic population for DR. Deep-learning algorithms can also monitor the course of diseases over time by evaluating successive retinal images [21]. As a result, physicians may fine-tune their approaches to treating patients. Implementing these technologies plays an essential role in augmenting the efficacy and proficiency of DR screening and therapy, ultimately yielding advantages for both patients and healthcare practitioners. Nagpal et al. (2022) [22] discussed the recent developments in DR-detection models. The noise and low contrast levels of the images may reduce DR-image-classification performance. The morphological changes in the retinal images are the key factors in detecting DR [23]. In addition, DR-detection models identify lesions to compute severity levels.
Orlando et al. (2017) [23] proposed a DL-based lesion-detection model using ensemble values. Al-hazaimeh et al. (2022) [24] developed a multi-class classification model for detecting DR severity. They followed blood-vessel-based segmentation and optic-disc-based detection techniques for pre-processing the images. In addition, they applied feature extraction and selection techniques to improve the classifier accuracy. Suganyadevi et al. (2022) [25] proposed a DR-detection model for detecting the severity of the fundus images. They employed CNN models for processing the images, and their multi-class classifier achieved an optimal outcome. Similarly, Nahiduzzaman et al. (2023) [26] developed a DR-identification model using a parallel convolutional neural network. They used an extreme learning machine to extract the key patterns and adjusted the CNN model’s parameters using hyperparameter optimization, classifying the fundus images with a smaller number of parameters.
Abbood et al. (2022) [27] developed a hybrid retinal image enhancement algorithm using a DL technique. They applied a retinal-cropping technique to extract the features, and Gaussian blur and circle cropping were used to enhance the image quality. They employed a ResNet 50 model to classify the fundus images. Canayaz (2022) [28] proposed a classification technique to detect DR severity. The Binary Bat algorithm, Equilibrium optimizer, Gray Wolf optimizer, and Gravity search algorithm were used for feature extraction, and a Support Vector Machine and Random Forest were used for classifying DR-severity levels. Modi and Kumar (2022) [29] developed a DR-severity detection model using a Bat-based feature selection algorithm. They employed a deep forest technique for image classification. A K-means-based segmentation algorithm was used to identify the lesion region, and feature extraction was performed using a multi-grained scanning method. Dayana and Emmanuel (2022) [30] proposed a grading system for identifying the severity levels of the fundus images. Coherence-enhancing, energy-based regularized level set evolution was used for blood-vessel segmentation, and an attention-based fusion network was employed to detect the candidate lesion region. They applied a deep CNN model to classify the fundus images.
Furthermore, Savelli et al. (2020) [31] employed a multi-context ensemble-based CNN for detecting lesions in fundus images. Chetoui et al. (2020) [32] employed EfficientNet to identify the abnormalities. Karki et al. (2021) [33] proposed an integrated EfficientNet model for DR classification. Kajan et al. (2020) [34] proposed a CNN model for identifying DR, following a TL technique for classifying the images. Patil et al. (2020) [35] employed a TL technique for DR-severity grading. Tariq et al. (2022) [36] employed ResNet50 and DenseNet121 models for DR-severity-level classification, utilizing the APTOS and EyePacs datasets to evaluate the models. Kobat et al. (2022) [37] applied a pre-trained DenseNet model to grade the DR-severity levels. Luo et al. (2023) [38] built a DR-detection model using a deep CNN, with local mining and long-range dependence techniques for the image classification. Lastly, Ishtiaq et al. (2023) [39] proposed a hybrid technique for classifying the fundus images.
Training deep-learning models requires access to sizable, high-quality datasets. It can be challenging to obtain a diverse and representative dataset of retinal images, especially when dealing with rare DR conditions. The presence of imbalanced data may influence the model to produce more false positives [39], and it presents challenges in identifying severe instances of DR. Interoperability and user-friendly interfaces for healthcare professionals are essential to integrate DL models into clinical settings and electronic health record (EHR) systems [39]. Processing retinal images in real time for prompt diagnosis and prioritization in telemedicine or point-of-care environments can impose a significant demand on resources and require specialist technology. Existing CNN models, including the VGG, ResNet, and DenseNet models, demand substantial computational resources for classifying DR severities [39]. There is a demand for lightweight applications to overcome the shortcomings of the existing models and to detect DR severities with limited computational resources.

3. Materials and Methods

The author presents a DL-based DR-severity grading model. MobileNet V3–Small is a lightweight CNN model that classifies complex images with fewer computational resources. However, the complexities of the fundus images may reduce the performance of MobileNet V3. Integrating feature extraction and selection techniques enables MobileNet V3 to produce optimal results and reduces the possibility of data overfitting. In addition, it minimizes the number of parameters the model needs to learn the DR severity in the fundus images. Traditional feature-extraction and selection techniques demand a higher computational time for exploring the search space to reduce the dimensionality of the feature set. Yolo V7 [40] is the most recent version of the Yolo family of techniques. It applies a deep CNN to extract the crucial features that represent DR severity, processing the image at multiple layers and extracting hierarchical features. In addition, it can extract the key features in a short period. QMPA [41] is an optimization technique that reduces the computation time needed to identify feature sets. Therefore, the author applies the Yolo V7 and QMPA techniques in the proposed study for classifying DR severities using the fundus images.
Figure 1 highlights the proposed model for classifying the fundus images. Initially, the images were extracted from APTOS [42] and EyePacs [43], which are benchmark datasets for DR. The author applies contrast-limited adaptive histogram equalization (CLAHE) and Wiener filter functions to enhance the image quality. The Yolo V7 technique [40] is applied to extract the key features. QMPA [41] is modified to improve the performance of selecting the crucial features. In addition, an Adam Optimizer (AO) is used to optimize the hyperparameters of the MobileNet V3 model.

3.1. Data Acquisition

In this study, the author utilizes the APTOS and EyePacs datasets. The APTOS dataset is available in the repository [42]. It was generated by Aravind Eye Hospital, India. Clinicians captured the fundus images across India using multiple cameras; thus, the images contain noise and artifacts. The EyePacs dataset is publicly available in the Kaggle repository [43]. It contains a larger number of fundus images, collected from primary care centers across the USA. The dataset provider resized the images to 1024 × 1024 pixels and cropped the black spaces. Based on the severity, clinicians rated each image as 0 (no DR), 1 (mild), 2 (moderate), 3 (severe), or 4 (proliferative DR). Table 1 presents the properties of the datasets, and the definitions of the notations used in this section are presented in Table 2. Figure 2 shows sample images from the datasets.

3.2. Image Pre-Processing

To overcome the noise and artifacts, the author employs the CLAHE and Wiener filter techniques. Firstly, CLAHE is used to improve the contrast and visibility of the fundus images. It divides the images into blocks and computes histograms for each block. A Wiener filter is then applied to remove the noise from each pixel. It operates in the frequency domain to reduce the mean square error between the original and reconstructed image; its transfer function is applied through element-wise multiplication to remove the noise. Let $I$ be the fundus image and $WF$ be the Wiener filter function. Equation (1) expresses the process of removing noise from the images.

$$ I = WF(I_i), \quad i = 1, \ldots, N \tag{1} $$

Equation (2) presents the computation of the error between the original and reconstructed image.

$$ e = k(X, Y) - \hat{k}(X, Y) \tag{2} $$
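For illustration, the following is a minimal sketch of this pre-processing stage, assuming OpenCV for CLAHE and SciPy for the Wiener filter. The clip limit, tile grid, and filter window are illustrative choices, as the paper does not report them, and applying CLAHE to the lightness channel of the LAB color space is a common convention for color fundus images rather than a detail taken from the paper.

```python
# Sketch of the pre-processing stage: CLAHE for contrast, Wiener filter for
# noise. All parameter values are illustrative, not taken from the paper.
import cv2
import numpy as np
from scipy.signal import wiener

def preprocess_fundus(path: str) -> np.ndarray:
    bgr = cv2.imread(path)

    # CLAHE works on a single channel; enhancing the lightness channel of
    # the LAB space preserves the colour balance of the fundus image.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.cvtColor(cv2.merge([clahe.apply(l), a, b]),
                            cv2.COLOR_LAB2BGR)

    # Wiener-filter each channel to reduce the mean square error between
    # the original and reconstructed image (Equations (1) and (2)).
    channels = [wiener(enhanced[:, :, c].astype(float), mysize=(5, 5))
                for c in range(3)]
    return np.clip(np.stack(channels, axis=-1), 0, 255).astype(np.uint8)
```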

3.3. Data Augmentation

The author applied the rotation-range function to generate a set of images within a pre-defined range of degrees. Horizontal and vertical flips are used to produce randomly mirrored images. The author applies the shear-range method to distort the images to rectify the perception angles. In addition, width-shift and height-shift ranges are employed to shift the images horizontally and vertically, respectively. The proposed data-augmentation process is used to overcome the data imbalance of the dataset. The images are resized to 608 × 608 pixels, and each image is transformed into multiple angles. This process assists the training phase by providing an additional set of features to the CNN model.
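The transformations listed above map directly onto the options of Keras’s ImageDataGenerator, so a sketch of one possible implementation follows; the ranges are placeholders, since the paper does not report the exact values, and the directory layout is assumed.

```python
# Sketch of the described augmentation pipeline using Keras; all ranges are
# illustrative placeholders rather than the paper's actual settings.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=20,       # rotate within a pre-defined range of degrees
    horizontal_flip=True,    # randomly mirrored images
    vertical_flip=True,
    shear_range=0.15,        # distortion to rectify perception angles
    width_shift_range=0.1,   # horizontal shifts
    height_shift_range=0.1,  # vertical shifts
)

# Stream 608x608 images, yielding multiple transformed variants of each
# training sample to offset the class imbalance of the dataset.
train_flow = augmenter.flow_from_directory(
    "train/", target_size=(608, 608), batch_size=32, class_mode="sparse")
```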

3.4. Feature Extraction

The author employs Yolo V7 to extract the image features and generate the feature sets. It processes the textures of the fundus images in the lower layer and derives the semantic features in the higher layer. Figure 3 highlights the generation of the feature sets using the Yolo V7 technique.
In the feature map grid, Yolo V7 employs the detection head to compute the bounding boxes, likelihood of the object’s existence, and confidence score. Equation (3) shows the mathematical form of the feature set generation.
$$ F_s = Yolo\_V7(I_i), \quad i = 1, \ldots, N \tag{3} $$
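A minimal PyTorch sketch of this feature-extraction idea follows. The `backbone` argument stands in for the trained Yolo V7 network from the official repository [40], and the global-average-pooling step is an assumption, since the paper does not specify which layer’s feature maps form the feature set.

```python
# Sketch of Equation (3): pool a detector backbone's feature maps into a
# fixed-length feature vector per image. `backbone` is assumed to be the
# convolutional trunk of a trained Yolo V7 model returning a feature map
# of shape (batch, channels, h, w).
import torch
import torch.nn.functional as F

@torch.no_grad()
def extract_feature_set(backbone: torch.nn.Module,
                        images: torch.Tensor) -> torch.Tensor:
    fmap = backbone(images)                  # hierarchical conv features
    pooled = F.adaptive_avg_pool2d(fmap, 1)  # (batch, channels, 1, 1)
    return pooled.flatten(start_dim=1)       # feature set F_s: (batch, channels)
```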

3.5. Modified Quantum Marine Predator Algorithm-Based Feature Selection

To select the key features from the feature sets, the author employs the QMPA algorithm. QMPA is a metaheuristic algorithm for selecting interesting features for DR-severity detection. It generates a feature set to support the downstream MobileNet V3 model, with the feature set representing the presence of useful features. Equation (4) shows the initialization of a feature set of size $N$.
$$ F_i^{t+1} = F_{min} + r\,(F_{max} - F_{min}) \tag{4} $$
QMPA derives Elite and Prey matrices from the traditional MPA, representing the predator’s search strategy and the prey’s position data, respectively. However, QMPA faces challenges in reaching the global optimum when selecting optimal features. Optimization algorithms frequently employ Cauchy and Gauss mutation [44] to improve efficiency. By introducing randomness into the population of potential solutions, these mutation operators broaden the search space and prevent the algorithm from being trapped in a local optimum. Thus, the author introduces the Cauchy–Gaussian mutation method to improve the searching strategy of the QMPA. Equations (5)–(7) outline the Cauchy–Gaussian mutation used to achieve the global optimum.
$$ E^{t+1} = E \left( 1 + \lambda_1\, Cauchy(0, \sigma^2) + \lambda_2\, Gauss(0, \sigma^2) \right) \tag{5} $$

$$ \sigma = \exp\!\left( \frac{F_E - F_{E_\alpha}}{\left| F_{E_\alpha} \right|} \right) \tag{6} $$

$$ F_s^{t+1} = \begin{cases} E^{t+1}, & f(E^{t+1}) < f(E) \\ E, & \text{otherwise} \end{cases} \tag{7} $$
Furthermore, to find the best set of features, QMPA applies Equation (8) for computing the feature sets.
$$ F_s = \theta\, KBest_i + (1 - \theta)\, MBest_i \tag{8} $$
QMPA computes the iteration using Equation (9).
$$ iteration = RQ(Elite_i) \oplus RQ(Prey_i), \quad i = 1, \ldots, N/2 \tag{9} $$
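To make the mutation step concrete, the following NumPy sketch implements Equations (5)–(7) for a single Elite vector, assuming a minimization objective; the fitness function, λ values, and population handling are illustrative, and the rest of the QMPA machinery (quantum encoding, Prey updates) is omitted.

```python
# Sketch of the Cauchy-Gaussian mutation (Equations (5)-(7)) used to nudge
# the QMPA Elite away from local optima; a minimization objective is assumed.
import numpy as np

rng = np.random.default_rng(0)

def cauchy_gaussian_mutate(elite, fitness, f_elite, f_alpha,
                           lam1=0.5, lam2=0.5):
    # Equation (6): scale factor derived from the fitness gap.
    sigma = np.exp((f_elite - f_alpha) / abs(f_alpha))

    # Equation (5): heavy-tailed Cauchy jumps plus local Gaussian refinement.
    cauchy = sigma**2 * rng.standard_cauchy(elite.shape)  # Cauchy(0, sigma^2)
    gauss = rng.normal(0.0, sigma, size=elite.shape)      # Gauss(0, sigma^2)
    candidate = elite * (1.0 + lam1 * cauchy + lam2 * gauss)

    # Equation (7): greedy selection keeps the mutant only if it improves.
    return candidate if fitness(candidate) < fitness(elite) else elite
```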

3.6. MobileNet V3–Small Model-Based DR-Severity Prediction

The author employs the MobileNet V3–Small model for classifying the fundus images. The MobileNet V3–Small neural-network architecture is the latest version of a series of networks developed for mobile and embedded devices. It has a versatile design that can be modified for individual use cases, trading off speed against accuracy. The MobileNet V3–Small architecture has been specifically designed and tuned to provide better inference performance and accommodate devices with limited computational resources. It incorporates the Hard swish activation function, a non-linear activation that combines the favorable characteristics of the ReLu and sigmoid activation functions. The Hard swish function is specifically engineered to be computationally efficient and to exhibit a non-zero derivative at zero, which supports gradient-based optimization throughout the training process. Squeeze-and-excitation (SE) blocks are integrated into MobileNet V3 to augment channel-wise feature recalibration.
Utilizing SE blocks facilitates the adaptive scaling and recalibration of feature channels, enabling the network to prioritize the key features. The architectural design permits the incorporation of various configurations of layers and blocks under specific criteria. Equation (10) highlights the multi-class classification using the MobileNet V3–Small model.
$$ IC = \left( MobileNet\ V3\text{-}Small + ReLu + FC + FC + Softmax \right)(F_s) \tag{10} $$
Figure 4 shows the MobileNet V3–Small model for the DR-severity detection model.
Furthermore, AO is used to fine-tune the hyperparameters of the MobileNet V3 model, and dropout layers are integrated with the classifier to achieve an optimal outcome.
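A sketch of the resulting classifier follows, assuming PyTorch/torchvision and the head described in Section 4 (three fully connected layers, two dropouts, and a softmax over the five severity grades); the hidden sizes and dropout rates are illustrative, as the paper does not report them.

```python
# Sketch of the MobileNet V3-Small classifier head; hidden sizes and dropout
# rates are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_small, MobileNet_V3_Small_Weights

backbone = mobilenet_v3_small(weights=MobileNet_V3_Small_Weights.IMAGENET1K_V1)
backbone.classifier = nn.Identity()          # expose the 576-d pooled features

model = nn.Sequential(
    backbone,
    nn.Linear(576, 256), nn.ReLU(),          # FC 1
    nn.Dropout(0.3),                         # dropout 1
    nn.Linear(256, 128), nn.ReLU(),          # FC 2
    nn.Dropout(0.3),                         # dropout 2
    nn.Linear(128, 5),                       # FC 3: grades 0 (no DR) ... 4
)

# AO (Adam) fine-tunes the network; the learning rate here matches Table 7's
# APTOS setting. CrossEntropyLoss applies the softmax internally.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```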

3.7. Evaluation Metrics

The author employs the commonly used metrics, including accuracy, precision, recall, and F1-Score. Accuracy presents the model’s efficiency in correctly classifying the DR-severity levels; however, it may not be suitable for imbalanced datasets. Therefore, precision and recall are used to evaluate the model’s performance in terms of true positives, false positives, and false negatives. In addition, the F1-Score summarizes a model’s performance based on false positives and false negatives. Equations (11)–(14) outline the computation of accuracy, precision, recall, and F1-Score.
$$ \text{Accuracy} = \frac{\text{Number of correctly identified fundus images}}{\text{Total number of images}} \tag{11} $$

$$ \text{Precision} = \frac{\text{Number of correctly identified fundus images}}{\text{Number of DR severity classes} + \text{Number of wrongly predicted fundus images}} \tag{12} $$

$$ \text{Recall} = \frac{\text{Number of correctly identified fundus images}}{\text{Number of DR severity classes} + \text{Number of wrongly predicted normal fundus images}} \tag{13} $$

$$ \text{F1-Score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{14} $$
Cohen’s Kappa ($K$) is used to find the relationship between the predicted and actual classifications. It measures the inter-rater reliability using true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). Equation (15) shows the mathematical expression for calculating $K$.
$$ K = \frac{2 \times (TP \times TN - FN \times FP)}{(TP + FP)(FP + TN) + (TP + FN)(FN + TN)} \tag{15} $$
It is widely applied for measuring the efficiency of multi-class classification. Mean absolute deviation (MAD) and root mean square error (RMSE) are used to measure the model’s performance against the actual observed values. Uncertainty levels are computed for the classifiers using the confidence interval (CI) and standard deviation (SD). Equations (16) and (17) highlight the mathematical forms of MAD and RMSE.
$$ \text{MAD} = \frac{\sum \left| D - \mu \right|}{N} \tag{16} $$

$$ \text{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( M - \hat{M} \right)^2} \tag{17} $$
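As an illustration, these metrics might be computed as in the sketch below, assuming scikit-learn for the standard scores; the macro averaging and the computation of MAD and RMSE over the integer severity grades are assumptions, since the paper does not spell them out.

```python
# Sketch of the evaluation step (Equations (11)-(17)); macro averaging over
# the five severity grades is an assumed choice.
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score, f1_score,
                             precision_score, recall_score)

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall": recall_score(y_true, y_pred, average="macro"),
        "f1": f1_score(y_true, y_pred, average="macro"),
        "kappa": cohen_kappa_score(y_true, y_pred),          # Equation (15)
        "mad": np.mean(np.abs(y_pred - y_pred.mean())),      # Equation (16)
        "rmse": np.sqrt(np.mean((y_true - y_pred) ** 2.0)),  # Equation (17)
    }
```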
Furthermore, the author uses the model-development settings to evaluate the computational complexity of the DR-severity models. The total number of parameters, the learning rate, and the number of floating-point operations (FLOPs) are used to quantify the model’s computational requirements for learning the key patterns of the fundus images. The testing time is used to gauge the model’s efficiency on real-time images. The number of epochs (iterations) and the number of convolutional layers are used to evaluate the model’s capability to detect DR severity. In addition, the ratio of input/output data is used to evaluate the model’s efficiency in handling the feature sets to predict DR severity.

4. Results

In this study, the author implemented the proposed DR-detection model using Python 3.8.3 on Windows 10 Professional, with an NVIDIA GeForce GTX GPU and an Intel i7 processor running at 3.2 GHz. The author generalized the proposed model using the APTOS and EyePacs datasets. The datasets were divided into training (70%) and testing (30%) sets. The PyTorch and TensorFlow libraries were employed to construct the MobileNet V3 model, which was optimized using the Adam Optimizer (AO). Batch sizes of 54 and 86 and epoch counts of 214 and 426 were used for the APTOS and EyePacs datasets, respectively. A softmax function, two dropout layers, and three fully connected layers were added to the MobileNet V3 model. Table 3 highlights the performance of the proposed model on the APTOS dataset. The proposed model achieved a better outcome due to the feature extraction and selection techniques. In addition, the higher Kappa value highlights the significance of the proposed model in classifying multi-label images.
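For context, the sketch below illustrates how the 70/30 partition described above might be drawn; the stratified split and the placeholder file list are assumptions, as the paper does not state how the split was performed.

```python
# Sketch of the 70/30 train/test partition; stratification by severity grade
# is an assumed choice, and the file list below is a placeholder.
from sklearn.model_selection import train_test_split

image_files = [f"fundus_{i:05d}.png" for i in range(5590)]  # placeholder paths
labels = [i % 5 for i in range(5590)]                       # placeholder grades

train_files, test_files, train_labels, test_labels = train_test_split(
    image_files, labels,
    test_size=0.30,      # 70% training / 30% testing
    stratify=labels,     # keep each severity grade in both partitions
    random_state=42,
)
```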
Likewise, Table 4 outlines the proposed model’s performance on the EyePacs dataset. Compared to the APTOS dataset, EyePacs covers a larger number of samples. These samples were used to train the proposed model to learn the crucial patterns of DR severity. The outcome highlights that the proposed model obtained superior results. The higher F1-Score value reflects the model’s ability to balance true positives, true negatives, false positives, and false negatives. Figure 5 highlights the proposed model’s performance on the APTOS and EyePacs datasets.
Table 5 highlights the findings of the comparative analysis. The proposed model achieved an exceptional outcome on the APTOS dataset, with an average accuracy of 98.0%. The APTOS dataset is highly imbalanced, and the proposed image pre-processing approach addressed the data imbalance by integrating high-quality images. In addition, Yolo V7 identified the tiny spots related to DR severity. Figure 6 presents the findings of the comparative analysis for the APTOS dataset.
Similarly, Table 6 reveals the performance of the DR-severity detection models on the EyePacs dataset. The EyePacs dataset provides images at a high resolution, which allowed the proposed model to resize the images without compromising image quality. Yolo V7 identified the patterns effectively. The Cauchy–Gaussian mutation improved the computational efficiency of the suggested DR-severity detection model. The proposed model outperformed the existing models. Figure 7 shows the comparative analysis outcome for the EyePacs dataset.
Table 7 presents the computational strategies of the DR-severity detection models. The proposed model employed the MobileNet V3 model, which demands fewer parameters and FLOPs for image classification. Moreover, the MobileNet V3 model was pre-trained on the ImageNet dataset. Thus, the proposed model obtained an optimal outcome on APTOS and EyePacs with fewer parameters and FLOPs, reducing the computational complexity of classifying DR severity using fundus images. The proposed DR model is therefore lightweight, requiring fewer parameters, a lower learning rate, and fewer FLOPs to generate an exceptional outcome.
Table 8 outlines the findings of the loss-function analysis. It indicates that the proposed model obtained lower errors on the APTOS and EyePacs datasets. The feature extraction and selection approaches supported the proposed model in achieving an optimal outcome. The suggested model addressed the challenges of classifying fundus images by integrating the Yolo V7 and QMPA models. Moreover, the inclusion of the Cauchy–Gaussian search strategy played a significant role in the proposed model’s performance.
Finally, Table 9 highlights the uncertainty and variability of the proposed model’s efficiency in detecting DR severities. The proposed model achieved better CI and SD values for the APTOS and EyePacs datasets, with narrow intervals and low deviations indicating effective prediction on unseen data. Moreover, the proposed model combined the Yolo V7, QMPA, and MobileNet V3 models for image classification. The findings favor the integrated approach of the proposed model in detecting DR severity.

5. Discussion

In this study, the author proposed a DR-severity detection model for grading the severity of DR using fundus images. The proposed image pre-processing pipeline produced high-quality images. Initially, the contrast was improved using the CLAHE technique, and the author applied a Wiener filter to remove the noise. The data-augmentation process supported this study in overcoming the data imbalances in the APTOS dataset and produced additional training samples for the proposed model. The fundus images underwent pre-processing techniques aimed at improving image quality and reducing noise interference. The model was trained with a special focus on the diagnosis of DR, using benchmark datasets. During the training process, the model acquired the ability to identify and extract pertinent characteristics from the fundus images. Based on this training, the proposed model detected the DR-severity levels from real-time images.
For feature extraction, Yolo V7 provided the relevant features to the proposed classifier. It identified the crucial patterns of DR severity and generated the feature sets; the identified objects were collected as features. The author tailored the Yolo V7 model and retrieved the feature sets. The architecture of Yolo V7 allowed the feature-extraction process to produce an outcome in a limited time. On the other hand, QMPA was used for the feature selection. The author introduced the Cauchy–Gaussian mutation searching strategy into the QMPA search space to improve the feature-selection process. Finally, the MobileNet V3–Small model classified the DR-severity levels using the feature sets. The author optimized the CNN model using an Adam optimizer. The model weights were iteratively modified to minimize the loss function that measures the discrepancy between the anticipated and actual severity levels. Following the classification process, the proposed model reports the degree of severity found in the fundus images.
Karki et al. [33] employed the EfficientNet model for DR-severity detection. They achieved a Kappa score of 92.4% in the EyePacs dataset. The EfficientNet model is a recently developed pre-trained image classifier. However, it requires an extended training time and a larger dataset for image classification, and its complexity may limit image classification. In contrast, the proposed model achieved a Kappa value of 91.1% with a lower computational cost.
Tariq et al. [36] proposed a DL technique for classifying the fundus images. They employed the ResNet 50 and DenseNet 121 models for DR-severity classification and obtained an accuracy of 63.0%. Such CNN models face challenges in classifying images and demand additional training time. On the other hand, the proposed model is a lightweight application that requires only a small set of samples to learn a new environment.
Ishtiaq et al. [39] applied local binary patterns for extracting the features. They employed the Binary Dragonfly and Sine Cosine algorithms to optimize the feature-extraction process and achieved an accuracy of 98.8% in the EyePacs dataset. Similarly, the proposed model obtained an accuracy of 98.0%. However, the proposed model achieved a better Kappa value than the Ishtiaq et al. model.
Kobat et al. [37] proposed a DR-detection model using the pre-trained DenseNet model. They employed horizontal and vertical patch division for extracting the features and obtained average accuracies of 84.9% and 86.7% in the APTOS and EyePacs datasets, respectively. However, the proposed model outperformed the Kobat et al. model, achieving an exceptional outcome with fewer computational resources.
Luo et al. [38] developed a DR-detection model using a deep CNN model. They employed long-range dependency among the lesion features for DR-severity detection and followed patch-wise relationships to improve the local patch features. They obtained an average accuracy of 83.6% in the EyePacs dataset. In contrast, the proposed model detected the severity levels with higher accuracy.
The findings outlined that the proposed DR-severity detection model has the potential to play a role in the diagnosis and evaluation of the various severity levels associated with DR, a retinal disease that poses a risk to vision. It demonstrated a high level of suitability due to its exceptional proficiency in processing and evaluating fundus images. The integration of CNN-based models into telemedicine and screening programs enables the streamlined and automated screening of extensive populations, hence enhancing efficiency. Immediate evaluation and therapy might be emphasized for patients who present with more severe DR.
Healthcare and disease management greatly benefit from the proposed automated DR-severity detection system. These benefits aid in better patient outcomes, more effective healthcare delivery, and lower overall healthcare costs. The proposed model can identify DR in its earliest stages, typically before patients have any obvious symptoms, and can accurately and consistently analyze retinal images. To efficiently screen many patients, the suggested model can process a large number of retinal images in a short amount of time. The application of the proposed model can substantially minimize the likelihood of human error in interpreting retinal images. This improves the dependability of diagnoses and mitigates the likelihood of erroneous diagnostic assessments. The proposed model can speed up the delivery of outcomes and reduce the diagnostic duration. Telemedicine and other forms of remote healthcare delivery can utilize the suggested framework to provide DR screening to patients in underprivileged and remote areas. The proposed model can offer a reliable and uniform means of assessment, consequently reducing the potential for diagnostic discrepancies across diverse healthcare practitioners.
The proposed model produced a remarkable performance in DR identification and management. However, the author encountered limitations in classifying the fundus images using the proposed model. The expertise of ophthalmologists and other specialists is still essential for interpreting AI-generated data, determining the best course of therapy, and caring for patients. The accuracy and dependability of the proposed model in clinical practice depend on thorough validation and ongoing improvement. Microaneurysms are a significant reference in the context of DR screening; their dimensions can be extremely small, rendering their detection challenging and susceptible to misidentification with other types of lesions. Additionally, the poor contrast between lesion pixels and background pixels, the irregular form of lesions, and the significant variations between the same lesion spots may limit the diagnosis of ophthalmic disorders. Thus, an effective image pre-processing technique is required for detecting DR severity in real-time environments.

6. Conclusions

In this study, the author developed a multi-class DR-severity grading model using a DL technique. The proposed model integrated the image pre-processing, Yolo V7, QMPA, and MobileNet V3–Small models. The fundus image datasets are highly imbalanced and contain noise and artifacts. The suggested image pre-processing technique improved the image quality, and the dataset biases were addressed using the data-augmentation process. The feature-extraction process applied the Yolo V7 technique to extract the key features. The author applied QMPA with the Cauchy–Gaussian mutation strategy to select the critical features related to DR severity. The MobileNet V3 model was employed to classify the images based on severity levels. The benchmark datasets, including APTOS and EyePacs, were used to generalize the proposed model. The findings highlight the significance of the proposed model in diagnosing DR severity. The proposed model offers an opportunity to develop a mobile-based application to support the treatment of DR patients. However, it encountered limitations in classifying the fundus images: the small dimensions of DR lesions in the fundus images may reduce the proposed model’s prediction accuracy, and effective image pre-processing is required to improve the quality of real-time images. In the future, the author will extend the research to resolve the shortcomings of the proposed model.

Funding

This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia (Grant No. 4216).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

APTOS Dataset. Available online: https://www.kaggle.com/c/aptos2019-blindness-detection (accessed on 23 May 2023). Foundation Consumer Healthcare. EyePACS: Diabetic Retinopathy Detection. Available online: https://www.kaggle.com/c/diabetic-retinopathy-detection/data (accessed on 25 May 2023).

Conflicts of Interest

The author declares no conflict of interest.

References

1. Soni, A.; Rai, A. A novel approach for the early recognition of diabetic retinopathy using machine learning. In Proceedings of the 2021 International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India, 27–29 January 2021; IEEE: Bengaluru, India, 2021; pp. 1–5.
2. Reddy, G.T.; Bhattacharya, S.; Ramakrishnan, S.S.; Chowdhary, C.L.; Hakak, S.; Kaluri, R.; Reddy, M.P.K. An ensemble based machine learning model for diabetic retinopathy classification. In Proceedings of the 2020 International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE), Vellore, India, 24–25 February 2020; IEEE: Bengaluru, India, 2020; pp. 1–6.
3. Varghese, N.R.; Gopan, N.R. Performance analysis of automated detection of diabetic retinopathy using machine learning and deep learning techniques. In Innovative Data Communication Technologies and Application: ICIDCA 2019; Springer International Publishing: Cham, Switzerland, 2020; pp. 156–164.
4. Gayathri, S.; Gopi, V.P.; Palanisamy, P. Diabetic retinopathy classification based on multipath CNN and machine learning classifiers. Phys. Eng. Sci. Med. 2021, 44, 639–653.
5. Shankar, K.; Sait, A.R.W.; Gupta, D.; Lakshmanaprabu, S.K.; Khanna, A.; Pandey, H.M. Automated detection and classification of fundus diabetic retinopathy images using synergic deep learning model. Pattern Recognit. Lett. 2020, 133, 210–216.
6. Masud, M.; Alhamid, M.F.; Zhang, Y. A convolutional neural network model using weighted loss function to detect diabetic retinopathy. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2022, 18, 1–16.
7. Lu, Z.; Miao, J.; Dong, J.; Zhu, S.; Wang, X.; Feng, J. Automatic classification of retinal diseases with transfer learning-based lightweight convolutional neural network. Biomed. Signal Process. Control 2023, 81, 104365.
8. Singh, L.K.; Khanna, M.; Thawkar, S.; Singh, R. Nature-inspired computing and machine learning based classification approach for glaucoma in retinal fundus images. Multimed. Tools Appl. 2023, 1–49.
9. Gangwar, A.K.; Ravi, V. Diabetic retinopathy detection using transfer learning and deep learning. In Evolution in Computational Intelligence: Frontiers in Intelligent Computing: Theory and Applications (FICTA 2020); Springer: Singapore, 2021; Volume 1, pp. 679–689.
10. Gonçalves, M.B.; Nakayama, L.F.; Ferraz, D.; Faber, H.; Korot, E.; Malerbi, F.K.; Regatieri, C.V.; Maia, M.; Celi, L.A.; Keane, P.A.; et al. Image quality assessment of retinal fundus photographs for diabetic retinopathy in the machine learning era: A review. Eye 2023, 1–8.
11. Alharbi, A.M.D.; Alhazmi, A.M.S. Prevalence, risk factors, and patient awareness of diabetic retinopathy in Saudi Arabia: A review of the literature. Cureus 2020, 12, e11991.
12. Al-Shehri, A.M.; Aldihan, K.A.; Aljohani, S. Reasons for the Late Presentation of Diabetic Retinopathy in Saudi Arabia: A Survey of Patients Who Presented with Advanced Proliferative Diabetic Retinopathy to a Tertiary Eye Hospital. Clin. Ophthalmol. 2022, 16, 4323–4333.
13. Oulhadj, M.; Riffi, J.; Chaimae, K.; Mahraz, A.M.; Ahmed, B.; Yahyaouy, A.; Fouad, C.; Meriem, A.; Idriss, B.A.; Tairi, H. Diabetic retinopathy prediction based on deep learning and deformable registration. Multimed. Tools Appl. 2022, 81, 28709–28727.
14. Tymchenko, B.; Marchenko, P.; Spodarets, D. Deep learning approach to diabetic retinopathy detection. arXiv 2020, arXiv:2003.02261.
15. Costa, P.; Galdran, A.; Smailagic, A.; Campilho, A. A weakly-supervised framework for interpretable diabetic retinopathy detection on retinal images. IEEE Access 2018, 6, 18747–18758.
16. Teo, Z.L.; Tham, Y.-C.; Yu, M.; Chee, M.L.; Rim, T.H.; Cheung, N.; Bikbov, M.M.; Wang, Y.X.; Tang, Y.; Lu, Y. Global prevalence of diabetic retinopathy and projection of burden through 2045: Systematic review and meta-analysis. Ophthalmology 2021, 128, 1580–1591.
17. Alyoubi, L.; Shalash, M.; Abulkhair, F. Diabetic retinopathy detection through deep learning techniques: A review. Inform. Med. Unlocked 2020, 20, 1–11.
18. Wang, J.; Bai, Y.; Xia, B. Simultaneous diagnosis of severity and features of diabetic retinopathy in fundus photography using deep learning. IEEE J. Biomed. Health Inform. 2020, 24, 3397–3407.
19. Bhardwaj, C.; Jain, S.; Sood, M. Hierarchical severity grade classification of non-proliferative diabetic retinopathy. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 2649–2670.
20. Keerthiveena, B.; Esakkirajan, S.; Subudhi, B.N.; Veerakumar, T. A hybrid BPSO-SVM for feature selection and classification of ocular health. IET Image Process 2021, 15, 542–555.
21. Majumder, S.; Kehtarnavaz, N. Multitasking deep learning model for detection of five stages of diabetic retinopathy. IEEE Access 2021, 9, 123220–123230.
22. Nagpal, D.; Panda, S.N.; Malarvel, M.; Pattanaik, P.A.; Khan, M.Z. A review of diabetic retinopathy: Datasets, approaches, evaluation metrics and future trends. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 7138–7152.
23. Orlando, J.I.; Van Keer, K.; Barbosa Breda, J.; Manterola, H.L.; Blaschko, M.B.; Clausse, A. Proliferative diabetic retinopathy characterization based on fractal features: Evaluation on a publicly available dataset. Med. Phys. 2017, 44, 6425–6434.
24. Al-hazaimeh, O.M.; Abu-Ein, A.A.; Tahat, N.M.; Al-Smadi, M.M.A.; Al-Nawashi, M.M. Combining Artificial Intelligence and Image Processing for Diagnosing Diabetic Retinopathy in Retinal Fundus Images. Int. J. Online Biomed. Eng. 2022, 18, 131–151.
25. Suganyadevi, S.; Renukadevi, K.; Balasamy, K.; Jeevitha, P. Diabetic Retinopathy Detection Using Deep Learning Methods. In Proceedings of the 2022 First International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT), Trichy, India, 16–18 February 2022; IEEE: Bengaluru, India, 2022; pp. 1–6.
26. Nahiduzzaman, M.; Islam, M.R.; Goni, M.O.F.; Anower, M.S.; Ahsan, M.; Haider, J.; Kowalski, M. Diabetic retinopathy identification using parallel convolutional neural network based feature extractor and ELM classifier. Expert Syst. Appl. 2023, 217, 119557.
27. Abbood, S.H.; Hamed, H.N.A.; Rahim, M.S.M.; Rehman, A.; Saba, T.; Bahaj, S.A. Hybrid retinal image enhancement algorithm for diabetic retinopathy diagnostic using deep learning model. IEEE Access 2022, 10, 73079–73086.
28. Canayaz, M. Classification of diabetic retinopathy with feature selection over deep features using nature-inspired wrapper methods. Appl. Soft Comput. 2022, 128, 109462.
29. Modi, P.; Kumar, Y. Smart Detection and Diagnosis of Diabetic Retinopathy Using Bat Based Feature Selection Algorithm and Deep Forest Technique. Comput. Ind. Eng. 2023, 182, 109364.
30. Dayana, A.M.; Emmanuel, W.S. Deep learning enabled optimized feature selection and classification for grading diabetic retinopathy severity in the fundus image. Neural Comput. Appl. 2022, 34, 18663–18683.
31. Savelli, B.; Bria, A.; Molinara, M.; Marrocco, C.; Tortorella, F. A multi-context CNN ensemble for small lesion detection. Artif. Intell. Med. 2020, 103, 101749.
32. Chetoui, M.; Akhloufi, M.A. Explainable diabetic retinopathy using EfficientNET. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; IEEE: Ottawa, ON, Canada, 2020; pp. 1966–1969.
33. Karki, S.S.; Kulkarni, P. Diabetic retinopathy classification using a combination of efficientnets. In Proceedings of the 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), Pune, India, 5–7 March 2021; IEEE: Bengaluru, India, 2021; pp. 68–72.
34. Kajan, S.; Goga, J.; Lacko, K.; Pavlovičová, J. Detection of diabetic retinopathy using pretrained deep neural networks. In Proceedings of the 2020 Cybernetics & Informatics (K&I), Velke Karlovice, Czech Republic, 29 January–1 February 2020; IEEE: Prague, Czech Republic, 2020; pp. 1–5.
35. Patil, M.; Chickerur, S.; Bakale, V.; Giraddi, S.; Roodagi, V.; Kulkarni, Y. Deep hyperparameter transfer learning for diabetic retinopathy classification. Turk. J. Electr. Eng. Comput. Sci. 2021, 29, 2824–2839.
36. Tariq, M.; Palade, V.; Ma, Y. Transfer Learning based Classification of Diabetic Retinopathy on the Kaggle EyePACS dataset. In Proceedings of the 3rd International Conference on Medical Imaging and Computer-Aided Diagnosis, Leicester, UK, 20–21 November 2022.
37. Kobat, S.G.; Baygin, N.; Yusufoglu, E.; Baygin, M.; Barua, P.D.; Dogan, S.; Yaman, O.; Celiker, U.; Yildirim, H.; Tan, R.S.; et al. Automated diabetic retinopathy detection using horizontal and vertical patch division-based pre-trained DenseNET with digital fundus images. Diagnostics 2022, 12, 1975.
38. Luo, X.; Wang, W.; Xu, Y.; Lai, Z.; Jin, X.; Zhang, B.; Zhang, D. A deep convolutional neural network for diabetic retinopathy detection via mining local and long-range dependence. CAAI Trans. Intell. Technol. 2023, 1, 1–14.
39. Ishtiaq, U.; Abdullah, E.R.M.F.; Ishtiaque, Z. A Hybrid Technique for Diabetic Retinopathy Detection Based on Ensemble-Optimized CNN and Texture Features. Diagnostics 2023, 13, 1816.
40. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475.
41. Abd Elaziz, M.; Mohammadi, D.; Oliva, D.; Salimifard, K. Quantum marine predators algorithm for addressing multilevel image segmentation. Appl. Soft Comput. 2021, 110, 107598.
42. APTOS Dataset. Available online: https://www.kaggle.com/c/aptos2019-blindness-detection (accessed on 23 May 2023).
43. Foundation Consumer Healthcare. EyePACS: Diabetic Retinopathy Detection. Available online: https://www.kaggle.com/c/diabetic-retinopathy-detection/data (accessed on 25 May 2023).
44. Li, K.; Li, S.; Huang, Z.; Zhang, M.; Xu, Z. Grey Wolf Optimization algorithm based on Cauchy-Gaussian mutation and improved search strategy. Sci. Rep. 2022, 12, 18961.
Figure 1. Proposed framework.
Figure 2. Sample images.
Figure 3. Feature-set generation.
Figure 4. MobileNet V3—multi-class classification.
Figure 5. Performance analysis outcome.
Figure 6. Comparative analysis findings—APTOS [33,36,37,38,39].
Figure 7. Comparative analysis findings—EyePacs [33,36,37,38,39].
Table 1. Dataset characteristics.

| Dataset | Training | Testing |
|---|---|---|
| EyePacs | 24,570 | 10,530 |
| APTOS | 3662 | 1928 |
Table 2. Notation and definition.

| Notation | Definition |
|---|---|
| $I$ | Fundus image |
| $WF$ | Wiener filter |
| $e$ | Mean square error |
| $k(X, Y)$ | Original image with $X$ and $Y$ coordinates |
| $\hat{k}(X, Y)$ | Reconstructed image with $X$ and $Y$ coordinates |
| $F_s$ | Feature sets |
| $Yolo\_V7$ | Yolo V7 function |
| $N$ | Number of images |
| $E^{t+1}$ | Post-mutation position of Elite |
| $E$ | Current position of Elite |
| $F_E$ | Fitness value of $E$ |
| $F_{E_\alpha}$ | Fitness value of $E$ at $\alpha$ |
| $\lvert \cdot \rvert$ | The absolute value |
| $Cauchy(0, \sigma^2)$ and $Gauss(0, \sigma^2)$ | Random variables of the Cauchy and Gauss distributions with wavelet ($\sigma$) |
| $\theta$ | Quantum constant |
| $KBest$ and $MBest$ | Optimal feature sets in the specific iteration ($i$) |
| $RQ$ | Chaotic number |
| $Elite_i$ and $Prey_i$ | Elite and Prey vectors in the specific iteration ($i$) |
| $\oplus$ | Element-wise addition |
| $\lambda_1$ and $\lambda_2$ | Dynamic parameters |
| $IC$ | Multi-class classification |
| $ReLu$ | Rectified linear unit |
| $Softmax$ | Softmax function for the multi-class classification |
| $MobileNet\ V3\text{-}Small$ | MobileNet V3—Small model |
| $FC$ | Fully connected layer |
| $\hat{M}$ | Predicted class |
| $M$ | Mean value of predicted class |
| $D$ | Data point |
| $\mu$ | Mean |
Table 3. Proposed DR performance analysis—APTOS.

| Classes/Metrics | Accuracy | Kappa | Precision | Recall | F1-Score |
|---|---|---|---|---|---|
| 0 (No DR) | 97.5 | 91.4 | 92.4 | 93.4 | 92.9 |
| 1 (Mild DR) | 98.3 | 90.8 | 91.5 | 92.5 | 92.0 |
| 2 (Moderate DR) | 97.8 | 92.5 | 94.8 | 95.2 | 95.0 |
| 3 (Severe DR) | 98.6 | 91.4 | 93.4 | 93.8 | 93.6 |
| 4 (Proliferative DR) | 97.9 | 89.5 | 95.2 | 94.8 | 95.0 |
| Average | 98.0 | 91.1 | 93.4 | 93.9 | 93.7 |
Table 4. Proposed DR performance analysis—EyePacs.

| Classes/Metrics | Accuracy | Kappa | Precision | Recall | F1-Score |
|---|---|---|---|---|---|
| 0 (No DR) | 98.7 | 95.2 | 95.4 | 94.8 | 95.1 |
| 1 (Mild DR) | 98.5 | 91.4 | 92.5 | 93.4 | 92.9 |
| 2 (Moderate DR) | 97.9 | 90.5 | 93.4 | 92.7 | 93.0 |
| 3 (Severe DR) | 98.6 | 94.3 | 94.2 | 91.8 | 92.9 |
| 4 (Proliferative DR) | 98.3 | 90.8 | 93.7 | 90.4 | 92.0 |
| Average | 98.4 | 92.4 | 93.8 | 92.6 | 93.1 |
Table 5. Findings of comparative analysis—APTOS.

| Methods/Metrics | Accuracy | Kappa | Precision | Recall | F1-Score |
|---|---|---|---|---|---|
| Proposed DR | 98.0 | 91.1 | 93.4 | 93.9 | 93.7 |
| Ishtiaq et al. model [39] | 95.2 | 85.6 | 90.1 | 91.4 | 90.7 |
| Tariq et al. model [36] | 93.0 | 81.2 | 94.5 | 93.8 | 94.1 |
| Luo et al. model [38] | 82.4 | 80.4 | 93.4 | 91.8 | 92.5 |
| Karki et al. model [33] | 89.1 | 90.1 | 91.6 | 92.7 | 92.1 |
| Kobat et al. model [37] | 84.9 | 86.4 | 82.4 | 83.1 | 82.7 |
Table 6. Findings of comparative analysis—EyePacs.

| Methods/Metrics | Accuracy | Kappa | Precision | Recall | F1-Score |
|---|---|---|---|---|---|
| Proposed DR | 98.4 | 92.4 | 93.8 | 92.6 | 93.1 |
| Ishtiaq et al. model [39] | 98.8 | 82.3 | 91.2 | 90.7 | 90.9 |
| Tariq et al. model [36] | 70.0 | 63.0 | 72.0 | 76.0 | 73.9 |
| Luo et al. model [38] | 83.6 | 82.4 | 81.9 | 83.5 | 82.6 |
| Karki et al. model [33] | 85.4 | 92.4 | 83.4 | 85.2 | 84.2 |
| Kobat et al. model [37] | 86.7 | 81.4 | 86.1 | 87.3 | 86.7 |
Table 7. Computational strategies.

| Methods | APTOS 2019 Learning Rate | APTOS 2019 Parameters (in Millions (M)) | APTOS 2019 FLOPs (in Giga (G)) | EyePacs Learning Rate | EyePacs Parameters (in Millions (M)) | EyePacs FLOPs (in Giga (G)) |
|---|---|---|---|---|---|---|
| Proposed DR | 1 × 10−4 | 47 M | 2.3 G | 1 × 10−3 | 77 M | 4.5 G |
| Ishtiaq et al. model [39] | 1 × 10−3 | 86 M | 4.7 G | 1 × 10−2 | 94 M | 5.1 G |
| Tariq et al. model [36] | 1 × 10−3 | 72 M | 4.3 G | 1 × 10−3 | 98 M | 5.6 G |
| Luo et al. model [38] | 1 × 10−3 | 64 M | 3.7 G | 1 × 10−2 | 97 M | 5.9 G |
| Karki et al. model [33] | 1 × 10−3 | 57 M | 4.1 G | 1 × 10−2 | 91 M | 5.3 G |
| Kobat et al. model [37] | 1 × 10−2 | 71 M | 3.9 G | 1 × 10−2 | 89 M | 4.9 G |
Table 8. Outcome of loss-function analysis.

| Methods | APTOS MAD | APTOS RMSE | APTOS Testing Time (seconds) | EyePacs MAD | EyePacs RMSE | EyePacs Testing Time (seconds) |
|---|---|---|---|---|---|---|
| Proposed DR | 0.385 | 0.754 | 1.26 | 0.423 | 0.821 | 1.38 |
| Ishtiaq et al. model [39] | 0.425 | 0.823 | 1.83 | 0.518 | 0.914 | 1.45 |
| Tariq et al. model [36] | 0.398 | 0.912 | 2.31 | 0.467 | 0.965 | 1.52 |
| Luo et al. model [38] | 0.405 | 0.864 | 2.42 | 0.523 | 0.974 | 1.69 |
| Karki et al. model [33] | 0.512 | 0.845 | 2.51 | 0.612 | 1.012 | 1.54 |
| Kobat et al. model [37] | 0.487 | 1.021 | 2.35 | 0.724 | 1.125 | 2.24 |
Table 9. Uncertainty analysis.

| Methods | APTOS CI | APTOS SD | EyePacs CI | EyePacs SD |
|---|---|---|---|---|
| Proposed DR | [97.53–97.61] | 0.0014 | [98.32–98.56] | 0.0019 |
| Ishtiaq et al. model [39] | [95.80–96.32] | 0.0022 | [95.68–96.18] | 0.0028 |
| Tariq et al. model [36] | [96.30–97.12] | 0.0021 | [95.24–95.86] | 0.0032 |
| Luo et al. model [38] | [95.83–95.89] | 0.0016 | [96.40–97.12] | 0.0017 |
| Karki et al. model [33] | [97.10–97.45] | 0.0017 | [97.19–98.40] | 0.0018 |
| Kobat et al. model [37] | [97.42–97.65] | 0.0019 | [96.58–96.68] | 0.0021 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
