Article

Quantum Computing Meets Deep Learning: A Promising Approach for Diabetic Retinopathy Classification

1
Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
2
Department of Software Engineering, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
3
Department of Mathematics, University of Leicester, Leicester LE1 7RH, UK
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(9), 2008; https://doi.org/10.3390/math11092008
Submission received: 22 March 2023 / Revised: 15 April 2023 / Accepted: 20 April 2023 / Published: 24 April 2023
(This article belongs to the Special Issue Quantum Computing for Industrial Applications)

Abstract

Diabetic retinopathy manifests as micro-vascular alterations of the retina. It remains a leading cause of blindness and vision loss in adults aged 20 to 74. Screening for this disease has become vital in identifying referable cases that require complete ophthalmic evaluation and treatment to avoid permanent loss of vision. Computer-aided diagnosis could ease this screening process, requiring only limited time, and assist clinicians. The main complexity in classifying images involves huge computation, leading to slow classification. Certain image classification approaches integrating quantum computing have recently evolved to resolve this. With its parallel computing ability, quantum computing could assist in effective classification. The notion of integrating quantum computing with conventional image classification methods is theoretically feasible and advantageous. However, as existing image classification techniques have failed to attain high classification accuracy, a robust approach is needed. The present research proposes a quantum-based deep convolutional neural network to avert these pitfalls and identify disease grades from the Indian Diabetic Retinopathy Image Dataset. Typically, quantum computing could make use of the maximum number of entangled qubits for image reconstruction without any additional information. This study involves conceptual enhancement by proposing an optimized structural system termed an optimized multiple-qubit gate quantum neural network for the classification of DR. In this case, multiple qubits are regarded as the ability of qubits in multiple states to exist concurrently, which permits performance improvement with each additional qubit. The overall performance of this system is validated in accordance with performance metrics, and the proposed method achieves 100% accuracy, 100% precision, 100% recall, 100% specificity, and 100% f1-score.

1. Introduction

Diabetic retinopathy (DR) remains a leading cause of vision loss, affecting individuals between the ages of 20 and 74. The disease has a significant impact in both high-income and middle-income countries, and it harms the blood vessels of diabetic patients. DR is of two main kinds: proliferative DR and non-proliferative DR [1]. DR screening is important because timely treatment can prevent vision loss. Early-stage intervention through blood pressure management can slow the progression of the disease, while late-stage intervention through intravitreal injection or photocoagulation can minimize vision loss. Traditionally, the disease has been regarded as a micro-vascular complication of diabetes.
Nevertheless, growing evidence suggests that neuro-degeneration occurs in the initial phase of the disease. Moreover, abnormal retinal function can be identified in patients without evidence of micro-vascular irregularities, and the American Diabetes Association (ADA) now describes DR as a neurovascular complication [2]. Current treatments target later stages, by which time vision has already been significantly impacted. Hence, developing an effective approach to prevent and treat the disease in its initial phase is crucial. DR is identified by the presence of various lesion types on retinal images, namely hemorrhages (HM), soft and hard exudates, and micro-aneurysms (MA). MA appear as small spots of blood, typically circular and 100 to 120 µm in size. Leakage of blood from the vessels is termed HM, and the abnormal development of small blood vessels is defined as neovascularization. Venous beading refers to localized enlargements of veins adjacent to occluded arterioles.
The World Health Organization also reports DR as a devastating disease that accounts for 1.07% of blindness and 1.25% of moderate to severe visual impairment (VI) cases [3]. Shortages of trained personnel, infrastructure, and public awareness are the main issues that must be resolved immediately in order to avoid blindness. Currently, Artificial Intelligence (AI)-based algorithms have achieved effective diagnosis of major medical conditions, including various retinal diseases such as DR [4]. Manual diagnosis of DR from retinal images is problematic, as it is a tedious process and medical experts are scarce. Thus, to resolve these difficulties, it is ideal to develop an automated system for DR detection that can operate with the assistance of physicians [5]. Considerable efforts have been undertaken to automate the classification of DR images using deep learning (DL) to assist ophthalmologists in detecting the disease in its initial phase. Among DL algorithms, the convolutional neural network (CNN) is widely utilized, as it has been efficient and successful in image analysis [6]. CNNs are state-of-the-art DL models that have produced several breakthroughs in automating object classification and detection. These DL-based models can extract features that are valuable for accurate image classification. Accordingly, a multi-path CNN has been designed for extracting DR features from retinal images, which can then be used by machine learning (ML) classifiers to perform DR classification [7].
Unlike traditional deep CNNs, both fractional max-pooling and max-pooling layers have been extensively applied. Two such deep CNN models with distinct layers have been trained to obtain discriminant features for classification. Features from the image meta-data and the deep CNNs have then been integrated, and a support vector machine (SVM) has been trained to learn the underlying distribution boundaries of the individual classes. Subsequently, two deep CNNs have been employed to detect the individual stages of DR using balanced and imbalanced datasets [8]. The growing progress of DL assists widely in the classification of various diseases in the medical field. One study has used a multi-level classification DL model for fundus fluorescein angiography (FFA) images, which includes lesion classification and pre-diagnosis assessment. ResNet18, VGG16, and LeNet-5 models have been used for lesion classification, and of these, ResNet18 has attained better results than the other methods [9]. Diabetes is also one of the major problems during pregnancy: gestational diabetes mellitus greatly increases the blood glucose level of pregnant women [10]. The fast development of technologies such as the internet, cloud computing, and AI has reduced misdiagnosis rates, high healthcare costs, and the imbalance of medical resources. Existing research has aimed to enhance the security of people's health and attain further improvements in the medical field. Recently, DL algorithms have found wider application, and one study used a modified CNN to develop interactive smart healthcare predictions [11].
Currently, with the progress of early small-scale quantum computers, quantum DL methods have gained considerable attention. Researchers have framed several classification models based on varied quantum parametric circuits, wherein classical data are encoded as distinct qubits [12]. Quantum computers have proven more powerful than their conventional counterparts in several applications, especially in sampling from complex probability distributions. Hence, it is natural to ask whether this advantage extends to learning models. An affirmative answer is widely assumed to be true, and explicit demonstrations of how to achieve this quantum advantage are still under research, with limited outcomes in this area. It has been claimed that quantum-based methods on classical computers offer the benefit of exposing efficient solutions, and image classification is no exception [13].
Although there have been considerable attempts to automate the classification of DR images based on DL to assist ophthalmologists in early detection, most of them have concentrated only on detecting DR rather than detecting its several phases. Besides, there have been limited efforts to classify and localize the lesion types of DR, which is valuable in practice, as ophthalmologists can assess the severity of DR and observe its progress according to lesion appearance. For these reasons, the present study proposes a quantum-based CNN for complete automated screening, so as to detect the five phases of DR and localize all the lesion types of DR concurrently, based on the objectives below. The proposed work would assist ophthalmologists by imitating the diagnostic method for DR, which localizes the lesions of DR, detects their type, and finds the actual phase of DR. The main contributions of this study are listed as follows:
  • Pre-processing is carried out for resizing the images, which makes the images flexible for further processing;
  • To perform classification of DR by using the proposed quantum-based Deep CNN, which uses an optimized multiple-qubit gate quantum neural network for improving the accuracy of the proposed system;
  • To reduce the loss function, the bottleneck layer in the modified CNN is utilized, which also assists in improving the accuracy;
  • To evaluate the efficacy of the proposed model with standard performance metrics such as recall, accuracy, precision, and f1-score, thereby exposing the effectiveness of the proposed model.
The paper is organized as follows: Section 2 discusses the literature review with the main problem identification. This is followed by Section 3, which explains the proposed system with the suitable flow, pseudocode, and mathematical representations. The outcomes procured after the execution of the proposed system are included in Section 4. Lastly, the research work is summarized in Section 5 with future recommendations.

2. Literature Review

The traditional methods that have been undertaken to accomplish DR detection are discussed in this section, together with the problems identified through this review. Initially, traditional works have used ML-based approaches for detecting DR. Accordingly, the study [14] has used an ML algorithm, taking into account automated image analysis models such as Restricted Boltzmann Machines (RBM) and Optimal Path Forest. In this case, low functional cost and practicality have been combined with a reduced computational load, which makes RBM suitable in circumstances that require several variables to recognize an image, as in DR. Studies that have used these algorithms have been rare, but the social and personal problem of blindness caused by DR motivates the search for new ways of screening that possess an accuracy rate identical to the gold standard. ML-based automated disease identification models, particularly RBM, have shown better diagnostic accuracy and sensitivity. Similarly, the research [15] has intended to automatically identify DR with ML classifiers including linear discriminant analysis (LDA), random forest (RF), variable ranger random (VRR), K-nearest neighbor (K-NN), and support vector machine (SVM). A cross-validation procedure has also been undertaken. For classifying DR, VRR has performed better than the other models, with an 86% accuracy rate. To enhance the prediction rate, the article [16] endorses a hybrid ML algorithm for detecting DR. The suggested system encompasses four phases, namely pre-processing, feature extraction, classification, and segmentation. The system has been assessed on the CHASE dataset to detect DR, and its accuracy has been found to be 96.62%. Analysis of alterations in DR for patients with diabetes and glaucoma has been undertaken with methods including SVM. For modeling the issue and undertaking predictions, an SVM model has been generated. As several parameters could impact the modeling performance, the Differential Evolution (DE) optimization algorithm has been regarded as a way to determine the parameters leading to ideal outcomes. The integration of SVM and DE has proven satisfactory, with 95.23% accuracy [17]. In spite of the better performance obtained with ML-based algorithms, DL has evolved to detect DR. Generally, DR causes adverse changes in the blood vessels and nerves of the retina. One study has performed feature extraction using principal component analysis (PCA), and a DL-based ML-FEC (Multi-Label Feature Extraction and Classification) method has been used to detect and classify all DR lesions in Color Fundus Photographs (CFPs) based on a proposed pre-trained CNN. ResNet152, ResNet50, and SqueezeNet1 have been used for training the subset of images that assist in the identification and classification of DR lesions, and ResNet152 has attained optimal performance with 94.40% accuracy [18]. Similarly, further research has used a PCA-based deep neural network (DNN) with the Grey Wolf optimization algorithm for classifying the extracted features of the DR dataset [19].
Correspondingly, the study [20] has been constructed on a pre-trained framework that utilizes a two-stage transfer learning (TL) methodology due to a limited dataset and the large number of parameters in a deep CNN. Initially, the study attempted to use conventional CNN models [21] that had been pretrained with ImageNet. Images within ImageNet possess structures that differ from fundus images; hence, the hierarchical model of the pre-trained CNN has been adapted by re-initializing the CONV1 layer filters with the lesion ROIs retrieved from the annotated dataset. Later, fine-tuning was performed through the use of ROIs. This process has been undertaken with the lower layers learning local lesion structures. As fully connected (FC) layers encode high-level features that are universal in nature, they have been replaced with an FC layer based on principal component analysis (PCA), which has subsequently been utilized as an unsupervised way to retrieve features from fundus images. Lastly, gradient boosting has been included. Analysis of the suggested system using 10-fold validation on two datasets has indicated that it works better than conventional methodologies. Other existing research has used an automated system to detect DR at early stages by detecting the red lesions that exist in retinal images. To accomplish this, an advanced convolutional layer is used in the U-Net architecture, which aids pixel-level class labeling. From the red lesion detection, segmented images are attained, which are given as input to the CNN for training and classifying the input images by severity class [22]. Likewise, further research has used a deep CNN and SqueezeNet to offer a multi-level severity classification of DR. Using SqueezeNet, the fundus images are classified into normal and abnormal classes of DR. For tuning the SqueezeNet, fractional war strategy optimization is used, which combines war strategy optimization and fractional calculus. In the second level of decomposition, the severity level of abnormal images is found by using a deep CNN, and the existing study has attained 91.9% accuracy [23].
Similarly, the study [24] used a CNN model with a Siamese-based transfer learning (TL) model. Differing from conventional studies, the suggested system accepts binocular structural images as input and then learns their correlation to assist in making the prediction. To assess the performance of the suggested system, a binocular framework to detect five classes of DR has been trained and assessed on 10% of the validation dataset, and a kappa score of 0.829 has been attained. As CNNs have worked better, the study [25] has aimed to detect retinal exudates in fundus images with TL. The suggested research has used both region-based CNN (RCNN) and CNN for detecting DR lesions automatically, independent of the dataset, and classification is performed with the detected lesions [26]. In the image, the regionally trained CNN has focused on a specific area, which classifies the images of DR. In the suggested model, pre-trained frameworks, namely Residual Network-50, Visual Geometry Group-19, and Inception V3 [27], have been utilized for feature extraction from the fundus images. Lastly, the classification rate of the suggested model has been compared with several deep CNN models and conventional methods, and better results have been procured.
Moreover, the study [28] has developed and assessed two DL models for predicting the progression of DR in diabetic patients who underwent teleretinal DR screening in a primary care setting. The input for both models has been a set of single-field or three-field images. Validation has been undertaken on one eye (chosen at random per patient) from two datasets, namely an internal and an external validation set. Outcomes have revealed the better performance of DL in identifying DR. Furthermore, various ML [29] and DL-based models have been employed on accessible DR datasets, with classification optimized based on feature extraction. For resolving challenges like optimized feature extraction, the study [30] regarded the DR dataset from the UCI-ML repository and outlined a DL model with PCA for reducing dimensionality and extracting significant features. Harris Hawk Optimization (HHO) has been utilized to optimize the feature extraction and classification performance, and satisfactory performance has been attained. Taking into consideration the advantages of ML and DL, the study [31] has considered the DR dataset gathered from the UCI ML repository. The data has been normalized with the standard scaler method, and PCA has been utilized to retrieve significant dataset features.
Furthermore, the firefly algorithm (FA) has been implemented to reduce dimensionality. The reduced dataset has been fed into the deep NN model [32] to perform classification. The outcomes procured through the execution of the model have been assessed against conventional ML models and have confirmed the better performance of the suggested model with regard to precision, recall, specificity, sensitivity, and accuracy.
Subsequently, an automated model for identifying DR has been developed. DeepDR directly detects the presence and severity of the disease based on ensemble learning and TL, encompassing conventional NNs through the integration of renowned CNNs and customized deep NNs. DeepDR has been developed on a high-quality DR dataset labeled by ophthalmologists. The relationship between the ideal element classifiers and class labels has been explored, along with the impact of various integrations of the elemental classifiers on the ideal integration performance. Models have been assessed for reliability and validity through the use of nine metrics, and outcomes have revealed that the suggested model has performed with better accuracy [33]. Recently, quantum-based NNs have gained significant attention in disease prediction, and DR is no exception. Accordingly, the study [34] has endorsed a model based on quantum NNs executed on DR images; models based on quantum computing have performed better than other DL and ML models. An emerging quantum-based model has been confirmed to be faster at DR detection. Hence, the research [35] has presented a quantum-based deep regression model for diagnosing medical images that combines the merits of DL and quantum computing. Random features and density matrices have been utilized for constructing a density estimator, and testing has been performed. The diagnosis has been undertaken in accordance with the gradation of the advanced phases, and model training has been undertaken with five accessible gates. Reports have revealed the better performance of the suggested system.
From the extensive analysis, it is observed that existing methods that use DL for the classification of DR sometimes misclassify the severity. Hence, it is necessary to develop a computationally efficient and affordable model for the classification of DR. Balancing training time against classification accuracy is also an issue in many DL methods. The main issues determined through the analysis of conventional works are given as follows:
  • The study [31] has claimed that the suggested system might not work well on low-dimensional datasets because of the probability of the model overfitting; the authors have suggested undertaking studies with larger amounts of data;
  • Though different studies have attempted to perform better, they have lacked adequate accuracy rates; correspondingly, the study [15] has employed several ML classifiers to prognosticate DR, and VRR has performed better than the conventional classifiers by achieving 86% accuracy. Similarly, the research [16] has used a hybrid ML algorithm and shown 96.62% accuracy;
  • Though studies have worked better, most of them have relied on ML and DL [30], while limited studies have considered using quantum-based models for DR detection. Hence, research in this area is needed in the future for quick DR detection.

3. Proposed Methodology

The present study aims to classify DR grades by using a quantum-based deep CNN on the IDRiD dataset. Quantum NNs are computational neural networks that work by utilizing the principles of quantum mechanics. Even though many existing studies have performed DR classification, conventional studies fall short in terms of accuracy and computational time. By performing DL in a quantum environment, the present study intends to attain better accuracy in less computational time. The overall flow of the proposed model is shown in Figure 1. The data are taken from the standard IDRiD dataset and loaded into the proposed system. Initially, pre-processing is performed to check for missing values and resize the images. The pre-processed data are then split into training and test sets, with 80% of the data used for training and 20% used for testing. The training data are mapped to a quantum state, where a quantum feature map circuit encodes the classical data into quantum state space. Mapping to the quantum state is the essential step for applying quantum-based deep learning to classical data. The classification of DR is then performed using the proposed model, which applies a deep CNN in the quantum environment.
Quantum-based methods generally use qubits to represent information. Qubits can exist in a superposition of states, which enables more efficient and powerful computation in comparison with other methods. Here, quantum gates are utilized for manipulating the qubit states, and quantum algorithms are more efficient at solving specific problems. The proposed optimized multiple-qubit gate QNN method uses quantum gates to process the input image as a quantum state, while a neural network assists in learning the patterns in the quantum state and in making predictions. Compared with classical gates, the proposed multiple-qubit gates exploit the quantum behavior of qubits to perform more efficient and powerful computations. Therefore, the proposed method has the ability to attain higher accuracy in the classification of DR images than existing methods. Overall, quantum-based methods have clear advantages in the classification of images.
Input features are fed into the input layer of the CNN, and the classification process of the proposed model is shown in Figure 2. The CNN is performed in the quantum environment, which contains two kinds of gates, namely Hadamard gates and coupling gates. The Hadamard gate is used for the data clustering process, and the coupling gate is used for creating the layers in the CNN. The Hadamard gate acts on a single qubit. Applying the Hadamard gate to a qubit in the |0⟩ state places the qubit in a superposition in which the probability of measuring 0 is equal to the probability of measuring 1.
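As a minimal illustration of this behaviour, the following sketch (written with the Cirq library; it is not part of the proposed system) prepares a qubit in the |0⟩ state, applies a Hadamard gate, and samples measurements, giving roughly equal counts of 0 and 1.

import cirq

qubit = cirq.GridQubit(0, 0)
circuit = cirq.Circuit(
    cirq.H(qubit),                 # put |0> into the (|0> + |1>)/sqrt(2) superposition
    cirq.measure(qubit, key="m"),  # measure in the computational basis
)
result = cirq.Simulator().run(circuit, repetitions=1000)
print(result.histogram(key="m"))   # approximately {0: ~500, 1: ~500}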
Quantum processors are characterized by a coupling gate that controls the qubit interactions using the CNOT gate. The illustrative diagram of classification with the optimized multiple-qubit gate quantum NN is shown in Figure 3. Input features are given to the Hadamard gates present in the pooling layers to prepare the cluster state. The coupling gate denotes the qubit pairs that support two-qubit gate operations. Qubits are depicted as circles, and the two-qubit gate operations they support are shown as lines linking the qubits. The CNN is able to learn a large number of filters in parallel while training on the dataset. The default convolutional layer reduces the size of high-dimensional images without losing information. The result is fed into the max-pooling layers, which take the highest value of the feature map and create the pooled feature map. This yields an input representation with reduced dimensionality, which is fed into the residual block that assists in learning residual functions, and classification is performed to predict the DR grade.
In the convolution operation, the input feature map $X$ has size $o \times p$ and the convolutional kernel $K$ has size $w \times w$; the convolution operation can be expressed using Equation (1):

$f = \delta\left(X_{i:i+w,\; j:j+w} \otimes K\right)$ (1)
where $\delta$ refers to the non-linear function that fixes the size of the feature map. After the convolution process, the feature map is provided as input to the following convolutional layer to perform feature extraction with increased or decreased feature dimensionality. In the pooling layer, the dimensionality-reduction methods include average pooling and maximum pooling. Finally, the classification of images is attained at the fully connected layer. Quantum circuits include parametric and non-parametric gates. The parametric gates consist of single-qubit rotation gates, and the three single-qubit rotation gates are defined in Equations (2)–(4).
$\mathrm{Rotational\ gate}_x(\theta) = \begin{bmatrix} \cos(\theta/2) & -i\sin(\theta/2) \\ -i\sin(\theta/2) & \cos(\theta/2) \end{bmatrix}$ (2)

$\mathrm{Rotational\ gate}_y(\theta) = \begin{bmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{bmatrix}$ (3)

$\mathrm{Rotational\ gate}_z(\theta) = \begin{bmatrix} e^{-i\theta/2} & 0 \\ 0 & e^{i\theta/2} \end{bmatrix}$ (4)
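As a quick sanity check of Equations (2)–(4) (a hedged sketch, not the authors' code), the rotation-gate matrices can be obtained numerically from Cirq for a sample angle:

import numpy as np
import cirq

theta = np.pi / 3
print(np.round(cirq.unitary(cirq.rx(theta)), 3))  # [[cos(t/2), -i sin(t/2)], [-i sin(t/2), cos(t/2)]]
print(np.round(cirq.unitary(cirq.ry(theta)), 3))  # [[cos(t/2), -sin(t/2)], [sin(t/2), cos(t/2)]]
print(np.round(cirq.unitary(cirq.rz(theta)), 3))  # [[exp(-i t/2), 0], [0, exp(i t/2)]]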
For the quantum circuit $\mathrm{qc}(w)$, the parameterized circuit implements a unitary transform of the input quantum state $|x\rangle$ and produces the output quantum state $|y\rangle$, which is

$|y\rangle = \mathrm{qc}(w)\,|x\rangle$ (5)
where w refers to the parameter in the circuit, including the angle of the qubit-rotation gate. Various quantum methods are realized by connecting various quantum gates.
In a quantum system where the number of qubits is limited and the quantum system is utilized to solve classical problems, dimensionality reduction of the classical data is generally needed first. Images are downsampled to size $o \times o$, and the pixel values are scaled to $[0, 1]$, which also smooths the image. After these operations, the $1 \times o^2$ vector $a = [a_1, a_2, \ldots, a_{o^2}]$ is converted into angle information $\varphi = \pi a$, where $\varphi = [\varphi_1, \varphi_2, \ldots, \varphi_{o^2}]$. Each angle is taken as the rotation angle of $\mathrm{Rotational\ gate}_y$ and is utilized to encode the initial quantum state of the $o^2$-qubit input system, given as

$|\varsigma_{\mathrm{img}}\rangle = \prod_{i=1}^{o^2} \mathrm{Rotational\ gate}_y(\varphi_i)\; |0\rangle_1 \otimes \cdots \otimes |0\rangle_{o^2}$ (6)
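A minimal sketch of this angle-encoding step (assuming a 4 × 4 downsampled image with pixel values already scaled to [0, 1]; the array used here is random, illustrative data) maps each pixel a_i to the angle φ_i = π·a_i and loads it onto its own qubit with an R_y rotation:

import numpy as np
import cirq

o = 4                                       # downsampled image is o x o
pixels = np.random.rand(o, o)               # stand-in for a resized, rescaled fundus image
angles = np.pi * pixels.flatten()           # phi = pi * a, one rotation angle per qubit

qubits = [cirq.GridQubit(i, j) for i in range(o) for j in range(o)]
encoder = cirq.Circuit(cirq.ry(angles[k]).on(qubits[k]) for k in range(o * o))
state = cirq.Simulator().simulate(encoder).final_state_vector
print(encoder)
print(state.shape)                          # 2**(o*o) amplitudes of the encoded state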
Therefore, $o \times o$ qubits are needed to encode the downsampled image of size $o \times o$, and the quantum state $|\varsigma_{\mathrm{img}}\rangle$ is attained by encoding the pixels of the downsampled image. Next, the quantum convolution kernel $\mathrm{qck}(\theta)$, realized by the parameterized quantum circuit, is utilized to apply a unitary transformation to $|\varsigma_{\mathrm{img}}\rangle$, where $\theta = (\theta_1, \theta_2, \theta_3, \theta_4, \theta_5)$ are the training parameters of the convolutional kernel; the final quantum output $|\varsigma_{\mathrm{out}}\rangle$ is attained when the quantum system evolves to this state. $(Z_1, \ldots, Z_N)$ refers to the vector of Pauli-$Z$ operators acting on the various qubits, and $V$ refers to the parameter of the free unitary gate present in the pooling layer. Here, $\mathrm{QCK}(\theta) = \mathrm{qck}_1(\theta) \otimes \mathrm{qck}_2(\theta) \otimes \cdots \otimes \mathrm{qck}_{\mathrm{ncl}}(\theta)$, where ncl refers to the total number of convolution operations applied to the $o \times o$ image, with $\mathrm{ncl} = (o-1)^2$ applications in the pooling layer. When the outcome of the quantum convolutional layer is measured directly, the quantum output has dimension $o^2 - 1$; when the output of the quantum pooling layer is measured, the quantum output has dimension $1 \times (o-1)^2$. The Hadamard gate in quantum computing is used here to perform the data clustering operation; it is a single-qubit rotation gate, so a single qubit enters and a single qubit comes out. In digital-logic terms, it behaves like a combination of an inverter and a logical buffer. The qubit is mapped onto the surface of the Bloch sphere, and the Hadamard gate rotates the sphere so that the state points to the desired position. Hence, it is a necessary gate for manipulating single qubits; in addition, two-qubit entangling gates are also required for a universal quantum computer. The input matrices of the Hadamard gates are given below:
$g(|\varsigma_{\mathrm{img}}\rangle) = \begin{pmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \\ \vdots & \vdots & \vdots \\ g_{n1} & g_{n2} & g_{n3} \end{pmatrix}$ (7)

$h(|\varsigma_{\mathrm{labels}}\rangle) = \begin{pmatrix} h_{11} \\ h_{21} \\ h_{31} \\ \vdots \\ h_{n1} \end{pmatrix}$ (8)
where $g$ and $h$ refer to the input image and label matrices fed to the Hadamard gates. In the framework quantum circuit (FQC) model, a complex transform is broken into Hadamard, CNOT, and NOT gates, which are represented in Equations (9)–(11). A gate acting on $o$ qubits is denoted by a $2^o \times 2^o$ unitary matrix, in which the numbers of input and output qubits are equal.
$\mathrm{NOT\ GATE:}\quad |\varsigma_{\mathrm{img}}\rangle + |\varsigma_{\mathrm{labels}}\rangle \;\xrightarrow{\text{NOT gate}}\; |\varsigma_{\mathrm{img}}\rangle + |\varsigma_{\mathrm{labels}}\rangle$ (9)

$\mathrm{Z\ GATE:}\quad |\varsigma_{\mathrm{img}}\rangle + |\varsigma_{\mathrm{labels}}\rangle \;\xrightarrow{\text{Z gate}}\; |\varsigma_{\mathrm{img}}\rangle - |\varsigma_{\mathrm{labels}}\rangle$ (10)

$\mathrm{HADAMARD\ GATE:}\quad |\varsigma_{\mathrm{img}}\rangle + |\varsigma_{\mathrm{labels}}\rangle \;\xrightarrow{\text{Hadamard gate}}\; |\varsigma_{\mathrm{img}}\rangle \frac{|0\rangle + |1\rangle}{\sqrt{2}} + |\varsigma_{\mathrm{labels}}\rangle \frac{|0\rangle - |1\rangle}{\sqrt{2}}$ (11)
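A small numerical check (illustrative only) of how the NOT, Z, and Hadamard gates of Equations (9)–(11) act on a single-qubit superposition:

import numpy as np

NOT = np.array([[0, 1], [1, 0]])              # bit flip
Z = np.array([[1, 0], [0, -1]])               # phase flip: |1> acquires a minus sign
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # |0> -> (|0>+|1>)/sqrt(2), |1> -> (|0>-|1>)/sqrt(2)

state = np.array([1, 1]) / np.sqrt(2)         # the superposition (|0> + |1>)/sqrt(2)
for name, gate in [("NOT", NOT), ("Z", Z), ("H", H)]:
    print(name, np.round(gate @ state, 3))    # NOT leaves it unchanged, Z flips the relative sign, H returns |0>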
$C$ refers to the nearest-neighbor rotational spin, which is set to 1 to solve the energy-scaling problem. After the Hadamard gates are applied to the qubits, the coupling gates are applied across the $N$ layers of the FQC. The initial sub-layer contains $(X, \mathrm{CCZ})$ couplings, which are applied with the help of coupling gates to every pair of the spinning input gates represented by the corresponding qubits. The rotation angle is constant for each CCZ that uses a coupling gate within the same sub-layer. The second sub-layer is comprised of $N$ layers of $(Y, \mathrm{CNOT})$ gates, each acting on a qubit with a constant rotation angle within the sub-layer. Here, $(X, \mathrm{CCZ})$, $(Y, \mathrm{CNOT})$, $(Y, \mathrm{CCZ})$, and $(Z, \mathrm{CCZ})$ act as pairs of coupling gates applied to each qubit, and the rotation angle is calculated using Equation (12).
$\mathrm{CCZ}_{\theta} = \exp\left(i\, o\, \theta\; \mathrm{CCZ} \otimes \mathrm{CCZ}\right)$ (12)
Quantum circuits are generated, and the IDRiD dataset is given as the input. The quantum states $|\varsigma_{\mathrm{img}}\rangle$ for the training data and the test data are fixed. Quantum layers are used for feature extraction, and the grid qubits are set as (0, 1, 2, 3, 4). The Hadamard gates are fixed and executed with the CNOT function, and the coupling gates $(X, \mathrm{CCZ})$, $(Y, \mathrm{CNOT})$, $(Y, \mathrm{CCZ})$, and $(Z, \mathrm{CCZ})$ are used for creating the layers. The quantum circuit is passed to the modified convolution layer, in which downsampling occurs, and the coupling gate checks the intermediate time calculation at each epoch where the loss function fluctuates. The bottleneck layer helps maintain the loss function in the coupling layer, which keeps the accuracy stable. The residual block is combined with the CNN, which is referred to as the modified CNN; the residual network is capable of forming the identity function, which maps to the activation. Using the trained images and labels $|\varsigma_{\mathrm{img}}\rangle$ and $|\varsigma_{\mathrm{labels}}\rangle$, the quantum model performs the qubit transformations.
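The following Cirq sketch assembles a simplified version of such a circuit over five grid qubits, with a Hadamard (clustering) layer, nearest-neighbour couplings, and trainable rotation angles; the specific (X, CCZ), (Y, CNOT), (Y, CCZ), and (Z, CCZ) coupling pairs of the proposed design are reduced here to plain CNOT couplings, and the number of layers is an assumption.

import sympy
import cirq

qubits = [cirq.GridQubit(0, i) for i in range(5)]
theta = sympy.symbols("theta0:5")                        # trainable rotation angles

circuit = cirq.Circuit()
circuit.append(cirq.H(q) for q in qubits)                # Hadamard layer for data clustering
for _ in range(2):                                       # assumed number of coupling layers
    circuit.append(cirq.CNOT(qubits[i], qubits[i + 1])   # nearest-neighbour coupling gates
                   for i in range(len(qubits) - 1))
    circuit.append(cirq.ry(theta[i]).on(qubits[i])       # parameterised single-qubit rotations
                   for i in range(len(qubits)))
print(circuit)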
Moreover, quantum-based CNNs have the ability to solve complex problems, and data in the quantum environment can be expressed by means of qubits. Performing the CNN in a quantum environment increases the efficiency of the model in terms of image classification. Thus, the proposed model performs the classification of DR grades, and the DR class of each image is finally attained by measuring the quantum state.
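A minimal Keras sketch of the "modified CNN" idea described above, i.e., a residual block containing a bottleneck 1 × 1 convolution whose skip connection preserves the identity mapping; the layer sizes are assumptions for illustration and not the authors' exact architecture.

import tensorflow as tf
from tensorflow.keras import layers

def residual_bottleneck_block(x, filters):
    shortcut = x
    y = layers.Conv2D(filters // 4, 1, padding="same", activation="relu")(x)  # bottleneck layer
    y = layers.Conv2D(filters // 4, 3, padding="same", activation="relu")(y)
    y = layers.Conv2D(filters, 1, padding="same")(y)                          # restore channel count
    y = layers.Add()([shortcut, y])                                           # residual (identity) connection
    return layers.Activation("relu")(y)

inputs = tf.keras.Input(shape=(75, 75, 3))           # input shape from Pseudocode 1, Step 6
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = residual_bottleneck_block(x, 32)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(5, activation="softmax")(x)   # five DR grades (0-4)
model = tf.keras.Model(inputs, outputs)
model.summary()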
Pseudocode 1 for the classification by using the optimized multiple-qubit gate Quantum NN is given below:
Pseudocode 1: Classification using optimized multiple-qubit gate quantum NN.
Step 1: gen QUANTUMCIRCUIT(IMAGES)
Step 2: Input: IDRiD dataset images, IDRiD dataset labels
Step 3: START
Step 4: set QS, |ς_img⟩ trained data, |ς_img⟩ test data
Step 5: set quantum layers for feature extraction
Step 6: set input shape = (75, 75, 3)
Step 7: set qubits = GridQubits(0, 1, 2, 3, 4)
Step 8: set Hadamard gates(); HG = X, CNOT
Step 9: set coupling gates(); (X, CCZ), (Y, CNOT), (Y, CCZ), (Z, CCZ) → CC gates
Step 10: create CNN model
    DOWNSAMPLING (quantum circuit is passed) with bottleneck layers
    cnn* = Combine_c(DOWNSAMPLING, bottleneck layers), where cnn* represents the current-iteration features
    modified cnn = cnn* + residual block
    model.fit with modified cnn
Step 11: run batch size, epochs, and learning rate
Step 12: for each batch size do
    quantum model performs qubit transformation: train(|ς_img⟩, |ς_labels⟩)
    E = Qubits(train(|ς_img⟩, |ς_labels⟩))
Step 13: pass E into the classical FCL for further processing
Step 14: calculate accuracy value
Step 15: validate test images |φ_test⟩
Step 16: Output: each image's class (return the retinopathy grade binary classification)
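A hedged end-to-end sketch mirroring Pseudocode 1 with TensorFlow Quantum is shown below; the toy four-pixel "images", random labels, single CNOT coupling layer, and binary output are all simplifying assumptions made for illustration.

import numpy as np
import sympy
import cirq
import tensorflow as tf
import tensorflow_quantum as tfq

qubits = [cirq.GridQubit(0, i) for i in range(4)]

def encode(pixels):
    # Step 4: map classical pixel values in [0, 1] to a quantum state via R_y(pi * a_i)
    return cirq.Circuit(cirq.ry(np.pi * p).on(q) for p, q in zip(pixels, qubits))

# Steps 7-9: Hadamard layer, CNOT couplings, and trainable rotations
params = sympy.symbols("w0:4")
model_circuit = cirq.Circuit(
    [cirq.H(q) for q in qubits],
    [cirq.CNOT(qubits[i], qubits[i + 1]) for i in range(3)],
    [cirq.ry(params[i]).on(qubits[i]) for i in range(4)],
)
readout = [cirq.Z(q) for q in qubits]

x = np.random.rand(32, 4)                            # toy stand-in for downsampled IDRiD images
y = np.random.randint(0, 2, size=(32,))              # toy binary retinopathy labels
x_q = tfq.convert_to_tensor([encode(row) for row in x])

inputs = tf.keras.Input(shape=(), dtype=tf.string)
expectations = tfq.layers.PQC(model_circuit, readout)(inputs)            # Step 12: qubit transformation E
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(expectations)   # Step 13: classical FCL
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_q, y, epochs=2, batch_size=8)            # Step 11: batch size, epochs, learning rate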
The difference between existing CNN-based quantum computing and the proposed optimized multiple qubit gate QNN is shown in Table 1. From the table, it is observed that quantum-based Deep CNN has multi-qubit gates, which assist in the powerful performance of the proposed system and thereby increase the accuracy rate.

4. Results and Discussion

The results obtained by implementing the proposed model are included in this section, along with the dataset description, exploratory data analysis, experimental results, performance analysis, and comparative analysis. The performance of the proposed system has been exposed by using standard performance metrics [36] such as precision, f1-score, recall, specificity, and accuracy.
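For reference, a brief sketch (with hypothetical labels, not the reported results) of how these metrics can be computed from predictions with scikit-learn:

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 2, 4, 1, 3, 2]   # hypothetical ground-truth DR grades
y_pred = [0, 2, 4, 1, 3, 2]   # hypothetical model predictions
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("f1-score :", f1_score(y_true, y_pred, average="macro"))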

4.1. Dataset Description

The Indian Diabetic Retinopathy Image Dataset (IDRiD) [37] is the first dataset of its kind collected from an Indian population. The dataset comprises typical DR lesions and normal retinal structures annotated at the pixel level, and it provides the DR disease severity for each image. The dataset is well suited to both the development and the evaluation of image analysis algorithms and helps in the early detection of DR. The IDRiD dataset consists of 516 fundus images, of which 413 belong to the training set and 103 to the test set.

4.2. Exploratory Data Analysis (EDA)

In general, EDA assists in performing a primary analysis of the dataset, which is used to recognize patterns, spot anomalies using graphical representations, verify assumptions, learn more about the experimental hypothesis, and explain the summary statistics. The EDA of the IDRiD dataset for the proposed model is included in this section.
Figure 4 shows the count plot of the retinopathy grades, where 'grade 0' denotes no DR, 'grade 1' denotes mild NPDR, 'grade 2' denotes moderate NPDR, 'grade 3' denotes severe NPDR, and 'grade 4' denotes PDR. From the figure, it is observed that grades 0 and 2 have the highest counts in the IDRiD dataset.
In DR grading, the IDRiD dataset has been interpreted using Grad-CAM results, which offer a visual explanation that assists in understanding the decision-making of deep learning algorithms. Grad-CAM results are used for debugging the proposed model and assist in improving its accuracy and precision, as shown in Figure 5, where the features extracted by the neural network are exposed through the Grad-CAM results.

4.3. Experimental Results

The results obtained during the experiment are shown in Figure 6. From the figure, it is observed that the proposed model makes the correct predictions, and the predicted label exactly matches the original image and makes the correct classification.

4.4. Performance Analysis

This section deliberates on the performance evaluation of the proposed model on the IDRiD dataset. Standard performance metrics have been used to estimate the performance of the proposed system.
The confusion matrix is used to analyze the performance of the proposed model. Figure 7 shows the confusion matrix of the DR grading. From Figure 7, it is found that all the predictions made by using the proposed model are true predictions, and there are no false predictions found with the proposed system.
The model loss plot for DR grading is shown in Figure 8, which provides information about the performance of the model on both the validation (labeled test) dataset and the training dataset. For the proposed system, the model loss is found to saturate at a minimum level near 0.
The model accuracy plot for DR grading is shown in Figure 9, which provides information about the accuracy of the model on both the validation (labeled test) dataset and the training dataset. For the proposed system, the model accuracy is found to saturate at a value equal to 1. The performance metrics of the proposed model are shown in Table 2 and Figure 10, and it is observed that the obtained values for accuracy, recall, f1-score, and precision are all 100%, indicating that the proposed model has shown optimal performance.
The proposed method is also explored using the SUSTech-SYSU dataset to show the efficacy of the proposed system. From Table 3 and Figure 11, using the SUSTech-SYSU dataset, the values of precision, recall, accuracy, and f1-score are found to be 97.9%, 97.9%, 98%, and 97.9%, respectively.

4.5. Comparative Analysis

The proposed model has been compared with conventional studies that used the IDRiD dataset; hence, existing studies that used a similar type of dataset have also been considered in the comparative analysis. A comparative analysis has been done to expose the efficacy of the proposed model. Table 4 and Figure 12 show the comparative analysis with the existing study [38]. From Table 4 and Figure 12, it is observed that the existing methods like LzyUNCC, VRT, HarangiM1, HarangiM2, SUNet, Mammoth, AVSASVA, and the existing model have been compared with the proposed study, and the LzyUNCC existing method has attained the maximum accuracy of 74.76%, while the proposed method has attained 100% accuracy.
Table 5 and Figure 13 show the comparative analysis with the existing study [39], which used various approaches like the cross-disease attention network, the auxiliary learning approach, and the XGBoost classifier. The existing technique [39] has been compared with the proposed method.
From the extensive analysis, the proposed method has been compared with the existing study [39]. From Table 5 and Figure 13, it is observed that the existing methods, like the cross-disease attention network, the auxiliary learning approach, and the XGBoost classifier and existing technique [39] have been compared with the proposed study, and the existing method [39] has attained the maximum accuracy of 96.76%, while the proposed method has attained 100% accuracy. Hence, it is evident that the proposed method shows efficient performance compared to other existing methods.
Table 6 and Figure 14 show the comparative analysis of the existing study [37] compared with the proposed method.
From the extensive analysis, the proposed method has been compared with the existing study [37]. From Table 6 and Figure 14, it is observed that the existing method [37] has been compared with the proposed study, and the existing method [37] has attained the maximum accuracy of 97.09%, while the proposed method has attained 100% accuracy. The maximum values of precision, recall, f1-score, and specificity in the existing study are given as 100%, 90.63%, 86.96%, and 18.18%, respectively, whereas the values of precision, recall, specificity, and f1-score for the proposed method are collectively found to be 100%. Hence, it is evident that the proposed method shows efficient performance compared to other existing methods.
The proposed system has been compared with the existing study [40], which used methods like SVM, RF, MLP, and J48 for the classification of DR, as shown in Table 7. The classification of DR has been performed based on the values of the false-positive rate (FPR), specificity, precision, f1-score, and recall. From Table 7, it is evident that the proposed method is more efficient in comparison with the existing study and has attained optimal outcomes in each performance metric.
The proposed deep quantum-based NN has greater potential for the classification of DR and finding the severity of the diseases. Hence, from the performance and comparative analysis, it is revealed that the proposed study has an optimal value of 100% in each performance metric in the classification of diseases (DR). Comparative analysis is done to expose the efficacy of the proposed method, and it is found that using CNN in a quantum environment has improved the performance of the classification of DR and increased the accuracy to 100%.

4.6. Implementation Challenges

The present research faces some implementation challenges due to hardware limitations, as quantum hardware is difficult and expensive to develop. It is complex to operate quantum computers with a sufficient number of qubits and a low enough error rate to execute the quantum algorithms. Hence, QNNs are mostly run on small-scale machines and have limited capacity for handling complex and large datasets. Typically, quantum hardware is inherently noisy, and qubits can be affected by noise and several other factors. Some protocols are being developed for error correction, but implementing such protocols is computationally challenging and expensive. Pre-processing retinal images for conversion into a quantum format is also challenging. Moreover, different quantum encoding techniques lead to different results, and it is complex to find the optimal encoding technique for a given dataset.

5. Conclusions

The present study aimed to classify DR by using a quantum-based deep CNN, which assists diabetic patients in preventing vision loss. In the present study, the deep CNN was performed in the quantum environment to reduce computational time and solve complex problems. Quantum computing has attracted increasing attention due to its enhanced performance in the field of image classification; here, Hadamard gates were used for data clustering and coupling gates were used to create the layers of the quantum circuit. Downsampling was performed in the quantum circuit, and the coupling gates checked the intermediate time calculation at each epoch where the loss function varied. The bottleneck layer in the modified CNN helped the proposed model maintain the loss function; thus, the bottleneck and residual blocks assisted in maintaining accuracy. The comparative analysis outcomes confirmed the efficacy of the proposed work over the existing studies, and the accuracy of the proposed system was found to be 100%. In the future, the multi-qubit gates and qubit processors will be implemented in an enhanced manner in order to improve the realization of quantum-based DL algorithms.

Author Contributions

Conceptualization, S.A., A.A., A.B., M.S., A.G. and S.W.; methodology, S.A., A.A., A.B., M.S., A.G. and S.W.; software, S.A., A.A., A.B., M.S., A.G. and S.W.; validation, S.A., A.A., A.B., M.S., A.G. and S.W.; formal analysis, S.A., A.A., A.B., M.S., A.G. and S.W.; investigation, S.A., A.A., A.B., M.S., A.G. and S.W.; resources, S.A., A.A., A.B., M.S., A.G. and S.W.; data curation, S.A., A.A., A.B., M.S., A.G. and S.W.; writing—original draft preparation, S.A., A.A., A.B., M.S., A.G. and S.W.; writing—review and editing, S.A., A.A., A.B., M.S., A.G. and S.W.; visualization, S.A., A.A., A.B., M.S., A.G. and S.W.; supervision, S.A., A.A., A.B., M.S., A.G. and S.W.; project administration, S.A., A.A. and A.B.; funding acquisition, S.A., A.A., A.B., M.S. and A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project number (IF2/PSAU/2022/01/22904).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Qummar, S.; Khan, F.G.; Shah, S.; Khan, A.; Shamshirband, S.; Rehman, Z.U.; Khan, I.A.; Jadoon, W. A deep learning ensemble approach for diabetic retinopathy detection. IEEE Access 2019, 7, 150530–150539. [Google Scholar] [CrossRef]
  2. Simó-Servat, O.; Hernández, C.; Simó, R. Diabetic retinopathy in the context of patients with diabetes. Ophthalmic Res. 2019, 62, 211–217. [Google Scholar] [CrossRef]
  3. Shanthi, T.; Sabeenian, R. Modified Alexnet architecture for classification of diabetic retinopathy images. Comput. Electr. Eng. 2019, 76, 56–64. [Google Scholar] [CrossRef]
  4. Bellemo, V.; Lim, Z.W.; Lim, G.; Nguyen, Q.D.; Xie, Y.; Yip, M.Y.; Hamzah, H.; Ho, J.; Lee, X.Q.; Hsu, W. Artificial intelligence using deep learning to screen for referable and vision-threatening diabetic retinopathy in Africa: A clinical validation study. Lancet Digit. Health 2019, 1, e35–e44. [Google Scholar] [CrossRef] [PubMed]
  5. Gangwar, A.K.; Ravi, V. (Eds.) Diabetic retinopathy detection using transfer learning and deep learning. In Evolution in Computational Intelligence: Frontiers in Intelligent Computing: Theory and Applications (FICTA 2020); Springer: Berlin/Heidelberg, Germany, 2021; Volume 1. [Google Scholar]
  6. Alyoubi, W.L.; Abulkhair, M.F.; Shalash, W.M. Diabetic retinopathy fundus image classification and lesions localization system using deep learning. Sensors 2021, 21, 3704. [Google Scholar] [CrossRef] [PubMed]
  7. Gayathri, S.; Gopi, V.P.; Palanisamy, P. Diabetic retinopathy classification based on multipath CNN and machine learning classifiers. Phys. Eng. Sci. Med. 2021, 44, 639–653. [Google Scholar] [CrossRef] [PubMed]
  8. Ragab, M.; Aljedaibi, W.H.; Nahhas, A.F.; Alzahrani, I.R. Computer aided diagnosis of diabetic retinopathy grading using spiking neural network. Comput. Electr. Eng. 2022, 101, 108014. [Google Scholar] [CrossRef]
  9. Gao, Z.; Pan, X.; Shao, J.; Jiang, X.; Su, Z.; Jin, K.; Ye, J. Automatic interpretation and clinical evaluation for fundus fluorescein angiography images of diabetic retinopathy patients by deep learning. Br. J. Ophthalmol. 2022. [Google Scholar] [CrossRef]
  10. Diao, D.; Diao, F.; Xiao, B.; Liu, N.; Zheng, D.; Li, F.; Yang, X. Bayes conditional probability-based causation analysis between gestational diabetes mellitus (gdm) and pregnancy-induced hypertension (PIH): A statistic case study in harbin, China. J. Diabetes Res. 2022, 2022, 2590415. [Google Scholar] [CrossRef]
  11. Lv, Z.; Yu, Z.; Xie, S.; Alamri, A. Deep learning-based smart predictive evaluation for interactive multimedia-enabled smart healthcare. ACM Trans. Multimed. Comput. Commun. Appl. TOMM 2022, 18, 1–20. [Google Scholar] [CrossRef]
  12. Mathur, N.; Landman, J.; Li, Y.Y.; Strahm, M.; Kazdaghli, S.; Prakash, A.; Kerenidis, I. Medical image classification via quantum neural networks. arXiv 2021, arXiv:210901831. [Google Scholar]
  13. Mangini, S.; Tacchino, F.; Gerace, D.; Bajoni, D.; Macchiavello, C. Quantum computing models for artificial neural networks. Europhys. Lett. 2021, 134, 10002. [Google Scholar] [CrossRef]
  14. Bader Alazzam, M.; Alassery, F.; Almulihi, A. Identification of diabetic retinopathy through machine learning. Mob. Inf. Syst. 2021, 2021, 1155116. [Google Scholar] [CrossRef]
  15. Alabdulwahhab, K.; Sami, W.; Mehmood, T.; Meo, S.; Alasbali, T.; Alwadani, F. Automated detection of diabetic retinopathy using machine learning classifiers. Eur. Rev. Med. Pharmacol. Sci. 2021, 25, 583–590. [Google Scholar] [PubMed]
  16. Mahmoud, M.H.; Alamery, S.; Fouad, H.; Altinawi, A.; Youssef, A.E. An automatic detection system of diabetic retinopathy using a hybrid inductive machine learning algorithm. Pers. Ubiquitous Comput. 2021, 1–15. [Google Scholar] [CrossRef]
  17. Anton, N.; Dragoi, E.N.; Tarcoveanu, F.; Ciuntu, R.E.; Lisa, C.; Curteanu, S.; Doroftei, B.; Ciuntu, B.M.; Chiseliţă, D.; Bogdănici, C.M. Assessing changes in diabetic retinopathy caused by diabetes mellitus and glaucoma using support vector machines in combination with differential evolution algorithm. Appl. Sci. 2021, 11, 3944. [Google Scholar] [CrossRef]
  18. Usman, T.M.; Saheed, Y.K.; Ignace, D.; Nsang, A. Diabetic retinopathy detection using principal component analysis multi-label feature extraction and classification. Int. J. Cogn. Comput. Eng. 2023, 4, 78–88. [Google Scholar] [CrossRef]
  19. Gadekallu, T.R.; Khare, N.; Bhattacharya, S.; Singh, S.; Maddikunta, P.K.R.; Srivastava, G. Deep neural networks to predict diabetic retinopathy. J. Ambient. Intell. Humaniz. Comput. 2020, 1–14. [Google Scholar] [CrossRef]
  20. Saeed, F.; Hussain, M.; Aboalsamh, H.A. Automatic diabetic retinopathy diagnosis using adaptive fine-tuned convolutional neural network. IEEE Access 2021, 9, 41344–41359. [Google Scholar] [CrossRef]
  21. Samanta, A.; Saha, A.; Satapathy, S.C.; Fernandes, S.L.; Zhang, Y.-D. Automated detection of diabetic retinopathy using convolutional neural networks on a small dataset. Pattern Recognit. Lett. 2020, 135, 293–298. [Google Scholar] [CrossRef]
  22. Saranya, P.; Pranati, R.; Patro, S.S. Detection and classification of red lesions from retinal images for diabetic retinopathy detection using deep learning models. Multimed. Tools Appl. 2023, 1–21. [Google Scholar] [CrossRef]
  23. Beevi, S.Z. Multi-Level severity classification for diabetic retinopathy based on hybrid optimization enabled deep learning. Biomed. Signal Process. Control 2023, 84, 104736. [Google Scholar] [CrossRef]
  24. Zeng, X.; Chen, H.; Luo, Y.; Ye, W. Automated diabetic retinopathy detection based on binocular siamese-like convolutional neural network. IEEE Access 2019, 7, 30744–30753. [Google Scholar] [CrossRef]
  25. Mateen, M.; Wen, J.; Nasrullah, N.; Sun, S.; Hayat, S. Exudate detection for diabetic retinopathy using pretrained convolutional neural networks. Complexity 2020, 2020, 5801870. [Google Scholar] [CrossRef]
  26. Erciyas, A.; Barışçı, N. An effective method for detecting and classifying diabetic retinopathy lesions based on deep learning. Comput. Math. Methods Med. 2021, 2021, 9928899. [Google Scholar] [CrossRef]
  27. Saichua, P.; Surinta, O. Classification of Diabetic Retinopathy Images Using Deep Learning. Master’s Thesis, Mahasarakham University, Maha Sarakham, Thailand, 2022. [Google Scholar]
  28. Bora, A.; Balasubramanian, S.; Babenko, B.; Virmani, S.; Venugopalan, S.; Mitani, A.; de Oliveira Marinho, G.; Cuadros, J.; Ruamviboonsuk, P.; Corrado, G.S.; et al. Predicting the risk of developing diabetic retinopathy using deep learning. Lancet Digit. Health 2021, 3, e10–e19. [Google Scholar] [CrossRef]
  29. Abdelsalam, M.M.; Zahran, M. A novel approach of diabetic retinopathy early detection based on multifractal geometry analysis for OCTA macular images using support vector machine. IEEE Access 2021, 9, 22844–22858. [Google Scholar] [CrossRef]
  30. Gundluru, N.; Rajput, D.S.; Lakshmanna, K.; Kaluri, R.; Shorfuzzaman, M.; Uddin, M.; Rahman Khan, M.A. Enhancement of detection of diabetic retinopathy using Harris hawks optimization with deep learning model. Comput. Intell. Neurosci. 2022, 2022, 8512469. [Google Scholar] [CrossRef]
  31. Gadekallu, T.R.; Khare, N.; Bhattacharya, S.; Singh, S.; Maddikunta, P.K.R.; Ra, I.H.; Alazab, M. Early detection of diabetic retinopathy using PCA-firefly based deep learning model. Electronics 2020, 9, 274. [Google Scholar] [CrossRef]
  32. Wang, X.-N.; Dai, L.; Li, S.-T.; Kong, H.-Y.; Sheng, B.; Wu, Q. Automatic grading system for diabetic retinopathy diagnosis using deep learning artificial intelligence software. Curr. Eye Res. 2020, 45, 1550–1555. [Google Scholar] [CrossRef] [PubMed]
  33. Zhang, W.; Zhong, J.; Yang, S.; Gao, Z.; Hu, J.; Chen, Y.; Yi, Z. Automated identification and grading system of diabetic retinopathy using deep neural networks. Knowl. Based Syst. 2019, 175, 12–25. [Google Scholar] [CrossRef]
  34. Jayanthi, P.; Rai, B.K.; Muralikrishna, I. The potential of quantum computing in healthcare. In Technology Road Mapping for Quantum Computing and Engineering; IGI Global: Hershey, PA, USA, 2022; pp. 81–101. [Google Scholar]
  35. Toledo-Cortés, S.; Useche, D.H.; Müller, H.; González, F.A. Grading diabetic retinopathy and prostate cancer diagnostic images with deep quantum ordinal regression. Comput. Biol. Med. 2022, 145, 105472. [Google Scholar] [CrossRef] [PubMed]
  36. Huang, C.; Li, S.X.; Caraballo, C.; Masoudi, F.A.; Rumsfeld, J.S.; Spertus, J.A.; Normand, S.L.T.; Mortazavi, B.J.; Krumholz, H.M. Performance metrics for the comparative analysis of clinical risk prediction models employing machine learning. Circ. Cardiovasc. Qual. Outcomes 2021, 14, e007526. [Google Scholar] [CrossRef] [PubMed]
  37. Gu, Z.; Li, Y.; Wang, Z.; Kan, J.; Shu, J.; Wang, Q. Classification of Diabetic Retinopathy Severity in Fundus Images Using the Vision Transformer and Residual Attention. Comput. Intell. Neurosci. 2023, 2023, 1305583. [Google Scholar] [CrossRef]
  38. Luo, L.; Xue, D.; Feng, X. Automatic diabetic retinopathy grading via self-knowledge distillation. Electronics 2020, 9, 1337. [Google Scholar] [CrossRef]
  39. Singh, R.K.; Gorantla, R. DMENet: Diabetic macular edema diagnosis using hierarchical ensemble of CNNs. PLoS ONE 2020, 15, e0220677. [Google Scholar] [CrossRef] [PubMed]
  40. Gayathri, S.; Gopi, V.P.; Palanisamy, P. A lightweight CNN for Diabetic Retinopathy classification from fundus images. Biomed. Signal Process. Control 2020, 62, 102115. [Google Scholar]
Figure 1. The overall flow of the proposed model.
Figure 2. Classification using the optimized multiple-qubit gate quantum neural network.
Figure 3. Illustrative diagram of the proposed model.
Figure 4. Count plot of retinopathy grades.
Figure 5. Grad-CAM results of DR.
Figure 6. Experimental results.
Figure 7. Confusion matrix of the proposed system.
Figure 8. Model loss for DR.
Figure 9. Model accuracy for DR.
Figure 10. Performance metrics of the proposed model.
Figure 11. Performance analysis of the proposed method with SUSTech-SYSU and IDRiD datasets.
Figure 12. Comparative analysis [38].
Figure 13. Comparative analysis using performance metrics [39].
Figure 14. Comparative analysis [37].
Table 1. Comparison between the existing CNN based on quantum computing and the proposed optimized multiple-qubit gate QNN method.

Factor | Existing CNN Based on Quantum Computing | Proposed Optimized Multiple-Qubit Gate QNN Method
Data input | Classical pixels | Quantum-encoded image
Quantum gate types | No multiple-qubit gates are used | Multiple-qubit gates are responsible for powerful processing
Interpretability | Straightforward interpretation | In quantum, interpretation is more difficult
Accuracy | Moderate | Higher accuracy is attained by using an optimized QNN design
Practical implementation | Challenging due to hardware limitations | Challenging due to hardware limitations
Quantum circuit design | Not present | Quantum circuit design helps attain optimal classification
Model architecture | Convolutional Neural Network (CNN) | Quantum Neural Network (QNN) with multiple qubits
Existing research | More research is available | Limited
Training algorithm | Classical machine learning algorithm | Quantum algorithm for optimal QNN parameters
Table 2. Performance metrics of the proposed model.

IDRiD Dataset | Accuracy | Precision | Recall | F1-Score
Proposed Model | 100 | 100 | 100 | 100
Table 3. Performance analysis of the proposed method with the SUSTech-SYSU and IDRiD datasets.

Proposed Model | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%)
SUSTech-SYSU dataset | 98 | 97.9 | 97.9 | 97.9
IDRiD dataset | 100 | 100 | 100 | 100
Table 4. Comparative analysis [38].

Methods | Accuracy (%)
LzyUNCC | 74.76
SUNet | 65.06
VRT | 59.22
Mammoth | 55.34
HarangiM1 | 55.34
AVSASVA | 54.37
HarangiM2 | 47.57
Existing model | 67.96
Proposed method | 100
Table 5. Comparative analysis [39].

Technique | Accuracy (%)
Auxiliary learning approach and XGBoost classifier | 94.17
Cross-disease attention network | 65.1
Existing technique [39] | 96.76
Proposed technique | 100
Table 6. Comparative analysis [37].

Class | Precision | Recall | Specificity | F1-Score | Accuracy
0 | 0.7692 | 0.5882 | 0.1818 | 0.6667 | 0.8058
1 | 1 | 0.6 | 0.02 | 0.75 | 0.9806
2 | 0.537 | 0.9063 | 0.0612 | 0.6744 | 0.7282
3 | 1 | 0.5263 | 0.0968 | 0.6897 | 0.9126
4 | 1 | 0.7692 | 0.0323 | 0.8696 | 0.9709
Proposed model (all classes) | 1 | 1 | 1 | 1 | 1
Table 7. Comparative analysis [40].

Model | FPR | Specificity | Precision | Recall | F1-Score | Class
SVM (existing) | 0.065 | 0.935 | 0.878 | 0.97 | 0.922 | Normal
SVM (existing) | 0 | 1 | 0 | – | – | Mild NPDR
SVM (existing) | 0.076 | 0.924 | 0.859 | 0.941 | 0.898 | Moderate NPDR
SVM (existing) | 0.112 | 0.888 | 0.624 | 0.851 | 0.72 | Severe NPDR
SVM (existing) | 0.003 | 0.997 | 0.933 | 0.286 | 0.437 | PDR
Random forest (existing) | 0.065 | 0.935 | 0.874 | 0.933 | 0.903 | Normal
Random forest (existing) | 0.003 | 0.997 | 0.833 | 0.25 | 0.385 | Mild NPDR
Random forest (existing) | 0.058 | 0.942 | 0.894 | 0.993 | 0.941 | Moderate NPDR
Random forest (existing) | 0.056 | 0.944 | 0.763 | 0.824 | 0.792 | Severe NPDR
Random forest (existing) | 0.005 | 0.995 | 0.939 | 0.633 | 0.756 | PDR
MLP (existing) | 0.029 | 0.971 | 0.939 | 0.918 | 0.928 | Normal
MLP (existing) | 0 | 1 | 0 | – | – | Mild NPDR
MLP (existing) | 0.087 | 0.913 | 0.845 | 0.963 | 0.9 | Moderate NPDR
MLP (existing) | 0.029 | 0.971 | 0.872 | 0.919 | 0.895 | Severe NPDR
MLP (existing) | 0.008 | 0.992 | 0.939 | 0.939 | 0.939 | PDR
J48 (existing) | 0.004 | 0.996 | 0.993 | 0.993 | 0.993 | Normal
J48 (existing) | 0 | 1 | 1 | 0.95 | 0.974 | Mild NPDR
J48 (existing) | 0.004 | 0.996 | 0.993 | 0.993 | 0.993 | Moderate NPDR
J48 (existing) | 0.003 | 0.997 | 0.986 | 0.986 | 0.986 | Severe NPDR
J48 (existing) | 0.003 | 0.997 | 0.98 | 1 | 0.99 | PDR
Proposed model | 0 | 1 | 1 | 1 | 1 | Normal
Proposed model | 0 | 1 | 1 | 1 | 1 | Mild NPDR
Proposed model | 0 | 1 | 1 | 1 | 1 | Moderate NPDR
Proposed model | 0 | 1 | 1 | 1 | 1 | Severe NPDR
Proposed model | 0 | 1 | 1 | 1 | 1 | PDR
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
