Article

GAN-Based Approach for Diabetic Retinopathy Retinal Vasculature Segmentation

Computer Science and Engineering Department, Qatar University, Doha P.O. Box 2713, Qatar
* Author to whom correspondence should be addressed.
Bioengineering 2024, 11(1), 4; https://doi.org/10.3390/bioengineering11010004
Submission received: 31 October 2023 / Revised: 5 December 2023 / Accepted: 14 December 2023 / Published: 21 December 2023
(This article belongs to the Special Issue Computer Vision and Machine Learning in Medical Applications)

Abstract

Most patients who have diabetes for a prolonged period develop a condition known as diabetic retinopathy, in which the blood vessels of the retina become damaged; the disease can progress to vision loss. Doctors therefore advise diabetes patients to have their retinas screened regularly. Manual fundus examination is time-consuming, and too few ophthalmologists are available to screen the ever-increasing number of diabetes patients. To address this issue, several computer-aided automated systems are being developed using techniques such as deep learning. Extracting the retinal vasculature is a significant step in building such systems. This paper presents a GAN-based model for retinal vasculature segmentation. The model achieves good results on the ARIA, DRIVE, and HRF datasets.

1. Introduction

Diabetes is a global health concern affecting individuals of all age groups. Diabetic retinopathy (DR) is an eye condition that can develop in persons who have had diabetes for a long period; DR patients develop characteristic lesions on their retinas. Ophthalmologists diagnose DR from fundus photographs captured with fundus cameras. Manually examining these fundus images is time-consuming and error-prone. Moreover, the number of diabetes patients has risen alarmingly in recent years, so the limited number of ophthalmologists available to carry out this procedure has become a barrier to the timely diagnosis of DR [1]. Early detection of DR is of great importance, since it allows timely treatment before the disease progresses to a severe stage and causes undesirable consequences. Several computer-aided automated systems are being developed to address this issue using techniques such as deep learning.
Retinal vasculature segmentation plays a significant role in developing such systems. It can serve as a pre-processing/feature extraction step in DR detection or DR grading systems [2], and it is also useful for detecting and treating the risk of many diseases, such as diabetes mellitus, hypertension, and cardiovascular disease [3]. Owing to this importance, many new methods have been proposed for this task. However, performing it efficiently and accurately involves several challenges. One major problem is the unavailability of sizeable datasets: the state-of-the-art datasets used for retinal vasculature extraction are small, which forces some studies to combine multiple datasets collected under different settings and with different cameras [4]. Another major challenge is the wide disparity in the thickness of retinal blood vessels, so extraction methods must segment both thick and thin vessels efficiently. A further challenge is that retinal features differ from patient to patient. Finally, other structures in the retina, like the Optic Disc, Fovea, and DR lesions, may be wrongly detected as retinal blood vessels. Hence, in this study, we put forward a Pix2Pix GAN model that addresses these challenges and segments retinal blood vessels efficiently on multiple datasets, including the ARIA, DRIVE, and HRF datasets. An example of manually extracted retinal vasculature from an image of the DRIVE dataset is presented in Figure 1.

2. Related Works

Retinal vasculature extraction is the challenging task of effectively extracting retinal blood vessels of varying thicknesses from fundus images. The resulting vasculature should be free of the other structures present in the retina, and the method should perform consistently on fundus images collected under different settings. Over the years, researchers have applied several traditional and artificial-intelligence-based techniques to this problem [6,7,8]; machine learning and deep learning are now the most preferred techniques for this task.

Diabetic Retinopathy Retinal Vasculature Segmentation

Several authors have explored the use of both traditional and machine-learning techniques for retinal vasculature extraction. For instance, the researchers in [9] made use of contrast-limited adaptive histogram equalization for enhancing the retinal images’ contrast, followed by mathematical morphology to reduce noise. The fuzzy c-means method was used for blood vessel extraction, and further refinement was achieved through an integrated level set approach. Fan et al. [10] used the image matting technique for retinal vasculature extraction. They automatically generated a trimap using the region features of the vessels. Later, hierarchical image matting was used for extracting the pixels of blood vessels present in unfamiliar regions. A three-stage algorithm for retinal vasculature extraction was introduced by the authors in [11]. Initially, binary images were extracted by preprocessing the green plane of the input images, and larger vessels were identified from these. After this, a Gaussian mixture model classifier was used for classification, and, in the third stage, the classified output from the previous stage was combined with significant portions of the blood vessels. Hossain and Reza [12] proposed a model for detecting blood vessels using the Markov Random Field method. They found the energy of clique sets using Markov–Gibbs equivalence. Finally, they utilized the Bayesian rule to determine the joint distribution.
Researchers have also effectively utilized deep learning architectures. For instance, the authors in [13] employed a Convolutional Neural Network (CNN) to generate a vessel probability map, which helped distinguish vessels from background pixels in low-contrast regions. A fully connected Conditional Random Field (CRF) was then applied to the vessel probability map to achieve better segmentation accuracy. In a different study, the authors in [14] introduced a segmentation technique using a fully connected CNN with pre- and post-processing; the final steps removed noise and refined the segmentation results.
A two-stage approach for vessel extraction was introduced by the authors in [15]. The first stage utilized a CNN to correlate image patches with the ground truth. In the second, a visual codebook was formed by propagating the training patches through the CNN, allowing feature vectors to query this search space. Additionally, Sine-Net, a deep CNN-based architecture, was used for blood vessel segmentation [16]; it employed up-sampling and down-sampling to capture features of vessels with different thicknesses.
The U-Net++ architecture was utilized for retinal vasculature extraction in a study by the authors in [17], and the extracted features were then used to predict diabetic retinopathy in a subsequent stage. The same task was achieved using an encoder-enhanced atrous U-Net by the researchers in [18], and an enhanced U-Net was employed by the researchers in [19]. Three deep learning models, SegNet, U-Net, and a Convolutional Neural Network, were compared for this task by the researchers in [20]; among these, SegNet was found to be the most effective. A deformable convolutional network was combined with the U-Net architecture by the researchers in [21]. Another context-involved U-Net approach was employed by the researchers in [22], where the extraction of thinner vessels was improved through patch-based loss weight mapping.
Aujih et al. [23] conducted a study using the U-Net model for retinal vasculature extraction. They used dropout and batch normalization with different settings, finding that batch normalization accelerated learning up to the thirtieth epoch. Additionally, the same study used Inception-V1 to understand the impact of retinal vasculature extraction on diabetic retinopathy classification. The U-Net architecture with region merging was used by the researchers in [24] for retinal vasculature extraction.
In another study by the authors in [25], a backpropagation neural network was employed to achieve retinal blood vessel segmentation, resulting in reduced operation time and improved accuracy. Deng and Ye [26] used a new model called D-MNet, having multi-scale attention and a residual mechanism along with a pulse-coupled neural network for achieving the same task.
Retinal blood vessel segmentation using a multi-encoder-decoder architecture with two encoders was performed by the researchers in [27]. Yadav [28] used a dual-tree discrete Ridgelet transform (DT-DRT) to extract features within the Region of Interest in fundus images; a U-Net was subsequently utilized for retinal vasculature extraction. Samuel and Veeramalai [29] achieved the same task using a multilevel deep neural network, with feature extraction performed by VGG-16.
Wu et al. [30] used a new network called NFN+ for retinal vasculature extraction. The NFN+ model is characterized by a cascaded architecture with connections between networks, which facilitates the accurate segmentation of both thick and thin retinal blood vessels. Yan et al. [31] used a three-stage deep learning model that sequentially extracted thick vessels, then thin vessels, and ultimately combined them, successfully extracting vessels of varying thicknesses. In the work by the authors of [32], a multi-scale Convolutional Neural Network with attention mechanisms (MSCNN-AM) was utilized for retinal vasculature extraction, segmenting at various scales; atrous separable convolutions with different dilation rates were employed to better capture global and multi-scale vessel information. In the same context, some authors used a Generative Adversarial Network (GAN) to segment the retinal vasculature. For example, the authors in [33] proposed a conditional Pix2Pix GAN for segmenting retinal vessels, while in [34] the authors proposed a GAN-based model with an adapted U-Net. In [35], the authors proposed a GAN-based model named M-GAN, in which an M-shaped generator stacks two encoder-decoder networks.
Table 1 summarizes some of the methods that were reviewed in this section.

3. Proposed Method

3.1. Data Augmentation

All three datasets were small and were therefore subjected to various data augmentation techniques: Horizontal Flipping, Vertical Flipping, Elastic Transform, Grid Distortion, and Optical Distortion. Horizontal flipping flips the image along the Y-axis, while vertical flipping flips it along the X-axis. Elastic Transform, Grid Distortion, and Optical Distortion were each applied with two different parameter sets. Albumentations, an open-source library, was used to perform the augmentation, as in the sketch below. Among the methods used, Elastic Transform and Grid Distortion are particularly well established for medical images.
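The following is a minimal sketch of such an augmentation pipeline using Albumentations. The parameter values (library defaults here) and the file-reading details are illustrative assumptions, not the exact settings of this study; each transform is applied jointly to the fundus image and its vessel mask so that the ground truth stays aligned.

```python
# Hedged sketch of the augmentation pipeline described above (Albumentations).
# Parameter values are library defaults, not the paper's exact settings; the
# paper applies Elastic Transform, Grid Distortion, and Optical Distortion
# with two parameter sets each.
import albumentations as A
import cv2

AUGMENTATIONS = [
    A.HorizontalFlip(p=1.0),    # flip along the Y-axis
    A.VerticalFlip(p=1.0),      # flip along the X-axis
    A.ElasticTransform(p=1.0),
    A.GridDistortion(p=1.0),
    A.OpticalDistortion(p=1.0),
]

def augment_pair(image_path: str, mask_path: str):
    """Yield one augmented (image, mask) pair per transform, keeping them aligned."""
    image = cv2.imread(image_path)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    for aug in AUGMENTATIONS:
        out = A.Compose([aug])(image=image, mask=mask)
        yield out["image"], out["mask"]
```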

3.2. GAN-Based Retinal Vasculature Segmentation

In this study, a Pix2Pix Generative Adversarial Network (GAN) was employed. The GAN architecture, initially introduced by Ian J. Goodfellow in 2014, comprises two sub-models, the Generator and the Discriminator. These models compete against each other: the Generator generates data samples, and the Discriminator attempts to differentiate between real and generated data. Training continues until the Discriminator is unable to differentiate between the two. Figure 2 shows the Pix2Pix GAN architecture used in this study.
The Generator network receives a fixed-length random noise (latent) vector, from which it produces an image; this latent vector is the foundation of the generative process. The generated image and real images are fed into the Discriminator for discrimination. Through training, a multi-dimensional vector space called the latent space emerges; its points correspond to latent variables that cannot be directly observed but capture high-level concepts of the raw data, and the Generator maps points in this space to new outputs.
The Discriminator functions as a classification model, distinguishing between real samples (from the training data) and generated ones. The Generator and Discriminator losses are monitored during training, with each network seeking to minimize its own loss against the other. As training progresses, the Discriminator becomes better at distinguishing real from fake, and the Generator becomes more proficient at generating realistic data. At convergence, the Generator produces nearly realistic data and the Discriminator outputs 1/2 for all inputs, rendering it dispensable after training.
GANs find applications in various domains, such as generating 3D objects, image processing, traffic monitoring, texture transfer, and more [36]. One crucial application is Image Translation, which involves transforming an input image into an output image.
Different types of GANs exist, including DCGAN, cGAN, CycleGAN, and InfoGAN. DCGAN uses deep convolutional and transposed convolutional networks for upsampling images. cGANs condition the GAN on information such as class labels, which makes them suitable for image-to-image translation. CycleGAN performs similar tasks but can learn mappings between images from unpaired datasets. InfoGAN can learn interpretable and meaningful representations. In this study, a Pix2Pix GAN was used, which is a special case of the cGAN widely used for image-to-image translation.
Given an observed image x and a random noise vector z, a cGAN learns a mapping to an output image y, represented as G : {x, z} → y [37].
The following denotes the loss function of a cGAN [37]:
\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x, z)))]
Here, the generator G tries to minimize this objective, while the adversarial discriminator D tries to maximize it. To assess the importance of conditioning the discriminator, an unconditional variant of the GAN loss is used, as seen below [37]:
\mathcal{L}_{GAN}(G, D) = \mathbb{E}_{y}[\log D(y)] + \mathbb{E}_{x,z}[\log(1 - D(G(x, z)))]
The Generator in the Pix2Pix GAN places residual blocks between the downsampling and upsampling operations, forming a U-Net-style architecture. Additionally, an L1 loss term is introduced for G to minimize blurring, as follows [37]:
\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}[\lVert y - G(x, z) \rVert_{1}]
The Discriminator is a PatchGAN with a 70 × 70 patch size. The final loss function of the Pix2Pix GAN combines the cGAN loss and the L1 loss, with a hyperparameter λ weighting the L1 term [37], as below:
G^{*} = \arg\min_{G} \max_{D} \mathcal{L}_{cGAN}(G, D) + \lambda \mathcal{L}_{L1}(G)
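To make this objective concrete, the following PyTorch sketch implements a 70 × 70 PatchGAN discriminator and the combined generator loss (cGAN term plus weighted L1). The layer configuration, channel counts, and helper names are assumptions based on the original Pix2Pix design [37], not the exact implementation used in this study.

```python
# Sketch of the Pix2Pix objective: PatchGAN discriminator + (cGAN + lambda*L1)
# generator loss. Channel counts follow the common 70x70 PatchGAN design and
# are assumptions, not the paper's exact implementation.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Classifies overlapping 70x70 patches as real/fake rather than whole images."""
    def __init__(self, in_channels: int = 6):  # fundus image (3) + vessel map (3)
        super().__init__()
        def block(c_in, c_out, stride):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 4, stride, 1),
                nn.BatchNorm2d(c_out),
                nn.LeakyReLU(0.2, inplace=True),
            )
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, 2, 1),
            nn.LeakyReLU(0.2, inplace=True),
            block(64, 128, 2),
            block(128, 256, 2),
            block(256, 512, 1),
            nn.Conv2d(512, 1, 4, 1, 1),  # one logit per image patch
        )

    def forward(self, x, y):
        # Condition on the input image x by concatenating it with the vessel map y.
        return self.net(torch.cat([x, y], dim=1))

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
LAMBDA = 10.0  # weight of the L1 term

def generator_loss(D, x, y_real, y_fake):
    pred_fake = D(x, y_fake)
    adv = bce(pred_fake, torch.ones_like(pred_fake))  # try to fool D
    return adv + LAMBDA * l1(y_fake, y_real)          # cGAN loss + lambda * L1

def discriminator_loss(D, x, y_real, y_fake):
    pred_real = D(x, y_real)
    pred_fake = D(x, y_fake.detach())  # do not backprop into G here
    return 0.5 * (bce(pred_real, torch.ones_like(pred_real)) +
                  bce(pred_fake, torch.zeros_like(pred_fake)))
```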

4. Experimental Results

In this section, we present the experimental results of the proposed method on the three datasets. In the experiment using the ARIA dataset, a total of 1287 images (the original images plus their augmented variants) were used. Training a model with a large number of trainable parameters on this volume of data required GPU support, so a laptop featuring an Intel Core i7 processor and an NVIDIA GeForce graphics card was used.
We trained our Pix2Pix GAN using the PyTorch framework in a Python 3.9 environment. The Adam optimizer was used, with the initial learning rate set to 0.0002 and the L1 loss weight (λ) set to 10. We trained the model for 100 epochs.
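A condensed sketch of this setup is given below, reusing the loss helpers from the previous sketch. The Adam beta values and the stand-in generator are assumptions (the actual generator is the U-Net-style network of Section 3.2), and train_loader is assumed to yield (fundus image, vessel ground truth) batches.

```python
# Training-loop sketch matching the stated configuration: Adam, lr = 0.0002,
# lambda = 10, 100 epochs. Beta values and the stand-in generator are
# assumptions; `train_loader` is assumed to be defined elsewhere.
import torch
import torch.nn as nn

G = nn.Sequential(  # stand-in for the actual U-Net-style generator
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)
D = PatchDiscriminator()  # from the sketch in Section 3.2

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

for epoch in range(100):
    for x, y in train_loader:  # (fundus image, vessel ground truth) batches
        y_fake = G(x)
        # Discriminator step (G's output is detached inside the loss helper).
        opt_D.zero_grad()
        discriminator_loss(D, x, y, y_fake).backward()
        opt_D.step()
        # Generator step: adversarial term plus lambda-weighted L1 term.
        opt_G.zero_grad()
        generator_loss(D, x, y, y_fake).backward()
        opt_G.step()
```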

4.1. Datasets

The segmentation evaluation was performed on three datasets: ARIA, DRIVE, and HRF.
The ARIA (Automated Retinal Image Analysis) dataset comprises 143 fundus images annotated for blood vessel segmentation [38]. Each image has dimensions of 768 × 576 pixels, and both left and right eyes are represented. The dataset was gathered between 2004 and 2006 by St. Paul’s Eye Unit, Liverpool, UK, from adult male and female subjects. It consists of three groups: a control group with 61 images, a diabetic retinopathy group with 59 images, and an age-related macular degeneration (AMD) group with 23 images. Two graders, denoted “SS” and “BD”, annotated the images; the labels from grader “BD” were used in this experiment. Eighty percent of the data was used to train the model and the rest to test it.
The DRIVE (Digital Retinal Images for Vessel Extraction) dataset contains fundus images acquired through a DR diagnosis initiative in the Netherlands [39]. It comprises forty images, split equally into training and test sets. The images were captured with a Canon camera at a forty-five-degree field of view and have a resolution of 584 × 565 pixels. In the training set, each image is annotated by a single expert; in the test set, each image carries two annotations by two different graders. To assess the proposed method on this dataset, we used the annotations of the first grader, along with the training and test sets as provided.
The High-Resolution Fundus (HRF) dataset consists of forty-five high-resolution color fundus images of size 3504 × 2366 pixels [40]. The images are divided into three categories (healthy, DR, and glaucomatous), with 15 images each, and all images come with binary gold-standard vessel segmentations. Eighty percent of the data was used to train the model and the rest to test it, as in the split sketch below.
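For reproducibility, the sketch below shows one way to realize the 80/20 split used for ARIA and HRF; the directory layout, file extension, and random seed are illustrative assumptions (DRIVE ships with its own predefined train/test split).

```python
# Hedged sketch of an 80/20 train/test split for ARIA and HRF; paths, file
# extension, and seed are assumptions. DRIVE uses its predefined split.
import random
from pathlib import Path

def split_dataset(image_dir: str, train_frac: float = 0.8, seed: int = 42):
    files = sorted(Path(image_dir).glob("*.png"))
    random.Random(seed).shuffle(files)
    cut = int(len(files) * train_frac)
    return files[:cut], files[cut:]  # (train files, test files)

train_files, test_files = split_dataset("data/HRF/images")  # hypothetical path
```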

4.2. Evaluation Metrics

To evaluate the model’s performance, seven metrics were employed: Accuracy, Sensitivity, Specificity, Dice Coefficient, Jaccard’s Coefficient, Precision, and Matthews Correlation Coefficient (MCC). All of these metrics are commonly used in segmentation tasks. The formulas rely on four terms: TP (“True Positives”) denotes correctly generated retinal blood vessel pixels; TN (“True Negatives”) denotes correctly generated background pixels; FP (“False Positives”) denotes background pixels falsely identified as vessel pixels; and FN (“False Negatives”) denotes vessel pixels falsely identified as background pixels.
The following formulas were used to calculate these metrics in this study; a consolidated implementation sketch follows the list.
Accuracy: Pixel-wise accuracy measures how many pixels the model classifies correctly.
Accuracy = ( TP + TN ) / ( TP + FP + FN + TN )
Sensitivity: This metric measures the proportion of actual retinal blood vessel pixels that are correctly generated as vessel pixels.
Sensitivity = TP / ( TP + FN )
Dice Coefficient (Sørensen index/F-measure): An important overlap index widely used in image segmentation.
Dice = 2 TP / ( 2 TP + FP + FN )
Specificity: This metric measures the proportion of actual background pixels that are correctly generated as background pixels.
Specificity = TN / ( TN + FP )
Jaccard’s Coefficient: This metric indicates the overlap similarity between the predicted and ground-truth segmentations; it can be derived directly from the Dice coefficient.
Jaccard = Dice / ( 2 − Dice )
Precision: This metric denotes the proportion of pixels generated as retinal blood vessels that are actually vessel pixels.
Precision = TP / ( TP + FP )
MCC: MCC measures the correlation between the predicted and the actual pixel classifications.
MCC = ( TP × TN − FP × FN ) / sqrt( ( TP + FP ) ( TP + FN ) ( TN + FP ) ( TN + FN ) )
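The sketch below consolidates these formulas into a single function computing all seven metrics from binary prediction and ground-truth masks; it assumes 0/1 NumPy arrays and non-degenerate masks (no division by zero).

```python
# Consolidated metric computation from binary masks, following the formulas
# above. Assumes 0/1 NumPy arrays and non-degenerate masks.
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)     # vessel pixels correctly predicted
    tn = np.sum(~pred & ~gt)   # background pixels correctly predicted
    fp = np.sum(pred & ~gt)    # background predicted as vessel
    fn = np.sum(~pred & gt)    # vessel predicted as background
    dice = 2 * tp / (2 * tp + fp + fn)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "dice": dice,
        "jaccard": dice / (2 - dice),
        "precision": tp / (tp + fp),
        "mcc": (tp * tn - fp * fn) / np.sqrt(
            float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))),
    }
```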

4.3. Evaluation and Discussion

The values attained for the metrics presented in the preceding sub-section are shown in Table 2, and some visual results are presented in Figure 3, Figure 4 and Figure 5. Table 2 compares the results attained by our model with previous techniques that utilized the ARIA, DRIVE, and HRF datasets for retinal vasculature segmentation.
Table 2 reveals that the GAN achieved values above 0.942 on all three datasets for all seven metrics calculated in this study; values closer to one indicate better results. These metrics therefore demonstrate the GAN model’s strong performance on all three datasets for retinal vasculature extraction. Furthermore, visual comparison of the results with the ground truth is also convincing. The highest values for all metrics except Sensitivity were obtained on the HRF dataset; the highest Sensitivity was obtained on the DRIVE dataset.
Table 2 shows that our model performed better than the other methods used in the comparison. Specifically, we compared our results on the ARIA dataset with three methods that utilized the same dataset: Azzopardi’s method [44], Kar’s method [41], and Prajna’s method [43]. Vostatek et al. [42] evaluated Azzopardi’s traditional method [44] on the ARIA dataset. Kar et al. [41] employed a Deep Neural Network (DNN) for retinal vasculature segmentation, and Prajna and Nath [43] tackled the same task by combining a multi-scale residual CNN with a GAN. Compared with these studies, the GAN model yielded superior results on the Accuracy, Sensitivity, Dice, Jaccard, and Precision metrics on this dataset, and it obtained the second-highest value for Specificity.
The results obtained on the DRIVE and HRF datasets were compared with three studies by the researchers in [20,22,41]. As mentioned earlier, the researchers in [41] employed a DNN for retinal vasculature segmentation. Elaouaber et al. [20] used three deep learning models, SegNet, U-Net, and a CNN, obtaining their best results with SegNet, whereas the researchers in [22] used a context-involved U-Net approach. Table 2 shows that on the DRIVE dataset we achieved the highest values for Accuracy, Sensitivity, Dice, MCC, and Precision compared with the other three studies. On the HRF dataset, we obtained the highest values for Accuracy, Dice, and Precision, and the second-best values for Sensitivity and Specificity.

5. Conclusions and Future Work

The GAN model achieved an accuracy of 0.983, a sensitivity of 0.973, and a specificity of 0.992 for retinal vasculature extraction on the HRF dataset, and the results achieved on the DRIVE and ARIA datasets were also strong. Notably, these favorable outcomes were attained despite the small size of the datasets. In future work, we will perform diabetic retinopathy lesion segmentation using similar deep learning methods.

Author Contributions

Conceptualization, A.S., O.E. and S.A.-M.; data curation, A.S.; formal analysis, A.S.; methodology, A.S., O.E. and S.A.-M.; project administration, S.A.-M. and N.A.; supervision, S.A.-M. and N.A.; validation, A.S., O.E., S.A.-M. and N.A.; visualization, A.S. and O.E.; writing—original draft, A.S.; writing—review and editing, A.S., O.E., S.A.-M. and N.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was made possible by research grant support (IRCC-2023-223) from Qatar University Research Fund in Qatar.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets used in this paper are available online: the DRIVE dataset at https://drive.grand-challenge.org/ (accessed on 30 March 2023), HRF at https://figshare.com/articles/dataset/A_robust_technique_based_on_VLM_and_Frangi_filter_for_retinal_vessel_extraction_and_denoising/5879803 (accessed on 30 March 2023), and ARIA [38].

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Luo, X.; Pu, Z.; Xu, Y.; Wong, W.K.; Su, J.; Dou, X.; Ye, B.; Hu, J.; Mou, L. MVDRNet: Multi-view diabetic retinopathy detection by combining DCNNs and attention mechanisms. Pattern Recognit. 2021, 120, 108104. [Google Scholar] [CrossRef]
  2. Sebastian, A.; Elharrouss, O.; Al-Maadeed, S.; Almaadeed, N. A Survey on Deep-Learning-Based Diabetic Retinopathy Classification. Diagnostics 2023, 13, 345. [Google Scholar] [CrossRef] [PubMed]
  3. Thangaraj, S.; Periyasamy, V.; Balaji, R. Retinal vessel segmentation using neural network. IET Image Process. 2018, 12, 669–678. [Google Scholar] [CrossRef]
  4. Sebastian, A.; Elharrouss, O.; Al-Maadeed, S.; Almaadeed, N. A Survey on Diabetic Retinopathy Lesion Detection and Segmentation. Appl. Sci. 2023, 13, 5111. [Google Scholar] [CrossRef]
  5. Drive Dataset. Available online: https://drive.grand-challenge.org/ (accessed on 30 March 2023).
  6. Al-Mohannadi, A.; Al-Maadeed, S.; Elharrouss, O.; Sadasivuni, K.K. Encoder-decoder architecture for ultrasound IMC segmentation and cIMT measurement. Sensors 2021, 21, 6839. [Google Scholar] [CrossRef] [PubMed]
  7. Riahi, A.; Elharrouss, O.; Al-Maadeed, S. BEMD-3DCNN-based method for COVID-19 detection. Comput. Biol. Med. 2022, 142, 105188. [Google Scholar] [CrossRef] [PubMed]
  8. Elasri, M.; Elharrouss, O.; Al-Maadeed, S.; Tairi, H. Image generation: A review. Neural Process. Lett. 2022, 54, 4609–4646. [Google Scholar] [CrossRef]
  9. Memari, N.; Ramli, A.R.; Saripan, M.I.B.; Mashohor, S.; Moghbel, M. Retinal blood vessel segmentation by using matched filtering and fuzzy c-means clustering with integrated level set method for diabetic retinopathy assessment. J. Med. Biol. Eng. 2019, 39, 713–731. [Google Scholar] [CrossRef]
  10. Fan, Z.; Lu, J.; Wei, C.; Huang, H.; Cai, X.; Chen, X. A hierarchical image matting model for blood vessel segmentation in fundus images. IEEE Trans. Image Process. 2018, 28, 2367–2377. [Google Scholar] [CrossRef]
  11. Roychowdhury, S.; Koozekanani, D.D.; Parhi, K.K. Blood vessel segmentation of fundus images by major vessel extraction and subimage classification. IEEE J. Biomed. Health Inform. 2014, 19, 1118–1128. [Google Scholar]
  12. Hossain, N.I.; Reza, S. Blood vessel detection from fundus image using Markov random field based image segmentation. In Proceedings of the 2017 4th International Conference on Advances in Electrical Engineering (ICAEE), Dhaka, Bangladesh, 28–30 September 2017; pp. 123–127. [Google Scholar]
  13. Fu, H.; Xu, Y.; Wong, D.W.K.; Liu, J. Retinal vessel segmentation via deep learning network and fully-connected conditional random fields. In Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 698–701. [Google Scholar]
  14. Soomro, T.A.; Afifi, A.J.; Gao, J.; Hellwich, O.; Khan, M.A.; Paul, M.; Zheng, L. Boosting sensitivity of a retinal vessel segmentation algorithm with convolutional neural network. In Proceedings of the 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Sydney, Australia, 29 November–1 December 2017; pp. 1–8. [Google Scholar]
  15. Chudzik, P.; Al-Diri, B.; Caliva, F.; Hunter, A. DISCERN: Generative framework for vessel segmentation using convolutional neural network and visual codebook. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 5934–5937. [Google Scholar]
  16. Atli, I.; Gedik, O.S. Sine-Net: A fully convolutional deep learning architecture for retinal blood vessel segmentation. Eng. Sci. Technol. Int. J. 2021, 24, 271–283. [Google Scholar] [CrossRef]
  17. Gargari, M.S.; Seyedi, M.H.; Alilou, M. Segmentation of Retinal Blood Vessels Using U-Net++ Architecture and Disease Prediction. Electronics 2022, 11, 3516. [Google Scholar] [CrossRef]
  18. Sathananthavathi, V.; Indumathi, G. Encoder enhanced atrous (EEA) unet architecture for retinal blood vessel segmentation. Cogn. Syst. Res. 2021, 67, 84–95. [Google Scholar]
  19. Li, Q.; Fan, S.; Chen, C. An intelligent segmentation and diagnosis method for diabetic retinopathy based on improved U-NET network. J. Med. Syst. 2019, 43, 1–9. [Google Scholar] [CrossRef] [PubMed]
  20. Elaouaber, Z.; Feroui, A.; Lazouni, M.; Messadi, M. Blood vessel segmentation using deep learning architectures for aid diagnosis of diabetic retinopathy. In Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization; Taylor & Francis: Abingdon, UK, 2022; pp. 1–15. [Google Scholar]
  21. Jin, Q.; Meng, Z.; Pham, T.D.; Chen, Q.; Wei, L.; Su, R. DUNet: A deformable network for retinal vessel segmentation. Knowl.-Based Syst. 2019, 178, 149–162. [Google Scholar] [CrossRef]
  22. Zhang, Y.; He, M.; Chen, Z.; Hu, K.; Li, X.; Gao, X. Bridge-Net: Context-involved U-net with patch-based loss weight mapping for retinal blood vessel segmentation. Expert Syst. Appl. 2022, 195, 116526. [Google Scholar] [CrossRef]
  23. Aujih, A.; Izhar, L.; Mériaudeau, F.; Shapiai, M.I. Analysis of retinal vessel segmentation with deep learning and its effect on diabetic retinopathy classification. In Proceedings of the 2018 International Conference on Intelligent and Advanced System (ICIAS), Kuala Lumpur, Malaysia, 13–14 August 2018; pp. 1–6. [Google Scholar]
  24. Burewar, S.; Gonde, A.B.; Vipparthi, S.K. Diabetic retinopathy detection by retinal segmentation with region merging using CNN. In Proceedings of the 2018 IEEE 13th International Conference on Industrial and Information Systems (ICIIS), Rupnagar, India, 1–2 December 2018; pp. 136–142. [Google Scholar]
  25. Liu, Z. Construction and verification of color fundus image retinal vessels segmentation algorithm under BP neural network. J. Supercomput. 2021, 77, 7171–7183. [Google Scholar] [CrossRef]
  26. Deng, X.; Ye, J. A retinal blood vessel segmentation based on improved D-MNet and pulse-coupled neural network. Biomed. Signal Process. Control 2022, 73, 103467. [Google Scholar] [CrossRef]
  27. Chala, M.; Nsiri, B.; El yousfi Alaoui, M.H.; Soulaymani, A.; Mokhtari, A.; Benaji, B. An automatic retinal vessel segmentation approach based on Convolutional Neural Networks. Expert Syst. Appl. 2021, 184, 115459. [Google Scholar] [CrossRef]
  28. Yadav, N. A deep data-driven approach for enhanced segmentation of blood vessel for diabetic retinopathy. Int. J. Imaging Syst. Technol. 2022, 32, 1696–1708. [Google Scholar] [CrossRef]
  29. Samuel, P.M.; Veeramalai, T. Multilevel and multiscale deep neural network for retinal blood vessel segmentation. Symmetry 2019, 11, 946. [Google Scholar] [CrossRef]
  30. Wu, Y.; Xia, Y.; Song, Y.; Zhang, Y.; Cai, W. NFN+: A novel network followed network for retinal vessel segmentation. Neural Netw. 2020, 126, 153–162. [Google Scholar] [CrossRef] [PubMed]
  31. Yan, Z.; Yang, X.; Cheng, K.T. A three-stage deep learning model for accurate retinal vessel segmentation. IEEE J. Biomed. Health Inform. 2018, 23, 1427–1436. [Google Scholar] [CrossRef] [PubMed]
  32. Fu, Q.; Li, S.; Wang, X. MSCNN-AM: A multi-scale convolutional neural network with attention mechanisms for retinal vessel segmentation. IEEE Access 2020, 8, 163926–163936. [Google Scholar] [CrossRef]
  33. Popescu, D.; Deaconu, M.; Ichim, L.; Stamatescu, G. Retinal blood vessel segmentation using pix2pix gan. In Proceedings of the 2021 29th Mediterranean Conference on Control and Automation (MED), Puglia, Italy, 22–25 June 2021; pp. 1173–1178. [Google Scholar]
  34. Yue, C.; Ye, M.; Wang, P.; Huang, D.; Lu, X. SRV-GAN: A generative adversarial network for segmenting retinal vessels. Math. Biosci. Eng. 2022, 19, 9948–9965. [Google Scholar] [CrossRef]
  35. Park, K.B.; Choi, S.H.; Lee, J.Y. M-GAN: Retinal blood vessel segmentation by balancing losses through stacked deep fully convolutional networks. IEEE Access 2020, 8, 146308–146322. [Google Scholar] [CrossRef]
  36. Aggarwal, A.; Mittal, M.; Battineni, G. Generative adversarial network: An overview of theory and applications. Int. J. Inf. Manag. Data Insights 2021, 1, 100004. [Google Scholar] [CrossRef]
  37. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134. [Google Scholar]
  38. Farnell, D.J.; Hatfield, F.N.; Knox, P.; Reakes, M.; Spencer, S.; Parry, D.; Harding, S.P. Enhancement of blood vessels in digital fundus photographs via the application of multiscale line operators. J. Frankl. Inst. 2008, 345, 748–765. [Google Scholar] [CrossRef]
  39. Staal, J.; Abràmoff, M.D.; Niemeijer, M.; Viergever, M.A.; Van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509. [Google Scholar] [CrossRef]
  40. Budai, A.; Bock, R.; Maier, A.; Hornegger, J.; Michelson, G. Robust vessel segmentation in fundus images. Int. J. Biomed. Imaging 2013, 2013, 154860. [Google Scholar] [CrossRef]
  41. Kar, M.K.; Neog, D.R.; Nath, M.K. Retinal vessel segmentation using multi-scale residual convolutional neural network (MSR-Net) combined with generative adversarial networks. Circuits Syst. Signal Process. 2023, 42, 1206–1235. [Google Scholar] [CrossRef]
  42. Vostatek, P.; Claridge, E.; Uusitalo, H.; Hauta-Kasari, M.; Fält, P.; Lensu, L. Performance comparison of publicly available retinal blood vessel segmentation methods. Comput. Med. Imaging Graph. 2017, 55, 2–12. [Google Scholar] [CrossRef] [PubMed]
  43. Prajna, Y.; Nath, M.K. Efficient blood vessel segmentation from color fundus image using deep neural network. J. Intell. Fuzzy Syst. 2022, 42, 1–13. [Google Scholar] [CrossRef]
  44. Azzopardi, G.; Strisciuglio, N.; Vento, M.; Petkov, N. Trainable COSFIRE filters for vessel delineation with application to retinal images. Med. Image Anal. 2015, 19, 46–57. [Google Scholar] [CrossRef]
Figure 1. Retinal vasculature segmentation sample [5].
Figure 2. Flowchart of the GAN model.
Figure 3. Results of Pix2Pix GAN on the ARIA dataset (first row: fundus images; second row: groundtruth; and third row: results).
Figure 4. Results of Pix2Pix GAN on the HRF dataset (first row: fundus images; second row: groundtruth; and third row: results).
Figure 5. Results of Pix2Pix GAN on the DRIVE dataset (first row: fundus images; second row: groundtruth; and third row: results).
Table 1. Previous methods for diabetic retinopathy retinal vasculature segmentation.
Study | Method | Dataset(s) | Year
Gargari et al. [17] | U-Net++ | DRIVE, MESSIDOR | 2022
Roychowdhury et al. [11] | Gaussian mixture model classifier | DRIVE, CHASEDB1, STARE | 2014
Fan et al. [10] | Image matting | DRIVE, CHASEDB1, STARE | 2018
Memari et al. [9] | Fuzzy c-means clustering | DRIVE, CHASEDB1, STARE | 2019
Zhang et al. [22] | U-Net | DRIVE, CHASE-DB1, STARE, HRF | 2022
Atli and Gedik [16] | Sine-Net | DRIVE, CHASEDB1, STARE | 2021
Sathananthavathi et al. [18] | U-Net | CHASE DB1, DRIVE, STARE, HRF | 2021
Deng and Ye [26] | D-MNet | CHASE DB1, DRIVE, STARE, HRF | 2022
Elaouaber et al. [20] | Multiple DL models | DRIVE, CHASE-DB1, HRF | 2022
Table 2. Performance comparison of results with previous methods on the ARIA, DRIVE, and HRF datasets. The bold font represents first place.
Dataset | Method | Accuracy | Sensitivity | Specificity | Dice | Jaccard | MCC | Precision
DRIVE | Kar et al. [41] | 0.974 | 0.894 | 0.988 | - | - | - | 0.875
DRIVE | Elaouaber et al. [20] | 0.977 | 0.967 | 0.996 | 0.957 | - | - | -
DRIVE | Zhang et al. [22] | 0.957 | 0.785 | 0.982 | 0.82 | - | 0.798 | 0.864
DRIVE | Popescu et al. [33] | 0.921 | 0.834 | 0.960 | - | - | - | 0.948
DRIVE | Yue et al. [34] | 0.970 | 0.833 | 0.985 | - | - | - | -
DRIVE | Park et al. [35] | 0.970 | 0.834 | 0.983 | - | - | 0.826 | 0.834
DRIVE | Proposed | 0.978 | 0.975 | 0.981 | 0.978 | 0.956 | 0.956 | 0.98
HRF | Kar et al. [41] | 0.977 | 0.889 | 0.985 | - | - | - | 0.8
HRF | Elaouaber et al. [20] | 0.98 | 0.98 | 0.995 | 0.969 | - | - | -
HRF | Zhang et al. [22] | 0.96 | 0.85 | 0.971 | 0.82 | - | - | -
HRF | Park et al. [35] | 0.967 | - | - | - | - | 0.784 | -
HRF | Proposed | 0.983 | 0.973 | 0.992 | 0.982 | 0.965 | 0.966 | 0.992
ARIA | Kar et al. [41] | 0.963 | 0.718 | 0.984 | - | - | - | 0.795
ARIA | Vostatek et al. [42] | 0.94 | - | - | - | - | - | -
ARIA | Prajna and Nath [43] | 0.925 | 0.566 | 0.961 | 0.649 | 0.48 | - | -
ARIA | Proposed | 0.971 | 0.974 | 0.969 | 0.97 | 0.942 | 0.943 | 0.966