Article

Segmentation of Retinal Blood Vessels Using U-Net++ Architecture and Disease Prediction

by Manizheh Safarkhani Gargari 1, Mir Hojjat Seyedi 2,* and Mehdi Alilou 3
1 Computer Science Department, Islamic Azad University, Urmia Branch, Urmia 5716963896, Iran
2 Department of Biomedical Engineering, Islamic Azad University, Urmia Branch, Urmia 5716963896, Iran
3 Computer Science Department, Islamic Azad University, Khoy Branch, Khoy 5815988838, Iran
* Author to whom correspondence should be addressed.
Electronics 2022, 11(21), 3516; https://doi.org/10.3390/electronics11213516
Submission received: 10 October 2022 / Revised: 23 October 2022 / Accepted: 25 October 2022 / Published: 29 October 2022

Abstract: This study presents a segmentation method for retinal blood vessels and a method for diagnosing disease in individuals based on retinal images. Blood vessel segmentation in retinal images is very challenging in medical analysis and diagnosis, and it is an essential tool for a wide range of medical diagnoses. After segmentation and binary-image improvement operations, the resulting binary images are processed and the features of the blood vessels are used as feature vectors to categorize retinal images and diagnose the disease present. To carry out the segmentation task and disease diagnosis, we used a deep learning approach involving a convolutional neural network (CNN) and the U-Net++ architecture. A multi-stage method is used in this study to better diagnose the disease using retinal images. Our proposed method includes improving the color image of the retina, applying the Gabor filter to produce images derived from the green channel, segmenting the green channel by feeding the Gabor-filter images to U-Net++, extracting HOG and LBP features from the binary images, and finally diagnosing the disease using a one-dimensional convolutional neural network. The DRIVE and MESSIDOR image banks were used to segment the images, determine the areas related to blood vessels in the retinal image, and evaluate the proposed method for retinal disease diagnosis. The achieved results for accuracy, sensitivity, specificity, IOU, and F1-score are 98.9, 94.1, 98.8, 85.26, and 98.14, respectively, on the DRIVE dataset, and the obtained results for accuracy, sensitivity, and specificity are 98.6, 99, and 98, respectively, on the MESSIDOR dataset. Hence, the presented system outperforms the manual approach applied by skilled ophthalmologists.

1. Introduction

Medical imaging is an important tool in today’s health care community because it is used for visual documentation, registration, and storage of patient records, as well as for accessing information about many diseases. Medical imaging has transformed health care processes, allowing specialists to detect and diagnose disease at an early and treatable stage and allowing patients to recover with effective and intensive care [1]. An accurate diagnosis using medical imaging relies on proper usage of the image and successful image analysis. Rather than relying on image interpretation to determine the health of organ tissues such as the retina, lungs, brain, and breasts, the medical community can now collect and analyze detailed information about these tissues thanks to advances in imaging hardware and software and the development of machine learning and deep learning techniques. Medical image processing, research, and modeling methods are increasingly being applied in all medical science sectors, especially in examining eye and retinal images. The retina is the only tissue whose blood vessels can be imaged non-invasively, using fundus cameras, owing to its unique composition when viewed through the lens of the camera [2].
Retinal imaging shows changes in the shape of retinal blood vessels in diseases such as diabetic retinopathy, vascular occlusion, cardiovascular disease, hypertension, and stroke. These conditions usually alter the reflectivity, curvature, and branching patterns of the retinal blood vessels. High blood pressure changes the branching angle and curvature of the arteries, and diabetic retinopathy can lead to the formation of new vessels that, if left untreated, can damage the eye and cause blindness. If such changes are detected early, preventive measures can be taken; vision loss is therefore dramatically preventable. Automatic segmentation of blood vessels from retinal images is a powerful tool for medical analysis, so the employed segmentation method must be exact and reliable [3]. Separating a target object from a background is called segmentation. Several methods for segmenting the retinal image have been reported in previous work. Machine-learning-based methods for segmenting retinal vessels fall into two categories: supervised and unsupervised approaches. Supervised methods are based on previously labeled data and examine whether a pixel belongs to the vessel class. Unsupervised methods do not use pre-tagged information and can learn and organize their information to find patterns or clusters that resemble blood vessels [4]. Segmenting blood vessels manually is difficult and time-consuming, and accurate segmentation can be challenging if the neural network’s complexity is very high, so automatic segmentation is valuable because it reduces the time and effort required. In particular, retinal blood vessel segmentation algorithms are central to the automatic diagnosis of diabetic retinopathy, which has become a major cause of blindness in recent years. Vision loss associated with diabetic retinopathy can be detected in the early stages of the disease, so many authors have proposed vessel segmentation methods based on different techniques [5].
To improve on previous works, this research uses a multi-step method to better diagnose disease based on retinal images. A deep U-Net++ neural network is used to segment the retinal color image and locate the vessels. Several Gabor filters are applied to the green channel to improve the neural network’s input, and the generated images are fed into the U-Net++ network. During training, using the training data and the determined structure, appropriate network parameters are obtained, and the neural network learns to perform image segmentation and separate the areas related to blood vessels. HOG and LBP feature vectors extracted from the resulting binary image are fed into a deep convolutional neural network, which extracts disease-specific properties and automatically detects disease in the retinal image. The rest of this study is organized as follows. Section 2 presents the related work. The proposed method is introduced in Section 3. Empirical results and a comparison with a set of previously presented algorithms are shown in Section 4. Section 5 discusses the results and directions for developing this work. Section 6 presents the conclusion and future work.

2. Related Work

Retinal blood vessel morphology has been linked to cardiometabolic risk and other diseases, according to research. Understanding the nature of this morphological relationship, however, requires a huge amount of data. Various retinal image analysis (RIA) systems have been developed; most offer limited analysis and automation because of the large number of retinal signs they must handle. Kadry et al. [6] presented a multi-scale matched filter (MSMF) using the slime mould optimization algorithm to extract blood vessels from digital fundus images. They then compared the extracted vessels with the ground-truth images, calculated the performance values for the images, and obtained an accuracy of 97.15% on DRIVE and 97.16% on the CHASE_DB1 dataset.
According to Rajinikanth et al. [7], the eye is a sensory organ, and any disease of the eye strongly affects the sensed signal and the brain’s ability to draw conclusions. Choroidal neovascularization (CNV) is among the severe eye diseases, in which a new blood vessel grows from the choroid. The main cause of CNV is wet age-related macular degeneration, and the newly formed vessel leaks fluid that wets the retina; untreated CNV can cause blindness. In their work, a machine learning scheme (MLS) was implemented to detect CNV in OCT images with improved accuracy.
In their study [8] titled “Prediction of Cardiovascular Risk Factors from Retinal Fundus Photographs via Deep Learning”, Poplin et al. state that medical discoveries are traditionally made through a series of observations, followed by experiments to test hypotheses. Due to the wide range of characteristics, values, patterns, colors, and forms of the actual data, viewing and measuring a set of image observations can be difficult. New information can be extracted from retinal photographs using methods such as deep learning, a machine learning approach that learns its own predictive features.
Retinal fundus images can be preprocessed to detect diabetic retinopathy and extract characteristics, according to Sisodia et al. [9]. A review of medical reports shows that more than 10% of diabetes cases are at risk of blindness. Diabetic retinopathy is a vision disease affecting more than 70% of those who have had diagnosed diabetes for more than ten years. Deep retinal imaging is usually used to diagnose and analyze diabetic retinopathy in hospitals, but processing raw retinal images with machine learning algorithms is very challenging. In their research, deep retinal images are preprocessed using green channel extraction, histogram scaling, image augmentation, and resizing techniques. Quantitative analysis of the pre-processed photos yields fourteen additional attributes. The Kaggle Diabetic Retinopathy dataset is used for these tests, and the mean and standard deviation of the derived features are used to evaluate the results.
In 2021, Elaziz et al. [10] presented an image segmentation method based on superpixels and automatic clustering using the q-generalized Pareto distribution under linear normalization, called ASCQPHGS. In their method, a superpixel algorithm segments the given image, then density peaks clustering is applied to the superpixel results to produce a decision graph, and the Hunger Games Search algorithm is used as the clustering method for image segmentation.
According to Asia et al. [11], diabetes is a common disease worldwide that causes diabetic retinopathy (DR), macular edema, and other microvascular complications in the human retina. In their research, fundus image classification technology was used to automatically detect DR, classify fundus images by DR severity, and provide real-time classification of fundus images according to patient status. The automatic diagnosis system has a high degree of automation and precision, thus reducing the pressure of diagnosing and treating DR. They used various image pre-processing methods to extract several important features and then classify them.
Maqsood et al. [12] proposed a new macular detection system based on contrast enhancement, top hat transformation, and modified Kirsch pattern method. Their method was applied to 1349 images from STARE, DRIVE, MESSIDOR, and DIARETDB1 databases and the average sensitivity, specificity, and accuracy were 97.79%, 97.65%, and 97.60%, respectively. Experimental results show that the proposed method performs better.
Zhou et al. [13] proposed a volumetric memory network (VMN) to solve volumetric medical image segmentation as a memory-based reasoning problem. The basis of this model is an external memory component that allows the model to store historical target information in segmented slices in memory and later retrieve useful representations from memory as a guide for input slice segmentation.

3. The Proposed Method

The primary goal of this research is to develop a retinal-imaging-based approach for identifying illnesses in humans. The general idea is to segment retinal images to identify the areas related to existing blood vessels. After image enhancement and segmentation, the binary image is obtained and processed, and the features of the blood vessels are used as feature vectors to categorize retinal images and diagnose the disease present. We propose a novel method for segmentation of retinal blood vessels using the U-Net++ architecture and disease prediction from retinal fundus images, with the following contributions.
  • Improving the Color Image of the Retina;
  • Applying the Gabor Filter and Generating Derived Images from the Green Channel;
  • Green Channel Segmentation by Receiving Images Generated from the Gabor Filter Using U-Net++;
  • Extracting HOG and LBP Features from the Binary Image;
  • Final Diagnosis of the Disease with a One-Dimensional Convolutional Neural Network.
The limitations of existing methods can be summarized as follows:
  • Unsuccessful identification of thin retinal vessels;
  • High computational complexity, requiring greater processing time to identify diseases.
All of these strategies will be reviewed in detail in this section, as well as their respective components.

3.1. Improving the Color Image of the Retina

To standardize the images and create appropriate contrast and brightness conditions, among other things, histogram normalization is used to improve the quality of the retinal images. This method improves image contrast by distributing pixel intensity values evenly. After histogram normalization, a Gaussian filter is applied to remove noise and smooth the image. This filter removes some types of noise, and by attenuating high frequencies the image loses some of its weaker edges, preparing it for the next step, the application of Gabor filters [14]. Figure 1 shows the retinal green channel, along with the contrast-enhanced image and the image smoothed with the Gaussian filter.
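As a brief illustration, this pre-processing chain can be sketched in Python with OpenCV (the paper’s implementation uses MATLAB; the kernel size and sigma below are illustrative assumptions, not values from the paper):

```python
import cv2

def preprocess_retina(path):
    """Sketch of the pre-processing step: green channel extraction,
    histogram normalization, and Gaussian smoothing."""
    bgr = cv2.imread(path)                # OpenCV loads images as BGR
    green = bgr[:, :, 1]                  # green channel: best vessel contrast
    equalized = cv2.equalizeHist(green)   # spread intensity values evenly
    # Smooth to suppress noise and weak edges before Gabor filtering;
    # the 5x5 kernel and sigma are assumed values.
    return cv2.GaussianBlur(equalized, (5, 5), 1.0)
```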

3.2. Applying the Gabor Filter and Generating Derived Images from the Green Channel

Only the green channel of the retinal image is used for feature extraction, segmentation, and extraction of the blood vessels, which is sufficient for further processing. The Gabor filter is used for feature extraction, with 4 filters provided. After filtering, 4 grey images, containing information derived from the 4 angles of 90, 45, 0, and −45 degrees, are extracted and used in the next steps. These four grey images serve as the deep U-Net++ network’s input for analyzing the blood vessels in the green channel of the retinal image. Gabor filters are inspired by the visual cortex at the back of the brain, which can detect various objects in an image. The mechanism the brain uses to detect objects and environmental properties in its neuronal layers resembles Gabor filtering, which is reason enough to consider this method effective and to investigate further the successful use of these filters in various machine vision applications. According to the findings, this technique can extract various texture features in many applications, including our study, where the primary goal is to extract features from the retinal image in order to diagnose the type of disease. Figure 2 shows the four Gabor filters used in this research, each with a fixed size and a different angle. After the four Gabor filters are applied to the grey image shown in Figure 1, as shown in Figure 3, each Gabor filter extracts unique features of the grey image and its texture, which can lead to better results when combined with the other Gabor filters [15].
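A minimal sketch of such a four-orientation Gabor bank, again in Python/OpenCV (the kernel size, sigma, wavelength, and aspect ratio are assumed values, since the paper specifies only the four angles):

```python
import cv2
import numpy as np

# Four Gabor kernels at the orientations used in the paper;
# all other parameters are illustrative assumptions.
ANGLES_DEG = [90, 45, 0, -45]
KERNELS = [
    cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=np.deg2rad(a),
                       lambd=10.0, gamma=0.5, psi=0)
    for a in ANGLES_DEG
]

def gabor_responses(green_channel):
    """Return the four grey response images that are fed to U-Net++."""
    return [cv2.filter2D(green_channel, cv2.CV_32F, k) for k in KERNELS]
```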

3.3. Green Channel Segmentation by Receiving Images Generated from the Gabor Filter Using U-Net++

A variety of retinal image segmentation methods have been presented; here, the U-Net++ deep neural network structure was studied. As a first step, we need to teach the neural network how to perform this task. The relevant data are extracted in the form of grey images obtained by the Gabor filters, and the neural network is trained using these images together with the corresponding segmented (ground-truth) images [16].
Medical images, such as retina scans, can be segmented and blood vessels extracted using the U-Net++ network. The workflow for retinal image segmentation, blood vessel extraction, and disease diagnosis is shown in Figure 4. The neural network is trained using error backpropagation to encode the input image into the features extracted by the convolution filters and then perform the inverse convolution operation. A binary image of the same size as the input image is created, in which blood vessels are characterized by non-zero values. Figure 5 shows a general schematic of the U-Net++ neural network structure.

3.3.1. Nested U-Net (U-Net++)

Nested U-Net, commonly known as U-Net++, is an extension of the U-Net architecture introduced by Zhou et al. (2018) aimed at improving segmentation accuracy over U-Net [17]. U-Net++ maintains the encoder–decoder architecture of U-Net, and its authors argue that incrementally enriching higher-resolution feature maps before combining them with the decoder outputs helps the network capture high-resolution details, owing to the higher semantic similarity between the connected features. To achieve more accurate segmentation in medical images, U-Net++ introduces a segmentation architecture based on dense and nested skip connections. The key point of this architecture is that when the high-resolution feature maps from the encoder network are gradually enriched before merging with the corresponding semantically rich feature maps from the decoder network, the model can more effectively capture the fine details of foreground objects. When the feature maps from the decoder and encoder networks are semantically similar, the network faces an easier learning task. In U-Net, high-resolution feature maps are transferred directly from the encoder to the decoder network, so semantically different feature maps are merged. U-Net++ aims to improve segmentation accuracy by including dense blocks and convolution layers between the encoder and decoder [17]. Figure 5 shows a high-level overview of the U-Net++ architecture. U-Net++ makes 3 additions to the original U-Net:
  • Re-designed skip pathways (shown in green),
  • Dense skip connections (shown in blue), and
  • Deep supervision (shown in red).

Re-Designed Skip Pathways

In U-Net++, the redesigned skip pathways (shown in green) bridge the semantic gap between the encoder and decoder sub-paths. The purpose of these convolution layers is to reduce the semantic gap between the feature maps of the encoder and decoder subnetworks, which likely presents the optimizer with a more straightforward optimization problem. The skip connections used in U-Net directly connect the feature maps between encoder and decoder, which fuses semantically dissimilar feature maps. In U-Net++, by contrast, the output from the previous convolution layer of the same dense block is fused with the corresponding up-sampled output of the lower dense block. This brings the semantic level of the encoded features closer to that of the feature maps waiting in the decoder; optimization is thus easier when semantically similar feature maps are received. All convolutional layers on the skip pathways use 3 × 3 kernels.
The skip pathway is formulated as follows. Let $x^{i,j}$ denote the output of node $X^{i,j}$, where $i$ indexes the down-sampling layer along the encoder and $j$ indexes the convolution layer of the dense block along the skip pathway. The feature map $x^{i,j}$ is computed as

$$x^{i,j} = \begin{cases} \mathcal{H}\left(x^{i-1,j}\right), & j = 0 \\ \mathcal{H}\!\left(\left[\left[x^{i,k}\right]_{k=0}^{j-1},\, \mathcal{U}\!\left(x^{i+1,j-1}\right)\right]\right), & j > 0 \end{cases}$$

where $\mathcal{H}(\cdot)$ is a convolution operation followed by an activation function, $\mathcal{U}(\cdot)$ denotes an up-sampling layer, and $[\,\cdot\,]$ denotes concatenation [18].
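In code, the $j > 0$ case amounts to concatenating all earlier outputs at the same level with the up-sampled output from the level below; a minimal PyTorch sketch (the channel sizes and bilinear up-sampling are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    """H(.): a 3x3 convolution followed by an activation."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return F.relu(self.conv(x))

def unetpp_node(H, same_level_outputs, below_output):
    """Compute x^{i,j} for j > 0: concatenate the previous outputs
    x^{i,0..j-1} of this level with U(x^{i+1,j-1}), then apply H."""
    up = F.interpolate(below_output, scale_factor=2,
                       mode="bilinear", align_corners=False)  # U(.)
    return H(torch.cat(same_level_outputs + [up], dim=1))     # H([.])
```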

Dense Skip Connections

In U-Net++, dense skip connections (shown in blue) implement the dense skip pathways between the encoder and decoder. These dense blocks are inspired by DenseNet, with the purpose of improving segmentation accuracy and gradient flow. Dense skip connections ensure that all prior feature maps are accumulated and arrive at the current node through the dense convolution block along each skip pathway. This generates full-resolution feature maps at multiple semantic levels [18].

Deep Supervision

In U-Net++, deep supervision (shown in red) is added so that the model can be pruned to adjust its complexity and balance speed against performance. In the accurate mode, the outputs from all segmentation branches are averaged. In the fast mode, the final segmentation map is selected from one of the segmentation branches.
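The two inference modes can be expressed compactly; a sketch assuming the branch outputs have already been computed as tensors:

```python
import torch

def fuse_branches(branch_outputs, mode="accurate", branch=-1):
    """Deep-supervision inference: average all nested segmentation
    branches ('accurate'), or keep a single pruned branch ('fast')."""
    if mode == "accurate":
        return torch.stack(branch_outputs).mean(dim=0)
    return branch_outputs[branch]  # pruned model: one branch only
```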

3.4. Diagnosis of Disease with a Deep Convolutional Neural Network Based on Vascular Binary Image

After segmenting the image and creating a binary image containing the vessels, the whole image could be fed to a convolutional network to determine whether a condition is present or absent. In this work, however, an initial feature vector composed of the image’s HOG and LBP properties is provided to a one-dimensional convolution network, which extracts further features from the feature vector to achieve higher accuracy. The convolutional neural network is trained by receiving samples labeled sick or healthy; in addition to extracting more features from the feature vector, it then correctly categorizes the image and detects the presence of disease.
There are many methods for extracting texture properties, but local binary patterns, in their original and improved forms, are favored by many experts in this field due to their ease of implementation and their extraction of appropriate features with high classification accuracy. In this study, we extract the binary patterns in the image with a neighborhood radius of one and use the normalized histogram of the resulting values, over the range [0–255], as the texture feature.
Additionally, the HOG method is among the most widely used feature vector extraction methods and has been used successfully in various applications such as pedestrian identification and the detection of various objects in images. Its first step is feature extraction, which involves calculating the gradients along the horizontal and vertical axes and then computing the magnitude and direction of those gradients.
After feature extraction with the HOG and LBP methods, convolutional neural networks are used to classify the resulting feature vectors. Convolution filters, which are one-dimensional in the proposed structure, extract better and more complex features from the input feature vectors, and the fully connected layers of this deep neural network then categorize them.

3.4.1. Local Binary Patterns (LBP)

Feature extraction is a process in which an operation performed on the input image determines its salient and defining features. There are a variety of ways to achieve this; we used the LBP and HOG methods in our work. In computer vision, local binary patterns are employed as visual descriptors for classification [19,20]. LBP is a specific instance of the Texture Spectrum model introduced in 1990, and was first described in 1994 [21,22]. Since then, LBP has proven to be a potent feature for texture classification; it has also been found that pairing LBP with the Histogram of Oriented Gradients (HOG) descriptor increases detection performance significantly on particular datasets [23,24]. Silva et al. (2015) conducted a background subtraction evaluation of different enhancements to the original LBP [25], and Bouwmans et al. [26] provide a comprehensive overview of the various LBP variations. LBP computes the local texture feature of an image and is commonly used to solve texture classification problems: it encodes each pixel’s eight-pixel neighborhood as a binary code and summarizes all codes into a histogram, which serves as the texture feature.
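With scikit-image this amounts to a few lines; a sketch assuming the default 8-neighbour, radius-1 configuration described above:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(vessel_image):
    """LBP codes with 8 neighbours at radius 1, summarized as a
    normalized 256-bin histogram over [0, 255]."""
    codes = local_binary_pattern(vessel_image, P=8, R=1, method="default")
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)  # normalize to unit sum
```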

3.4.2. Histogram of Oriented Gradients (HOG) Descriptor

Computer vision and image processing use the histogram of oriented gradients (HOG) as a feature descriptor for object detection. The method counts occurrences of gradient orientations in localized portions of an image. Using overlapping local contrast normalization and a dense grid of evenly spaced cells, it is related to edge orientation histograms, scale-invariant feature transform descriptors, and shape contexts. The HOG descriptor algorithm proceeds as follows [27] (a short code sketch follows the list):
  • A histogram of gradient directions or edge orientations is produced for each cell in the image, and this histogram is applied to the image as a whole.
  • The gradient orientation is used to discretize each cell into angular bins.
  • Pixels in each cell contribute to an angular bin’s weighted gradient.
  • ‘Blocks,’ or spatial regions, refer to a collection of adjacent cells. Histograms can be grouped and normalized by putting cells into blocks.
  • The block histogram is represented by a normalized group of histograms. In this set of block histograms, the descriptor is represented.
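A sketch of these steps using scikit-image’s hog function (the bin count, cell, and block sizes are common defaults, not values stated in the paper):

```python
from skimage.feature import hog

def hog_descriptor(vessel_image):
    """HOG: per-cell gradient histograms with 9 angular bins,
    grouped into 2x2-cell blocks and block-normalized."""
    return hog(vessel_image,
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               block_norm="L2-Hys")
```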

3.5. Convolutional Neural Networks

CNNs are a subset of deep learning neural networks. A CNN is a hierarchical neural network containing convolutional and subsampling layers that resemble the simple and complex cells in the visual cortex; CNNs differ from other networks in the implementation of these convolutional and subsampling layers and in how they are trained [28,29]. Convolutional neural networks (CNNs) are commonly employed in image analysis tasks such as object detection, recognition, and segmentation. A CNN contains three kinds of layers:
  • Convolutional layers. In a conventional neural network, each input neuron is fully connected to the next hidden layer; in a CNN, neurons in the input layer are connected only to a local region of the hidden layer.
  • Pooling layers, which reduce the dimensionality of the feature maps. The CNN’s hidden portion typically alternates several activation and pooling stages.
  • Fully connected layers, which make up the network’s final few layers. The output from the final pooling or convolutional layer is flattened and fed into the fully connected layers.
Figure 6 shows the general schematic of the convolution network in the various classification problems.
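For the specific network used here, a one-dimensional CNN over the concatenated HOG+LBP vector, a minimal PyTorch sketch follows; the layer counts, channel widths, and two-class head are illustrative assumptions, since the paper does not list them:

```python
import torch
import torch.nn as nn

class VesselFeatureCNN(nn.Module):
    """Sketch: 1-D convolutions extract features from the HOG+LBP
    vector; fully connected layers perform the classification.
    Assumes feature_len is divisible by 4 (two pooling stages)."""
    def __init__(self, feature_len, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (feature_len // 4), 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, feature_len)
        return self.classifier(self.features(x))
```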

4. Empirical Results

Here, we use the data described in the preceding sections to assess the efficacy of our proposed solution. In this study, color images were used to identify and segment the areas related to blood vessels in the retina: the contrast of the images was first improved, and the images were then segmented by the U-Net++ network. Through training with the gradient-descent optimization algorithm to reduce network error, suitable convolution layers form in the first layers of the network and perform feature vector extraction automatically. The following layers perform reverse convolution, which reconstructs an image from the feature values and finally generates the labels and the image segmentation as the neural network’s output.

4.1. Database for U-Net++ Network Training

The image bank used in this study for image segmentation and determination of the areas related to blood vessels is the DRIVE retinal image database [29]. The images in this database were used to train the U-Net++ network. The database contains 40 images of 565 by 584 pixels, of which 20 images are considered the training set, and different neighborhoods were extracted from each image. To create a test set, 20,000 neighborhoods were extracted from each of the remaining 20 images, and the proposed method was evaluated on them. Neighborhoods of 256 by 256 pixels are extracted from each image and used to train and evaluate the neural network and to perform segmentation and classification of the areas related to blood vessels [30].
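The paper does not state how the neighborhoods are sampled; a sketch assuming uniform random crops of matching image and ground-truth patches:

```python
import numpy as np

def sample_patches(image, label, n=20000, size=256, seed=0):
    """Sample n random size x size neighborhoods from one DRIVE
    image together with the matching ground-truth patches."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    pairs = []
    for _ in range(n):
        y = rng.integers(0, h - size + 1)
        x = rng.integers(0, w - size + 1)
        pairs.append((image[y:y + size, x:x + size],
                      label[y:y + size, x:x + size]))
    return pairs
```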

4.2. Database for the Diagnosis of Retinal Disease

To evaluate the proposed method for diagnosing retinal disease, the MESSIDOR image bank [31] was used, which includes 1200 color retinal images of different people. This image bank has two types of labels. The first label indicates the retinopathy grade, with four classes. The second label is the risk of macular edema, with three classes indicating the degree of risk of the disease.

4.3. Quantitative Analysis Approaches

To analyze the experimental results quantitatively, several performance metrics were considered, including precision, recall, accuracy, sensitivity, specificity, and F1-score. The performance metrics were calculated using Equations (2)–(7):
Precision = TP/(TP + FP),
Recall = TP/(TP + FN),
Accuracy = (TP + TN)/(TP + TN + FP + FN),
F1-Score = 2 × (Precision × Recall)/(Precision + Recall),
Sensitivity = TP/(TP + FN),
Specificity = TN/(TN + FP),
where True Positive (TP) is the number of correctly classified normal images, True Negative (TN) is the number of correctly classified pathological images, False Positive (FP) is the number of normal images incorrectly classified as pathological, and False Negative (FN) is the number of pathological images incorrectly classified as normal [32].
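These definitions translate directly into code; a small helper for reference:

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute the metrics of Equations (2)-(7) from the four
    confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)           # identical to sensitivity
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "f1_score": f1,
            "sensitivity": recall, "specificity": specificity}
```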

5. Results and Discussion

Implementing the proposed method and obtaining the desired results from the segmentation of the retinal images using the U-Net++ network were carried out in the following steps:
  • Collecting neighborhood information from the training set’s images and the mask images intended for training the U-Net++ network,
  • Defining the U-Net++ network structure and training it with the help of MATLAB deep learning toolbox functions, and
  • Gathering neighborhood information from the evaluation-set images and assessing the trained network with the deep learning toolbox functions.
From each image of the training set, which consists of 20 color images and 20 masks, 20,000 neighborhoods of 256 by 256 pixels were extracted, yielding a collection of 400,000 neighborhoods for U-Net++ training. The color images were used as the U-Net++ input and the masks as the corresponding output, so that the neural network finally learns, on receiving an input color image, to perform a series of convolution and reverse convolution operations and produce a binary image at its output, segmenting the desired image. The U-Net++ neural network learns the features of the retinal image, such as the shape and structure of the blood vessels and retinal tissue, and the features appropriate for segmenting the images.

Retinal images were improved by pre-processing before being used to build the deep neural network model for segmenting the image and determining the areas related to the blood vessels, and before image classification for detecting the presence or absence of disease, to enhance the efficiency of the proposed method. Smoothing the retinal green channel histogram is the first step of pre-processing, which improves image contrast and detail. The image is then smoothed with a Gaussian filter applied to the green channel, which weakens noise-like details and extraneous edges, leaving only the main, useful edges. In the third part of the pre-processing, Gabor filters are applied to the green channel, and their result is used as the input of the U-Net++ network.

The U-Net++ network was trained with the training images and label images from the relevant database. To evaluate the segmentation accuracy on the test images, the corresponding label images were used as a reference. Finally, we used the training and test images to train the U-Net++ network in MATLAB. For the tests, training, and evaluation of the neural network, a computer equipped with an Nvidia GeForce GTX 960M graphics processor with 4 GB of memory and 16 GB of RAM was used. The images used to build the U-Net++ network are large, and the number of parameters is large enough to fill the GPU memory, so training on the GPU could be problematic. Therefore, instead of using whole images for neural network training and image segmentation, 256 × 256 neighborhoods were used in this study. Neural network training ran for 200 iterations with a learning rate of 0.001 using stochastic gradient descent with momentum.

Detecting blood vessels when few images are available for deep neural network training is very challenging. Nevertheless, despite the small number of training images, the U-Net++ network performed well and the obtained results are acceptable. For a better evaluation, we review the proposed method on different evaluation images. Figure 7 shows the result of image segmentation: part (a) shows the original image, part (b) its mask, part (c) its ground truth, and part (d) the evaluation retinal image segmented by the U-Net++ network. We ran the evaluation process on the images and present the identification accuracy in Table 1.
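A sketch of this training configuration in PyTorch, where `model` (a U-Net++ implementation) and `loader` (batches of Gabor-response stacks and vessel masks) are assumed objects, and the momentum value and loss choice are assumptions (the paper states only SGD with momentum, 200 iterations, and a 0.001 learning rate):

```python
import torch

# Assumed objects: 'model' (a U-Net++ network) and 'loader'
# (batches of 256x256 Gabor-response stacks and vessel masks).
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
loss_fn = torch.nn.BCEWithLogitsLoss()  # binary vessel/background target

for epoch in range(200):                # 200 training iterations
    for gabor_stack, mask in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(gabor_stack), mask)
        loss.backward()
        optimizer.step()
```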
After the segmentation of retinal images and production of binary images by determining blood vessels, features of the structure of blood vessels can be extracted and used to diagnose the disease. According to the method proposed in the previous section, retinal images are first segmented by the U-Net++ network, and after determining the blood vessels in the binary image, HOG and LBP features are extracted and used to diagnose the disease in the retinal image. The feature vector is extracted by the two methods and the final feature vector is provided to the convolutional neural network. The convolution network facilitates the diagnosis of retinal disease by extracting more and better features.
As explained, the MESSIDOR image bank used for diagnosing retinal disease has 1200 images, 70% of which were used for convolutional network training and the rest for evaluation. After training the neural network with the Adam optimization method and a learning rate of 0.001, the results are presented in Table 2. Three types of labels were used in the experiments: the first relates to retinopathy, the second to macular disease, and the third to the presence or absence of retinopathy or macular disease. The retinopathy label takes 4 values, from 0 to 3, indicating the severity or absence of the disease in the retinal image: a value of 0 indicates no disease, and values of 1 to 3 indicate disease severity. Classification by disease severity and ophthalmoscopy findings gives three categories: mild NPDR (mild non-proliferative stage), severe NPDR (severe non-proliferative stage), and the proliferative stage (PDR) [37]. The macular disease label takes values from 0 to 2, giving three grades of macular disease; in this study, values greater than or equal to 1 were taken to indicate the presence of macular disease and a value of 0 its absence. As the third problem, the combination of the two previous labels was posed as a two-class problem of the presence of retinopathy or macular disease versus the absence of disease; the results are presented in Table 2.
In this network, convolution layers first extract features, and connected layers at the end of the network perform the final categorization. The input of the convolutional neural network is the feature vector extracted using HOG and LBP from the binary image generated by the U-Net++ network. Table 3 presents the results obtained with the mentioned methods, indicating the superiority of the proposed method on the MESSIDOR dataset. Figure 8 compares the existing methods for disease presence on the MESSIDOR database.
In the convolution network, the first layers are convolution filters. During training, they learn to extract effective features from the input, here a feature vector from the HOG and LBP methods. The connected layers of the convolutional network perform the classification, so during training, like other machine learning algorithms, they learn to identify and classify the present disease. The input of the U-Net++ network, on the other hand, is the set of matrices obtained from the Gabor filters, which are used to segment the desired image and extract the vessel areas. In summary, the reasons for the superiority of the proposed method over previous works, and its innovations, can be summarized as follows:
(1) Several matrices obtained from various Gabor filters are used as input to the U-Net++ network.
(2) Feature extraction methods such as HOG and LBP are applied to binary images and used as input to the convolution network, which has not been seen in previous research.
(3) A one-dimensional convolution network performs the final classification of the feature vectors and the final diagnosis of retinal disease.

6. Conclusions

Disease diagnosis has now made its way from medicine into computer science. Artificial intelligence can be very useful in medical diagnosis and may even surpass humans as diagnosis and decision-making processes improve. Deep learning can achieve greater accuracy than traditional methods such as two-layer neural networks or even support vector machines. This motivated the use of a deep convolutional neural network in conjunction with the U-Net++ network, together with the LBP and HOG feature extraction methods, to diagnose disease in retinal images more accurately than previous methods. This study presented a method for segmenting the blood vessel areas in retinal color images using the U-Net++ neural network and diagnosing disease using a deep convolutional network. Retinal images from the DRIVE image bank were used to train and construct the deep neural networks. This database contains 40 images of 565 by 584 pixels, of which 20 images form the training set; to create a test set, 20,000 neighborhoods were extracted from each of the remaining 20 images and used to evaluate the proposed method. The U-Net++ neural network learns the characteristics of the retinal image, such as the shape and structure of blood vessels and retinal tissue, as well as the features appropriate for image segmentation. The MESSIDOR image bank was used, with the help of a convolutional network, to perform segmentation, feature extraction, and finally detection. MESSIDOR has 1200 images, 70% of which were used for convolutional network training and the rest for evaluation. After training the neural network with the Adam optimization method and a learning rate of 0.001, the generated binary images were used to extract features with two widely used methods, LBP and HOG, resulting in high-dimensional feature vectors. These feature vectors were processed by the one-dimensional convolutional network, and a final diagnosis of the retinal disease was made. The achieved results for accuracy, sensitivity, specificity, IOU, and F1-score were 98.9, 94.1, 98.8, 85.26, and 98.14, respectively, on the DRIVE dataset, and the obtained results for accuracy, sensitivity, and specificity were 98.6, 99, and 98, respectively, on the MESSIDOR dataset. The evaluation results of the proposed method for classifying and diagnosing disease in retinal color images show that our method is more accurate than previous methods proposed on the MESSIDOR database; the use of Gabor filters in the segmentation stage, together with the LBP and HOG feature extraction methods, accounts for this. The limitations of the method are occasional failure to detect thin retinal vessels and the high processing time required for disease detection.
In future research, we plan to use deep learning techniques to segment retinal images and diagnose Alzheimer’s disease from them, and to improve the execution time.

Author Contributions

Conceptualization and methodology, M.S.G., M.H.S. and M.A.; formal analysis, M.S.G., M.H.S. and M.A.; software, validation, and writing—original draft, M.S.G., M.H.S. and M.A.; writing—review and editing and data curation, M.S.G., M.H.S. and M.A.; supervision, M.H.S. and M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The DRIVE dataset is publicly available at https://drive.grand-challenge.org/ (accessed on 28 August 2022). The MESSIDOR dataset is publicly available at https://www.adcis.net/en/third-party/messidor (accessed on 28 August 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. You, X.; Peng, Q.; Yuan, Y.; Cheung, Y.-M.; Lei, J. Segmentation of retinal blood vessels using the radial projection and semi-supervised approach. Pattern Recognit. 2011, 44, 2314–2324. [Google Scholar] [CrossRef]
  2. Quellec, G.; Charrière, K.; Boudi, Y.; Cochener, B.; Lamard, M. Deep image mining for diabetic retinopathy screening. Med. Image Anal. 2017, 39, 178–193. [Google Scholar] [CrossRef] [Green Version]
  3. David, S.A.; Mahesh, C.; Kumar, V.D.; Polat, K.; Alhudhaif, A.; Nour, M. Retinal Blood Vessels and Optic Disc Segmentation Using U-Net. Math. Probl. Eng. 2022, 2022, 8030954. [Google Scholar] [CrossRef]
  4. Fraz, M.; Welikala, R.; Rudnicka, A.; Owen, C.; Strachan, D.; Barman, S. QUARTZ: Quantitative Analysis of Retinal Vessel Topology and size—An automated system for quantification of retinal vessels morphology. Expert Syst. Appl. 2015, 42, 7221–7234. [Google Scholar] [CrossRef] [Green Version]
  5. Xu, X.; Wang, Y.; Liang, Y.; Luo, S.; Wang, J.; Jiang, W.; Lai, X. Retinal Vessel Automatic Segmentation Using SegNet. Comput. Math. Methods Med. 2022, 2022, 3117455. [Google Scholar] [CrossRef] [PubMed]
  6. Kadry, S.; Rajinikanth, V.; Damasevicius, R.; Taniar, D. Retinal Vessel Segmentation with Slime-Mould-Optimization based Multi-Scale-Matched-Filter. In Proceedings of the 2021 Seventh International Conference on Bio Signals, Images, and Instrumentation (ICBSII), Chennai, India, 25–27 March 2021; pp. 1–5. [Google Scholar] [CrossRef]
  7. Rajinikanth, V.; Kadry, S.; Damasevicius, R.; Taniar, D.; Rauf, H.T. Machine-Learning-Scheme to Detect Choroidal-Neovascularization in Retinal OCT Image. In Proceedings of the 2021 Seventh International Conference on Bio Signals, Images, and Instrumentation (ICBSII), Chennai, India, 25–27 March 2021; pp. 1–5. [Google Scholar] [CrossRef]
  8. Poplin, R.; Varadarajan, A.V.; Blumer, K.; Liu, Y.; McConnell, M.V.; Corrado, G.S.; Peng, L.; Webster, D.R. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat. Biomed. Eng. 2018, 2, 158–164. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Sisodia, D.S.; Nair, S.; Khobragade, P. Diabetic Retinal Fundus Images: Preprocessing and Feature Extraction For Early Detection of Diabetic Retinopathy. Biomed. Pharmacol. J. 2017, 10, 615–626. [Google Scholar] [CrossRef]
  10. Elaziz, M.A.; Zaid, E.O.A.; Al-Qaness, M.A.A.; Ibrahim, R.A. Automatic Superpixel-Based Clustering for Color Image Segmentation Using q-Generalized Pareto Distribution under Linear Normalization and Hunger Games Search. Mathematics 2021, 9, 2383. [Google Scholar] [CrossRef]
  11. Asia, A.-O.; Zhu, C.-Z.; Althubiti, S.A.; Al-Alimi, D.; Xiao, Y.-L.; Ouyang, P.-B.; Al-Qaness, M.A.A. Detection of Diabetic Retinopathy in Retinal Fundus Images Using CNN Classification Models. Electronics 2022, 11, 2740. [Google Scholar] [CrossRef]
  12. Maqsood, S.; Damaševičius, R.; Shah, F.M.; Maskeliūnas, R. Detection of Macula and Recognition of Aged-Related Macular Degeneration in Retinal Fundus Images. Comput. Inform. 2021, 40, 957–987. [Google Scholar] [CrossRef]
  13. Zhou, T.; Li, L.; Bredell, G.; Li, J.; Konukoglu, E. Volumetric memory network for interactive medical image segmentation. Med. Image Anal. 2022. [Google Scholar] [CrossRef]
  14. Lestari, T.; Luthfi, A. Retinal Blood Vessel Segmentation using Gaussian Filter. J. Phys. Conf. Ser. 2019, 1376, 012023. [Google Scholar] [CrossRef]
  15. Kugaevskikh, A. Comparison Gabor Filter Parameters for Efficient Edge Detection. Available online: https://www.researchgate.net/publication/319327768_Comparison_Gabor_Filter_Parameters_for_Efficient_Edge_Detection (accessed on 10 September 2022).
  16. Wang, S.; Yin, Y.; Cao, G.; Wei, B.; Zheng, Y.; Yang, G. Hierarchical retinal blood vessel segmentation based on feature and ensemble learning. Neurocomputing 2015, 149, 708–717. [Google Scholar] [CrossRef]
  17. Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: A Nested U-Net Architecture for Medical Image Segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Stoyanov, D., Taylor, Z., Carneiro, G., Syeda-Mahmood, T., Martel, A., Maier-Hein, L., Tavares, J.M.R.S., Bradley, A., Papa, J.P., Belagiannis, V., et al., Eds.; Springer: Berlin/Heidelberg, Germany, 2018; Volume 11045. [Google Scholar] [CrossRef] [Green Version]
  18. Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
  19. Ojala, T.; Pietikainen, M.; Harwood, D. Performance evaluation of texture measures with classification based on Kullback discrimination of distributions. In Proceedings of the 12th International Conference on Pattern Recognition, Jerusalem, Israel, 9–13 October 1994; Volume 1, pp. 582–585. [Google Scholar] [CrossRef]
  20. Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59. [Google Scholar] [CrossRef]
  21. Dong-Chen, H.; Wang, L. Texture Unit, Texture Spectrum, And Texture Analysis. IEEE Trans. Geosci. Remote Sens. 1990, 28, 509–512. [Google Scholar] [CrossRef]
  22. Wang, L.; He, D.-C. Texture classification using texture spectrum. Pattern Recognit. 1990, 23, 905–910. [Google Scholar] [CrossRef]
  23. Wang, X.; Han, T.X.; Yan, S. An HOG-LBP Human Detector with Partial Occlusion Handling. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 32–39. [Google Scholar] [CrossRef]
  24. Bhosle, S.; Prakash, K. Texture Classification Approach And Texture Datasets: A Review. IJRAR 2019, 6, 218–224. [Google Scholar]
  25. Silva, C.; Bouwmans, T.; Frélicot, C. An eXtended Center-Symmetric Local Binary Pattern for Background Modeling and Subtraction in Videos. In Proceedings of the 10th International Conference on Computer Vision Theory and Applications, Berlin, Germany, 11–14 March 2015; pp. 395–402. [Google Scholar] [CrossRef] [Green Version]
  26. Bouwmans, T.; Silva, C.; Marghes, C.; Zitouni, M.S.; Bhaskar, H.; Frelicot, C. On the role and the importance of features for background modeling and foreground detection. Comput. Sci. Rev. 2018, 28, 26–91. [Google Scholar] [CrossRef] [Green Version]
  27. Dalal, N.; Triggs, B. Histograms of Oriented Gradients for Human Detection. In Proceedings of the Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–26 June 2005; pp. 886–893. [Google Scholar] [CrossRef] [Green Version]
  28. Anwar, S.M.; Majid, M.; Qayyum, A.; Awais, M.; Alnowami, M.; Khan, M.K. Medical Image Analysis using Convolutional Neural Networks: A Review. J. Med Syst. 2018, 42, 226. [Google Scholar] [CrossRef] [Green Version]
  29. Ma, Y.; Zhu, Z.; Dong, Z.; Shen, T.; Sun, M.; Kong, W. Multichannel Retinal Blood Vessel Segmentation Based on the Combination of Matched Filter and U-Net Network. BioMed Res. Int. 2021, 2021, 1–18. [Google Scholar] [CrossRef]
  30. Biradar, S.; Jadhav, A.S. A Survey on Blood Vessel Segmentation and Optic Disc Segmentation of Retinal Images. Int. J. Adv. Res. Comput. Commun. Eng. 2015, 4, 21–26. [Google Scholar]
  31. Decencière, E.; Zhang, X.; Cazuguel, G.; Lay, B.; Cochener, B.; Trone, C.; Gain, P.; Ordóñez-Varela, J.-R.; Massin, P.; Erginay, A.; et al. Feedback on a publicly distributed image database: The messidor database. Image Anal. Stereol. 2014, 33, 231. [Google Scholar] [CrossRef] [Green Version]
  32. Moccia, S.; De Momi, E.; El Hadji, S.; Mattos, L.S. Blood vessel segmentation algorithms—Review of methods, datasets and evaluation metrics. Comput. Methods Programs Biomed. 2018, 158, 71–91. [Google Scholar] [CrossRef] [Green Version]
  33. Sreejini, K.; Govindan, V. Improved multiscale matched filter for retina vessel segmentation using PSO algorithm. Egypt. Inform. J. 2015, 16, 253–260. [Google Scholar] [CrossRef] [Green Version]
  34. Tamim, N.; Elshrkawey, M.; Azim, G.A.; Nassar, H. Retinal Blood Vessel Segmentation Using Hybrid Features and Multi-Layer Perceptron Neural Networks. Symmetry 2020, 12, 894. [Google Scholar] [CrossRef]
  35. Khowaja, S.A.; Khuwaja, P.; Ismaili, I.A. A framework for retinal vessel segmentation from fundus images using hybrid feature set and hierarchical classification. Signal, Image Video Process. 2019, 13, 379–387. [Google Scholar] [CrossRef]
  36. Fu, H.; Xu, Y.; Wong, D.W.K.; Liu, J. Retinal vessel segmentation via deep learning network and fully-connected conditional random fields. In Proceedings of the 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 698–701. [Google Scholar] [CrossRef]
  37. Sahlsten, J.; Jaskari, J.; Kivinen, J.; Turunen, L.; Jaanio, E.; Hietala, K.; Kaski, K. Deep Learning Fundus Image Analysis for Diabetic Retinopathy and Macular Edema Grading. Sci. Rep. 2019, 9, 10750. [Google Scholar] [CrossRef] [Green Version]
  38. Veras, R.; Silva, R.; Araújo, F.; Medeiros, F. SURF Descriptor and Pattern Recognition Techniques in Automatic Identification of Pathological Retinas. In Proceedings of the 2015 Brazilian Conference on Intelligent Systems (BRACIS), Natal, Brazil, 4–7 November 2015; pp. 316–321. [Google Scholar] [CrossRef]
  39. Singh, R.K.; Gorantla, R. DMENet: Diabetic Macular Edema diagnosis using Hierarchical Ensemble of CNNs. PLoS ONE 2020, 15, e0220677. [Google Scholar] [CrossRef] [Green Version]
  40. Sreejini, K.; Govindan, V. Retrieval of pathological retina images using Bag of Visual Words and pLSA model. Eng. Sci. Technol. Int. J. 2019, 22, 777–785. [Google Scholar] [CrossRef]
  41. González-Gonzalo, C.; Sánchez-Gutiérrez, V.; Hernández-Martínez, P.; Contreras, I.; Lechanteur, Y.T.; Domanian, A.; Van Ginneken, B.; Sánchez, C.I. Evaluation of a deep learning system for the joint automated detection of diabetic retinopathy and age-related macular degeneration. Acta Ophthalmol. 2020, 98, 368–377. [Google Scholar] [CrossRef] [Green Version]
  42. Li, X.; Hu, X.; Yu, L.; Zhu, L.; Fu, C.-W.; Heng, P.-A. CANet: Cross-Disease Attention Network for Joint Diabetic Retinopathy and Diabetic Macular Edema Grading. IEEE Trans. Med. Imaging 2020, 39, 1483–1493. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Yaqoob, M.K.; Ali, S.F.; Kareem, I.; Fraz, M.M. Feature-based optimized deep residual network architecture for diabetic retinopathy detection. In Proceedings of the 2020 IEEE 23rd International Multitopic Conference (INMIC), Bahawalpur, Pakistan, 5–7 November 2020; pp. 1–6. [Google Scholar] [CrossRef]
Figure 1. (a) The green retinal channel, (b) result of contrast improvement, and (c) result of applying Gaussian filter for smoothing the image.
Figure 2. Display of the four Gabor filters used in this research: (a) 90°, (b) 45°, (c) 0°, and (d) −45°.
Figure 3. Results of applying the four Gabor filters to the green channel image: (a) 90°, (b) 45°, (c) 0°, and (d) −45°.
Figure 4. Retinal image segmentation by the U-Net++ neural network and diagnosis of disease with a CNN.
Figure 5. The overall structure of the U-Net++ architecture [17].
Figure 6. The general architecture of a CNN [28].
Figure 7. Result of the segmentation: (a) the original color image, (b) its mask, (c) its ground truth, and (d) the image of the retina segmented using the U-Net++ network.
Figure 8. Comparison of existing methods for the presence of disease on the MESSIDOR database [37,38,39,40].
Table 1. Results from the image segmentation using the U-Net++ neural network on the DRIVE database.

| Method | Accuracy (%) | Sensitivity (%) | Specificity (%) | F1-Score (%) |
|---|---|---|---|---|
| Proposed method | 98.9 | 94.1 | 98.8 | 98.14 |
| Kadry et al. [6] | 94.12 | 83.53 | 98.74 | 82.28 |
| Sreejini and Govindan [33] | 96.33 | 71.32 | 98.66 | - |
| Tamim et al. [34] | 96.07 | 75.42 | 98.43 | 74.75 |
| Khowaja et al. [35] | 94.10 | 74.37 | 95.96 | 68.77 |
| Fu et al. [36] | 94.12 | 74.44 | 96.00 | 68.84 |
Table 2. Results from the diagnosis of disease by the convolutional neural network on the MESSIDOR database.

| Task | Sensitivity (%) | Specificity (%) | Accuracy (%) |
|---|---|---|---|
| Diagnosis of Retinopathy | 98.32 | 99.64 | 98.8 |
| Diagnosis of Macular Disease | 98.9 | 99.2 | 98.7 |
| Existence of Disease | 99.08 | 98.06 | 98.61 |
Table 3. Comparison of methods for the presence of disease on the MESSIDOR database.

| # | Method | Sensitivity (%) | Specificity (%) | Accuracy (%) |
|---|---|---|---|---|
| 1 | Proposed method | 99 | 98 | 98.6 |
| 2 | Sahlsten et al. [37] | 85 | 97 | 92 |
| 3 | Veras et al. [38] | 97 | 97 | 97 |
| 4 | Singh et al. [39] | 94.68 | 97.19 | 95.47 |
| 5 | Sreejini et al. [40] | 98 | 97 | 98 |
| 6 | González-Gonzalo et al. [41] | 92 | 92.1 | - |
| 7 | Li et al. [42] | 70.8 | - | 91.2 |
| 8 | Yaqoob et al. [43] | - | - | 89.89 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
