Article

Fundus Image Classification Using VGG-19 Architecture with PCA and SVD

School of Big Data & Software Engineering, Chongqing University, Chongqing 401331, China
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(1), 1; https://doi.org/10.3390/sym11010001
Submission received: 22 November 2018 / Revised: 11 December 2018 / Accepted: 18 December 2018 / Published: 20 December 2018

Abstract: Automated medical image analysis is an emerging field of research that identifies disease with the help of imaging technology. Diabetic retinopathy (DR) is a retinal disease that is diagnosed in diabetic patients. Deep neural networks (DNNs) are widely used to classify diabetic retinopathy from fundus images collected from suspected persons. The proposed DR classification system achieves a symmetrically optimized solution through the combination of a Gaussian mixture model (GMM), visual geometry group network (VGGNet), singular value decomposition (SVD) and principal component analysis (PCA), and softmax, for region segmentation, high-dimensional feature extraction, feature selection, and fundus image classification, respectively. The experiments were performed using a standard KAGGLE dataset containing 35,126 images. The proposed VGG-19 DNN based DR model outperformed AlexNet and the scale-invariant feature transform (SIFT) in terms of classification accuracy and computational time. Utilization of PCA and SVD feature selection with fully connected (FC) layers demonstrated classification accuracies of 92.21%, 98.34%, 97.96%, and 98.13% for FC7-PCA, FC7-SVD, FC8-PCA, and FC8-SVD, respectively.

1. Introduction

Progressive changes in the field of science and technology are making human life healthier, more secure, and more comfortable. Automated diagnosis systems (ADSs) provide services for the ease of humanity and perform a vital role in the early diagnosis of serious diseases. Diabetic retinopathy (DR) is a serious and widespread disease worldwide. Recently, the World Health Organization (WHO) reported that diabetes will be the seventh leading cause of death in the world by 2030. In this context, it is a big challenge to save the lives of patients affected by diabetes. Diabetic retinopathy is a common disease among diabetic patients. In diabetic retinopathy, lesions are produced in the eyes, which become a cause of irreversible blindness with the passage of time. These types of lesions include abnormal retinal blood vessels, microaneurysms (MAs), cotton wool spots, exudates, and hemorrhages, as shown in Figure 1.
According to the disease severity scale [1], diabetic retinopathy can be classified into five stages: non-DR, mild severe, moderate severe, severe, and proliferative DR. In this regard, many researchers have introduced different types of methods, architectures, models, and frameworks that play a vital role in detecting lesions in the early stage of DR. Haloi et al. [2] introduced a method to detect cotton wool spots and exudates. Furthermore, Haloi [3] used deep neural networks to find microaneurysms in color retinal fundus images. Van Grinsven et al. [4] introduced a novel technique to rapidly detect hemorrhages. Moreover, Srivastava et al. [5] obtained significant experimental results when finding hemorrhages and microaneurysms using multiple kernel methods.
On the other hand, detecting the severity level of DR is also an important task in treating an affected eye. Seoud et al. [6] developed a computer-aided system for the classification of fundus images using random forests [7]. On the basis of deep learning methods, Kuen et al. [8] promoted DCNNs for diabetic retinopathy detection. Gulshan et al. [1] classified fundus images into non-DR and moderate severe with the help of 54 American ophthalmologists and other medical researchers on more than 128 thousand DR images. Furthermore, Sankar et al. [9] classified the fundus images into non-DR, mild DR, and severe DR. Pratt et al. [10] proposed a robust technique to classify the severity levels of DR following the standards of the severity scale [11].
Somasundaram et al. [12] designed a machine learning bagging ensemble classifier (ML-BEC) to diagnose the DR disease at an early stage. The ML-BEC technique contains two phases. The first phase involves the extraction of significant features of the retinal fundus images. The second stage includes the implication of an ensemble classifier on the extracted features of retinal fundus images on the basis of machine learning. Nanni et al. [13] proposed a bioimage classification technique based on ensemble CNNs. With the composition of multiple CNNs, the novel approach boosted the performance of medical image analysis and classification. Abbas et al. [14] developed an automatic computer aided diagnosis system based on deep visual features (DVFs) to classify the severity levels of DR without the application of preprocessing. DVFs are derived from a multilayer semi supervised technique using deep learning algorithms. Orlando et al. [15] developed a new method to detect red lesions with the combination of domain knowledge and deep learning. In this technique, the random forest classifier was applied to classify the retinal fundus images on the basis of severity scale. Prentašić and Lončarić [16] proposed a technique to detect exudates in color retinal fundus images using convolutional neural networks. Wang et al. [17] introduced a deep learning technique to understand the detection of diabetic retinopathy where they applied a regression activation map (RAM) after the pooling layer of the convolutional neural networks. The role of RAM is to localize the interesting regions of the fundus image to define the region of interest on the basis of severity level. Kälviäinen and Uusitalo [18] introduced an evaluation technique to analyze the results of the retinal fundus images obtained by the implementation of different architectures, models, and frameworks. Sadek et al. [19] reported a new deep features learning technique using fully connected layers on the basis of CNNs. 
In this approach, a nonlinear classifier was applied to classify the normal, drusen, and exudates. Yu et al. [20] introduced a novel approach to retinal fundus image quality classification (IQS). This technique was based on the human visual model to execute the computational algorithms. The proposed method was a combination of CNNs and saliency map to obtain supervised and unsupervised features respectively as an input for the support vector machine (SVM) classifier. Choi et al. [21] applied a multi-categorical deep learning technique to detect multiple lesions with the presence of retinal fundus images on the basis of a structured analysis retinal database (SARD).
Deep learning is widely used in the field of diabetic retinopathy for the classification of retinal fundus images. In this area of research, many authors have developed different techniques to cluster the retinal images on the basis of the International Clinical Diabetic Retinopathy Disease Severity Scale [1]. Prasad et al. [22] introduced various segmentation approaches to detect hard exudates, blood vessels, and micro-aneurysms. In that work, a PCA-based feature extraction technique using the Haar wavelet transform was applied, and backpropagation neural networks were used for two-class categorization. Bhatkar and Kharat [23] applied a deep neural network based on a multilayer perceptron. On the basis of optic disk segmentation, an automatic computer-aided detection approach was developed [24] with the graph cuts technique. Raman et al. [25] used optic disk identification for microaneurysm and exudate feature extraction to classify the DR images. To identify the exudates in the DR images, a genetic algorithm was used by [26]. Similarly, ManojKumar et al. [27] used the intersection of anomalous thickness in retinal blood vessels to localize the red lesions, including exudates. Additionally, fuzzy and k-means clustering techniques have also been applied for diabetic retinopathy [28]. Mansour [29] developed an automatic system for diabetic retinopathy to detect anomalies and evaluate the severity of retinal fundus images. Seoud et al. [30] applied specific shape features for the detection of hemorrhages and microaneurysms. The JSEG technique was applied to diagnose microaneurysms and exudates in DR images [31]. Quellec et al. [32] proposed a pixel-based ConvNet DNN technique for early diagnosis of retinal fundus images. In the ConvNet technique, a softmax classifier is used for the classification of severity levels in retinal fundus images.
Du and Li [33] proposed a texture analysis technique to identify the blood vessels and hemorrhages.
Yang. et al. [34] developed an automatic DR analysis approach using two stage DCNN to localize and detect red lesions in the retinal fundus images. Additionally, the automatic system also classified the retinal fundus images on the basis of severity levels.
Gurudath et al. [35] proposed an automatic approach for the identification of DR from color retinal fundus images. Classification of the retinal fundus images was performed among three classes including normal, mild severe, and moderate severe. A Gaussian filtering model was used for retinal blood vessel segmentation on the input fundus images.
Cao et al. [36] analyzed the microaneurysm detection ability based on 25 × 25 image patches collected from retinal fundus images. Support vector machine (SVM), random forest (RF), and neural network (NN) classifiers were used for the classification of DR into five stages of severity. Principal component analysis (PCA) and random forest feature importance (RFFI) were applied for dimensionality reduction.
Nanni et al. [37] designed a computer vision system based on non-handcrafted and handcrafted features. For the non-handcrafted features, three approaches were applied including the compact binary descriptor (CBD), CNNs, and PCA. On the other hand, local phase quantization, completed local binary patterns, rotated local binary patterns, and some other algorithms were used for the handcrafted features. The proposed technique was applied to various datasets and obtained outstanding image classification results.
Hagiwara et al. [38] reviewed research papers related to automatic computer-aided diagnosis systems and proposed a novel automatic detection approach based on the existing methodology to diagnose glaucoma. Litjens et al. [39] introduced an assessment approach for Google Inception v3 to detect retinal fundus images and compared its performance with that of licensed ophthalmologists.
Alban and Gilligan [40] improved the existing de-noising approaches used to detect the severity levels of DR. The improved work was able to classify the input datasets related to the retinal fundus images using the CNN classifier. Pratt et al. [10] designed a CNN architecture that was able to identify the complex features concerned with the classification process for exudates, hemorrhages, and microaneurysms in diabetic retinopathy.
Rahim et al. [41] designed a novel automatic diabetic retinopathy screening system model for the early detection of microaneurysms. In the development of the automatic DR screening system, the fuzzy histogram method was used both in the preprocessing of the retinal fundus images and for feature extraction. Mansour [28] developed a novel automatic computer-aided diagnosis system for the early detection of abnormal retinal blood vessels and classified the retinal fundus images on the basis of severity levels. Mansour applied the AlexNet DNN architecture for feature extraction and PCA for dimensionality reduction. The proposed approach is an improved form of this existing feature extraction approach to obtain better classification accuracy.
The rest of the article is organized as follows. The proposed method is explained in Section 2, and the experimental results and discussion are covered in Section 3. Finally, the findings are concluded in Section 4.

2. Proposed Method

The proposed technique builds a competent automated classification system for diabetic retinopathy. The system is capable of segmenting the fundus images according to the severity classes and their diseases. To determine the DR severity, the proposed approach was divided into sequential steps for better understanding and implementation.

2.1. Data Gathering

Data collection is the basic step of gathering the data for experiments in the proposed system. In the context of diabetic retinopathy, the well-known KAGGLE competition dataset was used for the experiments [42]. The standard KAGGLE dataset contains a large number of high-resolution fundus images taken by different types of fundus cameras. The total number of fundus images in KAGGLE is 35,126, labeled as right and left eyes. Table 1 shows the standard distribution of the fundus images in the KAGGLE DR dataset.
The fundus images were captured by different fundus cameras under different conditions and quality levels. Some of them can be treated as normal images, but a few contain noise in the form of dots, circles, triangles, or squares, and some images may even be inverted. Noisy images also include out-of-focus, blurry, under-exposed, and over-exposed images. Under these conditions, the method should be able to predict DR even in the presence of noisy data.

2.2. Data Preprocessing

In the preprocessing phase, achieving an optimal separation of diabetic retinopathy from non-diabetic retinopathy plays a vital role. Diabetic retinopathy involves several lesion types, including microaneurysms (MAs), exudates, and hemorrhages. Preprocessing is helpful to figure out and differentiate the actual lesions or features of DR from the noisy data. Therefore, before feature extraction, it was necessary to perform preprocessing on the raw digital fundus images. In the proposed computer-aided diagnosis system, the key purpose of preprocessing is to identify the blood vessels in the form of microaneurysms (MAs). During the preprocessing phase, a grayscale conversion technique is performed to obtain better contrast, while a shade correction technique estimates the background image and subtracts it from the original image. In the next phase, vessel segmentation based on the GMM is applied. Figure 2 shows the fundus images used to extract the information from the colored image to the background-extracted vision.
The background image contains the most discriminant form of the image. To achieve the most discriminant information, the adaptive learning rate (ALR) achieved an outstanding performance in the region of interest (ROI). Antal et al. [43] applied a robust technique on the basis of ensemble clustering to find the microaneurysms in DR.
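The grayscale conversion and shade correction steps described above can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the authors' implementation; the box-filter window size `k` is an assumed parameter.

```python
import numpy as np

def to_grayscale(rgb):
    # Luminance-weighted grayscale conversion (ITU-R BT.601 weights).
    return rgb @ np.array([0.299, 0.587, 0.114])

def box_blur(img, k):
    # Separable box filter used to estimate the slowly varying background.
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode="same")

def shade_correct(gray, k=31):
    # Shade correction: subtract the estimated background, leaving mainly
    # local structures such as vessels and microaneurysms.
    return gray - box_blur(gray, k)
```

A large window (relative to vessel width) smooths out the vessels and lesions themselves, so subtracting the blurred estimate enhances exactly the local structures the later segmentation stage looks for.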

2.3. Region of Interest Detection

Before applying feature extraction, blood vessel extraction was performed in association with ROI localization. In this phase, blood vessel segmentation was applied to extract the ROI in the DR images. For this purpose, many techniques can be applied, including ROI-based segmentation, edge-based segmentation, fuzzy models, and neural networks. In this paper, a Gaussian mixture technique was applied for vessel segmentation. Stauffer et al. [44] applied a Gaussian mixture approach for background subtraction. In the proposed technique, a hybrid approach was proposed that incorporates the Gaussian mixture model (GMM) with an adaptive learning rate (ALR) to obtain better region detection results. A Gaussian mixture g(x) with J components was used for the ROI calculation:
g(x) = ∑_{j=1}^{J} r_j N(y; μ_j, σ_j)
where r_j is the weight of the j-th component and N(y; μ_j, σ_j) is a normal distribution with mean μ_j and standard deviation σ_j.
The ALR was defined to update μ_j repeatedly, using the probability N(y; μ_j, σ_j) to identify whether a pixel is an element of the j-th Gaussian distribution or not. To eliminate the limitations of a fixed learning rate, a parametric formulation was adopted to enable quasi-linear adaptation.
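A minimal numpy sketch of the per-component update with an adaptive learning rate, in the spirit of Stauffer-style background modeling: the better a pixel value fits a component, the faster that component's mean and variance are pulled toward it. The learning constant `alpha` and the exact form of `rho` are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def gaussian_pdf(y, mu, sigma):
    # N(y; mu, sigma): univariate normal density.
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def mixture_density(y, weights, mus, sigmas):
    # g(y) = sum_j r_j * N(y; mu_j, sigma_j), as in the equation above.
    return sum(r * gaussian_pdf(y, m, s) for r, m, s in zip(weights, mus, sigmas))

def alr_update(y, mu, sigma, alpha=0.05):
    # Adaptive learning rate: rho grows when y fits the component well,
    # so well-matched pixels update their Gaussian faster.
    rho = alpha * gaussian_pdf(y, mu, sigma)
    mu_new = (1.0 - rho) * mu + rho * y
    var_new = (1.0 - rho) * sigma ** 2 + rho * (y - mu_new) ** 2
    return mu_new, np.sqrt(var_new)
```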
Figure 3 shows the detailed retinal blood vessel segmentation and detection process. In the first part, the DR dataset is an input collected from the KAGGLE repository for image data segmentation and detection. In the second part, blood vessel segmentation and detection based on GMM can be explained with the sub process. After the vessel segmentation process, the connected components analysis (CCA) technique is applied to consider the size, location, and region of the diabetic retinopathy features including hemorrhages, hard exudates, and MAs. CCA is also helpful in identifying and differentiating the size, shape, and proximity from the normal retinal features. With the completion of the CCA process, the blood vessels’ ROI was detected and ready for the VGG-19 based feature extraction.

2.4. Feature Extraction

In each layer of the CNNs, there is a new representation of the input image by progressively extracting meaningful information. In the proposed technique, VGG-19 was applied to extract the meaningful information from the fundus images. The visualization discloses the categorized image format that makes the representation.
To obtain a robust diabetic retinopathy system (DRS), the extraction process considers meaningful features including the area of pixels, perimeter, minor axis length, major axis length as well as circularity, which are helpful to identify blood vessel, exudates, hemorrhages, optical distance, and microaneurysm areas.

2.4.1. VGG-19 DNN

The visual geometry group network (VGGNet) is a deep neural network with a multilayered operation. The VGGNet is based on the CNN model and was applied on the ImageNet dataset. VGG-19 is useful due to its simplicity, as 3 × 3 convolutional layers are stacked with increasing depth. To reduce the volume size, max pooling layers were used in VGG-19. Two FC layers were used, each with 4096 neurons. Figure 4 shows that the vessel-segmented images were used as input data for the VGGNet DNN. In the training phase, convolutional layers were used for feature extraction, with max pooling layers associated with some of the convolutional layers to reduce the feature dimensionality. In the first convolutional layer, 64 kernels (3 × 3 filter size) were applied for feature extraction from the input images. Fully connected layers were used to prepare the feature vector. The acquired feature vector was further subjected to PCA and SVD for dimensionality reduction and the feature selection of image data for better classification results. Reducing the highly dimensional data using PCA and SVD is a significant task; PCA and SVD are useful because they are faster and numerically more stable than other reduction techniques. Finally, in the testing phase, 10-fold cross validation was applied to classify the DR images based on the softmax activation technique. The performance of the proposed VGG-19 based system was compared with other feature extraction architectures, including AlexNet and SIFT. AlexNet is a multilayered feature extraction architecture used in CNNs. The scale-invariant feature transform (SIFT) is a classical feature extraction technique, used by Mansour [28] to detect the local features of an input image in the field of computer vision.
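The layer arithmetic behind VGG-19 can be traced without any deep learning framework. The sketch below walks the standard VGG-19 configuration (16 convolutional layers plus five 2 × 2 max pools) and shows why a 224 × 224 input yields a 512 × 7 × 7 feature volume, which is flattened to 25,088 values before the 4096-neuron FC layers (FC6/FC7) and the final class-score layer (FC8).

```python
# Standard VGG-19 configuration: integers are 3x3-conv output channels
# (padding 1 preserves spatial size), "M" is a 2x2 max pool that halves it.
VGG19_CFG = [64, 64, "M", 128, 128, "M", 256, 256, 256, 256, "M",
             512, 512, 512, 512, "M", 512, 512, 512, 512, "M"]

def trace_feature_shapes(size=224):
    shapes, ch = [], 3
    for layer in VGG19_CFG:
        if layer == "M":
            size //= 2          # max pooling halves height and width
        else:
            ch = layer          # a 3x3 conv changes only the channel count
        shapes.append((ch, size, size))
    return shapes
```

Running `trace_feature_shapes()` ends at (512, 7, 7), i.e., 25,088 values feeding the first FC layer.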

2.5. Data Reduction

After feature extraction from the VGGNet, the next step of image analysis is feature selection, the purpose of which is to reduce the dimensionality of the imaging data. In the proposed technique, the PCA and SVD techniques were applied for data reduction. PCA transforms the image data from a high dimensionality to a low dimensionality on the basis of the most important features. In DR fundus image extraction, there is also the challenge of distinguishing the most expressive features (MEF) obtained during the feature extraction process. To solve this problem, PCA transforms the feature components into novel feature vectors that can be differentiated from each other. On the other hand, SVD is used to decrease the dimensionality on the basis of reliability. The important purpose of SVD is to rapidly reduce the number of parameters as well as the number of computations in a complex network. In the VGGNet, max pooling is also used to reduce the dimensionality on the basis of the maximum value and to handle the overfitting problem in the DNN. Figure 5 shows the algorithmic structure of the retinal fundus image classification process, including the two fully connected layers, FC7 and FC8, used to extract the features of the DR-ROI on the basis of the VGGNet.

2.5.1. Principal Component Analysis

The dimensionality reduction process is important to reduce computational time and storage. Principal component analysis performs a vital role in decreasing the dimensional complexity while retaining a high level of accuracy. The function of PCA is to shift the high-dimensional feature space to one of lower dimensionality containing the important feature vectors. In diabetic retinopathy, significant features of the fundus images are mostly interrelated. The elements of these interrelated features are known as the most expressive features (MEF). To solve this problem, PCA shifts the feature vectors to novel feature components that are decorrelated from each other. Therefore, to achieve accuracy and fast computation of the features, PCA performs a significant role in reducing the feature vectors with similar elements and transferring them to the most significant feature vectors.
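The PCA projection described above (centering, eigendecomposition of the covariance matrix, projection onto the top-k components) can be sketched in numpy. The paper does not show the authors' MATLAB implementation, so this is illustrative only; `k` is the assumed number of retained components.

```python
import numpy as np

def pca_reduce(X, k):
    # Project an (n_samples, n_features) matrix onto its top-k
    # principal components; the outputs are mutually uncorrelated.
    Xc = X - X.mean(axis=0)                      # center each feature
    cov = Xc.T @ Xc / (X.shape[0] - 1)           # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues ascending
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
    return Xc @ top
```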

2.5.2. Singular Value Decomposition

SVD is a reliable and robust orthogonal matrix decomposition technique. SVD is widely used in image analysis to solve problems related to least squares, pseudo-inverse computation of a matrix, and multivariate experiments. In the area of machine learning, SVD-based methods are used for dimensionality reduction, metric learning, manifold learning, and collaborative filtering. For complex data, where parallel processing is generally unavailable at deployment, SVD makes it possible to execute large-scale datasets within seconds instead of days. In this case, SVD is a better option to decrease the dimensionality of huge amounts of data by speeding up the computational process. Particularly in the classification of fundus images, SVD also performed well in terms of classification accuracy with optimal computational time.
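A truncated-SVD reduction can be sketched in the same style: keeping only the k largest singular values and their vectors gives the best rank-k approximation of the (centered) feature matrix. This is an illustrative sketch under that standard formulation, not the paper's implementation.

```python
import numpy as np

def svd_reduce(X, k):
    # Keep only the k largest singular values/vectors: a rank-k
    # representation preserving most of the variation in X.
    U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return U[:, :k] * s[:k]        # reduced coordinates, one row per sample
```

Because the singular values are sorted in descending order, the columns of the result are ordered from most to least informative, mirroring the PCA components.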

2.6. Retinal Fundus Image Classification

In this research, the softmax classifier was applied to classify the fundus images on the basis of their features. In the proposed technique, the softmax algorithm was trained to produce the classification in binary form. The features obtained from the data reduction were mapped for the softmax-based categorization. The classifier was further trained to classify the fundus images into five standard classes: non-DR, mild severe, moderate severe, severe, and PDR.
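A numerically stable softmax over the five severity classes can be sketched as follows. The class ordering follows the severity scale used in the paper; the score vector in the usage note is hypothetical.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

CLASSES = ["non-DR", "mild severe", "moderate severe", "severe", "PDR"]

def predict(scores):
    # Map FC8 class scores to a severity label via softmax probabilities.
    return CLASSES[int(np.argmax(softmax(scores)))]
```

For example, a (hypothetical) score vector whose first entry dominates would be labeled "non-DR".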

3. Experimental Results and Discussion

The experimental results of the proposed technique are explained in this section. To evaluate the performance of the novel DR system, the KAGGLE dataset was selected for the experiments. The KAGGLE dataset contains 35,126 fundus images captured by fundus cameras under different conditions. The standard KAGGLE dataset is based on five types of fundus images: non-DR, mild severe, moderate severe, severe, and PDR, with different percentages as shown in Table 1. First, for efficient computational results, the fundus images were resized to 224 × 224 pixels. Next, for better results, the preprocessing operation was performed, followed by microaneurysm segmentation. For a flexible implementation of the VGGNet, graphics processing units (GPUs) were used to obtain the significant results. For the optimal solution, the PCA and SVD algorithms were implemented in the proposed technique. The results obtained were comparatively better than the results obtained using traditional methods, i.e., AlexNet and SIFT, as shown in Table 2.
FC7 and FC8 are fully connected layers in the VGGNet, and their obtained accuracy values were greater than those of AlexNet and SIFT. This proves that the classification accuracy of the VGG-19 DNN model is higher than that of the other traditional techniques. The features included in the proposed technique performed better with the combination of PCA and SVD. SVD extracted the discriminant features and then selected the significant feature vectors in a proper way. A graphical representation of the classification accuracy is given in Figure 6.
As mentioned in Section 2.4.1, a 10-fold cross validation technique was utilized for model training and validation. For the implementation of the whole model and the execution of the major functions, the VLFeat-0.9.20 toolbox and MATLAB 2017(a) were used, respectively. In terms of computational time, MATLAB took 210 minutes for the total (35,126) images, which is a lower computational time than that of AlexNet (240 minutes) but higher than that of SIFT. SIFT's computational time was 144 minutes, with a lower classification accuracy. For a better assessment of the classification accuracy on the fundus images, Table 3 shows the comparative performance results of different techniques used for DR fundus image classification.
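The 10-fold cross validation used for model evaluation can be sketched as an index-splitting routine: each fold serves once as the validation set while the remaining nine folds train the classifier. The shuffling seed is an assumption for reproducibility; the paper does not specify the splitting details.

```python
import numpy as np

def kfold_splits(n, k=10, seed=0):
    # Shuffle the n sample indices and cut them into k near-equal folds;
    # yield (train_indices, val_indices) for each of the k rounds.
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```

The reported accuracy would then be the mean validation accuracy over the ten rounds.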

4. Conclusions

Over the last few years, the number of diabetes patients has increased exponentially. As a result, diabetic retinopathy (DR) has also become a big challenge. To solve this problem, deep neural networks have been used to play a vital role in detecting the symptoms at a very early stage of DR. In the proposed DR classification system, the VGGNet was applied for feature extraction from the retinal fundus images, while PCA and SVD were used to reduce the dimensionality of the large-scale retinal fundus image features. PCA and SVD made the data reduction process faster and more reliable. To detect the region of interest, the GMM was used with the incorporation of an adaptive learning rate for vessel segmentation. The reason for selecting VGG-19 is that the VGGNet is a deeper and more reliable architecture trained on ImageNet. The proposed VGGNet DNN based DR model demonstrated better results than the AlexNet and SIFT models through PCA and SVD feature selection, with FC7 classification accuracies of 92.21% and 98.34% and FC8 classification accuracies of 97.96% and 98.13%, respectively.

Author Contributions

Conceptualization, Methodology, Implementation, Writing, M.M.; Funding Acquisition, Review, Editing and Supervision, J.W.; Formal Analysis, N.; Validation, S.S.; Resources, Z.H.

Acknowledgments

This work was supported by the basic and advanced research projects in Chongqing, China under grant no. 61672117.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016, 316, 2402–2410. [Google Scholar] [CrossRef] [PubMed]
  2. Haloi, M.; Dandapat, S.; Sinha, R. A Gaussian scale space approach for exudates detection, classification and severity prediction. arXiv, 2015; arXiv:1505.00737. [Google Scholar]
  3. Haloi, M. Improved microaneurysm detection using deep neural networks. arXiv, 2015; arXiv:1505.04424. [Google Scholar]
  4. van Grinsven, M.J.; van Ginneken, B.; Hoyng, C.B.; Theelen, T.; Sánchez, C.I. Fast convolutional neural network training using selective data sampling: Application to hemorrhage detection in color fundus images. IEEE Trans. Med. Imaging 2016, 35, 1273–1284. [Google Scholar] [CrossRef] [PubMed]
  5. Srivastava, R.; Duan, L.; Wong, D.W.; Liu, J.; Wong, T.Y. Detecting retinal microaneurysms and hemorrhages with robustness to the presence of blood vessels. Comput. Methods Programs Biomed. 2017, 138, 83–91. [Google Scholar] [CrossRef] [PubMed]
  6. Seoud, L.; Chelbi, J.; Cheriet, F. Automatic grading of diabetic retinopathy on a public database. In Proceedings of the Ophthalmic Medical Image Analysis Second International Workshop, Munich, Germany, 9 October 2015. [Google Scholar]
  7. Barandiaran, I. The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 832–844. [Google Scholar]
  8. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, L.; Wang, G. Recent advances in convolutional neural networks. arXiv, 2015; arXiv:1512.07108. [Google Scholar] [CrossRef]
  9. Sankar, M.; Batri, K.; Parvathi, R. Earliest diabetic retinopathy classification using deep convolution neural networks. Int. J. Adv. Eng. Technol. 2016. [Google Scholar] [CrossRef]
  10. Pratt, H.; Coenen, F.; Broadbent, D.M.; Harding, S.P.; Zheng, Y. Convolutional neural networks for diabetic retinopathy. Procedia Comput. Sci. 2016, 90, 200–205. [Google Scholar] [CrossRef]
  11. Haneda, S.; Yamashita, H. International clinical diabetic retinopathy disease severity scale. Nihon Rinsho. Jpn. J. Clin. Med. 2010, 68, 228–235. [Google Scholar]
  12. Somasundaram, S.; Alli, P. A Machine Learning Ensemble Classifier for Early Prediction of Diabetic Retinopathy. J. Med. Syst. 2017, 41, 201. [Google Scholar]
  13. Nanni, L.; Ghidoni, S.; Brahnam, S. Ensemble of Convolutional Neural Networks for Bioimage Classification. Appl. Comput. Inform. 2018. [Google Scholar] [CrossRef]
  14. Abbas, Q.; Fondon, I.; Sarmiento, A.; Jiménez, S.; Alemany, P. Automatic recognition of severity level for diagnosis of diabetic retinopathy using deep visual features. Med. Biol. Eng. Comput. 2017, 55, 1959–1974. [Google Scholar] [CrossRef]
  15. Orlando, J.I.; Prokofyeva, E.; del Fresno, M.; Blaschko, M.B. An ensemble deep learning based approach for red lesion detection in fundus images. Comput. Methods Programs Biomed. 2018, 153, 115–127. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Prentašić, P.; Lončarić, S. Detection of exudates in fundus photographs using convolutional neural networks. In Proceedings of the 9th International Symposium on Image and Signal Processing and Analysis (ISPA), Zagreb, Croatia, 7–9 September 2015; pp. 188–192. [Google Scholar]
  17. Wang, Z.; Yang, J. Diabetic Retinopathy Detection via Deep Convolutional Networks for Discriminative Localization and Visual Explanation. arXiv, 2017; arXiv:1703.10757. [Google Scholar]
  18. Kälviäinen, R.; Uusitalo, H. The DIARETDB1 diabetic retinopathy database and evaluation protocol. Med. Image Underst. Anal. 2007, 2007, 61. [Google Scholar]
  19. Sadek, I.; Elawady, M.; Shabayek, A.E.R. Automatic Classification of Bright Retinal Lesions via Deep Network Features. arXiv, 2017; arXiv:1707.02022. [Google Scholar]
  20. Yu, F.; Sun, J.; Li, A.; Cheng, J.; Wan, C.; Liu, J. Image quality classification for DR screening using deep learning. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Seogwipo, South Korea, 11–15 July 2017; pp. 664–667. [Google Scholar]
  21. Choi, J.Y.; Yoo, T.K.; Seo, J.G.; Kwak, J.; Um, T.T.; Rim, T.H. Multi-categorical deep learning neural network to classify retinal images: A pilot study employing small database. PLoS ONE 2017, 12, e0187336. [Google Scholar] [CrossRef]
  22. Prasad, D.K.; Vibha, L.; Venugopal, K. Early detection of diabetic retinopathy from digital retinal fundus images. In Proceedings of the 2015 IEEE Recent Advances in Intelligent Computational Systems (RAICS), Trivandrum, India, 10–12 December 2015; pp. 240–245. [Google Scholar]
  23. Bhatkar, A.P.; Kharat, G. Detection of diabetic retinopathy in retinal images using MLP classifier. In Proceedings of the IEEE International Symposium on Nanoelectronic and Information Systems (iNIS), Indore, India, 21–23 December 2015; pp. 331–335. [Google Scholar]
  24. Elbalaoui, A.; Boutaounte, M.; Faouzi, H.; Fakir, M.; Merbouha, A. Segmentation and detection of diabetic retinopathy exudates. In Proceedings of the 2014 International Conference on Multimedia Computing and Systems (ICMCS), Marrakech, Morocco, 14–16 April 2014; pp. 171–178. [Google Scholar]
  25. Raman, V.; Then, P.; Sumari, P. Proposed retinal abnormality detection and classification approach: Computer aided detection for diabetic retinopathy by machine learning approaches. In Proceedings of the 2016 8th IEEE International Conference on Communication Software and Networks (ICCSN), Beijing, China, 4–6 June 2016; pp. 636–641. [Google Scholar]
  26. Kaur, A.; Kaur, P. An integrated approach for diabetic retinopathy exudate segmentation by using genetic algorithm and switching median filter. In Proceedings of the International Conference on Image, Vision and Computing (ICIVC), Portsmouth, UK, 3–5 August 2016; pp. 119–123. [Google Scholar]
  27. ManojKumar, S.; Manjunath, R.; Sheshadri, H. Feature extraction from the fundus images for the diagnosis of diabetic retinopathy. In Proceedings of the International Conference on Emerging Research in Electronics, Computer Science and Technology (ICERECT), Mandya, India, 17–19 December 2015; pp. 240–245. [Google Scholar]
  28. Mansour, R.F. Deep-learning-based automatic computer-aided diagnosis system for diabetic retinopathy. Biomed. Eng. Lett. 2018, 8, 41–57. [Google Scholar] [CrossRef]
  29. Wijesinghe, A.; Kodikara, N.; Sandaruwan, D. Autogenous diabetic retinopathy censor for ophthalmologists-AKSHI. In Proceedings of the 2016 IEEE International Conference on Control and Robotics Engineering (ICCRE), Singapore, 2–4 April 2016; pp. 1–10. [Google Scholar]
  30. Seoud, L.; Hurtut, T.; Chelbi, J.; Cheriet, F.; Langlois, J.P. Red lesion detection using dynamic shape features for diabetic retinopathy screening. IEEE Trans. Med. Imaging 2016, 35, 1116–1126. [Google Scholar] [CrossRef]
  31. Gandhi, M.; Dhanasekaran, R. Investigation of severity of diabetic retinopathy by detecting exudates with respect to macula. In Proceedings of the 2015 International Conference on Communications and Signal Processing (ICCSP), Melmaruvathur, India, 2–4 April 2015; pp. 724–729. [Google Scholar]
  32. Quellec, G.; Charrière, K.; Boudi, Y.; Cochener, B.; Lamard, M. Deep image mining for diabetic retinopathy screening. Med. Image Anal. 2017, 39, 178–193. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Du, N.; Li, Y. Automated identification of diabetic retinopathy stages using support vector machine. In Proceedings of the 2013 32nd Chinese Control Conference (CCC), Xi’an, China, 26–28 July 2013; pp. 3882–3886. [Google Scholar]
  34. Yang, Y.; Li, T.; Li, W.; Wu, H.; Fan, W.; Zhang, W. Lesion detection and grading of diabetic retinopathy via two-stages deep convolutional neural networks. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Honolulu, HI, USA, 21–26 July 2017; pp. 533–540. [Google Scholar]
  35. Gurudath, N.; Celenk, M.; Riley, H.B. Machine learning identification of diabetic retinopathy from fundus images. In Proceedings of the 2014 IEEE Signal Processing in Medicine and Biology Symposium (SPMB), Philadelphia, PA, USA, 13 December 2014; pp. 1–7. [Google Scholar]
  36. Cao, W.; Shan, J.; Czarnek, N.; Li, L. Microaneurysm detection in fundus images using small image patches and machine learning methods. In Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Kansas City, MO, USA, 13–16 November 2017; pp. 325–331. [Google Scholar]
  37. Nanni, L.; Ghidoni, S.; Brahnam, S. Handcrafted vs. non-handcrafted features for computer vision classification. Pattern Recognit. 2017, 71, 158–172. [Google Scholar] [CrossRef]
  38. Hagiwara, Y.; Koh, J.E.W.; Tan, J.H.; Bhandary, S.V.; Laude, A.; Ciaccio, E.J.; Tong, L.; Acharya, U.R. Computer-Aided Diagnosis of Glaucoma Using Fundus Images: A Review. Comput. Methods Programs Biomed. 2018, 165, 1–12. [Google Scholar] [CrossRef] [PubMed]
  39. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.; Van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Alban, M.; Gilligan, T. Automated Detection of Diabetic Retinopathy Using Fluorescein Angiography Photographs. Stanford University course report, 2016. Available online: https://www.semanticscholar.org/paper/Automated-Detection-of-Diabetic-Retinopathy-using-Stanford/e8155e4b2f163c8ef1dea36a6a902c744641eb5d (accessed on 21 December 2018).
  41. Rahim, S.S.; Jayne, C.; Palade, V.; Shuttleworth, J. Automatic detection of microaneurysms in colour fundus images for diabetic retinopathy screening. Neural Comput. Appl. 2016, 27, 1149–1164. [Google Scholar] [CrossRef]
  42. Kaggle Diabetic Retinopathy Detection Dataset. Available online: https://www.kaggle.com/c/diabetic-retinopathy-detection/data (accessed on 18 December 2018).
  43. Antal, B.; Hajdu, A. An ensemble-based system for microaneurysm detection and diabetic retinopathy grading. IEEE Trans. Biomed. Eng. 2012, 59, 1720. [Google Scholar] [CrossRef]
  44. Stauffer, C.; Grimson, W.E.L. Adaptive background mixture models for real-time tracking. In Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149), Fort Collins, CO, USA, 23–25 June 1999; p. 2246. [Google Scholar]
  45. Lachure, J.; Deorankar, A.; Lachure, S.; Gupta, S.; Jadhav, R. Diabetic retinopathy using morphological operations and machine learning. In Proceedings of the 2015 IEEE International Advance Computing Conference (IACC), Bangalore, India, 12–13 June 2015; pp. 617–622. [Google Scholar]
  46. Priya, R.; Aruna, P. SVM and neural network based diagnosis of diabetic retinopathy. Int. J. Comput. Appl. 2012, 41, 6–12. [Google Scholar]
  47. Singh, N.; Tripathi, R.C. Automated early detection of diabetic retinopathy using image analysis techniques. Int. J. Comput. Appl. 2010, 8, 18–23. [Google Scholar] [CrossRef]
  48. Rao, M.A.; Lamani, D.; Bhandarkar, R.; Manjunath, T. Automated detection of diabetic retinopathy through image feature extraction. In Proceedings of the 2014 International Conference on Advances in Electronics, Computers and Communications (ICAECC), Bangalore, India, 10–11 October 2014; pp. 1–6. [Google Scholar]
Figure 1. Difference between a normal retina and diabetic retinopathy.
Figure 2. Colored and background retinal fundus image.
Figure 3. Retinal blood vessel segmentation and detection.
Figure 4. Retinal fundus image classification using the VGGNet architecture.
Figure 5. Block diagram of the proposed model.
Figure 6. Proposed accuracy (%) classification of diabetic retinopathy.
Table 1. KAGGLE standard data classification.
| Class | Diabetic Retinopathy Classification | Diabetic Retinopathy Images | Percentage of Classification (%) |
|---|---|---|---|
| 0 | Non-DR | 25,810 | 73.48 |
| 1 | Mild severe | 2443 | 6.96 |
| 2 | Moderate severe | 5292 | 15.07 |
| 3 | Severe | 873 | 2.48 |
| 4 | Proliferative DR | 708 | 2.01 |
Table 2. Comparative results of diabetic retinopathy (VGGNet, AlexNet, and SIFT).
| Methods | Features | Classification Accuracy (%) |
|---|---|---|
| VGGNet (Proposed) | FC7-V-PCA | 92.21 |
| | FC7-V-SVD | 98.34 |
| | FC8-V-PCA | 97.96 |
| | FC8-V-SVD | 98.13 |
| AlexNet | FC6-A-PCA | 90.15 |
| | FC6-A-LDA | 97.93 |
| | FC7-A-PCA | 95.26 |
| | FC7-A-LDA | 97.28 |
| Scale-Invariant Feature Transform (SIFT) | S-PCA | 91.03 |
| | S-LDA | 94.40 |
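Table 2 compares fully connected (FC) layer features reduced with PCA versus SVD before classification. As an illustration of that reduction step only, the NumPy sketch below projects a stand-in feature matrix (random data, not real VGG-19 activations; the 4096 width matches FC7) down to a smaller number of components with each method. The component count `k = 50` is an arbitrary choice for the example, not a value taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4096))   # stand-in for 200 images' FC7 activations
k = 50                                  # illustrative number of retained components

# PCA: center the features, then project onto the top-k principal directions,
# which are the top right singular vectors of the centered matrix.
Xc = X - X.mean(axis=0)
_, _, Vt_pca = np.linalg.svd(Xc, full_matrices=False)
X_pca = Xc @ Vt_pca[:k].T               # shape (200, 50)

# SVD: rank-k projection of the raw, uncentered feature matrix.
U, S, Vt_svd = np.linalg.svd(X, full_matrices=False)
X_svd = U[:, :k] * S[:k]                # identical to X @ Vt_svd[:k].T

print(X_pca.shape, X_svd.shape)
```

The only difference between the two reductions here is mean-centering: PCA subtracts the per-feature mean before the decomposition, while the plain SVD factorizes the raw matrix, so on uncentered data the two projections generally differ.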
Table 3. Classification results of comparative performance.
| Techniques with Features | Classification Accuracy (%) |
|---|---|
| GLCM + SVM [45] | 82.00 |
| SVM + NN [46] | 89.60 |
| FCM, NN, shape [47] | 93.00 |
| HEDFD [48] | 94.60 |
| DWT + PCA [28] | 95.00 |
| DNN [3] | 96.00 |
| FC7-V-PCA * | 92.21 |
| FC7-V-SVD * | 98.34 |
| FC8-V-PCA * | 97.96 |
| FC8-V-SVD * | 98.13 |
* The proposed VGGNet features.

Share and Cite

MDPI and ACS Style

Mateen, M.; Wen, J.; Nasrullah; Song, S.; Huang, Z. Fundus Image Classification Using VGG-19 Architecture with PCA and SVD. Symmetry 2019, 11, 1. https://doi.org/10.3390/sym11010001