Proceeding Paper

Feature Extraction of Ophthalmic Images Using Deep Learning and Machine Learning Algorithms †

by Tunuri Sundeep *, Uppalapati Divyasree, Karumanchi Tejaswi, Ummadi Reddy Vinithanjali and Anumandla Kiran Kumar
Department of Artificial Intelligence and Machine Learning, School of Engineering, Malla Reddy University, Maisammaguda, Dulapally, Hyderabad 500100, Telangana, India
* Author to whom correspondence should be addressed.
Presented at the 4th International Electronic Conference on Applied Sciences, 27 October–10 November 2023; Available online: https://asec2023.sciforum.net/.
Eng. Proc. 2023, 56(1), 170; https://doi.org/10.3390/ASEC2023-15231
Published: 26 October 2023
(This article belongs to the Proceedings of The 4th International Electronic Conference on Applied Sciences)

Abstract:
Deep learning and machine learning algorithms have become the most popular methods for analyzing and extracting features, especially from medical images, and feature extraction has made this task much easier. Our aim is to determine which feature extraction technique works best for a given classifier. We used ophthalmic images and applied feature extraction techniques such as Gabor, LBP (Local Binary Pattern), HOG (Histogram of Oriented Gradients), and SIFT (Scale-Invariant Feature Transform); the extracted features were then passed to classifiers such as RFC (Random Forest Classifier), CNN (Convolutional Neural Network), SVM (Support Vector Machine), and KNN (K-Nearest Neighbors). We then compared the performance of each combination and identified which feature extraction technique gives the best performance for a given classifier. We achieved an accuracy of 94% for the Gabor feature extraction technique using the CNN classifier, 92% for the HOG feature extraction technique using the RFC classifier, 90% for the LBP feature extraction technique using the RFC classifier, and 92% for the SIFT feature extraction technique using the RFC classifier.

1. Introduction

Deep learning and machine learning have transformed the field of ophthalmology by providing powerful tools for analyzing and extracting meaningful information from ophthalmic images. Ophthalmic images, such as retinal fundus images, optical coherence tomography (OCT) scans, and fluorescein angiography images, are used for diagnosing and monitoring various eye diseases and conditions. Feature extraction plays a central role in machine learning and deep learning: it transforms raw data into a set of meaningful and easily interpretable features, which are then used as input to the algorithms. Feature extraction supports dimensionality reduction, noise reduction, and improved model performance.
In this paper, we surveyed which feature extraction technique works best with each algorithm for this model. We used extraction techniques such as LBP [1], SIFT, HOG [2], and Gabor, and algorithms such as SVM [1,3], CNN [4,5], KNN [2,6], and RFC [7]. We found that Gabor as a feature extraction technique and RFC as a classifier worked best and gave good results for this specific model.

2. Proposed Methodology

In this section, we describe the feature extraction techniques and how we apply them to our fundus/ophthalmic image dataset, with the goal of building classification models, reconstructing the images, and obtaining performance metrics. As shown in Figure 1, we use various feature extraction techniques (LBP, HOG, SIFT, and Gabor), evaluate each by applying various classification algorithms (SVM, CNN, RFC, and KNN), and compare the results to decide on the best classifier based on the metrics.

2.1. STEP-1

The first step is data preparation. There are multiple diseases related to the fundus of the eye; for our model, we use a dataset consisting of fundus images related to Diabetic Retinopathy (DR) [5,8]. As shown in Figure 2, there are five stages of DR: No DR, Mild DR, Moderate DR, Severe non-proliferative DR, and Proliferative DR. After collecting the dataset, the images were resized to the desired resolution (224 × 224 pixels) so that they could be used with any of the common pre-trained deep-learning classifiers.
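To make this step concrete, the following is a minimal sketch of loading and resizing such a dataset with OpenCV, assuming one folder of images per DR grade; the folder names, paths, and loading logic are illustrative assumptions, not the authors' actual setup.

```python
# Illustrative data-preparation sketch (assumed layout: one folder per DR grade).
import os

import cv2
import numpy as np

CLASSES = ["No_DR", "Mild", "Moderate", "Severe_NPDR", "PDR"]  # assumed folder names
IMG_SIZE = (224, 224)  # input size expected by common pre-trained CNNs

def load_dataset(root_dir):
    """Load fundus images, resize to 224 x 224, and return image/label arrays."""
    images, labels = [], []
    for label, cls in enumerate(CLASSES):
        cls_dir = os.path.join(root_dir, cls)
        for fname in sorted(os.listdir(cls_dir)):
            img = cv2.imread(os.path.join(cls_dir, fname))
            if img is None:
                continue  # skip unreadable files
            images.append(cv2.resize(img, IMG_SIZE))
            labels.append(label)
    return np.array(images), np.array(labels)
```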

2.2. STEP-2

Image preprocessing: We applied several preprocessing techniques to the dataset, namely resizing, denoising, and cropping; a short code sketch of these operations follows the descriptions below.
Resizing: adjusting the dimensions of an image to a desired size while maintaining its proportions.
Denoising: removing noise from an image so that it can be restored closer to its original form; denoising is crucial in contemporary image processing systems.
Cropping: removing unwanted parts of an image to focus on a particular region. It helps improve composition, remove distractions, and reduce image size while maintaining the desired proportions.
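A minimal sketch of these three operations with OpenCV is given below; the parameter values (target size, denoising strength, crop fraction) are illustrative choices, not the settings reported in this paper.

```python
import cv2

def resize(img, size=(224, 224)):
    # Adjust the image dimensions to the target size
    return cv2.resize(img, size, interpolation=cv2.INTER_AREA)

def denoise(img, strength=10):
    # Non-local means denoising for color images
    return cv2.fastNlMeansDenoisingColored(img, None, strength, strength, 7, 21)

def crop_center(img, frac=0.9):
    # Keep the central region, trimming the dark border around the fundus
    h, w = img.shape[:2]
    dh, dw = int(h * (1 - frac) / 2), int(w * (1 - frac) / 2)
    return img[dh:h - dh, dw:w - dw]
```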

2.3. STEP-3

Feature extraction methods (a combined code sketch follows the list):
  • HOG
In Figure 3, the image is divided into small groups of pixels that are linked together to form cells. For each cell, gradients and orientations are calculated as part of the object recognition process with HOG [2]; the resulting descriptors define local object appearance and shape within the image.
  • LBP
LBP stores local texture information to support tasks such as classification, detection, and identification (Figure 4), and is commonly used in image processing applications. LBP operates on 3 × 3 pixel neighborhoods: each pixel is compared with its immediate neighbors to obtain a local representation. LBP evaluates the points around a central pixel and determines whether each is greater than or less than the center (i.e., it generates a binary response) [1,2]. Neighbors with values less than the center pixel are encoded as 0 and all others as 1 in the binary code.
  • SIFT
SIFT converts an image's information into a collection of keypoints that can be used to find recurring patterns in other images (Figure 5). It is typically used in computer vision applications such as object identification and image matching. SIFT is a strong feature extraction technique that is both simple and effective; by removing redundant features, it reduces the size of the feature space, which has a substantial impact on machine learning training and is frequently exploited in large-scale applications.
  • Gabor
In computer vision and image processing, Gabor feature extraction (Figure 6) is a common method for examining the texture information of images. It is built on Gabor filters, mathematical operators that capture the direction, frequency, and phase of texture patterns. The fundamental idea is to convolve an image with a bank of Gabor filters, each designed to detect distinct textures at a particular orientation and frequency. The filter responses are then used as features in further analysis, such as object recognition or image segmentation.
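The sketch below illustrates all four extractors on a grayscale image using scikit-image and OpenCV; the parameters (HOG cell size, LBP radius, Gabor frequencies) and the averaging of SIFT descriptors into a fixed-length vector are illustrative assumptions, not the settings used in this study.

```python
import cv2
import numpy as np
from skimage.feature import hog, local_binary_pattern
from skimage.filters import gabor

def hog_features(gray):
    # Gradients and orientations accumulated over small cells, then block-normalized
    return hog(gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def lbp_features(gray, P=8, R=1):
    # Compare each pixel with its neighborhood and binary-encode the result;
    # the histogram of codes summarizes local texture
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / hist.sum()

def sift_features(gray):
    # Detect keypoints and their 128-D descriptors; average them into one
    # fixed-length vector so a standard classifier can consume the result
    # (an illustrative aggregation choice)
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(gray, None)
    return np.zeros(128) if descriptors is None else descriptors.mean(axis=0)

def gabor_features(gray, frequencies=(0.1, 0.2, 0.3)):
    # Convolve with Gabor filters at several frequencies and keep simple
    # response statistics as texture features
    feats = []
    for f in frequencies:
        real, _ = gabor(gray, frequency=f)
        feats.extend([real.mean(), real.var()])
    return np.array(feats)
```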

2.4. STEP-4

Classification (a combined code sketch follows the list):
  • CNN
CNN models, one of the earliest deep neural network architectures, insert hidden layers between the general layers so that the system's weights can learn more about the qualities contained in the input image [4]. The convolutional layer generates the output feature map by applying an array of weights to each input region of the picture, and the pooling layers compress the output of the convolutional layers. The fully connected layer, which comes last, aggregates the findings from earlier layers and produces an N-dimensional vector, where N is the total number of classes.
  • KNN
KNN is a supervised machine learning method that learns from a labelled training set by mapping the training data (X) to the labels (Y). The model uses only the training data; that is, it memorizes the whole training set and assigns each sample to the class held by the majority of its 'k' nearest neighbors, as determined by some distance measure [6]. In KNN classification, the class labels of a test sample's closest neighbors in the feature space determine the test sample's class label.
  • SVM
In supervised learning, SVM is frequently used for tasks such as image classification and regression analysis. It can locate the closest features and works well on challenging classes thanks to its memory efficiency [3]. To classify new data points quickly in the future, the SVM approach constructs a boundary, or hyperplane, that splits n-dimensional space into classes. The hyperplane is defined by the extreme points of the classes; these extreme instances are called support vectors, from which the SVM method derives its name.
  • RFC
The RFC approach belongs to the category of supervised classification methods. Random forests build on decision tree learning: a Random Forest Classifier consists of a large number of individually trained decision trees that together form a "forest". The argument for using many decision trees, i.e., an ensemble, rather than a single one is that multiple base learners can combine into a single strong and robust result. Given a set of input attributes and training points, the optimal split at each node is determined by minimizing the heterogeneity of the two resulting subsets of data.
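A minimal setup of the four classifiers is sketched below, with scikit-learn for RFC, KNN, and SVM and Keras for the CNN; the hyperparameters and the small CNN architecture are plausible stand-ins, since the paper does not specify them.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from tensorflow import keras

def make_classical_models():
    return {
        "RFC": RandomForestClassifier(n_estimators=100),  # ensemble ("forest") of trees
        "KNN": KNeighborsClassifier(n_neighbors=5),       # majority vote of 5 neighbors
        "SVM": SVC(kernel="rbf"),                         # max-margin hyperplane
    }

def make_cnn(input_dim, n_classes=5):
    # A small 1-D CNN over extracted feature vectors (reshaped to (input_dim, 1));
    # an assumed architecture, since the authors' exact network is not given.
    model = keras.Sequential([
        keras.layers.Conv1D(32, 3, activation="relu", input_shape=(input_dim, 1)),
        keras.layers.MaxPooling1D(2),
        keras.layers.Flatten(),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(n_classes, activation="softmax"),  # one unit per DR stage
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```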

2.5. STEP-5

Training: before testing, we trained the model, taking 80% of the data for training and 20% for testing. Each image was passed through each feature extraction technique, and the resulting features were passed through each classifier. The final step was to train and test the model; the outputs were recorded and the accuracies compared to decide which technique works best for each classifier.
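The loop sketched below mirrors this procedure for the classical classifiers: an 80/20 train/test split, then one accuracy score per classifier for a given feature matrix; variable names and the fixed random seed are illustrative.

```python
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def evaluate(features, labels, classifiers):
    # features: 2-D array with one extracted feature vector per image
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=42)  # 80/20 split
    results = {}
    for name, clf in classifiers.items():
        clf.fit(X_train, y_train)
        results[name] = accuracy_score(y_test, clf.predict(X_test))
    return results
```

Running this once per feature set (HOG, LBP, SIFT, Gabor) yields the accuracy grid compared in Section 3.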

3. Results

In this model, we used the different extraction methods to train each classifier. As can be seen in Figure 7a, with the CNN classifier the following accuracies were obtained for each feature extraction method: HOG 93%, LBP 34%, SIFT 82%, and Gabor 91%. Similarly, in Figure 7b, the SVM classifier yielded 50% accuracy for HOG, 50% for LBP, 34% for SIFT, and 92% for Gabor. In Figure 7c, the RFC classifier achieved 93% for HOG, 93% for LBP, 92% for SIFT, and 93% for Gabor. In Figure 7d, the KNN classifier achieved 68% for HOG, 68% for LBP, 72% for SIFT, and 72% for Gabor. Based on these accuracies, HOG was the strongest feature extraction technique for our CNN classifier; similarly, for the SVM classifier, Gabor performed best. For the RFC classifier, HOG, LBP, and Gabor gave similar accuracies, and for the KNN classifier, SIFT and Gabor gave similar results.
After analyzing the results, we found that RFC performed better than the other classifiers, and that the Gabor feature extraction technique improved the accuracy of this model.

4. Conclusions

In this paper, we explored the efficacy of feature extraction methods such as LBP, Gabor, HOG, and SIFT in combination with classifiers such as CNN, SVM, KNN, and RFC. Our model involved applying these feature extraction techniques to ophthalmic images for the purpose of classification; the extracted features were then used for training and performance testing of the aforementioned classifiers. The results demonstrated the efficacy of these techniques in capturing meaningful and discriminative information from ophthalmic images. The CNN, SVM, KNN, and RFC all demonstrated respectable accuracy and computational efficiency. After comparing all the classifiers and feature extraction techniques, we conclude that the Random Forest Classifier (RFC) and the Gabor feature extraction technique worked best for this model.

Author Contributions

Conceptualization, A.K.K.; methodology, T.S.; software, U.D.; validation, K.T., T.S. and U.D.; formal analysis, T.S.; investigation, T.S.; resources, K.T.; data curation, T.S.; writing—original draft preparation, U.R.V.; writing—review and editing, T.S.; visualization, T.S.; supervision, T.S.; project administration, A.K.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Vani Kumari, S.; Usha Rani, K. Analysis on various feature extraction methods for medical image classification. In Advances in Computational and Bio-Engineering: Proceeding of the International Conference on Computational and Bio Engineering, Tirupati, India, 27–28 December 2019; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; Volume 2, pp. 19–31. [Google Scholar]
  2. Khalil, M.; Ayad, H.; Adib, A. Performance evaluation of feature extraction techniques in MR-Brain image classification system. Procedia Comput. Sci. 2018, 127, 218–225. [Google Scholar] [CrossRef]
  3. Jeyabharathi, D.; Suruliandi, A. Performance analysis of feature extraction and classification techniques in CBIR. In Proceedings of the 2013 International Conference on Circuits, Power and Computing Technologies (ICCPCT), Nagercoil, India, 20–21 March 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1211–1214. [Google Scholar]
  4. Tariq, H.; Rashid, M.; Javed, A.; Zafar, E.; Alotaibi, S.S.; Zia, M.Y.I. Performance analysis of deep-neural-network-based automatic diagnosis of diabetic retinopathy. Sensors 2021, 22, 205. [Google Scholar] [CrossRef] [PubMed]
  5. Kamothi, N.; Thakur, R. Automatic Detection of Diabetic Retinopathy Using Convolution Neural Network. IRJET 2020, 7, 3810. [Google Scholar]
  6. Reddy, S.K.; Jaya, T. Feature Extraction and Reconstruction of Medical Images using Two-Dimensional Principal Component Analysis. J. Phys. Conf. Ser. 2021, 1817, 012012. [Google Scholar] [CrossRef]
  7. Sarhan, M.H.; Nasseri, M.A.; Zapp, D.; Maier, M.; Lohmann, C.P.; Navab, N.; Eslami, A. Machine learning techniques for ophthalmic data processing: A review. IEEE J. Biomed. Health Inform. 2020, 24, 3338–3350. [Google Scholar] [CrossRef] [PubMed]
  8. Porwal, P.; Pachade, S.; Kamble, R.; Kokare, M.; Deshmukh, G.; Sahasrabuddhe, V.; Meriaudeau, F. Indian diabetic retinopathy image dataset (IDRiD): A database for diabetic retinopathy screening research. Data 2018, 3, 25. [Google Scholar] [CrossRef]
Figure 1. Architecture of the Proposed Methodology.
Figure 2. Types of stages related to Diabetic Retinopathy.
Figure 3. Original image and extracted HOG features.
Figure 4. Original image and extracted LBP features.
Figure 5. Original image and extracted SIFT features.
Figure 6. Original image and extracted Gabor features.
Figure 7. Accuracy of the classifiers: (a) CNN; (b) SVM; (c) RFC; (d) KNN.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
