Article

Automated Detection of Corneal Ulcer Using Combination Image Processing and Deep Learning

1 Biomedical Systems and Medical Informatics Engineering, Yarmouk University, Irbid 21163, Jordan
2 Department of Computer Engineering, Yarmouk University, Irbid 21163, Jordan
3 Faculty of Electrical Engineering & Technology, Campus Pauh Putra, Universiti Malaysia Perlis (UniMAP), Arau, Perlis 02600, Malaysia
4 Advanced Computing (AdvComp), Centre of Excellence (CoE), Campus Pauh Putra, Universiti Malaysia Perlis (UniMAP), Arau, Perlis 02600, Malaysia
5 The Institute of Biomedical Technology, King Hussein Medical Center, Royal Jordanian Medical Service, Amman 11855, Jordan
* Author to whom correspondence should be addressed.
Diagnostics 2022, 12(12), 3204; https://doi.org/10.3390/diagnostics12123204
Submission received: 15 October 2022 / Revised: 26 November 2022 / Accepted: 12 December 2022 / Published: 17 December 2022
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract

Corneal ulcers are among the most common eye diseases. They arise from various infections, including bacterial, viral, and parasitic, and may lead to ocular morbidity and visual disability. Early detection can therefore reduce the probability of visual impairment. One of the most common techniques for corneal ulcer screening is slit-lamp imaging. This paper proposes two highly accurate automated systems to localize the corneal ulcer region: an image processing approach based on the Hough transform and a deep learning approach. The two methods are validated and tested on the publicly available SUSTech-SYSU database, and their accuracies are evaluated and compared. Both systems achieve an accuracy of more than 90%. However, the deep learning approach is more accurate than the traditional image processing techniques, reaching 98.9% accuracy and a Dice similarity of 99.3%. On the other hand, the first method does not require parameter optimization or an explicit training model, which gives it an advantage over the deep learning model, since the latter needs a large training dataset to build reliable software for clinics. Both proposed methods can perform well in the medical field, helping physicians assess corneal ulcer severity and improve treatment efficiency.

1. Introduction

A corneal ulcer is an illness of the cornea; it results from infection or injury and leads to ocular morbidity [1,2]. The likelihood of vision impairment is decreased by early identification and differentiation of the various ulcer conditions. Slit-lamp imaging techniques used in conventional clinical procedures can be tedious, costly, and time-consuming. Several issues make it challenging to segment corneal ulcers appropriately: significant discrepancies in the pathological morphologies of point-flaky and flaky corneal ulcers, hazy borders, noise interference, and a dearth of reliable ground-truth slit-lamp pictures. Various segmentation procedures are needed to recognize and quantify corneal ulcers from ocular staining pictures. Due to the varied sizes and forms of point-flaky mixed corneal ulcers and flaky corneal ulcers, it is difficult to segment them in a slit-lamp picture. The lack of high-quality datasets containing both corneal ulcer images and their ground-truth segmentations, particularly for supervised learning-based segmentation algorithms, has hampered the development of such systems [3,4]. Corneal segmentation is the first step for diagnosing and assessing ocular surface damage, and extracting this information from fluorescein images is a major challenge for specialists. An automated method may therefore help the specialist localize and extract the corneal ulcer region for further assessment. This paper proposes two methods for corneal ulcer segmentation: image processing techniques and semantic segmentation using deep learning. Section 2 is devoted to the most recent studies on ulcer segmentation approaches.

2. Review of Related Studies

In 2018, Lijie Deng et al. proposed a pipeline for automatically extracting corneal ulcers that uses machine learning and image processing techniques based on fluorescein staining images. Each image was segmented using simple linear iterative clustering, and a support vector machine discriminated between the two classes, followed by erosion and dilation procedures to polish the images. The suggested method achieved a mean accuracy of 98.4%, significantly outperforming Otsu thresholding and active contour techniques. The limitation of this study is that the suggested model is semiautomatic, since it uses manually labeled landmarks [5]. In 2019, Zhenrong Liu et al. developed an automatic pipeline for segmenting flaky corneal ulcers from fluorescein staining images. They employed a combination of Gaussian Mixture Models (GMM) and Otsu thresholding in the HSV color space, with the number of Gaussian components determined using information theory. The model was validated using 150 images and achieved a Dice similarity coefficient of 0.88 [6].
In 2020, Jessica Loo et al. developed SLIT-Net, an automatic algorithm for the segmentation of microbial keratitis biomarkers under two different illuminations. SLIT-Net segments and identifies four pathological ROIs on diffuse white light images, one pathological ROI on diffuse blue light images, and two pathological ROIs on all images. The model was tested using manually annotated slit-lamp photographs from 133 eyes. Using seven-fold cross-validation, they achieved Dice scores ranging from 0.62 to 0.95 across all ROIs [7]. Additionally, in 2020, Pablo Lima et al. suggested a semiautomatic approach using supervised machine learning and image processing techniques to segment corneal lesions. They evaluated multi-layer perceptron, SVM, K-nearest neighbors, and random forest algorithms. Random forest outperformed all other algorithms, achieving a Dice similarity of 0.85 and an accuracy of 99.08% [8]. Finally, Junyan Lyu et al. proposed a transfer learning-based model for corneal segmentation using 712 images from the publicly available SUSTech-SYSU dataset. The suggested model contained an encoder-decoder with an Xception feature extractor using atrous spatial pyramid pooling. The proposed method achieved a Dice score of 0.9582, 97.63% accuracy, and 95.37% sensitivity [9].
In 2021, Veena Mayya et al. [10] developed a multi-scale convolutional neural network (MS-CNN) for accurate corneal segmentation. The suggested model consisted of a deep neural pipeline that automatically segments images, followed by a ResNeXt for differentiation. The authors detected fungal keratitis with 88.96% accuracy using 133 images from the Loo et al. dataset [7]. Additionally, in 2021, Tingting Wang et al. proposed a novel Corneal Ulcer Segmentation Network (CU-SegNet) to segment corneal ulcers of different shapes and sizes in fluorescein images, using a U-shaped encoder-decoder structure and two novel modules. To demonstrate the network's effectiveness, it was evaluated on the SUSTech-SYSU dataset and achieved a Dice coefficient of 0.8914 [11]. To further improve segmentation accuracy, in 2022, the same research group developed a semi-supervised multi-scale self-transformer Generative Adversarial Network (Semi-MsST-GAN) for corneal ulcer segmentation in slit-lamp images. Again, they evaluated their model using the SUSTech-SYSU dataset and achieved better segmentation performance than state-of-the-art CNN-based methods. However, the limited number of slit-lamp images available for training and evaluation is a limitation of both studies [12].
This paper compares the effectiveness of image processing techniques and deep learning approaches for corneal ulcer region segmentation. Section 3 presents the two proposed methods, while Section 4 illustrates the results and discusses the performance of each method in terms of accuracy, sensitivity, and specificity. Finally, Section 5 is devoted to the conclusion and future work.

3. Materials and Methods

This paper proposes two methods for the automatic segmentation of corneal ulcers: an image processing method and a semantic segmentation method. The dataset utilized in this paper is the publicly available SUSTech-SYSU database [13,14,15]. It consists of 712 fluorescein-stained images of the ocular surface region acquired from patients with different levels of corneal ulcer disease. Of these, 354 images are labeled with the localized corneal ulcer region. The labeled images are used for evaluating both methods and, in addition, for training the deep learning model in the semantic segmentation procedure. The following subsections describe the proposed methods.

3.1. Image Processing with Hough Transform

The first method utilizes the benefits of image processing techniques with the Hough transform to segment the corneal ulcer region. The designed method is shown in Figure 1.
The corneal ulcer region segmentation system proposed in this work is fully automated. Segmentation of the corneal ulcer regions from the whole RGB eye image proceeds through several stages. First, the image undergoes a preprocessing stage that excludes most unwanted details, particularly the specular reflection region. This is performed by taking the blue channel of the image, squaring its pixel values, and binarizing the output. Next, we apply a morphological closing operation, followed by computing the complement, as illustrated in Figure 2 for one example image from the corneal ulcer dataset.
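The following is a minimal sketch of this specular-reflection masking step, assuming an RGB image loaded as a floating-point array; the file name, structuring-element size, and Otsu thresholding are illustrative assumptions rather than the exact settings used in the paper.

```python
import numpy as np
from skimage import io, filters, morphology

def specular_reflection_mask(rgb):
    """Suppress specular highlights: square the blue channel, binarize,
    close small gaps, then take the complement."""
    blue_sq = rgb[..., 2].astype(float) ** 2             # squaring emphasizes bright reflections
    binary = blue_sq > filters.threshold_otsu(blue_sq)   # assumed automatic threshold
    closed = morphology.binary_closing(binary, morphology.disk(5))
    return ~closed                                        # complement: reflections become zeros

image = io.imread("corneal_ulcer_sample.png") / 255.0    # hypothetical file name
reflection_mask = specular_reflection_mask(image)        # corresponds to Figure 2b
```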
The binary image shown in Figure 2b is then multiplied by the green channel of the original color image after smoothing with a Gaussian filter, giving the output shown in Figure 3a. The pixel values are then squared and binarized to give the image shown in Figure 3b.
Next, an ellipse mask is designed with semi-minor and semi-major axes and centroid coordinates matching the binary image shown in Figure 3b. This ellipse mask, shown in Figure 4a, is used to exclude most of the remaining unwanted details by multiplying it with the closed binary image of Figure 3b, which gives the image shown in Figure 4b. The final step of the preprocessing stage is a thinning operation on the image shown in Figure 4b, which gives the image shown in Figure 5.
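A sketch of these remaining preprocessing steps (green-channel masking, binarization, ellipse masking, and thinning) is given below, continuing from the previous snippet; the Gaussian sigma, ellipse geometry, and structuring elements are illustrative placeholders, not the paper's exact values.

```python
import numpy as np
from scipy import ndimage
from skimage import draw, filters, morphology

# Green channel smoothed and masked by the specular-reflection mask (Figure 3a)
green = ndimage.gaussian_filter(image[..., 1], sigma=3) * reflection_mask
green_bin = green ** 2 > filters.threshold_otsu(green ** 2)            # Figure 3b
green_closed = morphology.binary_closing(green_bin, morphology.disk(3))

# Ellipse mask roughly matching the eye region (centroid and semi-axes assumed)
rows, cols = green_bin.shape
rr, cc = draw.ellipse(rows // 2, cols // 2, rows // 3, cols // 2 - 10, shape=green_bin.shape)
ellipse_mask = np.zeros_like(green_bin)
ellipse_mask[rr, cc] = True

pre = ellipse_mask & green_closed          # Figure 4b: details outside the eye removed
thinned = morphology.thin(pre)             # Figure 5: one-pixel-wide contours
```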
In general, the eye contour extraction shown in Figure 5 is insufficiently accurate due to the many details in the eye image. To better delineate the eye border, we perform a second stage: eye border recognition using a suitable mathematical model of the eye border together with a recognition algorithm. The Hough transform is used as a parametric shape recognition algorithm, where the eye border parametric shape is generated using a closed mathematical formula introduced by Johan Gielis, namely the Superformula [16]. It models curves called Gielis curves, described in polar coordinates by the radius r(ϕ) in the following equation:
$$ r(\phi) = \frac{1}{\left[ \left| \tfrac{1}{a}\cos\!\left(\tfrac{m\phi}{4}\right) \right|^{n_2} + \left| \tfrac{1}{b}\sin\!\left(\tfrac{m\phi}{4}\right) \right|^{n_3} \right]^{1/n_1}}, $$
where r is the radial distance to the origin, ϕ is the polar angle, and the rational number m sets the rotational symmetry. The exponents n1, n2, and n3, together with the parameter m, allow a greater degree of freedom and enable the Superformula to represent many useful shapes. The parameters chosen to mimic the eye border are 1, 1, 1, and 2 for n1, n2, n3, and m, respectively, which gives the shape shown in Figure 6a. To determine the iris region, where the cornea is positioned directly in front of the iris and pupil, a disk is designed with a diameter equal to the semi-minor axis and a centroid equal to the center of the recognized eye shape, as shown in Figure 6b. Applying this concept to the eye image border and cornea region of the adopted corneal ulcer image sample gives the outputs shown in Figure 7 and Figure 8, respectively. Next, the ulcer region of interest is separated by multiplying the specular reflection mask shown in Figure 2b with the image shown in Figure 9a to get the image shown in Figure 9b.
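As an illustration, the following short snippet generates a Gielis curve with the reported eye-border parameters (n1 = n2 = n3 = 1, m = 2); the scale terms a = b = 1 and the sampling density are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def superformula(phi, m, n1, n2, n3, a=1.0, b=1.0):
    """Radial distance r(phi) of a Gielis Superformula curve."""
    term = (np.abs(np.cos(m * phi / 4) / a) ** n2
            + np.abs(np.sin(m * phi / 4) / b) ** n3)
    return term ** (-1.0 / n1)

phi = np.linspace(0, 2 * np.pi, 720)
r = superformula(phi, m=2, n1=1, n2=1, n3=1)       # parameters reported for the eye border
plt.plot(r * np.cos(phi), r * np.sin(phi))
plt.axis("equal")
plt.title("Superformula eye-border template (cf. Figure 6a)")
plt.show()
```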
The pixel values of the green channel of the image shown in Figure 9b are squared and binarized, yielding the image shown in Figure 10a. Each mask segment is then tested in the segmentation system: if a segment is connected to the eye border and its semi-major to semi-minor axis ratio is greater than a certain threshold, it is considered an accumulation of fluorescein stain at the eyelids and is excluded from the final ulcer-region result, as shown in Figure 10b. Finally, the original image is masked with the remaining mask segments, giving the result shown in Figure 11.
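A sketch of this exclusion rule, using connected-component properties, is shown below; the axis-ratio threshold of 3.0 is an illustrative value, not the threshold used in the paper.

```python
import numpy as np
from skimage import measure

def remove_eyelid_stain(candidate_mask, border_mask, ratio_threshold=3.0):
    """Drop segments that touch the recognized eye border and are strongly
    elongated (semi-major / semi-minor axis ratio above the threshold)."""
    labeled = measure.label(candidate_mask)
    keep = np.zeros_like(candidate_mask, dtype=bool)
    for region in measure.regionprops(labeled):
        segment = labeled == region.label
        touches_border = np.any(segment & border_mask)
        elongation = region.major_axis_length / max(region.minor_axis_length, 1e-6)
        if not (touches_border and elongation > ratio_threshold):
            keep |= segment          # retain segments that are not eyelid stain
    return keep
```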

3.2. Semantic Segmentation

The second method that is proposed in this paper is semantic segmentation. Figure 12 demonstrates the steps for automated segmentation using a deep learning model.
As shown in Figure 12, the system splits the dataset (images and their labels) into training and test partitions. The pre-trained convolutional network used in this paper is ResNet-18 [15,17]. The pre-trained CNN model was fine-tuned on the training data and evaluated on the test data.
Semantic segmentation assigns image pixels to one or more semantically interpretable classes rather than to individual real-world objects. Region proposal and annotation is the process of categorizing pixel values into distinct groups using a CNN, where candidate object patches, small groups of pixels that most likely belong to the same object, serve as region proposals.
The semantic segmentation procedure starts with an encoder network followed by a decoder network. The encoder is typically a pre-trained network such as a ResNet. The ResNet used in this paper is ResNet-18, a member of the residual network family that won the 2015 ImageNet competition; it is well known for its depth and use of residual blocks [18]. These blocks ease the training of deep networks by introducing identity skip connections, which allow layers to copy their inputs to the next layer [19].
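For illustration, a minimal PyTorch sketch of such an identity-skip residual block is given below; the paper's model itself was built in MATLAB, so this is only an illustration of the idea, not the authors' code.

```python
import torch
from torch import nn

class BasicBlock(nn.Module):
    """Identity-skip residual block of the kind ResNet-18 is built from."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                                  # the skip connection copies the input
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)              # add the input back before the activation
```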
To create a segmentation map, the encoder may be a convolutional neural network, and the decoder may be based on deconvolutional or transposed-convolution networks [20,21]. Figure 13 describes the semantic segmentation procedure, which is based mainly on the deep learning approach [22]. The figure illustrates that the input image passes through a trained deep learning model, ending with the localization of the ulcer region.
The pre-trained ResNet-18 was used, and the data were divided into 70% training and 30% testing. The images were resized to 224 × 224 × 3 to match the input requirements of the first layer of ResNet-18. The model was trained using MATLAB® on a single CPU. The hyper-parameters were the Adam optimization method with an initial learning rate of 0.0001, a mini-batch size of 32, and a maximum of 50 epochs.
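The snippet below is a hedged PyTorch sketch of an equivalent training setup (70/30 split, 224 × 224 × 3 inputs, Adam with learning rate 0.0001, mini-batch 32, 50 epochs). The original experiments were run in MATLAB; the torchvision DeepLabv3 network and the dummy tensors here are stand-ins (torchvision ≥ 0.13 API assumed), and real code would load the 354 labeled SUSTech-SYSU images instead.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset, random_split
from torchvision.models.segmentation import deeplabv3_resnet50

# Dummy stand-in data: replace with the 354 labeled fluorescein images and masks.
images = torch.rand(20, 3, 224, 224)
masks = torch.randint(0, 2, (20, 224, 224))
full_ds = TensorDataset(images, masks)

n_train = int(0.7 * len(full_ds))                      # 70% training, 30% testing
train_ds, test_ds = random_split(full_ds, [n_train, len(full_ds) - n_train])

model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=2)
optimizer = optim.Adam(model.parameters(), lr=1e-4)    # initial learning rate 0.0001
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(50):                                # maximum of 50 epochs
    for batch_images, batch_masks in DataLoader(train_ds, batch_size=32, shuffle=True):
        optimizer.zero_grad()
        logits = model(batch_images)["out"]            # per-pixel class scores
        loss = criterion(logits, batch_masks.long())
        loss.backward()
        optimizer.step()
```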

4. Results and Discussion

Both methods are applied to the whole dataset, trained, validated, and tested to localize ulcer regions in the cornea.

4.1. Image Processing and Hough Transform

The method is applied to the whole set of images. Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18 depict some of the obtained results for different shapes of ulcer regions. Each figure shows the original image, the segmentation output, and the corresponding ground truth.
Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18 illustrate the output of the first proposed method. All figures show the ability of the proposed method to localize the ulcer region with high similarity to the ground truth. Similarity indices, such as the Jaccard similarity index and the intersection over union (IOU), are calculated for each case. The similarity indices are almost 100% for all presented images except the image in Figure 16. As shown in Figure 16, the method detected an ulcer region at the bottom of the eye that is not present in the ground truth; in this case, the Jaccard and IOU indices are very low. However, the proposed method may be capable of distinguishing ulcer regions from other eye regions better than manual segmentation.

4.2. Semantic Segmentation

After training the model on 70% of the whole dataset, accuracy, sensitivity, and specificity were calculated for the training and test stages. Accuracy is the percentage of correctly classified pixels out of all pixels. Table 1 describes the sensitivity, accuracy, and specificity of the semantic deep learning segmentation for both the training and test stages [23,24,25,26,27,28,29,30,31]. The metrics are defined as follows:
$$ \text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} $$
$$ \text{Specificity} = \frac{TN}{FP + TN} $$
$$ \text{Sensitivity} = \frac{TP}{TP + FN} $$
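For concreteness, these pixel-wise metrics can be computed from a predicted binary mask and its ground truth as sketched below (an illustrative helper, not code from the paper).

```python
import numpy as np

def pixel_metrics(pred, truth):
    """Accuracy, specificity, and sensitivity from two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    specificity = tn / (fp + tn)
    sensitivity = tp / (tp + fn)
    return accuracy, specificity, sensitivity
```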
The proposed method is applied to the dataset. Figure 19, Figure 20, Figure 21, Figure 22 and Figure 23 illustrate the output of the deep learning model. Each figure shows the original image and the corresponding ulcer region localized by the deep learning model.
Figure 19, Figure 20, Figure 21, Figure 22 and Figure 23 illustrate how sensitive the model is to the ulcer region. In addition, the time required for each test image is less than 1 s, implying that the second proposed method is accurate, sensitive, and fast once the AI model is built.
The two methods are compared in terms of sensitivity, accuracy, specificity, Jaccard index, and Dice similarity. The Jaccard index is the number of correctly classified (true positive) pixels divided by the union of the ground-truth and predicted pixels. It is also known as the intersection over union (IOU), as given in the corresponding equation [31]:
$$ IOU = \frac{TP}{TP + FP + FN} $$
On the other hand, the Dice similarity is defined as twice the area of intersection divided by the sum of the number of predicted pixels and the number of ground-truth pixels; it is equivalent to the F1 score. The corresponding equation gives the relation [31]:
$$ DSC = \frac{2\,TP}{2\,TP + FP + FN} $$
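These overlap measures can be computed from the same confusion counts, as in the illustrative helper below.

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Jaccard index (IOU) and Dice similarity (F1) from two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    iou = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    return iou, dice
```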
All evaluation metrics are computed on the same test data, which comprises 30% of the whole dataset (107 images). Table 2 depicts the performance of each method on the same images.
Table 2 summarizes the results of both methods and shows the benefit of deep learning techniques over traditional image processing tools. In terms of accuracy, specificity, Jaccard similarity, and Dice similarity, the second approach scores higher than the first, although it is less sensitive than the first method. This reflects the fact that a deep learning approach needs a large dataset to become robust and highly sensitive by optimizing its training parameters. On the other hand, the time required by the second approach is much shorter: the first method needs almost 30 s to detect the ulcer region, whereas the second needs only about 1 s per test image. Therefore, the second method is a promising approach for ulcer segmentation in the medical field. Furthermore, building a sensitive and reliable model requires training the semantic model on a large dataset.
Figure 24 summarizes the performance of each method. Both methods are effective, as shown in the corresponding figures, and their IOU and Dice similarity are almost the same. Based on the experiments carried out in this paper, the time required to segment ulcers in a single image is about 1 s using AI, whereas image processing needs about 30 s.
This study is also compared with the literature that used the same dataset. Table 3 lists the performance of both proposed methods and of previous studies in terms of accuracy, sensitivity, specificity, and Dice index.
As illustrated in Table 3, both methods are effective and competitive for ulcer detection.

5. Conclusions

A corneal ulcer is a common corneal disease. It causes ocular morbidity due to injury or infection by bacteria, viruses, or parasites. Early diagnosis of an ulcer decreases the chance of vision impairment. Employing slit-lamp imaging techniques in clinics can be tedious, expensive, and time-consuming, and localization of ulcer regions in slit-lamp images influences the quality of diagnosis.
Manual detection requires highly experienced physicians and is not always accurate. Automated segmentation of the corneal ulcer region improves the assessment method and helps physicians diagnose accurately.
This paper proposed two methods to extract the ulcer region automatically. The first approach utilizes image processing techniques with the Hough transform to localize the corneal ulcer-affected segment. The second approach is designed based on deep learning algorithms. The two methods are trained and evaluated in terms of performance metrics: accuracy, sensitivity, specificity, Jaccard similarity, Dice similarity, and IOU. The results show that both methods are accurate, but deep learning is more accurate than image processing. However, image processing is more sensitive to ulcer regions, whereas the deep learning method has higher specificity. This study recommends exploiting the properties of image processing algorithms and artificial intelligence (AI) to guide residents in extracting the affected ulcer region.
The sensitivity of the AI model can be enhanced by using a larger dataset to achieve a more sensitive, reliable, and robust model. Both approaches facilitate finding an appropriate treatment based on the assessment report, which decreases the probability of visual impairment.

Author Contributions

Conceptualization I.A.Q., H.A., Y.A.-I. and M.A.; methodology, I.A.Q., H.A., A.Z., Y.A.-I. and M.A.; software, I.A.Q., H.A., M.A. and A.Z.; validation, H.A., Y.A.-I., W.A.M. and M.A.; formal analysis, Y.A.-I., H.A. and M.A.; writing—original draft preparation, I.A.Q., Y.A.-I., H.A. and W.A.M.; writing—review and editing, I.A.Q., H.A., W.A.M., M.A., Y.A.-I. and A.Z.; and visualization, H.A. and A.Z.; supervision, I.A.Q., H.A. and W.A.M.; project administration, I.A.Q., H.A., W.A.M. and Y.A.-I. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no specific external funding. The authors thank Wan Azani Mustafa for handling the funding arrangements.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset analyzed in this study was derived from the publicly available SUSTech-SYSU dataset. Available online: https://github.com/CRazorback/The-SUSTech-SYSU-dataset-for-automatically-segmenting-and-classifying-corneal-ulcers (accessed on 1 May 2022).

Acknowledgments

The authors would like to thank the authors of the dataset for making it available online.

Conflicts of Interest

The authors declare no conflicts of interest regarding the present study.

References

  1. Alhajraf, K.; Lin, S.R.; Jacobs, D.S. A corneal ring ulcer. Am. J. Ophthalmol. Case Rep. 2020, 20, 100856. [Google Scholar] [CrossRef] [PubMed]
  2. Mansoor, H.; Tan, H.C.; Lin, M.T.-Y.; Mehta, J.S.; Liu, Y.-C. Diabetic Corneal Neuropathy. J. Clin. Med. 2020, 9, 3956. [Google Scholar] [CrossRef] [PubMed]
  3. Akram, A.; Debnath, R. An Efficient Automated Corneal Ulcer Detection Method using Convolutional Neural Network. In Proceedings of the 2019 22nd International Conference on Computer and Information Technology (ICCIT), Dhaka, Bangladesh, 18–20 December 2019. [Google Scholar] [CrossRef]
  4. Im, J.; Kim, D. Corneal Ulcers Detection Using Random Seed Appointment Algorithm. J. Inst. Electron. Inf. Eng. 2019, 56, 53–66. [Google Scholar] [CrossRef]
  5. Deng, L.; Huang, H.; Yuan, J.; Tang, X. Superpixel-based automatic segmentation of corneal ulcers from ocular staining images. In Proceedings of the 2018 IEEE 23rd International Conference on Digital Signal Processing (DSP), Shanghai, China, 19–21 November 2018; pp. 1–5. [Google Scholar]
  6. Liu, Z.; Shi, Y.; Zhan, P.; Zhang, Y.; Gong, Y.; Tang, X. Automatic corneal ulcer segmentation combining Gaussian mixture modeling and Otsu method. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 6298–6301. [Google Scholar]
  7. Loo, J.; Kriegel, M.F.; Tuohy, M.M.; Kim, K.H.; Prajna, V.; Woodward, M.A.; Farsiu, S. Open-source automatic segmentation of ocular structures and biomarkers of microbial keratitis on slit-lamp photography images using deep learning. IEEE J. Biomed. Health Inform. 2020, 25, 88–99. [Google Scholar] [CrossRef]
  8. Lima, P.V.; de MSVeras, R.; Vogado, L.H.; Portela, H.M.; de Almeida, J.D.; Aires, K.R.; Leite, D. A semiautomatic segmentation approach to corneal lesions. Comput. Electr. Eng. 2020, 84, 106625. [Google Scholar] [CrossRef]
  9. Lyu, J.; Qiu, J.; Deng, L.; Zhang, Y.; Ye, T.T.T.; Tang, X. Transfer Learning for Automatic Cornea Segmentation based on Ocular Staining Images. In Proceedings of the Fourth International Symposium on Image Computing and Digital Medicine, Shenyang, China, 5–7 December 2020; pp. 108–111. [Google Scholar]
  10. Mayya, V.; Kamath Shevgoor, S.; Kulkarni, U.; Hazarika, M.; Barua, P.D.; Acharya, U.R. Multi-scale convolutional neural network for accurate corneal segmentation in early detection of fungal keratitis. J. Fungi 2021, 7, 850. [Google Scholar] [CrossRef]
  11. Wang, T.; Zhu, W.; Wang, M.; Chen, Z.; Chen, X. Cu-Segnet: Corneal Ulcer Segmentation Network. In Proceedings of the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), Nice, France, 13–16 April 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1518–1521. [Google Scholar]
  12. Wang, T.; Wang, M.; Zhu, W.; Wang, L.; Chen, Z.; Peng, Y.; Chen, X. Semi-MsST-GAN: A Semi-Supervised Segmentation Method for Corneal Ulcer Segmentation in Slit-Lamp Images. Front. Neurosci. 2021, 15, 1705. [Google Scholar] [CrossRef]
  13. Deng, L.; Lyu, J.; Huang, H.; Deng, Y.; Yuan, J.; Tang, X. The SUSTech-SYSU dataset for automatically segmenting and classifying corneal ulcers. Sci. Data 2020, 7, 23. [Google Scholar] [CrossRef]
  14. Wang, Z.; Lyu, J.; Luo, W.; Tang, X. Adjacent Scale Fusion and Corneal Position Embedding for Corneal Ulcer Segmentation. In Ophthalmic Medical Image Analysis, OMIA 2021; Lecture Notes in Computer Science; Fu, H., Garvin, M.K., MacGillivray, T., Xu, Y., Zheng, Y., Eds.; Springer: Cham, Switzerland, 2021; Volume 12970. [Google Scholar]
  15. Alquran, H.; Al-Issa, Y.; Alsalatie, M.; Mustafa, W.A.; Qasmieh, I.A.; Zyout, A. Intelligent Diagnosis and Classification of Keratitis. Diagnostics 2022, 12, 1344. [Google Scholar] [CrossRef]
  16. Gielis, J. A generic geometric transformation that unifies a wide range of natural and abstract shapes. Am. J. Bot. 2003, 90, 333–338. [Google Scholar] [CrossRef]
  17. Alquran, H.; Mustafa, W.A.; Qasmieh, I.A.; Yacob, Y.M.; Alsalatie, M.; Al-Issa, Y.; Alqudah, A.M. Cervical Cancer Classification Using Combined Machine Learning and Deep Learning Approach. CMC-Comput. Mater. Contin. 2022, 72, 5117–5134. [Google Scholar] [CrossRef]
  18. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  19. Zhou, Q.; Zhu, W.; Li, F.; Yuan, M.; Zheng, L.; Liu, X. Transfer Learning of the ResNet-18 and DenseNet-121 Model Used to Diagnose Intracranial Hemorrhage in CT Scanning. Curr. Pharm. Des. 2022, 28, 287–295. [Google Scholar] [CrossRef] [PubMed]
  20. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
  21. Brostow, G.J.; Fauqueur, J.; Cipolla, R. Semantic object classes in video: A high-definition ground truth database. Pattern Recognit. Lett. 2009, 30, 88–97. [Google Scholar] [CrossRef]
  22. Shah, M. Semantic Segmentation Using Fully Convolutional Networks Over the Years. Meet Shah Blog Website. 2017. Available online: https://meetshah1995.github.io/semantic-segmentation/deep-learning/pytorch/visdom/2017/06/01/semantic-segmentation-over-the-years.html (accessed on 15 September 2022).
  23. Madani, A.; Namazi, B.; Altieri, M.S.; Hashimoto, D.A.; Rivera, A.M.; Pucher, P.H.; Alseidi, A. Artificial intelligence for intraoperative guidance: Using semantic segmentation to identify surgical anatomy during laparoscopic cholecystectomy. Ann. Surg. 2022, 276, 363–369. [Google Scholar] [CrossRef] [PubMed]
  24. Irfan, R.; Almazroi, A.A.; Rauf, H.T.; Damaševičius, R.; Nasr, E.A.; Abdelgawad, A.E. Dilated semantic segmentation for breast ultrasonic lesion detection using parallel feature fusion. Diagnostics 2021, 11, 1212. [Google Scholar] [CrossRef]
  25. Khalifa, N.E.M.; Manogaran, G.; Taha, M.H.N.; Loey, M. A deep learning semantic segmentation architecture for COVID-19 lesions discovery in limited chest CT datasets. Expert Syst. 2022, 39, e12742. [Google Scholar] [CrossRef]
  26. Tiwari, T.; Saraswat, M. A new modified-unet deep learning model for semantic segmentation. Multimed. Tools Appl. 2022, 1–21. [Google Scholar] [CrossRef]
  27. Ruiz-Santaquiteria, J.; Bueno, G.; Deniz, O.; Vallez, N.; Cristobal, G. Semantic versus instance segmentation in microscopic algae detection. Eng. Appl. Artificial Intell. 2020, 87, 103271. [Google Scholar] [CrossRef]
  28. Sambyal, N.; Saini, P.; Syal, R.; Gupta, V. Modified U-Net architecture for semantic segmentation of diabetic retinopathy images. Biocybern. Biomed. Eng. 2020, 40, 1094–1109. [Google Scholar] [CrossRef]
  29. Kar, J.; Cohen, M.V.; McQuiston, S.P.; Malozzi, C.M. A deep-learning semantic segmentation approach to fully automated MRI-based left-ventricular deformation analysis in cardiotoxicity. Magn. Reson. Imaging 2021, 78, 127–139. [Google Scholar] [CrossRef]
  30. Nurmaini, S.; Tama, B.A.; Rachmatullah, M.N.; Darmawahyuni, A.; Sapitri, A.I.; Firdaus, F.; Tutuko, B. An improved semantic segmentation with region proposal network for cardiac defect interpretation. Neural Comput. Appl. 2022, 3, 13937–13950. [Google Scholar] [CrossRef]
  31. Harkat, H.; Nascimento, J.; Bernardino, A. Fire segmentation using a DeepLabv3+ architecture. In Image and Signal Processing for Remote Sensing XXVI; SPIE: Bellingham, WA, USA, 2020; Volume 11533, pp. 134–145. [Google Scholar]
Figure 1. The proposed method block diagram.
Figure 2. (a) The original colored image. (b) The specular reflection mask.
Figure 3. (a) The green-channel smoothed image after masking. (b) The binarized image of (a).
Figure 4. (a) The ellipse mask. (b) The binarized image of Figure 3b after closing and masking with the ellipse mask.
Figure 5. After applying image thinning to the image shown in Figure 4b.
Figure 6. (a) Eye border model using the Superformula with proper parameters. (b) The enclosed disk used to separate the cornea region.
Figure 7. (a) Eye border recognition using the Superformula shape model and the Hough transform. (b) The recognized eye border overlaid on the original image for illustration.
Figure 8. (a) The filled recognized eye border in the adopted example. (b) The enclosed disk is used as a mask to separate the cornea region.
Figure 9. (a) Separation of the cornea region using the recognized disk mask. (b) The separated cornea region after masking with the specular reflection mask.
Figure 10. (a) Mask of two potential corneal ulcer segments. (b) The mask segment connected to the recognized eye border, whose semi-major to semi-minor axis ratio exceeds the predefined threshold, is excluded.
Figure 11. (a) The original image. (b) After masking the original image with the corneal ulcer mask shown in the image of Figure 10b.
Figure 12. Deep learning method for ulcer localization.
Figure 13. Deep learning method for ulcer localization [20].
Figure 14. Example 1: Comparison between segmentation output and the ground truth (a) original image, (b) the segmentation of ulcer region using the first proposed method, (c) and the ground truth of the corresponding input image.
Figure 15. Example 2: Comparison between segmentation output and the ground truth (a) original image, (b) the segmentation of ulcer region using the first proposed method, (c) and the ground truth of the corresponding input image.
Figure 16. Example 3: Comparison between segmentation output and the ground truth (a) original image, (b) the segmentation of ulcer region using the first proposed method, (c) and the ground truth of the corresponding input image.
Figure 17. Example 4: Comparison between segmentation output and the ground truth (a) original image, (b) the segmentation of ulcer region using the first proposed method, (c) and the ground truth of the corresponding input image.
Figure 18. Example 5: Comparison between segmentation output and the ground truth (a) original image, (b) the segmentation of ulcer region using the first proposed method, (c) and the ground truth of the corresponding input image.
Figure 19. Example 1: Semantic segmentation approach, (a) the original image, (b) the segmentation output, (c) and the ground truth of the corresponding input image.
Figure 20. Example 2: Semantic segmentation approach, (a) the original image, (b) the segmentation output, (c) and the ground truth of the corresponding input image.
Figure 21. Example 3: Semantic segmentation approach, (a) the original image, (b) the segmentation output, (c) and the ground truth of the corresponding input image.
Figure 22. Example 4: Semantic segmentation approach, (a) the original image, (b) the segmentation output, (c) and the ground truth segment.
Figure 23. Example 5: Semantic segmentation approach, (a) the original image, (b) the segmentation output, (c) and the ground truth segment.
Figure 24. The comparison between the two proposed methods.
Table 1. Performance of semantic deep learning segmentation.

                  Global Accuracy   Specificity   Sensitivity
Training Phase    99.75%            99.84%        96.77%
Test Phase        98.8%             99.3%         83.5%
Table 2. Comparison between the two proposed methods over the test dataset (30% of the whole data).

Method                                Global Accuracy   Specificity   Sensitivity   Jaccard Similarity   Dice Similarity
Image Processing Techniques Method    98.7%             63.4%         99.4%         98.64%               98.9%
Deep Learning Method                  98.8%             99.3%         83.5%         98.655%              99.3%
Table 3. Comparison of the proposed methods with previous studies.

Study                       Accuracy   Sensitivity   Specificity   Dice Index
[10]                        88.96%     90.67%        87.57%        88.01%
[12]                        -          89.65%        99.7%         89.14%
[13]                        -          91.9%                       90.93%
This study (1st method)     97.97%     99.8%         63.4%
This study (2nd method)     98.9%      83.5%         99.3%