Artificial Intelligence in Eye Disease – Volume 2

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (28 February 2023) | Viewed by 22122

Special Issue Editor


Prof. Dr. Jae-Ho Han
Guest Editor
1. Department of Brain and Cognitive Engineering, Korea University, Seoul 136-701, Republic of Korea
2. Department of Artificial Intelligence, Korea University, Seoul 136-701, Republic of Korea
Interests: artificial intelligence in biomedicine; diagnosis of retinal diseases; deep learning for ophthalmology images; neuroscience research

Special Issue Information

Dear Colleagues,

As artificial intelligence (AI) spreads rapidly through medicine amid the fourth industrial revolution, its use in ophthalmology is attracting particular attention for the diagnosis of ophthalmic diseases, including optic nerve diseases that are difficult to diagnose. Applied to fundus photographs, optical coherence tomography, and visual field tests, AI can achieve strong classification performance and thereby support highly accurate detection of ocular and retinal diseases. In ocular imaging, AI offers a possible solution for screening, diagnosing, and monitoring patients with major eye diseases in primary care and community settings. For instance, deep learning algorithms that read retinal images can detect a range of findings, such as hemorrhages, macular abnormalities (e.g., drusen), choroidal abnormalities, retinal vessel abnormalities, nerve fiber layer defects, and glaucomatous changes of the optic disc. Deep learning architectures can thus be trained to recognize eye diseases and raise diagnostic rates to a clinically acceptable level. In other words, AI serves as a safeguard for both patients and physicians and as an auxiliary tool for the rapid interpretation of results: it reduces the risk of misdiagnosis at the first examination, improves the efficiency of treatment, and increases patients' trust. Consequently, AI could potentially revolutionize the way ophthalmology is practiced. The aim of this Special Issue is therefore to highlight recent progress and trends in the use of AI techniques, such as machine learning and deep learning, for detecting, screening, diagnosing, and monitoring eye diseases, in both clinical practice and basic ophthalmic research.

Prof. Dr. Jae-Ho Han
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • deep learning
  • fundus image
  • optical coherence tomography
  • ophthalmology
  • retinal vessel
  • glaucoma
  • retinopathy
  • macular degeneration
  • image segmentation

Published Papers (10 papers)


Research

13 pages, 3700 KiB  
Article
A Deep Feature Fusion of Improved Suspected Keratoconus Detection with Deep Learning
by Ali H. Al-Timemy, Laith Alzubaidi, Zahraa M. Mosa, Hazem Abdelmotaal, Nebras H. Ghaeb, Alexandru Lavric, Rossen M. Hazarbassanov, Hidenori Takahashi, Yuantong Gu and Siamak Yousefi
Diagnostics 2023, 13(10), 1689; https://doi.org/10.3390/diagnostics13101689 - 10 May 2023
Cited by 8 | Viewed by 2530
Abstract
Detection of early clinical keratoconus (KCN) is a challenging task, even for expert clinicians. In this study, we propose a deep learning (DL) model to address this challenge. We first used Xception and InceptionResNetV2 DL architectures to extract features from three different corneal maps collected from 1371 eyes examined in an eye clinic in Egypt. We then fused features using Xception and InceptionResNetV2 to detect subclinical forms of KCN more accurately and robustly. We obtained an area under the receiver operating characteristic curves (AUC) of 0.99 and an accuracy range of 97–100% to distinguish normal eyes from eyes with subclinical and established KCN. We further validated the model based on an independent dataset with 213 eyes examined in Iraq and obtained AUCs of 0.91–0.92 and an accuracy range of 88–92%. The proposed model is a step toward improving the detection of clinical and subclinical forms of KCN. Full article
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease – Volume 2)
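The feature-fusion step described in this abstract can be sketched in a few lines of Python. The snippet below is a minimal illustration, assuming ImageNet-pretrained Xception and InceptionResNetV2 backbones with global average pooling and a generic SVM classifier head; the authors' actual preprocessing, training, and fusion details are given in the paper.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# ImageNet-pretrained backbones used purely as feature extractors.
xception = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, pooling="avg")      # 2048-D pooled features
inc_resnet = tf.keras.applications.InceptionResNetV2(
    weights="imagenet", include_top=False, pooling="avg")      # 1536-D pooled features

def fused_features(corneal_maps: np.ndarray) -> np.ndarray:
    """corneal_maps: float array of shape (N, 299, 299, 3)."""
    f1 = xception.predict(
        tf.keras.applications.xception.preprocess_input(corneal_maps.copy()))
    f2 = inc_resnet.predict(
        tf.keras.applications.inception_resnet_v2.preprocess_input(corneal_maps.copy()))
    return np.concatenate([f1, f2], axis=1)                     # (N, 3584) fused vector

# Hypothetical usage with labelled training maps (names are placeholders):
# clf = SVC(probability=True).fit(fused_features(train_maps), train_labels)
# predictions = clf.predict(fused_features(test_maps))
```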

16 pages, 2644 KiB  
Article
End-to-End Automatic Classification of Retinal Vessel Based on Generative Adversarial Networks with Improved U-Net
by Jieni Zhang, Kun Yang, Zhufu Shen, Shengbo Sang, Zhongyun Yuan, Runfang Hao, Qi Zhang and Meiling Cai
Diagnostics 2023, 13(6), 1148; https://doi.org/10.3390/diagnostics13061148 - 17 Mar 2023
Cited by 2 | Viewed by 1464
Abstract
The retinal vessels are the only vessels in the human body that can be observed directly by non-invasive imaging techniques. Retinal vessel morphology and structure are important objects of concern for physicians in the early diagnosis and treatment of related diseases. The classification of retinal vessels has important guiding significance in the basic stage of diagnosis and treatment. This paper proposes a novel method based on generative adversarial networks with an improved U-Net, which achieves synchronous automatic segmentation and classification of blood vessels in a single end-to-end network. The proposed method avoids the dependence of the classification stage on separate segmentation results in multi-step approaches. Moreover, it builds on an accurate classification of arteries and veins while also classifying arteriovenous crossings. The validity of the proposed method is evaluated on the RITE dataset: the accuracy of comprehensive image classification reaches 96.87%, and the sensitivity and specificity of arteriovenous classification reach 91.78% and 97.25%, respectively. The results verify the effectiveness of the proposed method and show its competitive classification performance. Full article
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease – Volume 2)
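To make the adversarial formulation concrete, the sketch below shows one combined training step in PyTorch. It assumes `generator` is any U-Net-style network that outputs a four-class map (background, artery, vein, crossing) and `discriminator` scores concatenated (image, label-map) pairs; this is a schematic of the loss structure only, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def train_step(generator, discriminator, g_opt, d_opt, image, label, adv_weight=0.1):
    """One adversarial step. image: (B, 3, H, W); label: (B, H, W) long tensor with classes 0..3.
    The discriminator takes 3 + 4 = 7 input channels (image plus a soft or one-hot label map)."""
    # Discriminator update: real (image, ground truth) vs. fake (image, prediction).
    with torch.no_grad():
        fake_map = torch.softmax(generator(image), dim=1)
    real_pair = torch.cat([image, F.one_hot(label, 4).permute(0, 3, 1, 2).float()], dim=1)
    fake_pair = torch.cat([image, fake_map], dim=1)
    real_logits, fake_logits = discriminator(real_pair), discriminator(fake_pair)
    d_loss = 0.5 * (
        F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
        + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: pixel-wise segmentation loss plus an adversarial term
    # that rewards predictions the discriminator accepts as real.
    logits = generator(image)
    adv_logits = discriminator(torch.cat([image, torch.softmax(logits, dim=1)], dim=1))
    g_loss = (F.cross_entropy(logits, label)
              + adv_weight * F.binary_cross_entropy_with_logits(
                    adv_logits, torch.ones_like(adv_logits)))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```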

11 pages, 2780 KiB  
Article
Artificial Intelligence for Evaluation of Retinal Vasculopathy in Facioscapulohumeral Dystrophy Using OCT Angiography: A Case Series
by Martina Maceroni, Mauro Monforte, Rossella Cariola, Benedetto Falsini, Stanislao Rizzo, Maria Cristina Savastano, Francesco Martelli, Enzo Ricci, Sara Bortolani, Giorgio Tasca and Angelo Maria Minnella
Diagnostics 2023, 13(5), 982; https://doi.org/10.3390/diagnostics13050982 - 04 Mar 2023
Viewed by 1545
Abstract
Facioscapulohumeral muscular dystrophy (FSHD) is a slowly progressive muscular dystrophy with a wide range of manifestations including retinal vasculopathy. This study aimed to analyse retinal vascular involvement in FSHD patients using fundus photographs and optical coherence tomography-angiography (OCT-A) scans, evaluated through artificial intelligence (AI). Thirty-three patients with a diagnosis of FSHD (mean age 50.4 ± 17.4 years) were retrospectively evaluated and neurological and ophthalmological data were collected. Increased tortuosity of the retinal arteries was qualitatively observed in 77% of the included eyes. The tortuosity index (TI), vessel density (VD), and foveal avascular zone (FAZ) area were calculated by processing OCT-A images through AI. The TI of the superficial capillary plexus (SCP) was increased (p < 0.001), while the TI of the deep capillary plexus (DCP) was decreased in FSHD patients in comparison to controls (p = 0.05). VD scores for both the SCP and the DCP results increased in FSHD patients (p = 0.0001 and p = 0.0004, respectively). With increasing age, VD and the total number of vascular branches showed a decrease (p = 0.008 and p < 0.001, respectively) in the SCP. A moderate correlation between VD and EcoRI fragment length was identified as well (r = 0.35, p = 0.048). For the DCP, a decreased FAZ area was found in FSHD patients in comparison to controls (t (53) = −6.89, p = 0.01). A better understanding of retinal vasculopathy through OCT-A can support some hypotheses on the disease pathogenesis and provide quantitative parameters potentially useful as disease biomarkers. In addition, our study validated the application of a complex toolchain of AI using both ImageJ and Matlab to OCT-A angiograms. Full article
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease – Volume 2)
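The three quantitative read-outs named in the abstract (tortuosity index, vessel density, and FAZ area) reduce to simple array operations once the angiogram has been binarised and the vessel centrelines traced. Below is a minimal NumPy sketch under those assumptions; the authors' actual toolchain combines ImageJ and Matlab, and the function names here are illustrative.

```python
import numpy as np

def vessel_density(vessel_mask: np.ndarray) -> float:
    """Fraction of the scan area occupied by perfused vessels (binary mask in, 0..1 out)."""
    return float(vessel_mask.astype(bool).mean())

def tortuosity_index(branch_xy: np.ndarray) -> float:
    """Arc length divided by chord length for one vessel branch,
    given its ordered centreline points as an (M, 2) array of (x, y) coordinates."""
    arc = float(np.sum(np.linalg.norm(np.diff(branch_xy, axis=0), axis=1)))
    chord = float(np.linalg.norm(branch_xy[-1] - branch_xy[0]))
    return arc / chord if chord > 0 else float("nan")

def faz_area_mm2(faz_mask: np.ndarray, mm_per_pixel: float) -> float:
    """Foveal avascular zone area from a binary mask and the scan's pixel spacing."""
    return float(faz_mask.astype(bool).sum()) * mm_per_pixel ** 2

# Hypothetical usage: branches traced from a skeletonised mask
# (e.g., skimage.morphology.skeletonize) are scored one by one, and the mean
# tortuosity_index per plexus is compared between patients and controls.
```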

25 pages, 4889 KiB  
Article
Classification of Retinal Diseases in Optical Coherence Tomography Images Using Artificial Intelligence and Firefly Algorithm
by Mehmet Batuhan Özdaş, Fatih Uysal and Fırat Hardalaç
Diagnostics 2023, 13(3), 433; https://doi.org/10.3390/diagnostics13030433 - 25 Jan 2023
Cited by 7 | Viewed by 2349
Abstract
In recent years, the number of studies on the automatic diagnosis of biomedical diseases has increased. Many of these studies have used Deep Learning, which gives extremely good results but requires a vast amount of data and a heavy computing load; if the processor is of insufficient quality, this takes time and places an excessive burden on it. Machine Learning, on the other hand, is faster than Deep Learning and does not require as heavy a computing load, but it does not provide as high an accuracy value. Therefore, our goal is to develop a hybrid system that provides a high accuracy value while requiring a smaller computing load and less time to diagnose biomedical diseases such as the retinal diseases chosen for this study. For this purpose, retinal layer extraction was first conducted through image preprocessing. Then, traditional feature extractors were combined with pre-trained Deep Learning feature extractors. To select the best features, we used the Firefly algorithm. Finally, multiple binary classifications were conducted instead of multiclass classification with Machine Learning classifiers. Two public datasets were used in this study: the model achieved a mean accuracy of 0.957 on the first and 0.954 on the second. Full article
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease – Volume 2)
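As an illustration of the selection step, the sketch below implements a small binary firefly-style search over feature subsets, using cross-validated classifier accuracy as the brightness. The parameter values and the SVM fitness function are generic assumptions, not the configuration reported in the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def firefly_feature_selection(X, y, n_fireflies=20, n_iter=30,
                              beta0=1.0, gamma=1.0, alpha=0.25, seed=0):
    """Binary firefly search: each firefly is a vector of per-feature scores in [0, 1],
    thresholded at 0.5 to pick a subset; brightness = 5-fold CV accuracy of an SVM."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pos = rng.random((n_fireflies, n_feat))          # continuous positions in [0, 1]

    def brightness(p):
        mask = p > 0.5
        if not mask.any():
            return 0.0
        return cross_val_score(SVC(), X[:, mask], y, cv=5).mean()

    light = np.array([brightness(p) for p in pos])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] > light[i]:              # firefly i moves toward brighter j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(n_feat) - 0.5)
                    pos[i] = np.clip(pos[i], 0.0, 1.0)
                    light[i] = brightness(pos[i])
    return pos[np.argmax(light)] > 0.5               # boolean mask of selected features

# Hypothetical usage on a precomputed feature matrix (names are placeholders):
# selected = firefly_feature_selection(features, labels)
# X_reduced = features[:, selected]
```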

17 pages, 2766 KiB  
Article
Performance of the Deep Neural Network Ciloctunet, Integrated with Open-Source Software for Ciliary Muscle Segmentation in Anterior Segment OCT Images, Is on Par with Experienced Examiners
by Torsten Straßer and Sandra Wagner
Diagnostics 2022, 12(12), 3055; https://doi.org/10.3390/diagnostics12123055 - 06 Dec 2022
Cited by 2 | Viewed by 1165
Abstract
Anterior segment optical coherence tomography (AS-OCT), being non-invasive and well-tolerated, is the method of choice for an in vivo investigation of ciliary muscle morphology and function. The analysis requires the segmentation of the ciliary muscle, which is, when performed manually, both time-consuming and prone to examiner bias. Here, we present a convolutional neural network trained for the automatic segmentation of the ciliary muscle in AS-OCT images. Ciloctunet is based on the Freiburg U-net and was trained and validated using 1244 manually segmented OCT images from two previous studies. An accuracy of 97.5% for the validation dataset was achieved. Ciloctunet’s performance was evaluated by replicating the findings of a third study with 180 images as the test data. The replication demonstrated that Ciloctunet performed on par with two experienced examiners. The intersection-over-union index (0.84) of the ciliary muscle thickness profiles between Ciloctunet and an experienced examiner was the same as between the two examiners. The mean absolute error between the ciliary muscle thickness profiles of Ciloctunet and the two examiners (35.16 µm and 45.86 µm) was comparable to the one between the examiners (34.99 µm). A statistically significant effect of the segmentation type on the derived biometric parameters was found for the ciliary muscle area but not for the selective thickness reading (“perpendicular axis”). Both the inter-rater and the intra-rater reliability of Ciloctunet were good to excellent. Ciloctunet avoids time-consuming manual segmentation, thus enabling the analysis of large numbers of images of ample study cohorts while avoiding possible examiner biases. Ciloctunet is available as open-source. Full article
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease – Volume 2)
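The two agreement measures reported for Ciloctunet (intersection-over-union of segmentations and mean absolute error between thickness profiles) are easy to reproduce for any pair of raters. A short sketch, assuming binary ciliary-muscle masks and thickness profiles sampled at matching positions; the evaluation protocol itself is described in the paper.

```python
import numpy as np

def intersection_over_union(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Jaccard index between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0

def thickness_mae_um(profile_a: np.ndarray, profile_b: np.ndarray) -> float:
    """Mean absolute error (in micrometres) between two ciliary muscle thickness
    profiles sampled at the same positions along the muscle."""
    return float(np.mean(np.abs(profile_a - profile_b)))

# Hypothetical usage comparing the network to an examiner (names are placeholders):
# iou = intersection_over_union(net_mask, examiner_mask)
# mae = thickness_mae_um(net_thickness_um, examiner_thickness_um)
```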

19 pages, 5489 KiB  
Article
Performance Evaluation of Different Object Detection Models for the Segmentation of Optical Cups and Discs
by Gendry Alfonso-Francia, Jesus Carlos Pedraza-Ortega, Mariana Badillo-Fernández, Manuel Toledano-Ayala, Marco Antonio Aceves-Fernandez, Juvenal Rodriguez-Resendiz, Seok-Bum Ko and Saul Tovar-Arriaga
Diagnostics 2022, 12(12), 3031; https://doi.org/10.3390/diagnostics12123031 - 02 Dec 2022
Cited by 7 | Viewed by 1982
Abstract
Glaucoma is an eye disease that gradually deteriorates vision. Much research focuses on extracting information from the optic disc and optic cup, the structures used for measuring the cup-to-disc ratio. These structures are commonly segmented with deep learning techniques, primarily Encoder–Decoder models, which are hard to train and time-consuming. Object detection models using convolutional neural networks can extract features from fundus retinal images with good precision. However, the superiority of one model over another for a specific task is still being determined. The main goal of our approach is to compare the performance of object detection models for the automated segmentation of the cup and disc on fundus images. This study brings the novelty of examining the behavior of different object detection models in the detection and segmentation of the disc and the optic cup (Mask R-CNN, MS R-CNN, CARAFE, Cascade Mask R-CNN, GCNet, SOLO, Point_Rend), evaluated on the Retinal Fundus Images for Glaucoma Analysis (REFUGE) and G1020 datasets. Reported metrics were Average Precision (AP), F1-score, IoU, and AUCPR. Several models achieved the highest AP with a perfect 1.000 when the IoU threshold was set at 0.50 on REFUGE, and the lowest was Cascade Mask R-CNN with an AP of 0.997. On the G1020 dataset, the best model was Point_Rend with an AP of 0.956, and the worst was SOLO with 0.906. It was concluded that the methods reviewed achieved excellent performance with high precision and recall values, showing efficiency and effectiveness. The question of how many images are needed was addressed with an initial value of 100, with excellent results. Data augmentation, multi-scale handling, and anchor box size brought improvements. The capability to transfer knowledge from one database to another also shows promising results. Full article
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease – Volume 2)
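Whichever detector is used, the downstream quantities are the same: the cup-to-disc ratio derived from the predicted masks, and hit/miss scoring of predictions against ground truth at an IoU threshold of 0.50. A minimal sketch of both computations, assuming binary cup and disc masks; the paper's benchmarking uses full COCO-style Average Precision.

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = (a | b).sum()
    return float((a & b).sum() / union) if union else 0.0

def vertical_cup_to_disc_ratio(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """Vertical CDR: ratio of cup height to disc height in the fundus image."""
    def vertical_extent(mask):
        rows = np.where(mask.any(axis=1))[0]
        return rows.max() - rows.min() + 1 if rows.size else 0
    disc_h = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc_h if disc_h else float("nan")

def is_true_positive(pred_mask: np.ndarray, gt_mask: np.ndarray, thr: float = 0.50) -> bool:
    """A predicted instance counts as a hit when its IoU with the ground truth reaches thr."""
    return iou(pred_mask, gt_mask) >= thr
```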

19 pages, 5848 KiB  
Article
Automated Diabetic Retinopathy Detection Using Horizontal and Vertical Patch Division-Based Pre-Trained DenseNET with Digital Fundus Images
by Sabiha Gungor Kobat, Nursena Baygin, Elif Yusufoglu, Mehmet Baygin, Prabal Datta Barua, Sengul Dogan, Orhan Yaman, Ulku Celiker, Hakan Yildirim, Ru-San Tan, Turker Tuncer, Nazrul Islam and U. Rajendra Acharya
Diagnostics 2022, 12(8), 1975; https://doi.org/10.3390/diagnostics12081975 - 15 Aug 2022
Cited by 43 | Viewed by 2819
Abstract
Diabetic retinopathy (DR) is a common complication of diabetes that can lead to progressive vision loss. Regular surveillance with fundal photography, early diagnosis, and prompt intervention are paramount to reducing the incidence of DR-induced vision loss. However, manual interpretation of fundal photographs is subject to human error. In this study, a new method based on horizontal and vertical patch division was proposed for the automated classification of DR images on fundal photographs. The novel aspects of this study are as follows. We proposed a new non-fixed-size patch division model to obtain high classification results and collected a new fundus image dataset. Moreover, two datasets were used to test the model: a newly collected three-class (normal, non-proliferative DR, and proliferative DR) dataset comprising 2355 DR images and the established open-access five-class Asia Pacific Tele-Ophthalmology Society (APTOS) 2019 dataset comprising 3662 images. Two analysis scenarios, Case 1 and Case 2, with three (normal, non-proliferative DR, and proliferative DR) and five classes (normal, mild DR, moderate DR, severe DR, and proliferative DR), respectively, were derived from the APTOS 2019 dataset. These datasets and cases were used to demonstrate the general classification performance of our proposal. By applying transfer learning, the last fully connected and global average pooling layers of the DenseNet201 architecture were used to extract deep features from input DR images and from each of the eight subdivided horizontal and vertical patches. The most discriminative features were then selected using neighborhood component analysis and fed as input to a standard shallow cubic support vector machine for classification. Our new DR dataset yielded 94.06% and 91.55% accuracy for three-class classification with 80:20 hold-out validation and 10-fold cross-validation, respectively. As these steps show, the proposal constitutes a new patch-based deep-feature engineering model; it can be considered a cognitive model, since it uses efficient methods in each phase. Similar excellent results were seen for three-class classification with the Case 1 dataset. In addition, the model attained 87.43% and 84.90% five-class classification accuracy rates using 80:20 hold-out validation and 10-fold cross-validation, respectively, on the Case 2 dataset, outperforming prior DR classification studies based on the five-class APTOS 2019 dataset; its accuracy values were about 2% higher than those of comparable methods. These findings demonstrate the accuracy and robustness of the proposed model for the classification of DR images. Full article
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease – Volume 2)
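The feature-engineering pipeline summarised above (patch division, DenseNet201 deep features, discriminative feature selection, cubic SVM) can be sketched as follows. The patch grid, input size, and the use of scikit-learn's NeighborhoodComponentsAnalysis as a stand-in for the paper's NCA-based feature selection are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.svm import SVC

densenet = tf.keras.applications.DenseNet201(
    weights="imagenet", include_top=False, pooling="avg")     # 1920-D pooled features

def patch_features(image: np.ndarray, n_strips: int = 4) -> np.ndarray:
    """Deep features from the whole fundus image plus horizontal and vertical strips.
    image: (H, W, 3) array; returns one concatenated feature vector."""
    h, w = image.shape[:2]
    patches = [image]
    patches += [image[i * h // n_strips:(i + 1) * h // n_strips] for i in range(n_strips)]
    patches += [image[:, j * w // n_strips:(j + 1) * w // n_strips] for j in range(n_strips)]
    batch = np.stack([tf.image.resize(p, (224, 224)).numpy() for p in patches])
    feats = densenet.predict(tf.keras.applications.densenet.preprocess_input(batch))
    return feats.reshape(-1)

# Hypothetical training: NCA as a stand-in for the paper's NCA-based selection,
# followed by a cubic-kernel SVM (names are placeholders):
# X = np.stack([patch_features(img) for img in train_images]); y = train_labels
# nca = NeighborhoodComponentsAnalysis(n_components=64, random_state=0).fit(X, y)
# clf = SVC(kernel="poly", degree=3).fit(nca.transform(X), y)
```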

18 pages, 16383 KiB  
Article
Chronological Registration of OCT and Autofluorescence Findings in CSCR: Two Distinct Patterns in Disease Course
by Monty Santarossa, Ayse Tatli, Claus von der Burchard, Julia Andresen, Johann Roider, Heinz Handels and Reinhard Koch
Diagnostics 2022, 12(8), 1780; https://doi.org/10.3390/diagnostics12081780 - 22 Jul 2022
Viewed by 1585
Abstract
Optical coherence tomography (OCT) and fundus autofluorescence (FAF) are important imaging modalities for the assessment and prognosis of central serous chorioretinopathy (CSCR). However, setting the findings from both into spatial and temporal contexts as desirable for disease analysis remains a challenge due to both modalities being captured in different perspectives: sparse three-dimensional (3D) cross sections for OCT and two-dimensional (2D) en face images for FAF. To bridge this gap, we propose a visualisation pipeline capable of projecting OCT labels to en face image modalities such as FAF. By mapping OCT B-scans onto the accompanying en face infrared (IR) image and then registering the IR image onto the FAF image by a neural network, we can directly compare OCT labels to other labels in the en face plane. We also present a U-Net inspired segmentation model to predict segmentations in unlabeled OCTs. Evaluations show that both our networks achieve high precision (0.853 Dice score and 0.913 Area under Curve). Furthermore, medical analysis performed on exemplary, chronologically arranged CSCR progressions of 12 patients visualized with our pipeline indicates that, on CSCR, two patterns emerge: subretinal fluid (SRF) in OCT preceding hyperfluorescence (HF) in FAF and vice versa. Full article
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease – Volume 2)

Review

23 pages, 4286 KiB  
Review
An Overview of Deep-Learning-Based Methods for Cardiovascular Risk Assessment with Retinal Images
by Rubén G. Barriada and David Masip
Diagnostics 2023, 13(1), 68; https://doi.org/10.3390/diagnostics13010068 - 26 Dec 2022
Cited by 6 | Viewed by 3840
Abstract
Cardiovascular diseases (CVDs) are one of the most prevalent causes of premature death. Early detection is crucial to prevent and address CVDs in a timely manner. Recent advances in oculomics show that retina fundus imaging (RFI) can carry relevant information for the early diagnosis of several systemic diseases. There is a large corpus of RFI systematically acquired for diagnosing eye-related diseases that could be used for CVDs prevention. Nevertheless, public health systems cannot afford to dedicate expert physicians to only deal with this data, posing the need for automated diagnosis tools that can raise alarms for patients at risk. Artificial Intelligence (AI) and, particularly, deep learning models, became a strong alternative to provide computerized pre-diagnosis for patient risk retrieval. This paper provides a novel review of the major achievements of the recent state-of-the-art DL approaches to automated CVDs diagnosis. This overview gathers commonly used datasets, pre-processing techniques, evaluation metrics and deep learning approaches used in 30 different studies. Based on the reviewed articles, this work proposes a classification taxonomy depending on the prediction target and summarizes future research challenges that have to be tackled to progress in this line. Full article
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease – Volume 2)

15 pages, 329 KiB  
Review
Integration of Artificial Intelligence into the Approach for Diagnosis and Monitoring of Dry Eye Disease
by Hee Kyung Yang, Song A Che, Joon Young Hyon and Sang Beom Han
Diagnostics 2022, 12(12), 3167; https://doi.org/10.3390/diagnostics12123167 - 14 Dec 2022
Cited by 1 | Viewed by 1674
Abstract
Dry eye disease (DED) is one of the most common diseases worldwide that can lead to a significant impairment of quality of life. The diagnosis and treatment of the disease are often challenging because of the lack of correlation between the signs and symptoms, limited reliability of diagnostic tests, and absence of established consensus on the diagnostic criteria. The advancement of machine learning, particularly deep learning technology, has enabled the application of artificial intelligence (AI) in various anterior segment disorders, including DED. Currently, many studies have reported promising results of AI-based algorithms for the accurate diagnosis of DED and precise and reliable assessment of data obtained by imaging devices for DED. Thus, the integration of AI into clinical approaches for DED can enhance diagnostic and therapeutic performance. In this review, in addition to a brief summary of the application of AI in anterior segment diseases, we will provide an overview of studies regarding the application of AI in DED and discuss the recent advances in the integration of AI into the clinical approach for DED. Full article
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease – Volume 2)