Review

The Role of Medical Image Modalities and AI in the Early Detection, Diagnosis and Grading of Retinal Diseases: A Survey

1 Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
2 Electronics and Communications Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
3 Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
4 College of Technological Innovation, Zayed University, Dubai 19282, United Arab Emirates
5 Ophthalmic Department, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
* Author to whom correspondence should be addressed.
Bioengineering 2022, 9(8), 366; https://doi.org/10.3390/bioengineering9080366
Submission received: 3 June 2022 / Revised: 28 July 2022 / Accepted: 1 August 2022 / Published: 4 August 2022
(This article belongs to the Special Issue Machine Learning for Biomedical Applications)

Abstract: Traditional dilated ophthalmoscopy can reveal diseases such as age-related macular degeneration (AMD), diabetic retinopathy (DR), diabetic macular edema (DME), retinal tear, epiretinal membrane, macular hole, retinal detachment, retinitis pigmentosa, retinal vein occlusion (RVO), and retinal artery occlusion (RAO). Among these diseases, AMD and DR are the major causes of progressive vision loss, while the latter is recognized as a world-wide epidemic. Advances in retinal imaging have improved the diagnosis and management of DR and AMD. In this review article, we focus on the various imaging modalities for accurate diagnosis, early detection, and staging of both AMD and DR. In addition, the role of artificial intelligence (AI) in providing automated detection, diagnosis, and staging of these diseases is surveyed. Furthermore, current works are summarized and discussed. Finally, projected future trends are outlined. The work covered in this survey indicates the effective role of AI in the early detection, diagnosis, and staging of DR and/or AMD. In the future, more AI solutions will be presented that hold promise for clinical applications.

1. Introduction to Retinal Diseases

Retinal diseases receive serious and widespread attention, as retinopathies are some of the leading causes of severe vision loss and blindness at a global level [1]. Ocular imaging is critical in the management of retinal diseases, especially diabetic eye disease. Advanced imaging modalities allow better understanding of diabetic eye diseases and selection of suitable management options [2].
Multiple retinal diseases can be detected, such as age-related macular degeneration (AMD), diabetic retinopathy (DR), diabetic macular edema (DME), retinal tear, epiretinal membrane, macular hole, retinal detachment, retinitis pigmentosa, retinal vein occlusion (RVO), and retinal artery occlusion (RAO) (see Figure 1). Among these diseases, AMD and DR are the major causes of progressive vision loss, while the latter is recognized as a world-wide epidemic [3]. This review outlines the basic modalities used for the diagnosis of AMD and DR. In addition, the role of AI in diagnosis and staging of these diseases will be surveyed.

1.1. Diabetic Retinopathy (DR)

DR and DME are considered the leading causes of blindness worldwide and lead to significant visual morbidity [2,4]. The prevalence of DR among diabetic patients may reach 34.6%, while the prevalence of severe, vision-threatening DR is 10.2% [5]. The prevalence of diabetes mellitus (DM) has been continuously increasing over the last three decades due to lifestyle changes [6]. Patients with type 1 DM are more prone to DR than those with type 2 DM. The most important preventable risk factor for DR is hyperglycemia [7].
Screening for DR is a crucial component of DM management [8], a point supported by the fact that the major vision-affecting complications of DM, such as diabetic macular edema (DME) and proliferative DR, can respond to treatment [9].
The International Council of Ophthalmology Guidelines for Diabetic Eye Care 2017 recommended that examinations for screening for DR should include retinal evaluation by ophthalmoscope or fundus photography [10]. Early detection of DR through population screening and timely treatment can reduce the retinal complications in diabetic patients and prevent visual loss and blindness [11]. One of the aims of this review is to discuss the different retinal imaging techniques for early detection and grading of DR.
Retinal imaging techniques play a major role in the management and prognosis of diabetic eye disease. Better understanding of the advances in retinal imaging modalities helps in the screening, diagnosis, and treatment of different disease presentations [2,12].
While direct and indirect ophthalmoscopy are the primary techniques for evaluation of DR, several imaging techniques such as color fundus photography (CFP), fundus fluorescence angiography (FFA), ultrasonography, fundus autofluorescence (FAF), and optical coherence tomography (OCT) have proven to be useful depending on the manifestation of the disease [13].

1.2. Age-Related Macular Degeneration (AMD)

Macular degeneration, or AMD, is the primary cause of blindness in elderly individuals [14]. The disease deteriorates progressively with age; hence, aging is the key factor behind the development and progression of AMD [15].
AMD has traditionally been divided into two major types: the non-neovascular (dry or atrophic) form and the neovascular (wet or exudative) form [16]. AMD is also classified into early, intermediate, and advanced AMD, according to the natural course of the disease [17,18]. Non-neovascular AMD represents nearly 90% of all cases and is exemplified by drusen accumulation, the absence of choroidal neovascularization, and retinal pigment epithelium (RPE) atrophy [19]. Neovascular AMD (nAMD) is distinguished by choroidal neovascularization (NV) with abnormal blood vessels that tend to leak fluid or blood [20]. It causes more than 80% of serious vision loss from AMD, with rapid deterioration toward blindness [18]. Untreated nAMD often leads to fibrovascular scarring with associated loss of central visual function [16].
Age at the time of AMD diagnosis is a vital factor in halting AMD progression. In addition to the aging process, smoking is a risk factor associated with AMD. A cohort study observed that smoking doubles the 5-year risk of developing AMD compared to nonsmokers [21]. The precise pathophysiology of AMD is still unknown. Various mechanisms are hypothesized to underlie AMD, including drusen accumulation, chronic inflammation, oxidative stress, reduced antioxidant levels, and dysregulated complement [21,22].
Early diagnosis of AMD plays a major role in delaying progression and improving outcomes. Multimodal imaging offers a detailed view of the retinal alterations in AMD without the need for invasive procedures, enabling early detection and comprehensive management of AMD [6]. Imaging not only plays an important diagnostic role in AMD, but is also used to deliver improved knowledge of its pathophysiology, define treatment options, and assess the treatment response [23]. Imaging helps clinicians to visualize abnormalities, such as RPE atrophy, drusen deposits, subretinal fluid, and choroidal neovascularization [23].
Imaging modalities comprise CFP, FFA, indocyanine green angiography (ICGA), fundus autofluorescence (FAF), OCT, and OCTA [6].
FFA is still considered by some to be the gold standard for the diagnosis of nAMD [5]. However, the concomitant use of FFA with OCT has become the standard in current practice due to the progressive use of OCT as a first-line diagnostic tool for nAMD [7,8].
With the introduction of advanced retinal imaging modalities, the CFP is no longer the primary method for diagnosing and monitoring dry AMD. FAF and OCT are now considered essential methods, whereas other modalities, such as near-infrared autofluorescence (NIA), FFA, and OCTA, may deliver complementary data [24].
As shown in Figure 2, imaging modalities are the input of any AI system that aims to detect, diagnose, classify, and/or stage retinal diseases. The goal of this manuscript is to outline the different medical image modalities and technologies that help in the detection, diagnosis, classification, and grading of the retinal diseases, and more specifically, DR and AMD. In addition, this paper aims at providing a review of the literature on AI systems, surveyed from 1995 to 2022, used for the automated detection, diagnosis, classification, and/or staging of DR and AMD.
The rest of this paper is organized as follows. Section 2 summarizes the retinal imaging modalities, their technologies, and their role in the detection and staging of DR and AMD. Section 3 summarizes the noise sources and denoising methods of retinal images. Section 4 introduces the concept of AI to assist the clinicians in the detection and staging processes of retinal diseases. Section 5 and Section 6 specify the role of AI in the detection, diagnosis, and grading of DR and AMD, respectively. Section 7 discusses the findings of the paper and outlines the future trends. Finally, Section 8 concludes the paper.

2. Retinal Imaging Modalities

As shown in Figure 2, to build any AI system for the detection, diagnosis, and grading of retinal diseases, the first step is to capture the retinal image using the appropriate medical image modality. Advances in retinal imaging have improved the diagnosis and management of diabetic retinopathy (DR) and age-related macular degeneration (AMD). A summary of the medical image modalities that are used for the detection, diagnosis, and staging of DR and AMD is illustrated in Figure 3. Fundus fluorescein angiography (FFA) is the classic imaging modality for AMD, and is a powerful technology for identifying its presence and degree. Optical coherence tomography (OCT) is now widely used for early diagnosis and for determining the anti-vascular endothelial growth factor (anti-VEGF) retreatment criteria for neovascular lesions. FFA is currently considered the gold standard technique for the evaluation of the retinal vasculature, which is the most affected part of the retina in the diabetic eye. Optical coherence tomography angiography (OCTA) can detect subtle changes in the retinal vasculature before the development of the clinical features of retinopathy, allowing early detection of DR and helping in screening for DR among at-risk populations. In this section, we will go over the different modalities for the detection and staging of DR and AMD. For each modality, we will illustrate its subtypes and sub-technologies, along with a detailed illustration of how to use it for the early detection and staging of both DR and AMD.

2.1. Color Fundus Photography (CFP)

Fundus imaging is the process whereby reflected light is used to acquire a two-dimensional image of three-dimensional retinal tissue, with image intensities representing the quantity of reflected light [9]. Fundus photography provides a colored image of the retina. Conventional fundus photography was performed using film, before becoming digitalized. Digital fundus photography has the advantages of rapid acquisition, immediate availability, and the ability to enhance and process the images [10,13].
Types of fundus photography include (i) standard, (ii) widefield/ultra-widefield, and (iii) stereoscopic fundus photography [13].
  • Standard fundus photography is widely available and easy to use. It captures a 30° to 50° image of the posterior pole of the eye, including the macula and the optic nerve. Standard fundus photography cameras can collect multiple fundus field images. These images are then overlapped to create a montage image with a 75° field of view [10,13].
  • Widefield/ultra-widefield fundus photography can image the peripheral retina. It can capture a 200° field of view even if the pupil is not dilated. This 200° field extends beyond the macula to cover 80% of the total surface of the retina. Theoretically, the large field of view permits better detection of peripheral retinal pathology. However, widefield fundus photography has some limitations: the spherical shape of the globe causes image distortion, eyelashes produce artifacts, inadequate color representation can yield false findings, and the equipment is expensive. Consequently, standard 30° fundus photography remains the best choice for fundus imaging [10,13].
  • Stereoscopic fundus photography can be used to obtain a stereo image created by merging photographs taken at two slightly different positions from both eyes to enable the perception of depth [11,13,25]. Despite the potential value of stereoscopic fundus photography, its clinical value is controversial due to several limitations. The acquisition of stereo images is time consuming, and patients must be exposed to double the number of light flashes [11]. The photographer’s experience has an impact on the technique, and the left and right images must be equally sharp and have the same illumination in each image in the pair [12,26]. Image interpretation is time consuming and requires special goggles or optical viewers to fuse the image stereoscopically and achieve depth [11,25].

2.1.1. Application of Color Fundus Photography (CFP) in DR

CFP is widely available, and therefore it is used in screening and clinical trials of DR; it provides good visualization of obvious signs of diabetic macular edema and proliferative diabetic retinopathy, such as microaneurysms, lipid exudates, and dot and blot hemorrhages [10]. The wider field of view obtained using the steered-images technique in color fundus photography is the basis of the Early Treatment Diabetic Retinopathy Study (ETDRS) grading system, which modified the Airlie House classification and developed a 13-level severity scale [27,28,29]. These levels range from no evidence of retinopathy to significant vitreous hemorrhage [5]. Ultra-widefield color fundus photography allows better detection of peripheral retinal pathology. However, the drawbacks of widefield fundus photography, plus the expensive equipment, make standard 30° fundus photography the best choice for fundus imaging [10,13].

2.1.2. Application of Color Fundus Photography (CFP) in AMD

CFP is one of the simplest imaging modalities for detecting both dry (non-neovascular) and wet (neovascular) AMD [30]. CFP offers an illustration of variable fundus abnormalities, involving various subtypes of macular drusen and pigmentary abnormalities, and closely parallels biomicroscopic examination [31]. Early funduscopic classification systems of non-neovascular AMD include descriptions of the following: drusen size (i.e., large versus small), consistency (i.e., soft versus hard), location, number, area of involvement, and geographic atrophy (GA) size, location, and area [32]. Drusen appear as yellowish round lesions with pigmentary deposits around the macula, while atrophic RPE shows hypopigmentation around the macula. Meanwhile, the application of CFP in neovascular AMD (nAMD) is helpful in the detection of exudative complications, such as macular edema and macular detachment [33]. CFP has several drawbacks: the image is two-dimensional and thus lacks proper visualization of small details, and abnormalities in the refractive media, such as cataracts, result in lower image clarity [18]. Additionally, it has a lower sensitivity of 78% when used as an individual imaging procedure to detect choroidal neovascularization, compared to the sensitivity of OCT (94%) [33,34]. CFP alone is insufficient for diagnosing nAMD, as it underestimates the presence of choroidal neovascularization [30].

2.2. Fundus Fluorescein Angiography (FFA)

FFA is a two-dimensional imaging technique that depends on an intravenous injection of fluorescent dye (resorcinolphthalein sodium) [10,30]. A ring-shaped flash camera is used for excitation of the dye molecules and the projected blue light is reflected from the layers of the retina. Some of the projected light becomes absorbed by the fluorescent dye, and then it is emitted back as green light with a wavelength of 530 nm to be captured by a filter on a digital detector. FFA records the dynamic changes in the retinal and choroidal vasculature [30].
The disadvantages of FFA are general systemic side effects of the injected dye, such as anaphylactic or allergic reaction and nausea, and extensive leak of the dye into the surrounding tissue, which could alter the readings [35].
A closely related dye-based angiographic technique is indocyanine green angiography (ICGA).
  • Indocyanine green angiography (ICGA) is based on intravenously injected high-molecular-weight indocyanine green dye. It projects light with a longer wavelength (near-infrared light, 790 nm), which allows deeper penetration of the retinal layers, resulting in better visualization of the choroidal and retinal circulation [23,36]. Systemic side effects can similarly occur [36]. In ICGA, the dye combines with plasma proteins, leading to less dye leakage than in FFA [37].

2.2.1. Application of FFA in DR

FFA is currently considered the gold standard technique for the evaluation of the retinal vasculature, which is the most affected part of the retina in the diabetic eye [13].
DR signs in FFA images:
  • Microaneurysms: appear as punctate hyperfluorescent areas.
  • Retinal hypoperfusion: nonperfused retinal capillaries, which can cause ischemia and appear as patches of hypofluorescent areas.
  • Increased foveal avascular zone: results from macular ischemia and can explain the cause of loss of vision in some diabetic patients.
  • Retinal neo-vascularization or intraretinal microvascular abnormalities.
Fluorescein dye leaks out from abnormal blood vessels. The visualization of this leak over time is beneficial in detecting the breakdown of the blood–retinal barrier. Monitoring fluorescein leakage over time in the macula is very useful in patients with DME. Fluorescein dye leakage also occurs with retinal neovascularization. In proliferative DR patients, this can help in the diagnosis of neovascularization of the optic disc and other areas of the retina [13].
Given the advantage of widefield FFA in imaging the peripheral retina, it can be used in the detection of peripheral retinal neovascularization and the determination of the extent of peripheral areas of capillary nonperfusion and hypoperfusion [38,39].

2.2.2. Application of FFA in AMD

FFA is the gold standard for nAMD, compared to other modalities. FFA outperforms its rivals in specifying choroidal neovascularization (CNV) in its structural and leakage state. Based on its location, CNV can be classified as extrafoveal, subfoveal, or juxtafoveal [30,40]. Recognition of the CNV location is a valuable prognostic factor. Extrafoveal CNV is situated around 200–2500 μm from the center of the foveal avascular zone (FAZ), subfoveal CNV is situated beneath the center of the FAZ, while juxtafoveal CNV is situated up to 199 μm from the center of the FAZ, involving part of the FAZ but excluding its center [30].
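As a small illustration of how these location criteria could be applied programmatically, the helper below classifies a CNV lesion from its measured distance to the FAZ center. The function, its inputs, and the handling of the subfoveal case are hypothetical simplifications for illustration, not part of the cited classification.

```python
# Hypothetical helper illustrating the CNV location criteria described above:
# juxtafoveal up to 199 um from the FAZ center, extrafoveal roughly
# 200-2500 um, and subfoveal when the lesion lies beneath the FAZ center.

def classify_cnv_location(distance_um: float, beneath_faz_center: bool = False) -> str:
    if beneath_faz_center:
        return "subfoveal"
    if distance_um <= 199:
        return "juxtafoveal"
    if distance_um <= 2500:
        return "extrafoveal"
    return "beyond the standard extrafoveal range"

print(classify_cnv_location(150))                         # juxtafoveal
print(classify_cnv_location(800))                         # extrafoveal
print(classify_cnv_location(0, beneath_faz_center=True))  # subfoveal
```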
FFA also indicates the leakage properties of the CNV, which can be categorized into occult CNV (type I), classic CNV (type II), and retinal angiomatous proliferation (type III) [23]. Occult CNV exists as mottled and patchy hyperfluorescence in early-phase angiograms and leaks in the later phase, forming larger hyperfluorescent dots [30]. The occult CNV is subgrouped into two types on the basis of its leakage features. Type I occult CNV is fibrovascular and is defined as stippled hyperfluorescence in the early phase, with progressive leakage upon late-phase angiogram. Meanwhile Type II occult CNV consists of late leakage from an unspecified source and does not appear in the early phase, but displays speckled hyperfluorescence upon mid- to late-phase angiogram [30,41]. Classic CNV presents as a well-defined hyperfluorescence network membrane in the early phase, followed by progressive leakage upon late-phase angiogram [42]. Retinal angiomatous proliferation (RAP) was recently described as NV arising from the intraretinal layer and infiltrating into the choroid layer, forming a retinal–choroidal anastomosis. RAP can be divided into three stages based on the extent of NV and proliferation [43].
Disadvantages of the FFA procedure include its systemic complication from the injected dye. Additionally, the dye leaks considerably to the surrounding tissue, which might affect the detailing of the CNV. Hence, in some cases of type I CNV and RAP, ICGA is a preferable method to FFA [44].

2.2.3. Application of Indocyanine Green Angiography (ICGA) in AMD

ICGA uses a high-molecular-weight contrast that binds to plasma proteins and thus leaks less compared to FFA [45]. ICGA is well established in the detection of type I CNV and occult CNV: the early phase often shows ill-defined hypercyanescent lesions, the mid phase shows progressive intensity, and the late phase exhibits hypercyanescent plaque. However, ICGA is less appropriate for detecting classic CNV, which appears as well-defined hypercyanescence [46].
Polypoidal choroidal vasculopathy (PCV) is an abnormal choroidal vascular network with aneurysmal dilatation (polypoidal characteristics) at its periphery [24]. On ICGA, PCV exists as a hypercyanescent hot spot in early angiograms, with a grape-like/polypoid structure. PCV frequently masks the appearance of occult or classic CNV on FFA, and therefore ICGA is considered the gold standard in the detection of PCV, as its appearance is masked by the RPE layers in FFA [30]. PCV is associated with neurosensory detachment, high numbers of recurrent cases, and poor visual outcomes [31].
RAP is best visualized by ICGA and appears in the early phase as a hyperfluorescent hot spot with apparent retinal artery communication into the CNV, followed by a progressive increase in both size and intensity in the late phase [43]. The recognition of RAP on one eye is associated with increased incidence of NV on the other eye, with nearly a 100% risk in 3 years of follow-up [47].
ICGA uses a longer wavelength of infrared compared to FFA, thus allowing deeper penetration into the RPE, choroidal structure, and any subretinal fluid, hemorrhages, or pigment epithelium detachment, which often alter imaging in FFA. Hence, in the presence of hemorrhage, ICGA images offer a more detailed overview of the characteristics of CNV in AMD patients compared to FFA [30].

2.3. Fundus Autofluorescence Imaging (FAF)

FAF is a noninvasive imaging technique for the mapping of natural or pathological fluorophores of the ocular fundus. The dominant source of fluorophores is the lipofuscin located in the retinal pigment epithelium; lipofuscin is responsible for the fluorescent properties necessary for FAF imaging [48]. A light with a specific wavelength of 300–500 nm is used to stimulate fundus fluorescent properties without the use of contrast material, and excites the lipofuscin particles, which then emit a light with a wavelength of 500–700 nm [18,33,37].
FAF can be performed using a fundus camera, fundus spectrophotometer, or confocal scanning laser ophthalmoscope. The best choice is confocal scanning laser ophthalmoscope, because of its ability to decrease the noise from other autofluorescence materials from the anterior eye segment [33,49].
A related fundus autofluorescence technique is near-infrared autofluorescence (NIA).
  • Near-infrared autofluorescence (NIA) is another fundus imaging technique that exploits the fluorophore properties of melanin in the retina. Melanin is present mainly in the retinal pigment epithelium, and to a lesser extent in the choroid. NIA uses diode laser light with a longer wavelength of 787 nm for excitation, and then a specific wavelength above 800 nm is captured using a confocal scanning laser ophthalmoscope [50,51]. The captured image shows increased hyperautofluorescence in the center of the fovea due to the high melanin content of the retinal pigment epithelial cells [50]. Retromode imaging (RM) is an imaging modality using an infrared laser at 790 nm, generating a pseudo-3D appearance of the deeper retinal layers [52].
FAF techniques include (i) fundus spectrophotometry, (ii) scanning laser ophthalmoscopy, (iii) fundus camera, and (iv) widefield imaging.
  • Fundus spectrophotometry is able to process the excitation and emission spectra of autofluorescence signals originating from a small retinal area of the fundus (only 2° in diameter) [53]. It is composed of an image intensifier, diode array detector, and crystalline lens. The beam is separated in the pupil, and the detection is confocal to reduce the contribution of the crystalline lens in the autofluorescence. The complex instrumentation and the small examined area have led fundus spectrophotometry not to be the preferred technique in clinical practice for FAF [48,53].
  • Scanning laser ophthalmoscopy can image larger areas of the retina by using a low-power laser beam that is projected onto the retina and distributed over the fundus [54]. Then, the reflected light intensities from each point after passing through a confocal pinhole are collected via a detector, and the image is produced [48]. A series of several images are recorded, then averaged to form the final image, reduce the background noise, and improve the image contrast [55].
  • Fundus cameras have limitations with respect to FAF, such as weak signal, the crystalline lens absorptive effect, nonconfocal imaging, and light scattering [48]. A modified fundus camera was designed by adding an aperture to the illumination optics to decrease the effect of light scattering from the crystalline lens and reduce the loss of contrast [56]. This modified design is limited by the small field of view (only 13° in diameter) and complex instrumentation [48].
  • Widefield imaging: confocal scanning laser ophthalmoscopy has a 30° × 30° retinal field. Therefore, imaging of larger retinal areas like a 55° field needs additional lenses. The fundus camera can be used to manually produce montage images using seven field panorama-based software packages [48].
  • Widefield scanning laser ophthalmoscopy was developed to record peripheral autofluorescence images using green light excitation (532 nm) with an acquisition time of less than two seconds. The widefield extends beyond the vascular arcades and can be used to assess the peripheral involvement of retinal diseases [48]. Ultra-widefield scanning laser ophthalmoscopy was developed by combining confocal scanning laser ophthalmoscopy with a concave elliptical mirror. It can record a wider view of the retina of up to 200° in a single image with an acquisition time of less than one second, without the need for pupil dilatation [25,57]. The use of ultra-widefield scanning laser ophthalmoscopy is still limited due to its high cost [12].

2.3.1. Application of FAF in DR

Previous studies have reported increased autofluorescence in patients with DME [58,59]. Multiple patterns have been used to describe the FAF findings: single cyst, multiple cysts of increased FAF, or both combined [60]. Other patterns include normal, increased FAF, single spot, and multiple spots of increased FAF [61].
In DME, an association has been reported between increased FAF and decreased visual acuity [59]. Follow-up visits for patients with DME revealed that patients with deteriorated vision had increased FAF compared to patients with stationary or improved vision [2].

2.3.2. Application of FAF in AMD

FAF is an imaging method that uses a specific wavelength of light to trigger the fundus fluorescence characteristics without the need for contrast [33]. FAF images have the ability to detect numerous retinal abnormalities, such as pigmentary changes, drusen, geographic atrophy, and reticular pseudodrusen [49].
Drusen exists in numerous patterns on FAF, such as hypoautofluorescence, hyperautofluorescence, and normal lesions, depending on the variability of the fluorophore contents and the size of the drusen [49,62]. FAF is the gold standard modality for the assessment of GA, as it offers high-contrast retinal images that can be used to detect areas of atrophy. Atrophic lesions present as hypoautofluorescent areas, owing to the loss of the RPE cells containing intrinsic fluorophores, such as lipofuscin [16]. The disparity between areas of RPE loss and adjacent areas of intact photoreceptors allows the reproducible semiautomated quantification of atrophic areas. Therefore, FAF has been recognized as an anatomic outcome parameter for the progression of GA in clinical trials by international agencies [45,63].
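As a toy illustration of this kind of semiautomated quantification, the sketch below thresholds hypoautofluorescent pixels in a synthetic FAF image and reports the resulting atrophic area. The synthetic image, intensity threshold, and pixel footprint are all placeholder assumptions rather than a validated grading protocol.

```python
# Toy sketch of semiautomated GA quantification on FAF: dark
# (hypoautofluorescent) pixels are thresholded and their area summed.
# The synthetic image, threshold, and pixel footprint are placeholders.
import numpy as np

faf = np.full((512, 512), 0.6)        # synthetic FAF image with a uniform background
faf[200:300, 220:330] = 0.1           # dark patch standing in for an atrophic lesion

atrophy_mask = faf < 0.25             # placeholder intensity threshold
pixel_area_mm2 = (6.0 / 512) ** 2     # assuming a 6 mm x 6 mm field of view
ga_area_mm2 = atrophy_mask.sum() * pixel_area_mm2
print(f"estimated GA area: {ga_area_mm2:.2f} mm^2")
```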
Patchy, linear, or reticular patterns recognized on FAF have been associated with the development of nAMD, while the patchy pattern is the highest-risk FAF pattern for conversion to nAMD [64,65]. Hemorrhages, scarring, and fibrovascular membranes are hypoautofluorescence lesions, while subretinal fluid appears hyperautofluorescent [49].
The currently most frequently used FAF imaging method uses a confocal scanning laser ophthalmoscope (cSLO) with a blue light excitation wavelength filter (488 nm) and an emission filter of 500 to 521 nm. In comparison with CFP, FAF has the capacity to detect retinal changes in early and intermediate AMD that may appear normal in CFP [50].
FAF has high sensitivity in identifying nAMD (93%), but relatively low specificity (37%) compared to FFA as the gold standard [65]. Obstacles to FAF imaging include susceptibility to media opacities, difficult foveal imaging due to macular pigment that absorbs blue light, and patient discomfort [66]. Alternate wavelengths, such as green light, have advantages: green light may be more comfortable for patients and reduces macular pigment absorption, while still generating excellent visualization of the atrophic areas [44].
NIA employs the other fluorophore properties of the retina and melanin [46]. The NIA images show high hyperautofluorescence in the center of the fovea due to the elevated melanin content in RPE cells [50].
Both NIA and FAF appear dark in the atrophic region in dry AMD, while the adjacent area appears to possess increased intensity. Half of AMD patients had increased NIA at the normal FAF site, thus suggesting that there is an increase in melanin activity preceding lipofuscin activity [60]. In nAMD, the image seems dark in both NIA and FAF owing to the blockage of the autofluorescence signal by subretinal fluid, hemorrhage, or choroidal NV [30,67]. Nevertheless, FAF (56.5%) is more efficient at describing exudative activity than NIA (33.9%) [51].
RM is helpful for distinguishing pathological structures in dry and wet AMD. For example, drusen are more obvious in RM than in fundus photography [52]. In wet AMD, RM has higher agreement with OCT in imaging macular edema, but relatively low agreement for RPE detachment [68].

2.4. Optical Coherence Tomography (OCT)

OCT plays a crucial role in the diagnosis and management of retinal diseases, as it provides detailed cross-sectional images of the retina, so that ophthalmologists can detect changes in anatomy and monitor treatment response [2].
OCT uses light waves to generate the image in a method comparable to ultrasonography, using reflected light, instead of sound, to create the image. Low-coherence light is scanned and concentrated on the ocular structure of interest using an internal lens. A second beam internal to the OCT unit is used as a reference, and a signal is formed by calculating the variation between the reference beam and the reflected beam. Detection of these beams depends on the time-domain or spectral-domain protocols [69].
OCT is the most powerful diagnostic tool for retinal diseases due to its noninvasive, unique, and high-resolution evaluation of tissue, with direct correspondence to the histological appearance of the retina, achieving axial resolution of up to 2–3 µm in tissue. OCT has other advantages that involve reproducibility, noninvasiveness, and repeatability. Additionally, OCT is obtainable across most media opacities, including vitreous hemorrhage, cataract, and silicone oil.
OCT provides a superior, noninvasive modality for evaluating DME [2]. In addition, the spectral domain (SD)-OCT is the gold standard for the most important macular diseases [70]. The introduction of OCT has altered the clinical management of several retinal diseases, involving AMD [71], DME [72], and RVO [73,74].
OCT technologies include (i) time-domain OCT (TD-OCT), (ii) spectral-domain OCT (SD-OCT), (iii) swept-source OCT (SS-OCT), (iv) high-speed ultra-high-resolution OCT, (v) optical coherence tomography angiography (OCTA), (vi) intraoperative optical coherence tomography, and (vii) functional optical coherence tomography.
  • TD-OCT was the first commercially available OCT technology; it is based on time-domain detection and offers a rather low scan rate of 400 A-scans per second. The key limitations in the clinical use of TD-OCT are its limited resolution and slow acquisition [75]. However, it is commonly accepted for the evaluation of several retinal diseases, such as macular edema, AMD, and glaucoma [76].
  • Spectral domain OCT (SD-OCT): Subsequently, spectral domain imaging technologies have significantly improved sampling speed and signal-to-noise ratio by using a high-speed spectrometer that measures the light interferences from all time delays simultaneously [77]. In commercially available SD-OCT devices, technical improvements have enabled scan rates of up to 250,000 Hz [78]. SD-OCT’s higher acquisition speeds allow for a shift from two-dimensional to three-dimensional images of ocular anatomy. In addition, SD-OCT is several orders of magnitude more sensitive than TD-OCT [75]. SD-OCT is used to diagnose DR and diabetic macular edema (DME).
  • SS-OCT technology has also improved imaging accuracy by using a swept laser light source that successively emits various frequencies in time and photodetectors to measure the interference [79]. SS-OCT devices employ a longer-wavelength (>1050 nm) laser light source and have scan rates as fast as 200,000 Hz. The longer wavelengths are thought to enhance visualization of subretinal tissue and choroidal structures [80,81]. SS-OCT has been used to visualize a thickened posterior hyaloid in eyes with diabetes compared to normal controls [82]. SS-OCT can also reveal adhesion between the retina and the detached posterior hyaloid in eyes with DR and DME, which is not detected in eyes without diabetic eye disease [2].
  • High-speed ultra-high-resolution OCT (hsUHR-OCT) is another variation of OCT that provides a striking improvement in terms of cross-sectional image resolution and acquisition speed. The axial resolution of hsUHR-OCT is approximately 3.5 µm, compared with the 10 µm resolution of standard OCT. This enables superior visualization of retinal morphology in retinal abnormalities. The imaging speed is approximately 75 times faster than that of standard SD-OCT. hsUHR-OCT improves visualization by obtaining high-transverse-pixel-density, high-definition images [83,84].
  • OCTA is a relatively new modality for visualizing flow in the retinal and choroidal vasculature. Rapid scanning by SD-OCT or SS-OCT devices allows analysis of the variation in reflectivity from retinal blood vessels, permitting the creation of microvascular flow maps. This technology enables clinicians to visualize the microvasculature without the need for an intravenous injection of fluorescein [2]. OCTA represents a progression of OCT technology, as motion contrast is used to create high-resolution, volumetric, angiographic flow images in a few minutes [85]. Neovascularization at the optic disc is clearly visualized on OCTA, and microaneurysms appear as focally distended saccular or fusiform capillaries on OCTA [86].
  • Intraoperative optical coherence tomography: Performing intraoperative OCT in the operating theater may offer supplementary data on retinal structures that were inaccessible preoperatively due to media opacity [2]. A prospective study of intraoperative and perioperative ophthalmic imaging with OCT has been performed to assess the feasibility, utility, and safety of using intraoperative OCT across different vitreoretinal surgical procedures. The information obtained from intraoperative OCT permits surgeons to evaluate subtle details from a perspective distinct from that of standard en face visualization, which can improve surgical decisions and patient outcomes [87]. Intraoperative OCT revealed variable retinal abnormalities in patients who underwent pars plana vitrectomy for dense vitreous hemorrhage secondary to DR, including epiretinal membranes (60.9%), macular edema (60.9%), and retinal detachment (4.3%). The surgeons reported that intraoperative OCT impacted their surgical decision making, particularly when membrane peeling was performed [88].
  • Functional OCT makes it possible to perform noninvasive physiological evaluation of retinal tissue, with respect to factors such as its metabolism [89,90]. A transient intrinsic optical signal (IOS) has been noted in retinal photoreceptors, suggesting a distinctive biomarker for ocular disease detection. By developing high-spatiotemporal-resolution OCT and using an algorithm for IOS processing, the transient IOS could be recorded [89]. IOS imaging is a promising alternative for the measurement of retinal physiological functions [91]. Functional OCT provides a noninvasive method for the early detection and improved treatment of retinal diseases that cause changes to retinal function and photoreceptor damage, such as DR and AMD, which can be detected using functional OCT as differences in IOS [2,89].
Functional extensions to OCT increase its clinical potential. For example, polarization-sensitive OCT (PS-OCT) delivers intrinsic, tissue-specific contrast of birefringent (e.g., retinal nerve fiber layer (RNFL)) and depolarizing (e.g., retinal pigment epithelium (RPE)) tissue with the use of polarized light. This makes PS-OCT helpful for the diagnosis of RPE disorders in some diseases, such as AMD [92].
Another extension is Doppler tomography, which allows depth-resolved imaging of flow by observing differences in phase between successive depth scans. This technology offers important data about blood flow patterns in the retina and choroid, granting absolute quantification of the flow within retinal vessels [93].

2.4.1. Application of OCT in DR

OCT has become the gold standard method for the evaluation and management of DME by visualizing changes in the retinal anatomy caused by DME and monitoring the response to treatment [2,10]. OCT is able to determine whether DME is center involving or noncenter involving, which affects the therapy plan [13].
DME produces several morphologic patterns: diffuse thickening of the retina, intraretinal cystic spaces, vitreofoveal traction with loss of the foveal depression, and loss of the external limiting membrane. These patterns are correlated with the degree of visual impairment and the thickness of the retina [10,94,95]. In severe forms of DME, subretinal fluid and focal retinal detachment occur, appearing as voids or dark spaces between the retina and the retinal pigment epithelium [10]. Hard exudates appear on OCT as hyperreflective punctate foci in the outer plexiform layer. Hemorrhages in different layers of the retina, as well as cotton wool spots in the superficial retinal layer, may also be visualized. Retinal neovascularization appears on OCT as a highly reflective spot on the inner surface of the retina [10]. Hyaloid traction and preretinal membranes cause distortion of the retinal architecture on OCT, and identification of this traction is crucial for the management of DME, as such eyes will not respond to treatment and may require surgery for the resolution of DME. OCT cannot diagnose macular ischemia as FFA can, which in turn limits the ability of OCT to correlate anatomical changes with visual acuity [10].
There are multiple classifications for DME based on OCT findings. Patterns are described as diffuse or sponge-like retinal thickening, cystoid macular edema, serous subretinal fluid without posterior hyaloid traction, with posterior hyaloid traction, and mixed patterns [96]. Identification of these different patterns directly affects the diagnosis and treatment of DME [10].

2.4.2. Application of OCTA in DR

OCTA can be used to visualize abnormal flow patterns or irregular vessel geometries in DR, in order to diagnose retinal neovascularization, capillary nonperfusion, and microaneurysms. Studies have confirmed that OCTA can detect subtle changes in the retinal vasculature before the clinical features of retinopathy develop, allowing early detection of DR and helping in screening for DR among at-risk populations [85].
With regard to the visualization of microaneurysms, OCTA can be used to detect the intraretinal depth of extension of microaneurysms, although it appears less sensitive than FFA for the detection of microaneurysms [97]. Microaneurysms in OCTA appear as dilated capillary segments or loops, small neovascularization foci, or focal capillary dilatations in areas adjacent to capillary nonperfusion [97]. OCTA can produce quantitative measurements of the foveal avascular zone, capillary nonperfusion areas, flow maps, and vessel density. These quantitative data can provide more detailed and precise information than that obtained using FFA [98].
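To make these quantitative OCTA measurements concrete, the sketch below computes two of them (vessel density and FAZ area) from a binary vessel mask. The mask, the scan size, and the FAZ segmentation are synthetic stand-ins, not real OCTA data or a validated pipeline.

```python
# Hedged sketch: two quantitative OCTA measurements mentioned above, computed
# from a binary vessel mask. All data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
vessel_mask = rng.random((304, 304)) > 0.6     # fake binary perfusion map
pixel_area_mm2 = (3.0 / 304) ** 2              # assuming a 3 mm x 3 mm scan

# Vessel density: fraction of the scan area occupied by perfused pixels.
vessel_density = vessel_mask.mean()

# FAZ area: area of a (here, arbitrarily placed) avascular region at the fovea.
faz_mask = np.zeros_like(vessel_mask, dtype=bool)
faz_mask[140:165, 140:165] = True              # stand-in for a segmented FAZ
faz_area_mm2 = faz_mask.sum() * pixel_area_mm2

print(f"vessel density: {vessel_density:.2f}")
print(f"FAZ area: {faz_area_mm2:.3f} mm^2")
```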

2.4.3. Application of OCT in AMD

OCT is one of the most suitable noninvasive imaging modalities for identifying and monitoring AMD. There are four hyperreflective bands detected in AMD patients using OCT, which are assumed to represent the external limiting membrane, the inner/outer segment of the photoreceptor, RPE, and Bruch’s membrane [99]. OCT is able to demonstrate AMD abnormalities such as drusen deposits, pseudodrusen, subretinal fluid, RPE detachment, and choroid NV. Drusen deposits present as low mounds underneath the RPE layer, while pseudodrusen presents as a hyperreflective deposit located beneath the retina layer [23].
The existence of pseudodrusen in AMD patients is associated with increased risk of GA or nAMD. OCT has the highest sensitivity and specificity for the detection of pseudodrusen among all of the other imaging modalities [100]. OCT is frequently used as a reference imaging method for evaluating the response of nAMD to anti-vascular endothelial growth factor therapy [101,102]. A recent study demonstrated that SD-OCT or FA combined with CFP had similar sensitivity and specificity, with no statistical difference for the primary diagnosis of NV secondary to AMD [103].
In GA, the RPE atrophy shows a feathered-like form projected deep into the RPE [36]. OCT additionally displays a progressive loss of retinal bands, which is related to the external limiting membrane, the inner/outer segments of the photoreceptor layer, the RPE layer, and the outer nuclear layer [33]. The enlargement of the atrophic region is linked with the gradual loss of the outer hyperreflective bands and the thinning of the outer nuclear layer, the outer plexiform layer, and the RPE membrane during 12 months of follow-up. Additionally, GA is related to a 14.09 μm increase in retinal thickness [104].
NV activity is assessed on OCT based on the accumulation of fluid at different levels of the retina. Subretinal fluid is depicted as a hyporeflective lesion situated above the RPE and below the retina [23]. RPE detachment appears as a dome-shaped elevation of the RPE layer [36]. Exudative activity is one of the defining factors for nAMD treatment; increased choroidal thickness may be a clue, but OCT cannot distinguish between classic AMD and PCV [23]. Outer retinal tubules are another structural retinal abnormality on OCT, appearing as a hyporeflective center with a hyperreflective border. They represent degenerated photoreceptors, and thus do not indicate exudative NV activity and do not require treatment for nAMD [105].
OCT has drawbacks with respect to grading choroidal NV; however, these can be overcome by performing FFA or OCTA in combination with OCT when indicated [30].

2.4.4. Application of OCTA in AMD

In contrast to both FFA and ICGA, OCTA is a noninvasive procedure for retinal vascular imaging, and has rapidly achieved acceptance for the detection and monitoring of nAMD [106]. The presence of choroidal NV on OCTA correlates well with findings on structural OCT and FFA [107]. The improved definition of NV on OCTA has led to an improved understanding of the structural evolution of these lesions with anti-angiogenic treatment [108]. Despite inactivity on FFA, a vascular network can remain persistent on OCTA [109].
OCTA has CNV detection ability equivalent to that of FFA and ICGA [30]. In nonexudative CNV, OCTA is valuable for visualizing choriocapillaris blood flow, showing a significant decrease in choriocapillaris flow in the atrophic zone that reaches outside the GA area in dry AMD [110]. OCTA also revealed a significantly decreased vessel density (by 9%) in dry AMD patients in both the superficial and deep vascular layers compared with healthy individuals [111].
Nonexudative CNV is often identified by OCTA in the eyes of patients with exudative CNV, with a high risk of exudation developing within the first year after detection. Those patients could benefit from close monitoring [84].
CNV is detected as a hyperfluorescent high-flow network with variable depth of retinal involvement according to the degree of CNV [23]. Type I CNV appears as a minimally delineated vascularization developing from the choriocapillaris that penetrates Bruch's membrane but does not penetrate the RPE layer, with no evidence of NV in the outer retina [112,113]. Type II CNV appears as a sharply demarcated vascular change at the choroid, choriocapillaris, and RPE, extending to the outer retina [113]. Type III CNV appears as a hyperreflective cluster located in the outer retinal layer, with interconnecting vessels and inner retinal circulation [106].
OCTA has a number of limitations, as subretinal hemorrhage diminishes its signal to detect CNV. In addition, OCTA has lower sensitivity compared to FFA for the detection of exudative AMD in cases with large subretinal hemorrhages [114]. Furthermore, OCTA may underestimate the CNV size compared to ICGA [115].

2.5. Adaptive Optics (AO)

AO is a technology in which scanning laser ophthalmoscopy (SLO) and OCT are adapted to resolve optical aberrations during retinal imaging. AO grants noninvasive visualization and quantification of retinal capillaries, as it can deliver high-resolution images of the foveal cones and dynamic images of the retinal vasculature, and it enables arterial wall measurements and blood flow speed calculations [116,117]. However, AO is limited by its very small field of view, in the range of 1–2 degrees, which hinders its clinical benefit.

2.5.1. Application of AO in DR

In diabetic eyes, AO has been used to show the irregular branching of blood vessels, shunt vessels and narrowed perifoveal capillaries [118,119]. The diminished regularity of the cone photoreceptor arrangement determined with AO-SLO has been correlated with increasing DR severity and DME [120]. Additionally, an association between capillary nonperfusion in the deep capillary plexus and abnormalities in the photoreceptor layer in DR has been reported using both AO-SLO and OCTA [121].

2.5.2. Application of AO in AMD

AO promotes the correction of ocular aberrations, increases lateral resolution, and decreases artifacts. AO-OCT enhances the ability of OCT to grant early recognition of cellular pathology before visual changes occur [122]. AO-OCT reveals higher reflectivity and reduced speckle size in AMD compared to in OCT [123]. In GA, AO-OCT reveals detailed membrane loss, inner and outer segment loss, and RPE loss. Additionally, in advanced GA, the AO-OCT detects calcified drusen and drusenoid pigment epithelial detachment, thus allowing direct visualization of the photoreceptor destruction caused by drusen [63].

2.6. Ultrasound Imaging

Ophthalmic B-scan ultrasound is a rapid, noninvasive imaging technique that creates real-time high-resolution images of the eye with minimum discomfort [124].

Application of Ultrasonic Imaging in DR

In DR, B-scan ultrasound imaging can determine the status of the retina when visibility is obscured by hemorrhage or dense cataract. It can illustrate the causes of low vision in patients with DR, such as vitreous hemorrhage, asteroid hyalosis, and retinal detachment, with a better assessment of the complications that predict the visual outcome [125]. Ophthalmic ultrasound can also accurately illustrate ocular emergencies, such as retinal detachment and ocular trauma [126]. Additionally, ophthalmic ultrasound is very helpful for relieving the risk of vision loss associated with central retinal artery occlusion [127]. B-scan ultrasound imaging is not sensitive enough to evaluate for DME, and it has a restricted efficiency when the ocular media is clear [128].

3. Denoising of Retinal Images

To process retinal images using AI, denoising is a preprocessing step that may help to improve AI efficiency for the detection, diagnosis, and staging of retinal diseases. Noise sources in CFP include additive and multiplicative noise [129]. For OCT, noise sources include speckle noise, shot noise, and additive white Gaussian noise (AWGN) [130]. For OCTA, noise sources include speckle noise and AWGN [131]. For FFA, noise sources include the internal noise of sensitive components, optical material grain noise, thermal noise, transmission channel interference, and quantization noise [132]. The most popular denoising techniques include Gaussian, median, wavelet, and/or spatial-domain filters. More recently, deep autoencoders have come to play a significant role in image denoising. However, recent deep learning approaches attempt to train networks to process noisy images directly, without the need for preprocessing.
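As a brief illustration of the classical filters mentioned above, the sketch below applies median and Gaussian filtering to a synthetic noisy image standing in for a fundus photograph; the image, noise level, and filter sizes are placeholder assumptions.

```python
# Hedged sketch: classical denoising filters applied to a synthetic noisy
# image standing in for a retinal photograph. All parameters are placeholders.
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[60:68, :] = 1.0                                     # crude stand-in for a vessel
noisy = clean + rng.normal(scale=0.2, size=clean.shape)   # additive Gaussian noise

median_denoised = median_filter(noisy, size=3)            # robust to impulse-like noise
gaussian_denoised = gaussian_filter(noisy, sigma=1.0)     # smooths AWGN, adds some blur

for name, img in [("noisy", noisy), ("median", median_denoised),
                  ("gaussian", gaussian_denoised)]:
    print(name, "std:", round(float(img.std()), 3))
```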

4. The Role of AI in the Diagnosis of Retinal Diseases

Artificial intelligence (AI) is a field of knowledge that refers to the imitation of the way in which humans think and solve problems using artificially intelligent components. Machine learning is a basic part of AI. Machine learning depends on extracting features from the input database using different image processing tools, and either categorizing the data based on unsupervised learning or classifying the data into grades using supervised learning. Supervised learning refers to data classification based on supervision (i.e., through labeled input–output pairs; each pair contains an input associated with its desired ground truth output). Classifiers include support vector machines (SVMs), random forests, and traditional neural networks (networks composed of two layers, where the traditional back-propagation algorithm is used to adjust the weights); see Figure 4. Recently, deep learning, which is a part of machine learning, has gained a lot of popularity and potential applications in the medical field. The most popular deep learning networks are convolutional neural networks (CNNs). Unlike traditional neural networks, CNNs are composed of many convolutional and fully connected layers that perform both feature extraction and classification. On the other hand, unsupervised learning does not depend on supervision (labeled input–output pairs) to perform data categorization. Instead, the patterns of the input data are used to categorize the data efficiently.
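To make the distinction concrete, the sketch below defines a small CNN of the kind described above, in which convolutional layers learn the features and fully connected layers perform the classification. The input size, layer widths, and two-class setup are arbitrary illustrative choices, not an architecture taken from the surveyed literature.

```python
# Minimal sketch of a CNN classifier: convolutional layers learn the features
# and fully connected layers perform the classification. All sizes are
# arbitrary illustrative choices.
import torch
import torch.nn as nn

class SmallRetinalCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = SmallRetinalCNN()
dummy_batch = torch.randn(4, 1, 64, 64)   # four fake grayscale retinal patches
logits = model(dummy_batch)
print(logits.shape)                        # torch.Size([4, 2])
```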
Nowadays, AI plays a major role in many applications, including in the detection, diagnosis, grading, and classification of eye diseases (see Figure 2). In this paper, we will briefly survey the different AI-based methods for the early detection, diagnosis, grading, and classification of eye diseases. We will focus on the two major eye diseases: DR and AMD.
To measure the performance of AI components on medical problems, such as the early detection, diagnosis, and classification of eye diseases, different metrics are used. In this section, we provide a brief overview of these metrics.

Performance Metrics

Let TP denote true positives, TN true negatives, FN false negatives, and FP false positives. The performance metrics are then defined as follows (a short computational sketch follows the list):
  • Specificity:
    Spec = (number of true negative assessments) / (number of all negative assessments) = TN / (TN + FP)
  • Sensitivity (recall):
    Sen = (number of true positive assessments) / (number of all positive assessments) = TP / (TP + FN)
  • Accuracy:
    ACC = (number of correct assessments) / (number of all assessments) = (TP + TN) / (TP + TN + FP + FN)
  • F1-score:
    F1 = TP / (TP + 0.5 (FP + FN))
  • Precision:
    Pre = (number of true positive assessments) / (number of all positive assessments) = TP / (TP + FP)
  • Kappa:
    k = (p_o - p_e) / (1 - p_e)
    where p_o = (number of agreements among raters) / (total number of assessments) is the observed agreement and p_e is the hypothetical probability of chance agreement.
  • AUC is the area under the receiver operating characteristic (ROC) curve, which plots the false positive rate (1 - specificity, on the x-axis) against the true positive rate (sensitivity, on the y-axis). The AUC lies between 0 and 1; the closer the AUC is to 1, the better the performance.
  • Confusion matrix: a summary of classification results highlighting the number of correct and incorrect predictions for each class.
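The minimal sketch below computes the metrics defined above directly from the four confusion counts; the example counts at the end are invented purely for illustration.

```python
# Minimal sketch: the performance metrics defined above, computed from the
# confusion counts TP, TN, FP, FN. Example counts are invented for illustration.

def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    total = tp + tn + fp + fn
    sensitivity = tp / (tp + fn)               # recall / true positive rate
    specificity = tn / (tn + fp)               # true negative rate
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / total
    f1 = tp / (tp + 0.5 * (fp + fn))
    # Cohen's kappa for a binary confusion matrix.
    p_o = accuracy
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (p_o - p_e) / (1 - p_e)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy,
            "f1": f1, "kappa": kappa}

print(classification_metrics(tp=85, tn=90, fp=10, fn=15))
```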

5. The Role of AI in the Early Detection, Diagnosis, and Grading of DR

DR is an epidemic disease [133,134]. In this section, the automated methods for the detection, diagnosis, and staging of DR are outlined.

5.1. Traditional Machine Learning Methods

Traditional machine learning (ML) methods involve extracting features from input data using different image processing tools and using a separate classifier to perform classification. These methods may include a feature selection and reduction algorithm to select the features most relevant to the specific classification problem. In the literature, different ML methods have been applied for the detection, diagnosis, and grading of DR. These methods differ with respect to the image modality used, the features extracted, and the classifier used. The most used image modality is fundus imaging (10 out of 18 research studies), followed by OCT and then OCTA. Please note that the OCT and OCTA modalities have more recently become the modalities of choice (i.e., between 2020 and 2022). Features include statistical, texture, and morphological (shape) features. The most used classifiers are the SVM and traditional neural networks. Figure 5 summarizes the traditional ML methods used for detecting, diagnosing, and grading DR.
For fundus images, different features obtained using different image processing algorithms have been used. For example, Welikala et al. [135] used local morphology features with a genetic feature selection algorithm to select the most relevant features for the detection of new vessels from fundus images as an indication of PDR. The detection was performed using an SVM classifier. Prasad et al. [136] used 41 statistical and texture features followed by a Haar wavelet transform for feature selection and principal component analysis (PCA) for feature reduction. A back propagation neural network and a one-rule classifier were used for the detection of DR from fundus images. Mahendran et al. [137] used both statistical and texture features extracted using a gray-level co-occurrence matrix (GLCM) applied on segmented fundus images. They used SVM and neural networks to detect abnormal DR and then to classify abnormal DR into moderate NPDR or severe NPDR. Bhatkar et al. [138] used discrete cosine transform and statistical features to detect DR using fundus images. A multi-layer perceptron neural network was used for the discrimination of abnormal DR images from normal ones. Labhade et al. [139] classified the data into four classes: normal, mild NPDR, severe NPDR, and PDR, using 40 statistical and GLCM texture features extracted from fundus images. Different classifiers were investigated, including SVM, random forest, gradient boost, AdaBoost, and Gaussian naive Bayes, with the SVM classifier achieving the best performance. Rahim et al. [140] classified fundus images into five classes: no DR, mild NPDR, moderate NPDR, severe NPDR, and PDR. Three features were used: the area, mean, and standard deviation of two extracted regions (i.e., retina and exudates), which were segmented using fuzzy techniques. An SVM with a radial basis function (RBF) kernel was used for classification. Islam et al. [141] discriminated between normal and DR fundus images using speeded-up robust features (SURF), followed by k-means clustering, a bag-of-words approach, and SVM classifiers. Carrera et al. [142] classified nonproliferative DR into four grades using fundus images. They extracted features from isolated blood vessels, microaneurysms, and hard exudates, and an SVM was used to perform classification. Somasundaram and Alli [143] differentiated between NPDR and PDR. They extracted candidate objects (blood vessels, optic nerve, neural tissue, neuroretinal rim, optic disc size, thickness, and variance), and a bagging ensemble was used for classification. Costa et al. [144] graded DR using fundus images. They used a weakly supervised multiple-instance learning framework based on joint optimization of the instance encoding and the image classification stages.
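The sketch below illustrates the general pattern shared by many of these studies (handcrafted texture features fed to an SVM), using GLCM statistics from scikit-image. It is not a reproduction of any specific cited method; the patches and labels are random stand-ins for labeled fundus data.

```python
# Illustrative sketch of the handcrafted-features-plus-SVM pattern: GLCM
# texture statistics fed to an SVM. Data are random stand-ins, not fundus images.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(patch: np.ndarray) -> np.ndarray:
    """Contrast, correlation, energy, and homogeneity from a GLCM."""
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(40, 32, 32), dtype=np.uint8)  # fake patches
labels = rng.integers(0, 2, size=40)                               # fake DR labels

X = np.stack([glcm_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```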
For OCT images, different methods have been applied. For example, Sharafeldeen et al. [145] detected DR from OCT images using features extracted from 12 retinal layers, including the thickness, tortuosity, and reflectivity of each layer; two-level neural networks were used for classification. Wang et al. [146] extracted foveal avascular zone (FAZ) metrics, vessel density, extrafoveal avascular area, and vessel morphology metrics, and a multivariate regression analysis was used to identify the most discriminative features for grading DR. Abdelsalam et al. [147] used multifractal geometry and an SVM for the early diagnosis of NPDR using OCTA. Elsharkawy et al. [148] applied majority voting to an ensemble of neural networks whose inputs were the Gibbs energies extracted from the 12 retinal layers.
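The snippet below is a simplified sketch, not the exact two-level network of Sharafeldeen et al. [145]: per-layer thickness and mean reflectivity are computed from assumed layer segmentation masks, stacked into one vector per scan, and classified with a small fully connected network. The synthetic B-scans, masks, and labels are placeholders.

```python
# Simplified per-layer OCT features (thickness, reflectivity) + small neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier

def layer_features(oct_bscan, layer_masks):
    """Thickness (mean column height) and mean reflectivity for each segmented layer."""
    feats = []
    for mask in layer_masks:
        thickness = mask.sum(axis=0).mean()      # average layer thickness in pixels
        reflectivity = oct_bscan[mask].mean()    # average layer reflectivity
        feats.extend([thickness, reflectivity])
    return np.array(feats)                       # 2 features x 12 layers = 24-D vector

rng = np.random.default_rng(1)
H, W, n_layers = 120, 200, 12
bands = np.array_split(np.arange(H), n_layers)   # fake segmentation: 12 horizontal bands
masks = [np.zeros((H, W), bool) for _ in range(n_layers)]
for m, rows in zip(masks, bands):
    m[rows, :] = True

X = np.array([layer_features(rng.random((H, W)), masks) for _ in range(100)])
y = np.tile([0, 1], 50)                          # placeholder labels: 0 = normal, 1 = DR
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000).fit(X, y)
print(clf.score(X, y))
```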
For OCTA images, different techniques have been employed. For example, Eladawi et al. [149] achieved early detection of DR using OCTA; they extracted features such as the density and appearance of the retinal blood vessels and the distance map of the foveal avascular zone, and an SVM was used for classification. Alam et al. [150] also addressed early detection of DR using OCTA images. Features including blood vessel tortuosity, blood vascular caliber, vessel perimeter index, blood vessel density, foveal avascular zone area, and foveal avascular zone contour irregularity were extracted, and an SVM was used for classification. Liu et al. [151] detected DR using OCTA; a discrete wavelet transform was applied to extract texture features from each image, and several classifiers were investigated, including logistic regression, logistic regression regularized with the elastic net penalty, SVM, and a gradient boosting tree.
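The following is a rough sketch in the spirit of Liu et al. [151]: a 2-D discrete wavelet transform (PyWavelets) yields detail sub-bands whose energies serve as texture features for a logistic regression classifier. The wavelet, decomposition level, and sub-band statistics are assumptions, and the images and labels are placeholders.

```python
# Wavelet texture features from OCTA en-face images + logistic regression.
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression

def dwt_texture_features(image, wavelet="db2", level=2):
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    feats = []
    for band in coeffs[1:]:            # detail sub-bands (cH, cV, cD) at each level
        for sub in band:
            feats.append(np.mean(sub ** 2))   # sub-band energy as a texture descriptor
    return np.array(feats)

rng = np.random.default_rng(2)
images = rng.normal(size=(50, 64, 64))        # stand-ins for OCTA en-face images
X = np.array([dwt_texture_features(im) for im in images])
y = np.tile([0, 1], 25)                       # placeholder labels: 0 = control, 1 = DR
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))
```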
Mixed modalities have also been investigated. For example, Sandhu et al. [152] diagnosed NPDR using both OCT and OCTA. From OCT, the curvature, reflectivity, and thickness of the retinal layers were extracted; from OCTA, the area of the foveal avascular zone, the vascular caliber, the vessel density, and the number of bifurcation points were extracted. A random forest classifier was used for classification. Table 1 summarizes the traditional ML methods for early detection, diagnosis, and grading of DR.
Table 1. Traditional ML methods for early detection, diagnosis, and grading of DR.

| Study | Goal | Features | Classifier | Database Size | Performance |
| --- | --- | --- | --- | --- | --- |
| Welikala et al. [135], 2015 | Detection of new vessels from fundus images as an indication of PDR | Local morphology features + genetic feature selection algorithm | SVM | 60 images from MESSIDOR [153] and a local hospital | Sen = 1.000, Spec = 0.975 (per image) |
| Prasad et al. [136], 2015 | Detection of DR (two classes: non-DR vs. DR) using fundus images | 41 statistical and texture features + Haar wavelet transform for feature selection + PCA for feature reduction | Back-propagation neural network and one-rule classifier | 89 images from DIARETDB1 [154] | ACC = 93.8% (back-propagation neural network); ACC = 97.75% (one-rule classifier) |
| Mahendran et al. [137], 2015 | Classification into normal vs. abnormal, then classification of abnormal into moderate or severe NPDR, using fundus images | Statistical and texture features (GLCM) extracted from segmented images | SVM and neural network | 1200 images from the MESSIDOR database | ACC = 97.8% (SVM); ACC = 94.7% (neural network) |
| Bhatkar et al. [138], 2015 | Detection of DR using fundus images | Discrete cosine transform and statistical features | Multi-layer perceptron neural network | 130 images from the DIARETDB0 database | Spec = 100%, Sen = 100% |
| Labhade et al. [139], 2016 | Classification into four classes (normal, mild NPDR, severe NPDR, and PDR) using fundus images | 40 statistical and GLCM texture features | SVM, random forests, gradient boost, AdaBoost, Gaussian naive Bayes | 1200 images from the MESSIDOR database | Best ACC = 88.71% (SVM) |
| Rahim et al. [140], 2016 | Classification into five classes (no DR, mild NPDR, moderate NPDR, severe NPDR, and PDR) using fundus images | Three features (area, mean, and standard deviation) of two regions (retina and exudates) extracted using fuzzy techniques | SVM with RBF kernel | 600 images from 300 patients collected at Hospital Melaka, Malaysia | ACC = 93%, Spec = 93.62%, Sen = 92.45% |
| Islam et al. [141], 2017 | Discrimination between normal and DR using fundus images | Speeded-up robust features | k-means, bag of words, and SVM | 180 fundus images | ACC = 94.4%, Pre = 94%, F1 = 94%, AUC = 95% |
| Carrera et al. [142], 2017 | Classification of nonproliferative DR into 4 grades using fundus images | Features extracted from isolated blood vessels, microaneurysms, and hard exudates | SVM | 400 images | Sen = 95% |
| Somasundaram and Alli [143], 2017 | Differentiation between NPDR and PDR | Extraction of candidate objects (blood vessels, optic nerve, neural tissue, neuroretinal rim, optic disc size, thickness, and variance) | Bagging ensemble classifier | 89 color fundus images | ACC = 49% for DR detection |
| Eladawi et al. [149] | Early detection of DR using OCTA | Density and appearance of the retinal blood vessels, and distance map of the foveal avascular zone | SVM | 105 subjects | ACC = 97.3% |
| Costa et al. [144] | Grading DR using fundus images | Joint optimization of the instance encoding and the image classification stages | Weakly supervised multiple instance learning framework | 1200 images (Messidor); 1077 images (DR1); 5320 images (DR2) | AUC = 90% (Messidor); AUC = 93% (DR1); AUC = 96% (DR2) |
| Alam et al. [150] | Early detection of DR using OCTA images | Blood vessel tortuosity, blood vascular caliber, vessel perimeter index, blood vessel density, foveal avascular zone area, and foveal avascular zone contour irregularity | SVM | 120 images | AUC = 94.41% (control vs. disease); AUC = 92.96% (control vs. mild) |
| Sandhu et al. [152], 2020 | Diagnosis of NPDR using OCT and OCTA | Curvature, reflectivity, and thickness of retinal layers (OCT); area of the foveal avascular zone, vascular caliber, vessel density, and number of bifurcation points (OCTA) | Random forest | 111 patients | ACC = 96%, Sen = 100%, Spec = 94%, AUC = 0.96 (OCT + OCTA) |
| Sharafeldeen et al. [145], 2021 | Detection of DR using OCT | Thickness, tortuosity, and reflectivity of 12 extracted retinal layers | Two-level neural networks | 260 images from 130 patients | Sen = 96.15%, Spec = 99.23%, F1 = 97.66%, AUC = 97.69% |
| Liu et al. [151], 2021 | Detection of DR using OCTA | Texture features extracted from each image using a discrete wavelet transform | Logistic regression, logistic regression regularized with the elastic net penalty, SVM, and gradient boosting tree | 114 DR images + 132 control images | ACC = 82%, AUC = 0.84 (logistic regression) |
| Wang et al. [146], 2021 | Grading DR using OCT images | Foveal avascular zone (FAZ) metrics, vessel density, extrafoveal avascular area, and vessel morphology metrics | Multivariate regression analysis to identify the most discriminative features | 105 eyes from 105 patients | Sen = 83.72%, Spec = 78.38% |
| Abdelsalam et al. [147], 2021 | Diagnosis of early NPDR using OCTA | Multifractal geometry | SVM | 170 eye images | ACC = 98.5%, Sen = 100%, Spec = 97.3% |
| Elsharkawy et al. [148], 2022 | Detection of DR using OCT | Gibbs energy extracted from 12 retinal layers | Majority voting using an ensemble of neural networks | 188 3D-OCT subjects | ACC = 90.56% (4-fold cross-validation) |

5.2. Deep Learning Methods

The most popular deep learning method is the CNN, which is composed of two types of layer: convolutional layers and fully connected layers. Convolutional layers are used to extract low- and high-level compact features, whereas the fully connected layers are used for classification. Different deep learning techniques may be applied, including transfer learning, data augmentation, and ensemble learning. Figure 6 summarizes the deep learning methods used for the detection, diagnosis, and grading of DR.
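The following is a minimal CNN sketch (in PyTorch) illustrating the two building blocks just described: a convolutional feature extractor followed by fully connected classification layers. The architecture, input size, and channel counts are illustrative assumptions, not any specific network from the cited studies.

```python
# Minimal CNN: convolutional feature extractor + fully connected classifier.
import torch
import torch.nn as nn

class SimpleDRNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(              # convolutional layers: feature extraction
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Sequential(            # fully connected layers: classification
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SimpleDRNet()
logits = model(torch.randn(8, 3, 224, 224))        # batch of 8 RGB fundus-sized images
print(logits.shape)                                 # torch.Size([8, 2])
```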
For fundus images, different deep learning methods have been applied. For example, Gulshan et al. [155] used an ensemble of 10 CNNs for the grading of DR and DME using fundus images; the final decision of the ensemble was computed as the linear average of the individual predictions. Colas et al. [156] graded DR using a deep CNN applied to fundus images; their technique also provided the locations of the detected anomalies. Ghosh et al. [25] applied data augmentation, normalization, and denoising preprocessing stages; a 28-layer CNN was then applied for the grading of DR using fundus images. Takahashi et al. [157] differentiated between NPDR, severe NPDR, and PDR using fundus images, applying a modified GoogleNet to the fundus scans to perform both feature extraction and classification. An ensemble of 26-layer ConvNets was used by Quellec et al. [158] for the grading of DR using fundus images. Ting et al. [134] identified DR and related eye diseases using an adapted VGGNet architecture; an ensemble of two networks was used for the detection of referable DR from fundus images. A zoom-in network was applied by Wang et al. [159] for the diagnosis of DR and the identification of suspicious regions using fundus images. Dutta et al. [160] compared a back-propagation NN, a deep NN, and a VGG16-based CNN for the differentiation between mild NPDR, moderate NPDR, severe NPDR, and PDR using fundus images; the deep NN achieved the best performance. Zhang et al. [161] diagnosed the severity of DR using DR-Net with an adaptive cross-entropy loss. Chakrabarty et al. [162] resized the grey-level fundus scans and input them to a nine-layer CNN in order to detect DR. Kwasigroch et al. [163] input the fundus images into a VGGNet in order to detect and stage DR. Li et al. [164] enhanced the contrast of fundus scans and input them into a transfer-learned Inception-v3 CNN in order to detect referable DR. Nagasawa et al. [165] differentiated between non-PDR and PDR using ultrawide-field fundus images and transfer learning of an Inception-v3 CNN. Metan et al. [166] used ResNet for DR staging using color fundus images. Qummar et al. [167] used an ensemble of five CNNs, i.e., ResNet50, Inception-v3, Xception, DenseNet121, and DenseNet169, to perform DR staging using fundus images. Sayres et al. [168] used Inception-v4 for DR staging using fundus images. Sengupta et al. [169] applied data preprocessing steps followed by an Inception-v3 CNN for DR staging using fundus images. Hathwar et al. [41] used a transfer-learned Xception model for DR detection and staging using fundus images. Narayanan et al. [170] detected and graded DR from fundus images by investigating transfer learning of different networks, including AlexNet, VGG16, ResNet, Inception-v3, NASNet, DenseNet, and GoogleNet. Shankar et al. [171] applied histogram-based segmentation to extract the details of the fundus image, and synergic deep learning was then performed for DR grading. He et al. [172] graded DR using fundus images with CABNet, an attention module with a global attention block, using DenseNet-121 as the backbone network. Saeed et al. [173] applied transfer learning using two pretrained CNNs for DR grading using fundus images. Wang et al. [174] applied transfer learning using two networks, i.e., Inception-v3 and lesionNet, for DR grading. Hsieh et al. [175] used VeriSee™ software, which is based on a modified Inception-v4 model as a backbone network, to perform DR grading using fundus images. Khan et al. [176] graded DR using a VGG-NiN model, which is formed by stacking VGG16, a spatial pyramid pooling layer, and a network-in-network. Zia et al. [177] applied feature selection and fusion steps, and then investigated CNNs, including VGGNet and Inception-v3 models, for DR grading. Das et al. [178] detected and classified DR using fundus images; a CNN was built in which the number of layers was selected using a genetic algorithm, and an SVM was used for classification. For grading DR, Tsai et al. [179] applied transfer learning using three models, i.e., Inception-v3, ResNet101, and DenseNet121.
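A common thread in many of these fundus studies is transfer learning from an ImageNet-pretrained backbone combined with simple data augmentation. The sketch below is a hedged illustration of that recipe, using torchvision's ResNet-50 as an example backbone (not necessarily the network used in any particular cited work) and assuming torchvision ≥ 0.13 for the weights API; the augmentation choices are assumptions.

```python
# Hedged transfer-learning sketch: frozen pretrained backbone + new 5-class DR head.
import torch
import torch.nn as nn
from torchvision import models, transforms

num_grades = 5                                   # no DR, mild, moderate, severe NPDR, PDR
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():                     # freeze the pretrained convolutional weights
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_grades)   # new trainable output head

train_transform = transforms.Compose([           # typical augmentation for fundus images
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
])

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# Training loop omitted; only the new head (model.fc) is updated by the optimizer.
```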
Using FFA images, Gao et al. [180] graded DR by investigating three deep networks, i.e., VGG16, ResNet50, and DenseNet. VGG16 achieved the best performance, with an accuracy of 94.17%.
Using OCT images, Eltanboly et al. [181,182] detected and graded DR by extracting features including the reflectivity, curvature, and thickness of twelve segmented retinal layers; deep fusion of these features was performed using auto-encoders. Li et al. [183] applied a deep network, called OCTD_Net, for the early detection of DR from OCT images. Ghazal et al. [184] performed early detection of NPDR from OCT images using an AlexNet followed by an SVM for classification.
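The snippet below is a rough sketch of the "deep features plus classical classifier" pattern used by Ghazal et al. [184] (AlexNet followed by an SVM): an ImageNet-pretrained AlexNet acts as a fixed feature extractor and an SVM makes the final decision. The preprocessing, training data, and labels here are placeholders, and torchvision ≥ 0.13 is assumed.

```python
# Pretrained AlexNet as a fixed feature extractor, followed by an SVM classifier.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.classifier = nn.Sequential(*list(alexnet.classifier.children())[:-1])  # drop last FC
alexnet.eval()

@torch.no_grad()
def deep_features(batch):                 # batch: (N, 3, 224, 224) image tensors
    return alexnet(batch).numpy()         # 4096-D deep feature vector per image

images = torch.randn(20, 3, 224, 224)     # stand-ins for preprocessed OCT-derived images
labels = np.array([0, 1] * 10)            # placeholder labels: 0 = normal, 1 = NPDR
svm = SVC(kernel="rbf").fit(deep_features(images), labels)
```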
Using OCTA images, Heisler et al. [185] applied ensemble training based on majority voting or stacking using four fine-tuned VGG19 networks; a maximum accuracy of 92% was achieved with the majority voting technique. Ryu et al. [186] used ResNet101 for the early detection of DR using OCTA. Using both OCT and OCTA, Zang et al. [187] classified DR using a network called DcardNet, achieving an ACC of 95.7% for the detection of referable DR. Table 2 summarizes the deep learning methods used for early detection, diagnosis, and grading of DR.
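As a minimal illustration of the majority-voting ensembling used by Heisler et al. [185], the sketch below combines the per-image class predictions of several independently trained models and keeps the most frequent label (ties break toward the lower class index). The example predictions are placeholders.

```python
# Majority-vote ensembling of per-image class predictions from several models.
import numpy as np

def majority_vote(predictions):
    """predictions: array of shape (n_models, n_samples) of integer class labels."""
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    votes = np.apply_along_axis(np.bincount, 0, predictions, minlength=n_classes)
    return votes.argmax(axis=0)            # most frequent label per sample

preds = [[0, 1, 1, 0],      # model 1
         [0, 1, 0, 0],      # model 2
         [1, 1, 1, 0],      # model 3
         [0, 0, 1, 0]]      # model 4
print(majority_vote(preds))  # -> [0 1 1 0]
```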

6. The Role of AI in the Early Detection, Diagnosis, and Grading of AMD

In developed countries, AMD is a common eye disease among elderly people. OCT and other imaging modalities are used to detect and diagnose AMD, but subjective diagnosis is tedious and operator dependent. With the advent of machine and deep learning, systems for the early detection, diagnosis, and grading of AMD have been designed to aid clinicians. In this section, we briefly provide an overview of these methods.

6.1. Traditional ML Methods

Different traditional ML methods based on image processing have been used for the early detection, diagnosis, and grading of AMD. Using color fundus images, García-Floriano et al. [190] differentiated normal images from AMD with drusen. The image contrast was enhanced, followed by two morphological operations; invariant moments were then extracted and fed to an SVM, achieving an ACC of 92% for distinguishing AMD with drusen from normal images.
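The following is a hedged sketch in the spirit of García-Floriano et al. [190]: Hu's invariant moments are computed from each preprocessed fundus image and fed to an SVM. The contrast-enhancement and morphological preprocessing steps are omitted, and the images and labels are placeholders.

```python
# Invariant (Hu) moments from preprocessed fundus images + SVM classification.
import numpy as np
from skimage.measure import moments_central, moments_normalized, moments_hu
from sklearn.svm import SVC

def hu_features(image):
    mu = moments_central(image)            # central moments of the image
    nu = moments_normalized(mu)            # scale-normalized moments
    return moments_hu(nu)                  # 7 rotation/scale/translation-invariant moments

rng = np.random.default_rng(3)
images = rng.random(size=(40, 64, 64))     # stand-ins for preprocessed fundus images
X = np.array([hu_features(im) for im in images])
y = np.tile([0, 1], 20)                    # placeholder labels: 0 = normal, 1 = AMD with drusen
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```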
Using OCT, Liu et al. [191] used an automated method to identify normal macula and three types of retinal disease (macular hole, macular edema, and AMD). Each SD-OCT image was encoded using spatially distributed multiscale texture and shape features. Two SVM classifiers with a radial basis function kernel were trained to identify the presence of normal macula and each of the three pathologies separately. For AMD, they achieved an AUC of 0.941. Srinivasan et al. [192] used SD-OCT to identify normal scans and two retinal diseases: dry AMD and diabetic macular edema (DME). Features were extracted using a multiscale histogram of oriented gradients (HOG) descriptor, and an SVM was used for classification; they achieved an ACC of 100% for the identification of cases with AMD. Fraccaro et al. [193] used OCT images to automatically diagnose AMD with the aid of patient features, such as age, gender, and clinical binary signs (i.e., the existence of soft drusen, retinal pigment epithelium defects/pigment mottling, depigmentation area, subretinal hemorrhage, subretinal fluid, macula thickness, macular scar, and subretinal fibrosis). They used two types of classifier: white boxes (interpretable techniques, including logistic regression and decision tree) and black boxes (less interpretable techniques, including SVM, random forest, and AdaBoost). Both types of classifier performed well, with an AUC of 0.92 for random forest, logistic regression, and AdaBoost, and an AUC of 0.9 for SVM and decision tree. Soft drusen and age were identified as the most discriminating variables. A summary of the traditional ML methods for AMD detection, diagnosis, and/or staging is presented in Figure 7 and Table 3.
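The snippet below is a simplified sketch of the HOG-plus-SVM approach of Srinivasan et al. [192]: histograms of oriented gradients are extracted from resized SD-OCT B-scans and a linear SVM separates normal, dry AMD, and DME. The multiscale pyramid of the original work is reduced here to a single scale, and the B-scans and labels are placeholders.

```python
# Single-scale HOG features from SD-OCT B-scans + linear SVM (3-class problem).
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def hog_features(bscan, size=(128, 128)):
    img = resize(bscan, size, anti_aliasing=True)          # normalize B-scan size
    return hog(img, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

rng = np.random.default_rng(4)
scans = rng.random(size=(30, 256, 256))                    # stand-ins for SD-OCT B-scans
X = np.array([hog_features(s) for s in scans])
y = np.tile([0, 1, 2], 10)                                 # 0 = normal, 1 = dry AMD, 2 = DME
clf = LinearSVC(max_iter=5000).fit(X, y)
print(clf.score(X, y))
```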

6.2. Deep Learning Methods

More recently, deep learning methods have been used for the detection, diagnosis, and grading of AMD. Using OCT, Lee et al. [194] modified a VGG19 CNN by exchanging the last fully connected layer with a fully connected layer of two nodes to support binary classification (i.e., a two-class problem). With this network, they were able to differentiate between normal and AMD cases with an AUC of 92.77%, an ACC of 87.6%, a Sen of 84.6%, and a Spec of 91.5% at the level of each image. Burlina et al. [195] used color fundus images to differentiate no or early AMD from intermediate or advanced AMD. They built an AlexNet architecture using a database of over 130,000 images from 4613 patients, achieving an ACC of 88.4% to 91.6% and an AUC of 0.94 to 0.96. Teder et al. used transfer learning with Inception-v3 to detect exudative AMD from normal subjects using SD-OCT. Hassan et al. [196] segmented nine retinal layers and used a SegNet followed by an AlexNet for the diagnosis of three retinal diseases (i.e., macular edema, central serous chorioretinopathy, and AMD) using OCT. Motozawa et al. [197] used two 18-layer CNNs on SD-OCT scans: one model to distinguish AMD from normal, followed by a second model to distinguish AMD with exudative changes from AMD without. Li et al. [198] distinguished between normal, AMD, and diabetic macular edema using OCT images and transfer learning of a VGG-16 model.
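As a minimal illustration of the head modification described for Lee et al. [194], the snippet below uses torchvision's VGG19 (assuming torchvision ≥ 0.13); the authors' exact implementation may differ. The final 1000-way ImageNet layer is exchanged for a two-node layer, turning the network into a binary normal-versus-AMD classifier.

```python
# Exchange VGG19's last fully connected layer for a two-node binary classification head.
import torch.nn as nn
from torchvision import models

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
in_features = vgg.classifier[6].in_features      # 4096 inputs to the final layer
vgg.classifier[6] = nn.Linear(in_features, 2)    # two output nodes: normal vs. AMD
```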
Using color fundus images, Ting et al. [134] identified three retinal diseases: DR, glaucoma, and AMD. They used an ensemble of two networks for the classification of each eye disease based on an adapted VGGNet architecture. On a validation dataset of 71,896 images from 14,880 patients, they achieved a Sen of 93.2% and a Spec of 88.7% for identifying AMD. Tan et al. [199] achieved early detection of AMD from fundus images by applying data augmentation and a 14-layer CNN model. An et al. [193] built two classifiers using two VGG16 models: one to distinguish AMD from normal, followed by one to distinguish AMD with fluid from AMD without fluid. Hwang et al. [200] distinguished between normal, dry (drusen), active wet, and inactive wet AMD using a cloud computing website [67]; the website was built using ResNet50, Inception-v3, and VGG16 networks. Figure 8 and Table 4 summarize the deep learning tools used for AMD detection and diagnosis.
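The two-stage cascade used by Motozawa et al. [197] and An et al. can be summarized with the short sketch below: a first model screens AMD versus normal and, only for AMD-positive images, a second model refines the call (e.g., AMD with versus without fluid). The two models are assumed to be any trained classifiers returning class scores; this is an inference-time sketch, not the authors' code.

```python
# Two-stage cascade inference: stage 1 screens AMD vs. normal, stage 2 refines AMD cases.
import torch

def cascade_predict(image_batch, model_stage1, model_stage2):
    with torch.no_grad():
        is_amd = model_stage1(image_batch).argmax(dim=1) == 1     # 1 = AMD at stage 1
        labels = torch.zeros(len(image_batch), dtype=torch.long)  # 0 = normal
        if is_amd.any():
            sub = model_stage2(image_batch[is_amd]).argmax(dim=1)
            labels[is_amd] = sub + 1        # 1 = AMD without fluid, 2 = AMD with fluid
    return labels
```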

7. Discussion and Future Trends

Artificial intelligence (AI) has demonstrated proof-of-concept in medical fields such as radiology and pathology, which bear striking similarities to ophthalmology in that they are deeply rooted in diagnostic imaging, the leading application of AI in healthcare [203,204,205]. The rapid expansion of AI capabilities and their broad application continue to push technological boundaries [206].
In ophthalmology, deep learning has been applied to automated diagnosis, segmentation, data analysis, and outcome predictions [1]. Several recent studies have used deep learning to diagnose and segment features of AMD [199,207] and DR, performing comparably to human experts [208,209].
One of the vital AI-based applications in ophthalmology is OCT image assessment, as the noninvasive, standardized, and rapid visualization of retinal pathology by OCT lends itself to AI-based analyses [206]. AI not only allows knowledge to be generated from large, multidimensional datasets, but is also able to capture individual variability in disease and function more efficiently than traditional methods [210]. The findings of this survey can be summarized as follows:
  • Currently, FFA is the gold standard for assessing the retinal vasculature, the part of the retina most affected in the diabetic eye. For early detection, OCTA can detect changes in the retinal vasculature before clinical features of DR develop.
  • FFA and OCT are the gold standards for the diagnosis of wet (neovascular) AMD [7,8].
  • Currently, FAF and OCT are the basic methods for diagnosing and monitoring dry AMD. NIA, FFA and OCTA can provide complementary data [24].
  • OCT is used to identify and monitor AMD and its abnormalities, such as drusen deposits, pseudodrusen, subretinal fluid, RPE detachment, and choroid NV [23].
  • Using different medical image modalities, AI has demonstrated outstanding capabilities for assisting with the automated early detection, diagnosis, and staging of DR and AMD.
  • Traditional ML methods are different with respect to the imaging modality used, the features extracted, and the classifiers used. For DR detection, diagnosis, and staging, fundus imaging, OCT, and OCTA have been used in the literature. For AMD detection, diagnosis, and staging, fundus imaging, FFA, OCT and OCTA have been used.
  • Deep learning methods (mainly CNNs) have recently been introduced for the automated detection, diagnosis, and staging of DR and AMD diseases, achieving improved performance and representing the state of the art for the upcoming years. For DR detection, diagnosis, and staging, fundus imaging, OCT, and OCTA have been used. For AMD detection, diagnosis, and staging, fundus imaging and OCT have been used.
The future holds advances in technology:
  • Combining multiple ocular imaging modalities will provide more information for pathology assessment, diagnosis, and the selection of proper treatment.
  • Automated image interpretation using AI will play a dominant role in the early detection, diagnosis, and staging of retinal diseases, especially DR and AMD.
  • Mobile applications are emerging and can provide fast, portable solutions for the early detection and diagnosis of retinal diseases.
  • Large datasets will be acquired and made available online. Quantitative analysis of such datasets will help to find reliable solutions.
  • Further investigation into the relationship between retinal function and structure is required.

8. Conclusions

The current paper provided an in-depth overview of ophthalmic imaging modalities, their various types, and the technologies used to detect, diagnose, classify, and stage retinal diseases, specifically DR and AMD. In addition, the role of AI systems was surveyed from 1995 to 2022. Overall, AI systems are capable of assisting clinicians by providing automated tools for the early detection, diagnosis, classification, and grading of DR and AMD. In the future, AI-based mobile solutions will become available.

Author Contributions

Conceptualization, G.A.S., N.M.B., S.H., A.E. and F.K.; methodology, G.A.S., N.M.B., A.E. and A.E.-B.; validation, H.S., writing—original draft preparation, G.A.S., N.M.B., S.H., A.E., F.T. and A.E.-B.; writing—review and editing, G.A.S., N.M.B., A.E., R.F., H.S. and A.E.-B.; visualization, A.E.; supervision, A.E.-B.; project administration, M.A.M., A.S. and A.E.-B.; All authors have read and agreed to the published version of the manuscript.

Funding

This research received external funding from the Academy of Scientific Research and Technology (ASRT) in Egypt (Project No. JESOR 5246).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schmidt-Erfurth, U.; Sadeghipour, A.; Gerendas, B.S.; Waldstein, S.M.; Bogunović, H. Artificial intelligence in retina. Prog. Retin. Eye Res. 2018, 67, 1–29. [Google Scholar] [CrossRef] [PubMed]
  2. Tan, C.S.H.; Chew, M.C.Y.; Lim, L.W.Y.; Sadda, S.R. Advances in retinal imaging for diabetic retinopathy and diabetic macular edema. Indian J. Ophthalmol. 2016, 64, 76. [Google Scholar] [PubMed]
  3. Bagetta, G.; Scuteri, D.; Vero, A.; Zito, M.; Naturale, M.D.; Nucci, C.; Tonin, P.; Corasaniti, M.T. Diabetic retinopathy and age-related macular degeneration: A survey of pharmacoutilization and cost in Calabria, Italy. Neural Regen. Res. 2019, 14, 1445. [Google Scholar] [CrossRef] [PubMed]
  4. Yau, J.W.; Rogers, S.L.; Kawasaki, R.; Lamoureux, E.L.; Kowalski, J.W.; Bek, T.; Chen, S.-J.; Dekker, J.M.; Fletcher, J.; Grauslund, J.; et al. Global prevalence and major risk factors of diabetic retinopathy. Diabetes Care 2012, 35, 556–564. [Google Scholar] [CrossRef] [Green Version]
  5. Harding, S. Neovascular age-related macular degeneration: Decision making and optimal management. Eye 2010, 24, 497–505. [Google Scholar] [CrossRef]
  6. Schwartz, R.; Loewenstein, A. Early detection of age related macular degeneration: Current status. Int. J. Retin. Vitr. 2015, 1, 1–8. [Google Scholar] [CrossRef] [Green Version]
  7. Cohen, S.Y.; Mrejen, S. Imaging of exudative age-related macular degeneration: Toward a shift in the diagnostic paradigm? Retina 2017, 37, 1625–1629. [Google Scholar] [CrossRef]
  8. Jung, J.J.; Chen, C.Y.; Mrejen, S.; Gallego-Pinazo, R.; Xu, L.; Marsiglia, M.; Boddu, S.; Freund, K.B. The incidence of neovascular subtypes in newly diagnosed neovascular age-related macular degeneration. Am. J. Ophthalmol. 2014, 158, 769.e2–779.e2. [Google Scholar] [CrossRef]
  9. Ryan, S.J.; Hinton, D.R.; Schachat, A.P. Retina; Elsevier Health Sciences: Amsterdam, The Netherlands, 2012. [Google Scholar]
  10. Baumal, C.R.; Duker, J.S. Current Management of Diabetic Retinopathy; Elsevier Health Sciences: Amsterdam, The Netherlands, 2017. [Google Scholar]
  11. Li, H.K.; Hubbard, L.D.; Danis, R.P.; Esquivel, A.; Florez-Arango, J.F.; Krupinski, E.A. Monoscopic versus stereoscopic retinal photography for grading diabetic retinopathy severity. Investig. Ophthalmol. Vis. Sci. 2010, 51, 3184–3192. [Google Scholar] [CrossRef] [Green Version]
  12. Kernt, M.; Hadi, I.; Pinter, F.; Seidensticker, F.; Hirneiss, C.; Haritoglou, C.; Kampik, A.; Ulbig, M.W.; Neubauer, A.S. Assessment of diabetic retinopathy using nonmydriatic ultra-widefield scanning laser ophthalmoscopy (Optomap) compared with ETDRS 7-field stereo photography. Diabetes Care 2012, 35, 2459–2463. [Google Scholar] [CrossRef] [Green Version]
  13. Salz, D.A.; Witkin, A.J. Imaging in diabetic retinopathy. Middle East Afr. J. Ophthalmol. 2015, 22, 145–150. [Google Scholar] [PubMed]
  14. Pascolini, D.; Mariotti, S.P. Global estimates of visual impairment: 2010. Br. J. Ophthalmol. 2012, 96, 614–618. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Colijn, J.M.; Buitendijk, G.H.; Prokofyeva, E.; Alves, D.; Cachulo, M.L.; Khawaja, A.P.; Cougnard-Gregoire, A.; Merle, B.M.; Korb, C.; Erke, M.G.; et al. Prevalence of age-related macular degeneration in Europe: The past and the future. Ophthalmology 2017, 124, 1753–1763. [Google Scholar] [CrossRef] [Green Version]
  16. Garrity, S.T.; Sarraf, D.; Freund, K.B.; Sadda, S.R. Multimodal imaging of nonneovascular age-related macular degeneration. Investig. Ophthalmol. Vis. Sci. 2018, 59, AMD48–AMD64. [Google Scholar] [CrossRef] [Green Version]
  17. Cunningham, J. Recognizing age-related macular degeneration in primary care. J. Am. Acad. PAs 2017, 30, 18–22. [Google Scholar] [CrossRef] [PubMed]
  18. Al-Zamil, W.M.; Yassin, S.A. Recent developments in age-related macular degeneration: A review. Clin. Interv. Aging 2017, 12, 1313. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Rickman, C.B.; Farsiu, S.; Toth, C.A.; Klingeborn, M. Dry age-related macular degeneration: Mechanisms, therapeutic targets, and imaging. Investig. Ophthalmol. Vis. Sci. 2013, 54, ORSF68–ORSF80. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Kaszubski, P.; Ami, T.B.; Saade, C.; Smith, R.T. Geographic atrophy and choroidal neovascularization in the same eye: A review. Ophthalmic Res. 2016, 55, 185–193. [Google Scholar] [CrossRef] [Green Version]
  21. Jonasson, F.; Fisher, D.E.; Eiriksdottir, G.; Sigurdsson, S.; Klein, R.; Launer, L.J.; Harris, T.; Gudnason, V.; Cotch, M.F. Five-year incidence, progression, and risk factors for age-related macular degeneration: The age, gene/environment susceptibility study. Ophthalmology 2014, 121, 1766–1772. [Google Scholar] [CrossRef] [Green Version]
  22. Chew, E.Y.; Clemons, T.E.; Agrón, E.; Sperduto, R.D.; Sangiovanni, J.P.; Kurinij, N.; Davis, M.D.; Age-Related Eye Disease Study Research Group. Long-term effects of vitamins C and E, β-carotene, and zinc on age-related macular degeneration: AREDS report no. 35. Ophthalmology 2013, 120, 1604.e4–1611.e4. [Google Scholar] [CrossRef] [Green Version]
  23. Talks, S.J.; Aftab, A.M.; Ashfaq, I.; Soomro, T. The role of new imaging methods in managing age-related macular degeneration. Asia-Pac. J. Ophthalmol. 2017, 6, 498–507. [Google Scholar]
  24. Yuzawa, M.; Mori, R.; Kawamura, A. The origins of polypoidal choroidal vasculopathy. Br. J. Ophthalmol. 2005, 89, 602–607. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Kanclerz, P.; Tuuminen, R.; Khoramnia, R. Imaging Modalities Employed in Diabetic Retinopathy Screening: A Review and Meta-Analysis. Diagnostics 2021, 11, 1802. [Google Scholar] [CrossRef]
  26. Rasmussen, M.L.; Broe, R.; Frydkjaer-Olsen, U.; Olsen, B.S.; Mortensen, H.B.; Peto, T.; Grauslund, J. Comparison between Early Treatment Diabetic Retinopathy Study 7-field retinal photos and non-mydriatic, mydriatic and mydriatic steered widefield scanning laser ophthalmoscopy for assessment of diabetic retinopathy. J. Diabetes Complicat. 2015, 29, 99–104. [Google Scholar] [CrossRef]
  27. Diabetic Retinopathy Study Research Group. Diabetic retinopathy study report number 6. Design, methods, and baseline results. Report number 7. A modification of the Airlie House classification of diabetic retinopathy. Prepared by the diabetic retinopathy. Investig. Ophthalmol. Vis. Sci. 1981, 21, 1–226. [Google Scholar]
  28. Diabetic Retinopathy Study Research Group. Fundus photographic risk factors for progression of diabetic retinopathy: ETDRS report number 12. Ophthalmology 1991, 98, 823–833. [Google Scholar] [CrossRef]
  29. Diabetic Retinopathy Study Research Group. Grading diabetic retinopathy from stereoscopic color fundus photographs—An extension of the modified Airlie House classification: ETDRS report number 10. Ophthalmology 1991, 98, 786–806. [Google Scholar] [CrossRef]
  30. Victor, A.A. The Role of Imaging in Age-Related Macular Degeneration. In Visual Impairment and Blindness-What We Know and What We Have to Know; IntechOpen: London, UK, 2019. [Google Scholar]
  31. Wong, C.W.; Yanagi, Y.; Lee, W.-K.; Ogura, Y.; Yeo, I.; Wong, T.Y.; Cheung, C.M.G. Age-related macular degeneration and polypoidal choroidal vasculopathy in Asians. Prog. Retin. Eye Res. 2016, 53, 107–139. [Google Scholar] [CrossRef]
  32. Seddon, J.M.; Sharma, S.; Adelman, R.A. Evaluation of the clinical age-related maculopathy staging system. Ophthalmology 2006, 113, 260–266. [Google Scholar] [CrossRef]
  33. Göbel, A.P.; Fleckenstein, M.; Schmitz-Valckenberg, S.; Brinkmann, C.K.; Holz, F.G. Imaging geographic atrophy in age-related macular degeneration. Ophthalmologica 2011, 226, 182–190. [Google Scholar] [CrossRef]
  34. Mokwa, N.F.; Ristau, T.; Keane, P.A.; Kirchhof, B.; Sadda, S.R.; Liakopoulos, S. Grading of age-related macular degeneration: Comparison between color fundus photography, fluorescein angiography, and spectral domain optical coherence tomography. J. Ophthalmol. 2013, 2013, 1–6. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. Costanzo, E.; Miere, A.; Querques, G.; Capuano, V.; Jung, C.; Souied, E.H. Type 1 choroidal neovascularization lesion size: Indocyanine green angiography versus optical coherence tomography angiography. Investig. Ophthalmol. Vis. Sci. 2016, 57, OCT307–OCT313. [Google Scholar] [CrossRef] [PubMed]
  36. Gess, A.J.; Fung, A.E.; Rodriguez, J.G. Imaging in neovascular age-related macular degeneration. In Seminars in Ophthalmology; Taylor & Francis: Oxfordshire, UK, 2011; pp. 225–233. [Google Scholar]
  37. Keane, P.A.; Sim, D.A.; Sadda, S.R. Advances in imaging in age-related macular degeneration. Curr. Ophthalmol. Rep. 2013, 1, 1–11. [Google Scholar] [CrossRef] [Green Version]
  38. Wessel, M.M.; Nair, N.; Aaker, G.D.; Ehrlich, J.R.; D’Amico, D.J.; Kiss, S. Peripheral retinal ischaemia, as evaluated by ultra-widefield fluorescein angiography, is associated with diabetic macular oedema. Br. J. Ophthalmol. 2012, 96, 694–698. [Google Scholar] [CrossRef]
  39. Friberg, T.R.; Gupta, A.; Yu, J.; Huang, L.; Suner, I.; Puliafito, C.A.; Schwartz, S.D. Ultrawide angle fluorescein angiographic imaging: A comparison to conventional digital acquisition systems. Ophthalmic Surg. Lasers Imaging Retin. 2008, 39, 304–311. [Google Scholar] [CrossRef]
  40. Ohno-Matsui, K.; Ikuno, Y.; Lai, T.Y.; Cheung, C.M.G. Diagnosis and treatment guideline for myopic choroidal neovascularization due to pathologic myopia. Prog. Retin. Eye Res. 2018, 63, 92–106. [Google Scholar] [CrossRef]
  41. Moreno, J.M.R.; Barquet, L.A. Manual De Retina SERV; Elsevier Health Sciences: Amsterdam, The Netherlands, 2019. [Google Scholar]
  42. Karampelas, M.; Malamos, P.; Petrou, P.; Georgalas, I.; Papaconstantinou, D.; Brouzas, D. Retinal pigment epithelial detachment in age-related macular degeneration. Ophthalmol. Ther. 2020, 9, 739–756. [Google Scholar] [CrossRef]
  43. Donati, M.C.; Carifi, G.; Virgili, G.; Menchini, U. Retinal angiomatous proliferation: Association with clinical and angiographic features. Ophthalmologica 2006, 220, 31–36. [Google Scholar] [CrossRef]
  44. Pfau, M.; Goerdt, L.; Schmitz-Valckenberg, S.; Mauschitz, M.M.; Mishra, D.K.; Holz, F.G.; Lindner, M.; Fleckenstein, M. Green-light autofluorescence versus combined blue-light autofluorescence and near-infrared reflectance imaging in geographic atrophy secondary to age-related macular degeneration. Investig. Ophthalmol. Vis. Sci. 2017, 58, BIO121–BIO130. [Google Scholar] [CrossRef]
  45. Holz, F.G.; Sadda, S.R.; Staurenghi, G.; Lindner, M.; Bird, A.C.; Blodi, B.A.; Bottoni, F.; Chew, E.Y.; Chakravarthy, U.; Schmitz-Valckenberg, S. Imaging protocols in clinical studies in advanced age-related macular degeneration: Recommendations from classification of atrophy consensus meetings. Ophthalmology 2017, 124, 464–478. [Google Scholar] [CrossRef] [Green Version]
  46. Zarbin, M.A.; Casaroli-Marano, R.P.; Rosenfeld, P.J. Age-related macular degeneration: Clinical findings, histopathology and imaging techniques. Cell-Based Ther. Retin. Degener. Dis. 2014, 53, 1–32. [Google Scholar]
  47. Gross, N.E.; Aizman, A.; Brucker, A.; James, M. Klancnik, J.R.; Yannuzzi, L.A. Nature and risk of neovascularization in the fellow eye of patients with unilateral retinal angiomatous proliferation. Retina 2005, 25, 713–718. [Google Scholar] [CrossRef] [PubMed]
  48. Fleckenstein, M.; Schmitz-Valckenberg, S.; Holz, F.G. Autofluorescence imaging. In Retina; Elsevier: Amsterdam, The Netherlands, 2013; pp. 111–132. [Google Scholar]
  49. Ly, A.; Nivison-Smith, L.; Assaad, N.; Kalloniatis, M. Fundus autofluorescence in age-related macular degeneration. Optom. Vis. Sci. 2017, 94, 246–259. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Keilhauer, C.N.; Delori, F.C. Near-infrared autofluorescence imaging of the fundus: Visualization of ocular melanin. Investig. Ophthalmol. Vis. Sci. 2006, 47, 3556–3564. [Google Scholar] [CrossRef]
  51. Kellner, U.; Kellner, S.; Weinitz, S. Fundus autofluorescence (488 NM) and near-infrared autofluorescence (787 NM) visualize different retinal pigment epithelium alterations in patients with age-related macular degeneration. Retina 2010, 30, 6–15. [Google Scholar] [CrossRef]
  52. Acton, J.H.; Cubbidge, R.P.; King, H.; Galsworthy, P.; Gibson, J.M. Drusen detection in retro-mode imaging by a scanning laser ophthalmoscope. Acta Ophthalmol. 2011, 89, e404–e411. [Google Scholar] [CrossRef]
  53. Delori, F.C.; Dorey, C.K.; Staurenghi, G.; Arend, O.; Goger, D.G.; Weiter, J.J. In vivo fluorescence of the ocular fundus exhibits retinal pigment epithelium lipofuscin characteristics. Investig. Ophthalmol. Vis. Sci. 1995, 36, 718–729. [Google Scholar]
  54. Webb, R.H.; Hughes, G.W.; Delori, F.C. Confocal scanning laser ophthalmoscope. Appl. Opt. 1987, 26, 1492–1499. [Google Scholar] [CrossRef]
  55. Schmitz-Valckenberg, S.; Fleckenstein, M.; Scholl, H.P.; Holz, F.G. Fundus autofluorescence and progression of age-related macular degeneration. Surv. Ophthalmol. 2009, 54, 96–117. [Google Scholar] [CrossRef]
  56. Delori, F.C.; Fleckner, M.R.; Goger, D.G.; Weiter, J.J.; Dorey, C.K. Autofluorescence distribution associated with drusen in age-related macular degeneration. Investig. Ophthalmol. Vis. Sci. 2000, 41, 496–504. [Google Scholar]
  57. Friberg, T.R.; Pandya, A.; Eller, A.W. Non-Mydriatic Panoramic Fundus Imaging Using a Non-Contact Scanning Laser-Based System; Slack Incorporated: Thorofare, NJ, USA, 2003; Volume 34, pp. 488–497. [Google Scholar]
  58. Vujosevic, S.; Casciano, M.; Pilotto, E.; Boccassini, B.; Varano, M.; Midena, E. Diabetic macular edema: Fundus autofluorescence and functional correlations. Investig. Ophthalmol. Vis. Sci. 2011, 52, 442–448. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  59. Chung, H.; Park, B.; Shin, H.J.; Kim, H.C. Correlation of fundus autofluorescence with spectral-domain optical coherence tomography and vision in diabetic macular edema. Ophthalmology 2012, 119, 1056–1065. [Google Scholar] [CrossRef] [PubMed]
  60. Pece, A.; Isola, V.; Holz, F.; Milani, P.; Brancato, R. Autofluorescence imaging of cystoid macular edema in diabetic retinopathy. Ophthalmologica 2010, 224, 230–235. [Google Scholar] [CrossRef] [PubMed]
  61. Bessho, K.; Gomi, F.; Harino, S.; Sawa, M.; Sayanagi, K.; Tsujikawa, M.; Tano, Y. Macular autofluorescence in eyes with cystoid macula edema, detected with 488 nm-excitation but not with 580 nm-excitation. Graefe’s Arch. Clin. Exp. Ophthalmol. 2009, 247, 729–734. [Google Scholar] [CrossRef]
  62. Sparrow, J.R.; Boulton, M. RPE lipofuscin and its role in retinal pathobiology. Exp. Eye Res. 2005, 80, 595–606. [Google Scholar] [CrossRef]
  63. Panorgias, A.; Zawadzki, R.J.; Capps, A.G.; Hunter, A.A.; Morse, L.S.; Werner, J.S. Multimodal assessment of microscopic morphology and retinal function in patients with geographic atrophy. Investig. Ophthalmol. Vis. Sci. 2013, 54, 4372–4384. [Google Scholar] [CrossRef] [Green Version]
  64. Batoglu, F.; Demirel, S.; Özmert, E.; Oguz, Y.G.; Özyol, P. Autofluorescence patterns as a predictive factor for neovascularization. Optom. Vis. Sci. 2014, 91, 950–955. [Google Scholar] [CrossRef]
  65. Cachulo, L.; Silva, R.; Fonseca, P.; Pires, I.; Carvajal-Gonzalez, S.; Bernardes, R.; Cunha-Vaz, J.G. Early markers of choroidal neovascularization in the fellow eye of patients with unilateral exudative age-related macular degeneration. Ophthalmologica 2011, 225, 144–149. [Google Scholar] [CrossRef]
  66. Yung, M.; Klufas, M.A.; Sarraf, D. Clinical applications of fundus autofluorescence in retinal disease. Int. J. Retin. Vitr. 2016, 2, 1–25. [Google Scholar] [CrossRef] [Green Version]
  67. Horani, M.; Mahmood, S.; Aslam, T.M. Macular atrophy of the retinal pigment epithelium in patients with neovascular age-related macular degeneration: What is the link? Part I: A review of disease characterization and morphological associations. Ophthalmol. Ther. 2019, 8, 235–249. [Google Scholar] [CrossRef] [Green Version]
  68. Pilotto, E.; Sportiello, P.; Alemany-Rubio, E.; Vujosevic, S.; Segalina, S.; Fregona, I.; Midena, E. Confocal scanning laser ophthalmoscope in the retromode imaging modality in exudative age-related macular degeneration. Graefe’s Arch. Clin. Exp. Ophthalmol. 2013, 251, 27–34. [Google Scholar] [CrossRef] [PubMed]
  69. Fujimoto, J.G.; Drexler, W.; Schuman, J.S.; Hitzenberger, C.K. Optical Coherence Tomography (OCT) in ophthalmology: Introduction. Opt. Express 2009, 17, 3978–3979. [Google Scholar] [CrossRef] [PubMed]
  70. Pennington, K.L.; DeAngelis, M.M. Epidemiology of age-related macular degeneration (AMD): Associations with cardiovascular disease phenotypes and lipid factors. Eye Vis. 2016, 3, 1–20. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  71. Lalwani, G.A.; Rosenfeld, P.J.; Fung, A.E.; Dubovy, S.R.; Michels, S.; Feuer, W.; Feuer, W.; Davis, J.L.; Flynn, H.W., Jr.; Esquiabro, M. A variable-dosing regimen with intravitreal ranibizumab for neovascular age-related macular degeneration: Year 2 of the PrONTO Study. Am. J. Ophthalmol. 2009, 148, 43.e1–58.e1. [Google Scholar] [CrossRef]
  72. Virgili, G.; Menchini, F.; Casazza, G.; Hogg, R.; Das, R.R.; Wang, X.; Michelessi, M. Optical coherence tomography (OCT) for detection of macular oedema in patients with diabetic retinopathy. Cochrane Database Syst. Rev. 2015, 1, CD008081. [Google Scholar] [CrossRef] [Green Version]
  73. Costa, R.A.; Jorge, R.; Calucci, D.; Luiz, J.R.M.; Cardillo, J.A.; Scott, I.U. Intravitreal bevacizumab (avastin) for central and hemicentral retinal vein occlusions: IBeVO study. Retina 2007, 27, 141–149. [Google Scholar] [CrossRef]
  74. Prager, F.; Michels, S.; Kriechbaum, K.; Georgopoulos, M.; Funk, M.; Geitzenauer, W.; Polak, K.; Schmidt-Erfurth, U. Intravitreal bevacizumab (Avastin®) for macular oedema secondary to retinal vein occlusion: 12-month results of a prospective clinical trial. Br. J. Ophthalmol. 2009, 93, 452–456. [Google Scholar] [CrossRef] [Green Version]
  75. Kiernan, D.F.; Mieler, W.F.; Hariprasad, S.M. Spectral-domain optical coherence tomography: A comparison of modern high-resolution retinal imaging systems. Am. J. Ophthalmol. 2010, 149, 18.e2–31.e2. [Google Scholar] [CrossRef]
  76. Hee, M.R.; Izatt, J.A.; Swanson, E.A.; Huang, D.; Schuman, J.S.; Lin, C.P.; Puliafito, C.A.; Fujimoto, J.G. Optical coherence tomography of the human retina. Arch. Ophthalmol. 1995, 113, 325–332. [Google Scholar] [CrossRef]
  77. Müller, P.L.; Wolf, S.; Dolz-Marco, R.; Tafreshi, A.; Schmitz-Valckenberg, S.; Holz, F.G. Ophthalmic diagnostic imaging: Retina. In High Resolution Imaging in Microscopy and Ophthalmology; Springer: Cham, Switzerland, 2019; pp. 87–106. [Google Scholar]
  78. Cereda, M.G.; Corvi, F.; Cozzi, M.; Pellegrini, M.; Staurenghi, G. Optical coherence tomography 2: Diagnostic tool to study peripheral vitreoretinal pathologies. Retina 2019, 39, 415–421. [Google Scholar] [CrossRef]
  79. Choma, M.A.; Sarunic, M.V.; Yang, C.; Izatt, J.A. Sensitivity advantage of swept source and Fourier domain optical coherence tomography. Opt. Express 2003, 11, 2183–2189. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  80. Margolis, R.; Spaide, R.F. A pilot study of enhanced depth imaging optical coherence tomography of the choroid in normal eyes. Am. J. Ophthalmol. 2009, 147, 811–815. [Google Scholar] [CrossRef] [PubMed]
  81. An, L.; Li, P.; Lan, G.; Malchow, D.; Wang, R.K. High-resolution 1050 nm spectral domain retinal optical coherence tomography at 120 kHz A-scan rate with 6.1 mm imaging depth. Biomed. Opt. Express 2013, 4, 245–259. [Google Scholar] [CrossRef] [PubMed]
  82. Adhi, M.; Badaro, E.; Liu, J.J.; Kraus, M.F.; Baumal, C.R.; Witkin, A.J.; Hornegger, J.; Fujimoto, J.G.; Duker, J.S.; Waheed, N.K. Three-dimensional enhanced imaging of vitreoretinal interface in diabetic retinopathy using swept-source optical coherence tomography. Am. J. Ophthalmol. 2016, 162, 140.e1–149.e1. [Google Scholar] [CrossRef] [PubMed]
  83. Miller, D.; Kocaoglu, O.; Wang, Q.; Lee, S. Adaptive optics and the eye (super resolution OCT). Eye 2011, 25, 321–330. [Google Scholar] [CrossRef] [Green Version]
  84. Sakamoto, A.; Hangai, M.; Yoshimura, N. Spectral-domain optical coherence tomography with multiple B-scan averaging for enhanced imaging of retinal diseases. Ophthalmology 2008, 115, 1071.e7–1078.e7. [Google Scholar] [CrossRef] [Green Version]
  85. Talisa, E.; Chin, A.T.; Bonini Filho, M.A.; Adhi, M.; Branchini, L.; Salz, D.A.; Baumal, C.R.; Crawford, C.; Reichel, E.; Witkin, A.J.; et al. Detection of microvascular changes in eyes of patients with diabetes but not clinical diabetic retinopathy using optical coherence tomography angiography. Retina 2015, 35, 2364–2370. [Google Scholar]
  86. Ishibazawa, A.; Nagaoka, T.; Takahashi, A.; Omae, T.; Tani, T.; Sogawa, K.; Yokota, H.; Yoshida, A. Optical coherence tomography angiography in diabetic retinopathy: A prospective pilot study. Am. J. Ophthalmol. 2015, 160, 35.e1–44.e1. [Google Scholar] [CrossRef] [Green Version]
  87. Ehlers, J.P.; Goshe, J.; Dupps, W.J.; Kaiser, P.K.; Singh, R.P.; Gans, R.; Eisengart, J.; Srivastava, S.K. Determination of feasibility and utility of microscope-integrated optical coherence tomography during ophthalmic surgery: The DISCOVER Study RESCAN Results. JAMA Ophthalmol. 2015, 133, 1124–1132. [Google Scholar] [CrossRef]
  88. Ehlers, J.P.; Dupps, W.; Kaiser, P.; Goshe, J.; Singh, R.P.; Petkovsek, D.; Srivastava, S.K. The prospective intraoperative and perioperative ophthalmic imaging with optical coherence tomography (PIONEER) study: 2-year results. Am. J. Ophthalmol. 2014, 158, 999.e1–1007.e1. [Google Scholar] [CrossRef] [Green Version]
  89. Zhang, Q.; Lu, R.; Wang, B.; Messinger, J.D.; Curcio, C.A.; Yao, X. Functional optical coherence tomography enables in vivo physiological assessment of retinal rod and cone photoreceptors. Sci. Rep. 2015, 5, 1–10. [Google Scholar] [CrossRef] [PubMed]
  90. Drexler, W. Cellular and functional optical coherence tomography of the human retina the Cogan lecture. Investig. Ophthalmol. Vis. Sci. 2007, 48, 5340–5351. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  91. Zhang, Q.-X.; Lu, R.-W.; Curcio, C.A.; Yao, X.-C. In vivo confocal intrinsic optical signal identification of localized retinal dysfunction. Investig. Ophthalmol. Vis. Sci. 2012, 53, 8139–8145. [Google Scholar] [CrossRef] [Green Version]
  92. Michels, S.; Pircher, M.; Geitzenauer, W.; Simader, C.; Gotzinger, E.; Findl, O.; Schmidt-Erfurth, U.; Hitzenberger, C. Value of polarisation-sensitive optical coherence tomography in diseases affecting the retinal pigment epithelium. Br. J. Ophthalmol. 2008, 92, 204–209. [Google Scholar] [CrossRef] [PubMed]
  93. Yazdanfar, S.; Rollins, A.M.; Izatt, J.A. Imaging and velocimetry of the human retinal circulation with color Doppler optical coherence tomography. Opt. Lett. 2000, 25, 1448–1450. [Google Scholar] [CrossRef] [PubMed]
  94. Kim, B.Y.; Smith, S.D.; Kaiser, P.K. Optical coherence tomographic patterns of diabetic macular edema. Am. J. Ophthalmol. 2006, 142, 405.e1–412.e1. [Google Scholar] [CrossRef] [PubMed]
  95. Kothari, A.R.; Raman, R.P.; Sharma, T.; Gupta, M.; Laxmi, G. Is there a correlation between structural alterations and retinal sensitivity in morphological patterns of diabetic macular edema? Indian J. Ophthalmol. 2013, 61, 230–232. [Google Scholar] [CrossRef] [PubMed]
  96. Kim, N.R.; Kim, Y.J.; Chin, H.S.; Moon, Y.S. Optical coherence tomographic patterns in diabetic macular oedema: Prediction of visual outcome after focal laser photocoagulation. Br. J. Ophthalmol. 2009, 93, 901–905. [Google Scholar] [CrossRef]
  97. Salz, D.A.; De Carlo, T.E.; Adhi, M.; Moult, E.M.; Choi, W.; Baumal, C.R.; Witkin, A.J.; Duker, J.S.; Fujimoto, J.G.; Waheed, N.K. Select features of diabetic retinopathy on swept-source optical coherence tomographic angiography compared with fluorescein angiography and normal eyes. JAMA Ophthalmol. 2016, 134, 644–650. [Google Scholar] [CrossRef]
  98. Dimitrova, G.; Chihara, E.; Takahashi, H.; Amano, H.; Okazaki, K. Quantitative retinal optical coherence tomography angiography in patients with diabetes without diabetic retinopathy. Investig. Ophthalmol. Vis. Sci. 2017, 58, 190–196. [Google Scholar] [CrossRef]
  99. Pircher, M.; Götzinger, E.; Findl, O.; Michels, S.; Geitzenauer, W.; Leydolt, C.; Schmidt-Erfurth, U.; Hitzenberger, C.K. Human macula investigated in vivo with polarization-sensitive optical coherence tomography. Investig. Ophthalmol. Vis. Sci. 2006, 47, 5487–5494. [Google Scholar] [CrossRef] [PubMed]
  100. Ueda-Arakawa, N.; Ooto, S.; Tsujikawa, A.; Yamashiro, K.; Oishi, A.; Yoshimura, N. Sensitivity and specificity of detecting reticular pseudodrusen in multimodal imaging in Japanese patients. Retina 2013, 33, 490–497. [Google Scholar] [CrossRef] [PubMed]
  101. Schmidt-Erfurth, U.; Klimscha, S.; Waldstein, S.; Bogunović, H. A view of the current and future role of optical coherence tomography in the management of age-related macular degeneration. Eye 2017, 31, 26–44. [Google Scholar] [CrossRef]
  102. Schmidt-Erfurth, U.; Kaiser, P.K.; Korobelnik, J.F.; Brown, D.M.; Chong, V.; Nguyen, Q.D.; Ho, A.C.; Ogura, Y.; Simader, Y.; Heier, J.S.; et al. Intravitreal aflibercept injection for neovascular age-related macular degeneration: Ninety-six–week results of the VIEW studies. Ophthalmology 2014, 121, 193–201. [Google Scholar] [CrossRef] [PubMed]
  103. Gualino, V.; Tadayoni, R.; Cohen, S.Y.; Erginay, A.; Fajnkuchen, F.; Haouchine, B.; Krivosic, V.; Quentel, G.; Vicaut, E.; Gaudric, A. Optical coherence tomography, fluorescein angiography, and diagnosis of choroidal neovascularization in age-related macular degeneration. Retina 2019, 39, 1664–1671. [Google Scholar] [CrossRef] [PubMed]
  104. Fleckenstein, M.; Schmitz-Valckenberg, S.; Adrion, C.; Krämer, I.; Eter, N.; Helb, H.M.; Brinkmann, C.K.; Issa, P.C.; Mansmann, U.; Holz, F.G. Tracking progression with spectral-domain optical coherence tomography in geographic atrophy caused by age-related macular degeneration. Investig. Ophthalmol. Vis. Sci. 2010, 51, 3846–3852. [Google Scholar] [CrossRef] [PubMed]
  105. Zweifel, S.A.; Engelbert, M.; Laud, K.; Margolis, R.; Spaide, R.F.; Freund, K.B. Outer retinal tubulation: A novel optical coherence tomography finding. Arch. Ophthalmol. 2009, 127, 1596–1602. [Google Scholar] [CrossRef]
  106. Chalam, K.; Sambhav, K. Optical coherence tomography angiography in retinal diseases. J. Ophthalmic Vis. Res. 2016, 11, 84–92. [Google Scholar] [CrossRef]
  107. Gong, J.; Yu, S.; Gong, Y.; Wang, F.; Sun, X. The diagnostic accuracy of optical coherence tomography angiography for neovascular age-related macular degeneration: A comparison with fundus fluorescein angiography. J. Ophthalmol. 2016, 2016, 1–8. [Google Scholar] [CrossRef] [Green Version]
  108. Perrott-Reynolds, R.; Cann, R.; Cronbach, N.; Neo, Y.N.; Ho, V.; McNally, O.; Madi, H.; Cochran, C.; Chakravarthy, U. The diagnostic accuracy of OCT angiography in naive and treated neovascular age-related macular degeneration: A review. Eye 2019, 33, 274–282. [Google Scholar] [CrossRef]
  109. Liang, M.C.; De Carlo, T.E.; Baumal, C.R.; Reichel, E.; Waheed, N.K.; Duker, J.S.; Witkin, A.J. Correlation of spectral domain optical coherence tomography angiography and clinical activity in neovascular age-related macular degeneration. Retina 2016, 36, 2265–2273. [Google Scholar] [CrossRef] [PubMed]
  110. Kvanta, A.; de Salles, M.C.; Amrén, U.; Bartuma, H. Optical coherence tomography angiography of the foveal microvasculature in geographic atrophy. Retina 2017, 37, 936–942. [Google Scholar] [CrossRef]
  111. Toto, L.; Borrelli, E.; di Antonio, L.; Carpineto, P.; Mastropasqua, R. Retinal Vascular Plexuses’changes in Dry Age-Related Macular Degeneration, Evaluated by Means of Optical Coherence Tomography Angiography. Retina 2016, 36, 1566–1572. [Google Scholar] [CrossRef]
  112. Cohen, S.Y.; Creuzot-Garcher, C.; Darmon, J.; Desmettre, T.; Korobelnik, J.F.; Levrat, F.; Quentel, G.; Palies, S.; Sanchez, A.; De Gendre, A.S.; et al. Types of choroidal neovascularisation in newly diagnosed exudative age-related macular degeneration. Br. J. Ophthalmol. 2007, 91, 1173–1176. [Google Scholar] [CrossRef]
  113. Farecki, M.-L.; Gutfleisch, M.; Faatz, H.; Rothaus, K.; Heimes, B.; Spital, G.; Lommatzsch, A.; Pauleikhoff, D. Characteristics of type 1 and 2 CNV in exudative AMD in OCT-Angiography. Graefe’s Arch. Clin. Exp. Ophthalmol. 2017, 255, 913–921. [Google Scholar] [CrossRef]
  114. Faridi, A.; Jia, Y.; Gao, S.S.; Huang, D.; Bhavsar, K.V.; Wilson, D.J.; Sill, A.; Flaxel, C.J.; Hwang, T.S.; Lauer, A.K.; et al. Sensitivity and specificity of OCT angiography to detect choroidal neovascularization. Ophthalmol. Retin. 2017, 1, 294–303. [Google Scholar] [CrossRef] [PubMed]
  115. Told, R.; Sacu, S.; Hecht, A.; Baratsits, M.; Eibenberger, K.; Kroh, M.E.; Rezar-Dreindl, S.; Schlanitz, F.G.; Weigert, G.; Pollreisz, A.; et al. Comparison of SD-optical coherence tomography angiography and indocyanine green angiography in type 1 and 2 neovascular age-related macular degeneration. Investig. Ophthalmol. Vis. Sci. 2018, 59, 2393–2400. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  116. Schmoll, T.; Singh, A.S.G.; Blatter, C.; Schriefl, S.; Ahlers, C.; Schmidt-Erfurth, U.; Leitgeb, R.A. Imaging of the parafoveal capillary network and its integrity analysis using fractal dimension. Biomed. Opt. Express 2011, 2, 1159–1168. [Google Scholar] [CrossRef] [Green Version]
  117. Tam, J.; Dhamdhere, K.; Tiruveedhula, P.; Manzanera, S.; Barez, S.; Bearse, M.A.; Adams, A.J.; Roorda, A. Disruption of the retinal parafoveal capillary network in type 2 diabetes before the onset of diabetic retinopathy. Investig. Ophthalmol. Vis. Sci. 2011, 52, 9257–9266. [Google Scholar] [CrossRef]
  118. Burns, S.; Elsner, A.E.; Chui, T.Y.; VanNasdale, D.A.; Clark, C.A.; Gast, T.J.; Malinovsky, V.E.; Phan, A.-D.T. In vivo adaptive optics microvascular imaging in diabetic patients without clinically severe diabetic retinopathy. Biomed. Opt. Express 2014, 5, 961–974. [Google Scholar] [CrossRef] [Green Version]
  119. Lombardo, M.; Parravano, M.; Serrao, S.; Ducoli, P.; Stirpe, M.; Lombardo, G. Analysis of retinal capillaries in patients with type 1 diabetes and nonproliferative diabetic retinopathy using adaptive optics imaging. Retina 2013, 33, 1630–1639. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  120. Lammer, J.; Prager, S.G.; Cheney, M.C.; Ahmed, A.; Radwan, S.H.; Burns, S.A.; Silva, P.S.; Sun, J.K. Cone photoreceptor irregularity on adaptive optics scanning laser ophthalmoscopy correlates with severity of diabetic retinopathy and macular edema. Investig. Ophthalmol. Vis. Sci. 2016, 57, 6624–6632. [Google Scholar] [CrossRef] [Green Version]
  121. Nesper, P.L.; Scarinci, F.; Fawzi, A.A. Adaptive optics reveals photoreceptor abnormalities in diabetic macular ischemia. PLoS ONE 2017, 12, e0169926. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  122. Jonnal, R.S.; Kocaoglu, O.P.; Zawadzki, R.J.; Liu, Z.; Miller, D.T.; Werner, J.S. A review of adaptive optics optical coherence tomography: Technical advances, scientific applications, and the future. Investig. Ophthalmol. Vis. Sci. 2016, 57, OCT51–OCT68. [Google Scholar] [CrossRef]
  123. Sudo, K.; Cense, B. Adaptive optics-assisted optical coherence tomography for imaging of patients with age related macular degeneration. Ophthalmic Technol. XXIII 2013, 8567, 172–178. [Google Scholar] [CrossRef]
  124. Chaudhury, M.; Parida, B.; Panigrahi, S.K. Diagnostic accuracy of B-scan ultrasonography for posterior segment eye disorders: A Cross-sectional Study. J. Clin. Diagn. Res. 2021, 15, TC07–TC012. [Google Scholar] [CrossRef]
  125. Mohamed, I.E.; Mohamed, M.A.; Yousef, M.; Mahmoud, M.Z.; Alonazi, B. Use of ophthalmic B-scan ultrasonography in determining the causes of low vision in patients with diabetic retinopathy. Eur. J. Radiol. Open 2018, 5, 79–86. [Google Scholar] [CrossRef] [Green Version]
  126. Shinar, Z.; Chan, L.; Orlinsky, M. Use of ocular ultrasound for the evaluation of retinal detachment. J. Emerg. Med. 2011, 40, 53–57. [Google Scholar] [CrossRef]
  127. Yuzurihara, D.; Iijima, H. Visual outcome in central retinal and branch retinal artery occlusion. Jpn. J. Ophthalmol. 2004, 48, 490–492. [Google Scholar] [CrossRef]
  128. Bhagat, N.; Grigorian, R.A.; Tutela, A.; Zarbin, M.A. Diabetic macular edema: Pathogenesis and treatment. Surv. Ophthalmol. 2009, 54, 1–32. [Google Scholar] [CrossRef]
  129. Palanisamy, G.; Shankar, N.B.; Ponnusamy, P.; Gopi, V.P. A hybrid feature preservation technique based on luminosity and edge based contrast enhancement in color fundus images. Biocybern. Biomed. Eng. 2020, 40, 752–763. [Google Scholar] [CrossRef]
  130. Chen, Q.; de Sisternes, L.; Leng, T.; Rubin, D.L. Application of improved homogeneity similarity-based denoising in optical coherence tomography retinal images. J. Digit. Imaging 2015, 28, 346–361. [Google Scholar] [CrossRef] [Green Version]
  131. Liu, H.; Lin, S.; Ye, C.; Yu, D.; Qin, J.; An, L. Using a dual-tree complex wavelet transform for denoising an optical coherence tomography angiography blood vessel image. OSA Contin. 2020, 3, 2630–2645. [Google Scholar] [CrossRef]
  132. Cui, D.; Liu, M.; Hu, L.; Liu, K.; Guo, Y.; Jiao, Q. The Application of Wavelet-Domain Hidden Markov Tree Model in Diabetic Retinal Image Denoising. Open Biomed. Eng. J. 2015, 9, 194. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  133. Stitt, A.W.; Curtis, T.M.; Chen, M.; Medina, R.J.; McKay, G.J.; Jenkins, A.; Gardiner, T.A.; Lyons, T.J.; Hammes, H.-P.; Simó, R.; et al. The progress in understanding and treatment of diabetic retinopathy. Prog. Retin. Eye Res. 2016, 51, 156–186. [Google Scholar] [CrossRef]
  134. Ting, D.S.W.; Cheung, C.Y.-L.; Lim, G.; Tan, G.S.W.; Quang, N.D.; Gan, A.; Hamzah, H.; Garcia-Franco, R.; Yeo, I.Y.S.; Lee, S.Y.; et al. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA 2017, 318, 2211–2223. [Google Scholar] [CrossRef]
  135. Welikala, R.; Fraz, M.; Dehmeshki, J.; Hoppe, A.; Tah, V.; Mann, S.; Williamson, T.; Barman, S. Genetic algorithm based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy. Comput. Med. Imaging Graph. 2015, 43, 64–77. [Google Scholar] [CrossRef] [Green Version]
  136. Prasad, D.K.; Vibha, L.; Venugopal, K. Early detection of diabetic retinopathy from digital retinal fundus images. In Proceedings of the 2015 IEEE Recent Advances in Intelligent Computational Systems, Trivandrum, Kerala, India, 10–12 December 2015; pp. 240–245. [Google Scholar]
  137. Mahendran, G.; Dhanasekaran, R. Investigation of the severity level of diabetic retinopathy using supervised classifier algorithms. Comput. Electr. Eng. 2015, 45, 312–323. [Google Scholar] [CrossRef]
  138. Bhatkar, A.P.; Kharat, G. Detection of diabetic retinopathy in retinal images using MLP classifier. In Proceedings of the 2015 IEEE International Symposium on Nanoelectronic and Information Systems, Bhopal, India, 21–23 December 2015; pp. 331–335. [Google Scholar]
  139. Labhade, J.D.; Chouthmol, L.; Deshmukh, S. Diabetic retinopathy detection using soft computing techniques. In Proceedings of the 2016 International Conference on Automatic Control and Dynamic Optimization Techniques (ICACDOT), Pune, India, 9–10 September 2016; pp. 175–178. [Google Scholar]
  140. Rahim, S.S.; Palade, V.; Shuttleworth, J.; Jayne, C. Automatic screening and classification of diabetic retinopathy and maculopathy using fuzzy image processing. Brain Inform. 2016, 3, 249–267. [Google Scholar] [CrossRef]
  141. Islam, M.; Dinh, A.V.; Wahid, K.A. Automated diabetic retinopathy detection using bag of words approach. J. Biomed. Sci. Eng. 2017, 10, 86–96. [Google Scholar] [CrossRef] [Green Version]
  142. Carrera, E.V.; González, A.; Carrera, R. Automated detection of diabetic retinopathy using SVM. In Proceedings of the 2017 IEEE XXIV International Conference on Electronics, Electrical Engineering and Computing (INTERCON), Cusco, Peru, 15–18 August 2017; pp. 1–4. [Google Scholar]
  143. Somasundaram, S.K.; Alli, P. A machine learning ensemble classifier for early prediction of diabetic retinopathy. J. Med. Syst. 2017, 41, 1–12. [Google Scholar]
  144. Costa, P.; Galdran, A.; Smailagic, A.; Campilho, A. A weakly-supervised framework for interpretable diabetic retinopathy detection on retinal images. IEEE Access 2018, 6, 18747–18758. [Google Scholar] [CrossRef]
  145. Sharafeldeen, A.; Elsharkawy, M.; Khalifa, F.; Soliman, A.; Ghazal, M.; AlHalabi, M.; Yaghi, M.; Alrahmawy, M.; Elmougy, S.; Sandhu, H.S.; et al. Precise higher-order reflectivity and morphology models for early diagnosis of diabetic retinopathy using OCT images. Sci. Rep. 2021, 11, 1–16. [Google Scholar] [CrossRef] [PubMed]
  146. Wang, X.; Han, Y.; Sun, G.; Yang, F.; Liu, W.; Luo, J.; Cao, X.; Yin, P.; Myers, F.L.; Zhou, L. Detection of the Microvascular Changes of Diabetic Retinopathy Progression Using Optical Coherence Tomography Angiography. Transl. Vis. Sci. Technol. 2021, 10, 31. [Google Scholar] [CrossRef] [PubMed]
  147. Abdelsalam, M.M.; Zahran, M. A novel approach of diabetic retinopathy early detection based on multifractal geometry analysis for OCTA macular images using support vector machine. IEEE Access 2021, 9, 22844–22858. [Google Scholar] [CrossRef]
  148. Elsharkawy, M.; Sharafeldeen, A.; Soliman, A.; Khalifa, F.; Ghazal, M.; El-Daydamony, E.; Atwan, A.; Sandhu, H.S.; El-Baz, A. A Novel Computer-Aided Diagnostic System for Early Detection of Diabetic Retinopathy Using 3D-OCT Higher-Order Spatial Appearance Model. Diagnostics 2022, 12, 461. [Google Scholar] [CrossRef]
  149. Eladawi, N.; Elmogy, M.; Fraiwan, L.; Pichi, F.; Ghazal, M.; Aboelfetouh, A.; Riad, A.; Keynton, R.; Schaal, S.; El-Baz, A. Early diagnosis of diabetic retinopathy in octa images based on local analysis of retinal blood vessels and foveal avascular zone. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 3886–3891. [Google Scholar]
  150. Alam, M.; Zhang, Y.; Lim, J.I.; Chan, R.V.; Yang, M.; Yao, X. Quantitative optical coherence tomography angiography features for objective classification and staging of diabetic retinopathy. Retina 2020, 40, 322–332. [Google Scholar] [CrossRef]
  151. Liu, Z.; Wang, C.; Cai, X.; Jiang, H.; Wang, J. Discrimination of Diabetic Retinopathy From Optical Coherence Tomography Angiography Images Using Machine Learning Methods. IEEE Access 2021, 9, 51689–51694. [Google Scholar] [CrossRef]
  152. Sandhu, H.S.; Elmogy, M.; Sharafeldeen, A.T.; Elsharkawy, M.; El-Adawy, N.; Eltanboly, A.; Shalaby, A.; Keynton, R.; El-Baz, A. Automated diagnosis of diabetic retinopathy using clinical biomarkers, optical coherence tomography, and optical coherence tomography angiography. Am. J. Ophthalmol. 2020, 216, 201–206. [Google Scholar] [CrossRef]
  153. Decencière, E.; Zhang, X.; Cazuguel, G.; Lay, B.; Cochener, B.; Trone, C.; Gain, P.; Ordóñez-Varela, J.-R.; Massin, P.; Erginay, A.; et al. Feedback on a publicly distributed image database: The Messidor database. Image Anal. Stereol. 2014, 33, 231–234. [Google Scholar] [CrossRef] [Green Version]
  154. Kauppi, T.; Kalesnykiene, V.; Kamarainen, J.-K.; Lensu, L.; Sorri, I.; Raninen, A.; Voutilainen, R.; Uusitalo, H.; Kalviainen, H.; Pietila, J. The diaretdb1 diabetic retinopathy database and evaluation protocol. In Proceedings of the BMVC, University of Warwick, Coventry, UK, 10–13 September 2007; Volume 1, pp. 1–10. [Google Scholar]
  155. Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J.; et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016, 316, 2402–2410. [Google Scholar] [CrossRef] [PubMed]
  156. Colas, E.; Besse, A.; Orgogozo, A.; Schmauch, B.; Meric, N.; Besse, E. Deep learning approach for diabetic retinopathy screening. Acta Ophthalmol. 2016, 94, 1. [Google Scholar] [CrossRef]
  157. Takahashi, H.; Tampo, H.; Arai, Y.; Inoue, Y.; Kawashima, H. Applying artificial intelligence to disease staging: Deep learning for improved staging of diabetic retinopathy. PLoS ONE 2017, 12, e0179790. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  158. Quellec, G.; Charrière, K.; Boudi, Y.; Cochener, B.; Lamard, M. Deep image mining for diabetic retinopathy screening. Med. Image Anal. 2017, 39, 178–193. [Google Scholar] [CrossRef] [Green Version]
  159. Wang, Z.; Yin, Y.; Shi, J.; Fang, W.; Li, H.; Wang, X. Zoom-in-net: Deep mining lesions for diabetic retinopathy detection. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Quebec, Canada, 10–14 September 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 267–275. [Google Scholar]
  160. Dutta, S.; Manideep, B.; Basha, S.M.; Caytiles, R.D.; Iyengar, N. Classification of diabetic retinopathy images by using deep learning models. Int. J. Grid Distrib. Comput. 2018, 11, 89–106. [Google Scholar] [CrossRef]
161. Zhang, X.; Zhang, W.; Fang, M.; Xue, J.; Wu, L. Automatic classification of diabetic retinopathy based on convolutional neural networks. In Proceedings of the 2018 International Conference on Image and Video Processing, and Artificial Intelligence, Shanghai, China, 15–17 August 2018; International Society for Optics and Photonics: Bellingham, WA, USA, 2018; Volume 10836, p. 1083608. [Google Scholar]
  162. Chakrabarty, N. A deep learning method for the detection of diabetic retinopathy. In Proceedings of the 2018 5th IEEE Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON), Gorakhpur, India, 2–4 November 2018; pp. 1–5. [Google Scholar]
  163. Kwasigroch, A.; Jarzembinski, B.; Grochowski, M. Deep CNN based decision support system for detection and assessing the stage of diabetic retinopathy. In Proceedings of the 2018 International Interdisciplinary PhD Workshop (IIPhDW), Swinoujscie, Poland, 9–12 May 2018; pp. 111–116. [Google Scholar]
  164. Li, F.; Liu, Z.; Chen, H.; Jiang, M.; Zhang, X.; Wu, Z. Automatic detection of diabetic retinopathy in retinal fundus photographs based on deep learning algorithm. Transl. Vis. Sci. Technol. 2019, 8, 4. [Google Scholar] [CrossRef] [Green Version]
  165. Nagasawa, T.; Tabuchi, H.; Masumoto, H.; Enno, H.; Niki, M.; Ohara, Z.; Yoshizumi, Y.; Ohsugi, H.; Mitamura, Y. Accuracy of ultrawide-field fundus ophthalmoscopy-assisted deep learning for detecting treatment-naïve proliferative diabetic retinopathy. Int. Ophthalmol. 2019, 39, 2153–2159. [Google Scholar] [CrossRef] [Green Version]
  166. Metan, A.C.; Lambert, A.; Pickering, M. Small Scale Feature Propagation Using Deep Residual Learning for Diabetic Retinopathy Classification. In Proceedings of the 2019 IEEE 4th International Conference on Image, Vision and Computing (ICIVC), Xiamen, China, 5–7 July 2019; pp. 392–396. [Google Scholar]
  167. Qummar, S.; Khan, F.G.; Shah, S.; Khan, A.; Shamshirband, S.; Rehman, Z.U.; Khan, L.A.; Jadoon, W. A deep learning ensemble approach for diabetic retinopathy detection. IEEE Access 2019, 7, 150530–150539. [Google Scholar] [CrossRef]
  168. Sayres, R.; Taly, A.; Rahimy, E.; Blumer, K.; Coz, D.; Hammel, N.; Krause, J.; Narayanaswamy, A.; Rastegar, Z.; Wu, D.; et al. Using a deep learning algorithm and integrated gradients explanation to assist grading for diabetic retinopathy. Ophthalmology 2019, 126, 552–564. [Google Scholar] [CrossRef] [Green Version]
169. Sengupta, S.; Singh, A.; Zelek, J.; Lakshminarayanan, V. Cross-domain diabetic retinopathy detection using deep learning. In Applications of Machine Learning; International Society for Optics and Photonics: Bellingham, WA, USA, 2019; Volume 11139, p. 111390V. [Google Scholar]
  170. Narayanan, B.N.; Hardie, R.C.; de Silva, M.S.; Kueterman, N.K. Hybrid machine learning architecture for automated detection and grading of retinal images for diabetic retinopathy. J. Med. Imaging 2020, 7, 034501. [Google Scholar] [CrossRef]
  171. Shankar, K.; Sait, A.R.W.; Gupta, D.; Lakshmanaprabu, S.; Khanna, A.; Pandey, H.M. Automated detection and classification of fundus diabetic retinopathy images using synergic deep learning model. Pattern Recognit. Lett. 2020, 133, 210–216. [Google Scholar] [CrossRef]
  172. He, A.; Li, T.; Li, N.; Wang, K.; Fu, H. CABNet: Category attention block for imbalanced diabetic retinopathy grading. IEEE Trans. Med. Imaging 2020, 40, 143–153. [Google Scholar] [CrossRef] [PubMed]
  173. Saeed, F.; Hussain, M.; Aboalsamh, H.A. Automatic diabetic retinopathy diagnosis using adaptive fine-tuned convolutional neural network. IEEE Access 2021, 9, 41344–41359. [Google Scholar] [CrossRef]
  174. Wang, Y.; Yu, M.; Hu, B.; Jin, X.; Li, Y.; Zhang, X.; Zhang, Y.; Gong, D.; Wu, C.; Zhang, B.; et al. Deep learning-based detection and stage grading for optimising diagnosis of diabetic retinopathy. Diabetes/Metab. Res. Rev. 2021, 37, e3445. [Google Scholar] [CrossRef] [PubMed]
  175. Hsieh, Y.-T.; Chuang, L.-M.; Jiang, Y.-D.; Chang, T.-J.; Yang, C.-M.; Yang, C.-H.; Chan, L.-W.; Kao, T.-Y.; Chen, T.-C.; Lin, H.-C.; et al. Application of deep learning image assessment software VeriSee™ for diabetic retinopathy screening. J. Formos. Med. Assoc. 2021, 120, 165–171. [Google Scholar] [CrossRef] [PubMed]
  176. Khan, Z.; Khan, F.G.; Khan, A.; Rehman, Z.U.; Shah, S.; Qummar, S.; Ali, S.; Pack, S. Diabetic Retinopathy Detection Using VGG-NIN a Deep Learning Architecture. IEEE Access 2022, 9, 61408–61416. [Google Scholar] [CrossRef]
  177. Zia, F.; Irum, I.; Qadri, N.N.; Nam, Y.; Khurshid, K.; Ali, M.; Ashraf, I.; Khan, M.A. A Multilevel Deep Feature Selection Framework for Diabetic Retinopathy Image Classification. Comput. Mater. Contin. 2022, 70, 2261–2276. [Google Scholar] [CrossRef]
  178. Das, S.; Saha, S.K. Diabetic retinopathy detection and classification using CNN tuned by genetic algorithm. Multimed. Tools Appl. 2022, 81, 8007–8020. [Google Scholar] [CrossRef]
  179. Tsai, C.-Y.; Chen, C.-T.; Chen, G.-A.; Yeh, C.-F.; Kuo, C.-T.; Hsiao, Y.-C.; Hu, H.-Y.; Tsai, I.-L.; Wang, C.-H.; Chen, J.-R.; et al. Necessity of Local Modification for Deep Learning Algorithms to Predict Diabetic Retinopathy. Int. J. Environ. Res. Public Health 2022, 19, 1204. [Google Scholar] [CrossRef]
  180. Gao, Z.; Jin, K.; Yan, Y.; Liu, X.; Shi, Y.; Ge, Y.; Pan, X.; Lu, Y.; Wu, J.; Wang, Y.; et al. End-to-end diabetic retinopathy grading based on fundus fluorescein angiography images using deep learning. Graefe’s Arch. Clin. Exp. Ophthalmol. 2022, 260, 1663–1673. [Google Scholar] [CrossRef]
181. ElTanboly, A.; Ismail, M.; Shalaby, A.; Switala, A.; El-Baz, A.; Schaal, S.; Gimel’Farb, G.; El-Azab, M. A computer-aided diagnostic system for detecting diabetic retinopathy in optical coherence tomography images. Med. Phys. 2017, 44, 914–923. [Google Scholar] [CrossRef] [PubMed]
  182. ElTanboly, A.; Ghazal, M.; Khalil, A.; Shalaby, A.; Mahmoud, A.; Switala, A.; El-Azab, M.; Schaal, S.; El-Baz, A. An integrated framework for automatic clinical assessment of diabetic retinopathy grade using spectral domain OCT images. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 1431–1435. [Google Scholar]
  183. Li, X.; Shen, L.; Shen, M.; Tan, F.; Qiu, C.S. Deep learning based early stage diabetic retinopathy detection using optical coherence tomography. Neurocomputing 2019, 369, 134–144. [Google Scholar] [CrossRef]
  184. Ghazal, M.; Ali, S.S.; Mahmoud, A.H.; Shalaby, A.M.; El-Baz, A. Accurate detection of non-proliferative diabetic retinopathy in optical coherence tomography images using convolutional neural networks. IEEE Access 2020, 8, 34387–34397. [Google Scholar] [CrossRef]
  185. Heisler, M.; Karst, S.; Lo, J.; Mammo, Z.; Yu, T.; Warner, S.; Maberley, D.; Beg, M.F.; Navajas, E.V.; Sarunic, M.V. Ensemble deep learning for diabetic retinopathy detection using optical coherence tomography angiography. Transl. Vis. Sci. Technol. 2020, 9, 20. [Google Scholar] [CrossRef] [Green Version]
  186. Ryu, G.; Lee, K.; Park, D.; Park, S.H.; Sagong, M. A deep learning model for identifying diabetic retinopathy using optical coherence tomography angiography. Sci. Rep. 2021, 11, 1–9. [Google Scholar] [CrossRef]
  187. Zang, P.; Gao, L.; Hormel, T.T.; Wang, J.; You, Q.; Hwang, T.S.; Jia, Y. DcardNet: Diabetic retinopathy classification at multiple levels based on structural and angiographic optical coherence tomography. IEEE Trans. Biomed. Eng. 2020, 68, 1859–1870. [Google Scholar] [CrossRef]
  188. Ghosh, R.; Ghosh, K.; Maitra, S. Automatic detection and classification of diabetic retinopathy stages using CNN. In Proceedings of the 2017 4th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, Delhi-NCR, India, 2–3 February 2017; pp. 550–554. [Google Scholar]
  189. Hathwar, S.B.; Srinivasa, G. Automated grading of diabetic retinopathy in retinal fundus images using deep learning. In Proceedings of the 2019 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), Kuala Lumpur, Malaysia, 17–19 September 2019; pp. 73–77. [Google Scholar]
  190. García-Floriano, A.; Ferreira-Santiago, Á.; Camacho-Nieto, O.; Yáñez-Márquez, C. A machine learning approach to medical image classification: Detecting age-related macular degeneration in fundus images. Comput. Electr. Eng. 2019, 75, 218–229. [Google Scholar] [CrossRef]
  191. Liu, Y.-Y.; Ishikawa, H.; Chen, M.; Wollstein, G.; Duker, J.S.; Fujimoto, J.G.; Schuman, J.S.; Rehg, J. Computerized macular pathology diagnosis in spectral domain optical coherence tomography scans based on multiscale texture and shape features. Investig. Ophthalmol. Vis. Sci. 2011, 52, 8316–8322. [Google Scholar] [CrossRef] [Green Version]
  192. Srinivasan, P.P.; Kim, L.; Mettu, P.S.; Cousins, S.W.; Comer, G.M.; Izatt, J.A.; Farsiu, S. Fully automated detection of diabetic macular edema and dry age-related macular degeneration from optical coherence tomography images. Biomed. Opt. Express 2014, 5, 3568–3577. [Google Scholar] [CrossRef] [Green Version]
  193. Fraccaro, P.; Nicolo, M.; Bonetto, M.; Giacomini, M.; Weller, P.; Traverso, C.E.; Prosperi, M.; Osullivan, D. Combining macula clinical signs and patient characteristics for age-related macular degeneration diagnosis: A machine learning approach. BMC Ophthalmol. 2015, 15, 10. [Google Scholar] [CrossRef] [Green Version]
  194. Lee, C.S.; Baughman, D.M.; Lee, A.Y. Deep learning is effective for classifying normal versus age-related macular degeneration OCT images. Ophthalmol. Retin. 2017, 1, 322–327. [Google Scholar] [CrossRef]
  195. Burlina, P.M.; Joshi, N.; Pekala, M.; Pacheco, K.D.; Freund, D.E.; Bressler, N.M. Automated grading of age-related macular degeneration from color fundus images using deep convolutional neural networks. JAMA Ophthalmol. 2017, 135, 1170–1176. [Google Scholar] [CrossRef] [PubMed]
  196. Hassan, T.; Akram, M.U.; Akhtar, M.; Khan, S.A.; Yasin, U. Multilayered deep structure tensor delaunay triangulation and morphing based automated diagnosis and 3D presentation of human macula. J. Med. Syst. 2018, 42, 1–17. [Google Scholar] [CrossRef] [PubMed]
  197. Motozawa, N.; An, G.; Takagi, S.; Kitahata, S.; Mandai, M.; Hirami, Y.; Yokota, H.; Akiba, M.; Tsujikawa, A.; Takahashi, M.; et al. Optical coherence tomography-based deep-learning models for classifying normal and age-related macular degeneration and exudative and non-exudative age-related macular degeneration changes. Ophthalmol. Ther. 2019, 8, 527–539. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  198. Li, F.; Chen, H.; Liu, Z.; Zhang, X.; Wu, Z. Fully automated detection of retinal disorders by image-based deep learning. Graefe’s Arch. Clin. Exp. Ophthalmol. 2019, 257, 495–505. [Google Scholar] [CrossRef] [PubMed]
  199. Tan, J.H.; Bhandary, S.V.; Sivaprasad, S.; Hagiwara, Y.; Bagchi, A.; Raghavendra, U.; Rao, A.K.; Raju, B.; Shetty, N.S.; Gertych, A.; et al. Age-related macular degeneration detection using deep convolutional neural network. Future Gener. Comput. Syst. 2018, 87, 127–135. [Google Scholar] [CrossRef]
  200. Hwang, D.-K.; Hsu, C.-C.; Chang, K.-J.; Chao, D.; Sun, C.-H.; Jheng, Y.-C.; Yarmishyn, A.A.; Wu, J.-C.; Tsai, C.-Y.; Wang, M.-L.; et al. Artificial intelligence-based decision-making for age-related macular degeneration. Theranostics 2019, 9, 232–245. [Google Scholar] [CrossRef] [PubMed]
  201. Treder, M.; Lauermann, J.L.; Eter, N. Automated detection of exudative age-related macular degeneration in spectral domain optical coherence tomography using deep learning. Graefe’s Arch. Clin. Exp. Ophthalmol. 2018, 256, 259–265. [Google Scholar] [CrossRef]
  202. An, G.; Yokota, H.; Motozawa, N.; Takagi, S.; Mandai, M.; Kitahata, S.; Hirami, Y.; Takahashi, M.; Kurimoto, Y.; Akiba, M. Deep learning classification models built with two-step transfer learning for age related macular degeneration diagnosis. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 2049–2052. [Google Scholar]
  203. Jiang, F.; Jiang, Y.; Zhi, H.; Dong, Y.; Li, H.; Ma, S.; Wang, Y.; Dong, Q.; Shen, H.; Wang, Y. Artificial intelligence in healthcare: Past, present and future. Stroke Vasc. Neurol. 2017, 2, 230–243. [Google Scholar] [CrossRef]
204. Tekkeşin, A. Artificial intelligence in healthcare: Past, present and future. Anatol. J. Cardiol. 2019, 22, 8–9. [Google Scholar]
  205. Alksas, A.; Shehata, M.; Saleh, G.A.; Shaffie, A.; Soliman, A.; Ghazal, M.; Khelifi, A.; Abu Khalifeh, H.; Razek, A.A.; Giridharan, G.A.; et al. A novel computer-aided diagnostic system for accurate detection and grading of liver tumors. Sci. Rep. 2021, 11, 1–18. [Google Scholar] [CrossRef] [PubMed]
  206. Yanagihara, R.T.; Lee, C.S.; Ting, D.S.W.; Lee, A.Y. Methodological challenges of deep learning in optical coherence tomography for retinal diseases: A review. Transl. Vis. Sci. Technol. 2020, 9, 11. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  207. Schlegl, T.; Waldstein, S.M.; Bogunovic, H.; Endstraßer, F.; Sadeghipour, A.; Philip, A.-M.; Podkowinski, D.; Gerendas, B.S.; Langs, G.; Schmidt-Erfurth, U. Fully automated detection and quantification of macular fluid in OCT using deep learning. Ophthalmology 2018, 125, 549–558. [Google Scholar] [CrossRef] [Green Version]
  208. Abràmoff, M.D.; Lavin, P.T.; Birch, M.; Shah, N.; Folk, J.C. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit. Med. 2018, 1, 1–8. [Google Scholar] [CrossRef]
  209. De Fauw, J.; Ledsam, J.R.; Romera-Paredes, B.; Nikolov, S.; Tomasev, N.; Blackwell, S.; Askham, H.; Glorot, X.; O’Donoghue, B.; Visentin, D.; et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat. Med. 2018, 24, 1342–1350. [Google Scholar] [CrossRef] [PubMed]
  210. Michl, M.; Fabianska, M.; Seeböck, P.; Sadeghipour, A.; Najeeb, B.H.; Bogunovic, H.; Schmidt-Erfurth, U.M.; Gerendas, B.S. Automated quantification of macular fluid in retinal diseases and their response to anti-VEGF therapy. Br. J. Ophthalmol. 2022, 106, 113–120. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Common retinal diseases.
Figure 2. Analysis of retinal images.
Figure 3. Medical image modalities for the detection, diagnosis, and staging of DR and AMD.
Figure 4. Components of artificial intelligence (AI).
Figure 5. Summary of traditional ML methods for DR detection, diagnosis, and/or staging.
Figure 6. Summary of deep learning methods for DR detection, diagnosis, and/or staging.
Figure 7. Summary of traditional ML methods for AMD detection, diagnosis, and/or staging.
Figure 8. Summary of deep learning methods for AMD detection, diagnosis, and/or staging.
Table 2. Deep learning methods for early detection, diagnosis, and grading of DR.

| Study | Goal | Deep Network | Other Features | Database Size | Performance |
|---|---|---|---|---|---|
| Gulshan et al. [155], 2016 | Grading of DR and DME using fundus images | Ensemble of 10 CNNs | Final decision computed as the linear average of the ensemble predictions | 128,175 + 9963 (EyePACS-1) + 1748 (MESSIDOR-2) | AUC = 99.1% (EyePACS-1), AUC = 99% (Messidor-2) |
| Colas et al. [156], 2016 | Grading of DR using fundus images | Deep CNN | The technique also provides the location of the detected anomalies | 70,000 images (training) + 10,000 (test) | AUC = 94.6%, Sen = 96.2%, Spec = 66.6% |
| Ghosh et al. [188], 2017 | Grading of DR using fundus images | 28-layer CNN | Data augmentation, normalization, and denoising applied before the CNN | 30,000 Kaggle images | ACC = 95% (two-class), ACC = 85% (five-class) |
| Eltanboly et al. [181], 2017 | DR detection using OCT images | Deep fusion classifier using auto-encoders | Features: reflectivity, curvature, and thickness of twelve segmented retinal layers | 52 scans | ACC = 92%, Sen = 83%, Spec = 100% |
| Takahashi et al. [157], 2017 | Differentiate between NPDR, severe NPDR, and PDR using fundus images | Modified GoogLeNet | Fundus scans are the inputs to the modified GoogLeNet | 9939 scans from 2740 patients | ACC = 81% |
| Quellec et al. [158], 2017 | Grading DR using fundus images | 26-layer ConvNets | An ensemble of ConvNets was used | 88,702 scans (Kaggle) + 107,799 images (e-optha) | AUC = 0.954 (Kaggle), AUC = 0.949 (e-optha) |
| Ting et al. [134], 2017 | Identifying DR and related eye diseases using fundus images | Adapted VGGNet architecture | An ensemble of two networks for detecting referable DR | 494,661 images | Sen = 90.5%, Spec = 91.6% for detecting referable DR |
| Wang et al. [159], 2017 | Diagnosing DR and identifying suspicious regions using fundus images | Zoom-in-Net | Inception-ResNet backbone network | 35k/11k/43k for train/val/test (EyePACS) and 1.2k (Messidor) | AUC = 0.95 (Messidor), AUC = 0.92 (EyePACS) |
| Dutta et al. [160], 2018 | Differentiate between mild NPDR, moderate NPDR, severe NPDR, and PDR | Back-propagation NN, deep NN, and CNN | CNN used the VGG16 model | 35,000 training and 15,000 test images (Kaggle) | ACC = 86.3% (DNN), ACC = 78.3% (VGGNet), ACC = 42% (back-propagation NN) |
| Eltanboly et al. [182], 2018 | Grading of nonproliferative DR using OCT images | Two-stage deep fusion classifier using auto-encoders | Features: reflectivity, curvature, and thickness of twelve segmented retinal layers | 74 OCT images | ACC = 93%, Sen = 91%, Spec = 97% (detecting DR); ACC = 98% (separating early from mild/moderate DR) |
| Zhang et al. [161], 2018 | Diagnose the severity of DR | DR-Net with an adaptive cross-entropy loss | Data augmentation is applied | 88,702 images (EyePACS) | ACC = 82.1% |
| Chakrabarty et al. [162], 2018 | DR detection using fundus images | 9-layer CNN | Resized grey-level fundus scans are the inputs to the CNN | 300 images | ACC = 100%, Sen = 100% |
| Kwasigroch et al. [163], 2018 | DR detection and staging using fundus images | VGGNet | Fundus scans are the inputs to the CNN | 88,000 images | ACC = 82% (DR detection), ACC = 51% (DR staging) |
| Li et al. [164], 2019 | Detection of referable DR using fundus images | Inception-v3 | Contrast-enhanced scans are the inputs to the CNN; transfer learning is applied | 19,233 images from 5278 patients | ACC = 93.49%, Sen = 96.93%, Spec = 93.45%, AUC = 0.9905 |
| Nagasawa et al. [165], 2019 | Differentiate between non-PDR and PDR using ultrawide-field fundus images | Inception-v3 | Transfer learning is applied | 378 scans | Sen = 94.7%, Spec = 97.2%, AUC = 0.969 |
| Metan et al. [166], 2019 | DR staging using fundus images | ResNet | Color fundus images are the inputs to the CNN | 88,702 images (EyePACS) | ACC = 91% |
| Qummar et al. [167], 2019 | DR staging using fundus images | ResNet50, Inception-v3, Xception, DenseNet121, and DenseNet169 | Ensemble of five CNNs | 88,702 images (EyePACS) | ACC = 80.80%, Recall = 51.50%, Spec = 86.72%, F1 = 53.74% |
| Sayres et al. [168], 2019 | DR staging using fundus images | Inception-v4 | Fundus images are the inputs to the CNN | 1769 images from 1612 patients | ACC = 88.4% |
| Sengupta et al. [169], 2019 | DR staging using fundus images | Inception-v3 | Data preprocessing is applied | Kaggle EyePACS and Messidor datasets | Sen = 90%, Spec = 91.94%, ACC = 90.4% |
| Hathwar et al. [189], 2019 | DR detection and staging using fundus images | Xception | Transfer learning is applied | 35,124 images (EyePACS) + 413 images (IDRiD) | Sen = 94.3% (DR detection) |
| Li et al. [183], 2019 | Early detection of DR using OCT images | OCTD_Net | Data augmentation is applied | 4168 OCT images | ACC = 92%, Sen = 92%, Spec = 95% |
| Heisler et al. [185], 2020 | Classifying DR using OCTA images | Four fine-tuned VGG19 networks | Ensemble training based on majority voting or stacking | 463 volumes from 360 eyes | ACC = 92% (majority voting), ACC = 90% (stacking) |
| Zang et al. [187], 2020 | Classifying DR using OCT and OCTA images | DcardNet | Data augmentation is applied | 303 eyes from 250 participants | ACC = 95.7% (detecting referable DR) |
| Ghazal et al. [184], 2020 | Early detection of NPDR using OCT images | AlexNet | SVM was used for classification | 52 subjects | ACC = 94% |
| Narayanan et al. [170], 2020 | Detect and grade DR in fundus images | AlexNet, VGG16, ResNet, Inception-v3, NASNet, DenseNet, GoogLeNet | Transfer learning is applied for each network | 3661 images | ACC = 98.4% (detection), ACC = 96.3% (grading) |
| Shankar et al. [171], 2020 | DR grading using fundus images | Synergic deep learning | Histogram-based segmentation applied to extract the details of the fundus image | 1200 images (MESSIDOR) | ACC = 99.28%, Sen = 98%, Spec = 99% |
| Ryu et al. [186], 2021 | Early detection of DR using OCTA | ResNet101 | OCTA images are the inputs to the CNN | 496 eyes | ACC = 91–98%, Sen = 86–97%, Spec = 94–99%, AUC = 0.919–0.976 |
| He et al. [172], 2021 | Grading DR using fundus images | CABNet with DenseNet-121 backbone | CABNet is an attention module with a global attention block | 1200 images (MESSIDOR) + 88,702 (EyePACS) | ACC = 93.1%, AUC = 0.969, Per = 92.9% |
| Saeed et al. [173], 2021 | Grading DR using fundus images | Two pretrained CNNs | Transfer learning is applied | 1200 images (MESSIDOR) + 88,702 (EyePACS) | ACC = 99.73%, AUC = 89% (EyePACS) |
| Wang et al. [174], 2021 | Grading DR using fundus images | Inception-v3 + lesionNet | Transfer learning is applied | 12,252 images + 565 (external test set) | AUC = 94.3%, Sen = 90.6%, Spec = 80.7% |
| Hsieh et al. [175], 2021 | Grading DR using fundus images | VeriSee™ software | Modified Inception-v4 model as the backbone network | 7524 images | Sen = 92.2%, Spec = 89.5%, AUC = 0.955 (detecting DR) |
| Khan et al. [176], 2022 | Grading DR using fundus images | VGG-NiN model | VGG16, a spatial pyramid pooling layer, and a network-in-network are stacked to form the VGG-NiN model | 25,810 images | AUC = 0.838 |
| Gao et al. [180], 2022 | Grading DR using fundus fluorescein angiography images | VGG16, ResNet50, DenseNet | Images are the inputs to the CNNs | 11,214 images from 705 patients | ACC = 94.17% (VGG16) |
| Zia et al. [177], 2022 | Grading DR using fundus images | VGGNet and Inception-v3 | Feature fusion and selection steps are applied | 35,126 images (Kaggle) | ACC = 96.4% |
| Das et al. [178], 2022 | Detecting and classifying DR using fundus images | CNN optimized using a genetic algorithm | SVM was used for classification | 1200 images (Messidor) | ACC = 98.67%, AUC = 0.9933 |
| Tsai et al. [179], 2022 | Grading DR using fundus images | Inception-v3, ResNet101, and DenseNet121 | Transfer learning is applied | 88,702 images (EyePACS) + 4038 images (Taiwanese dataset) | ACC = 84.64% (Kaggle), ACC = 83.80% (Taiwanese dataset) |
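Many of the fundus-image studies summarized in Table 2 (e.g., [164,165,169,170,175,179]) report fine-tuning an ImageNet-pretrained backbone (Inception-v3, ResNet, VGG, etc.) rather than training a CNN from scratch. The snippet below is a minimal sketch of that generic transfer-learning recipe in PyTorch, not the pipeline of any specific study; the ResNet-50 backbone, five-grade output, data folder, and hyperparameters are illustrative assumptions.

```python
# Generic transfer-learning sketch for DR grading (illustrative only; not from the cited studies).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 5  # assumption: five DR grades (no DR, mild, moderate, severe NPDR, PDR)

# Standard ImageNet preprocessing; published pipelines often add cropping, contrast enhancement, augmentation, etc.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: fundus/train/<grade_0 ... grade_4>/*.png
train_set = datasets.ImageFolder("fundus/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

# Load an ImageNet-pretrained ResNet-50 and replace its classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Freeze the pretrained feature extractor; only the new head is trained here.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc.")

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one illustrative pass over the data
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

In practice, the studies above differ mainly in the backbone, the preprocessing, and whether deeper layers are also unfrozen once the new head has converged.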
Table 3. Traditional ML methods for early detection, diagnosis, and grading of AMD.

| Study | Goal | Features | Classifier | Database Size | Performance |
|---|---|---|---|---|---|
| Liu et al. [191], 2011 | Identify normal and three retinal diseases (AMD, macular hole, and macular edema) using OCT images | Spatial and shape features | SVM | Train: 326 scans from 136 subjects (193 eyes); Test: 131 scans from 37 subjects (58 eyes) | AUC = 0.975 for identifying AMD from normal subjects |
| Srinivasan et al. [192], 2014 | Identify normal and two retinal diseases (dry AMD and diabetic macular edema (DME)) using SD-OCT | Multiscale histograms of oriented gradient (HOG) descriptors | SVM | 45 subjects: 15 normal, 15 with dry AMD, and 15 with DME | ACC = 100% for identifying cases with AMD |
| Fraccaro et al. [193], 2015 | Diagnose AMD using OCT images | Patient age, gender, and clinical binary attributes | White-box (e.g., logistic regression and decision tree) and black-box (e.g., SVM and random forest) classifiers | 487 patients (912 eyes); 50 bootstrap tests | AUC = 0.92 |
| García-Floriano et al. [190], 2019 | Differentiate normal from AMD with drusen using color fundus images | Invariant moments extracted from contrast-enhanced, morphologically processed images | SVM | 70 images: 37 healthy and 33 AMD with drusen | ACC = 92% |
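The studies in Table 3 share a common structure: hand-crafted descriptors (shape or texture features, HOG, invariant moments, or clinical attributes) feeding a conventional classifier, usually an SVM. The sketch below illustrates that pattern with two-scale HOG features and a linear SVM, loosely in the spirit of Srinivasan et al. [192]; the image size, HOG parameters, and synthetic placeholder data are assumptions for illustration only.

```python
# Hand-crafted-feature + SVM sketch for retinal scan classification (illustrative only).
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_hog_features(image: np.ndarray) -> np.ndarray:
    """Resize a grayscale scan and describe it with HOG at two cell sizes."""
    image = resize(image, (128, 128), anti_aliasing=True)
    coarse = hog(image, orientations=9, pixels_per_cell=(32, 32), cells_per_block=(2, 2))
    fine = hog(image, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    return np.concatenate([coarse, fine])

# Placeholder data: in practice these would be labeled OCT B-scans or fundus images.
rng = np.random.default_rng(0)
images = rng.random((45, 256, 256))      # e.g., 45 scans: normal / dry AMD / DME
labels = np.repeat([0, 1, 2], 15)

features = np.stack([extract_hog_features(img) for img in images])

# Linear SVM on standardized features, evaluated with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
print(cross_val_score(clf, features, labels, cv=5).mean())
```

Replacing the synthetic arrays with labeled scans and tuning the kernel and regularization would be the first steps toward a realistic baseline of this kind.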
Table 4. Deep learning methods for early detection, diagnosis, and grading of AMD.

| Study | Goal | CNN | Other Features | Database Size | Performance |
|---|---|---|---|---|---|
| Lee et al. [194], 2017 | Differentiate between normal and AMD cases using OCT | Modified VGG19 | A modified VGG19 DCNN with the last fully connected layer replaced by a two-node layer | 80,839 images for training and 20,163 images for testing | AUC = 92.77%, ACC = 87.6%, Sen = 84.6%, Spec = 91.5% |
| Ting et al. [134], 2017 | Identify three retinal diseases (DR, glaucoma, AMD) using color fundus images | Adapted VGGNet model | An ensemble of two networks is used for the classification of each eye disease | Validation dataset of 71,896 images from 14,880 patients | Sen = 93.2%, Spec = 88.7% |
| Burlina et al. [195], 2017 | Identify no or early AMD versus intermediate or advanced AMD using fundus images | AlexNet | Solved as a two-class problem | 130,000 images from 4613 patients | ACC = 88.4–91.6%, AUC = 0.94–0.96 |
| Treder et al. [201], 2018 | Detect exudative AMD from normal subjects using SD-OCT | Inception-v3 | Transfer learning | 1012 SD-OCT scans | ACC = 96%, Sen = 100%, Spec = 92% |
| Tan et al. [199], 2018 | Early detection of AMD using fundus images | 14-layer CNN model | Data augmentation | 402 normal eyes; 583 eyes with early/intermediate AMD or GA; 125 wet AMD eyes | ACC = 95%, Sen = 96%, Spec = 94% (10-fold cross-validation) |
| Hassan et al. [196], 2018 | Diagnosis of three retinal diseases (macular edema, central serous chorioretinopathy, and AMD) using OCT | SegNet followed by an AlexNet | Segments nine retinal layers | 41,921 retinal OCT scans for testing and 4992 for training | ACC = 96% |
| An et al. [202], 2019 | Two classifiers: AMD vs. normal, and AMD with fluid vs. AMD without fluid | Two VGG16 models | A model distinguishing AMD from normal, followed by a model distinguishing AMD with fluid from AMD without fluid | 1234 training and 391 test images | ACC = 99.2%, AUC = 0.999 (AMD vs. normal); ACC = 95.1%, AUC = 0.992 (AMD with vs. without fluid) |
| Motozawa et al. [197], 2019 | Two classifiers: AMD vs. normal, and AMD with vs. without exudative changes, using SD-OCT images | Two 18-layer CNNs | A model distinguishing AMD from normal, followed by a model distinguishing AMD with from AMD without exudative changes | 1621 images | ACC = 99%, Sen = 100%, Spec = 91.8% (AMD vs. normal); ACC = 93.9%, Sen = 98.4%, Spec = 88.3% (AMD with vs. without exudative changes) |
| Hwang et al. [200], 2019 | Distinguish between normal, dry (drusen), active wet, and inactive wet AMD | ResNet50, Inception-v3, and VGG16 | A cloud computing website [196] was developed based on their algorithm | 35,900 images | ACC = 91.40% (VGG16), 92.67% (Inception-v3), 90.73% (ResNet50) |
| Li et al. [198], 2019 | Distinguish between normal, AMD, and diabetic macular edema using OCT images | VGG-16 | Transfer learning | 207,130 images | ACC = 98.6%, Sen = 97.8%, Spec = 99.4%, AUC = 100% |
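A recurring design in Table 4 ([197,202]) is a two-stage cascade: one binary CNN separates AMD from normal scans, and a second CNN is applied only to the AMD-positive scans to flag fluid or exudative changes. The sketch below illustrates only that control flow, assuming two already-trained binary classifiers; the placeholder models, the 0.5 threshold, and the label strings are hypothetical and do not reproduce the published systems.

```python
# Two-stage cascade sketch: AMD vs. normal, then fluid vs. no fluid (illustrative only).
import torch
import torch.nn.functional as F

def classify_scan(oct_scan: torch.Tensor,
                  stage1: torch.nn.Module,
                  stage2: torch.nn.Module,
                  threshold: float = 0.5) -> str:
    """Route a single preprocessed OCT scan (1 x C x H x W) through the cascade."""
    stage1.eval()
    stage2.eval()
    with torch.no_grad():
        # Stage 1: probability that the scan shows AMD (index 1 of a 2-class output).
        p_amd = F.softmax(stage1(oct_scan), dim=1)[0, 1].item()
        if p_amd < threshold:
            return "normal"
        # Stage 2: only AMD-positive scans are checked for fluid/exudative changes.
        p_fluid = F.softmax(stage2(oct_scan), dim=1)[0, 1].item()
        return "AMD with fluid" if p_fluid >= threshold else "AMD without fluid"

# Hypothetical usage with two tiny placeholder CNNs standing in for trained models.
def tiny_cnn() -> torch.nn.Module:
    return torch.nn.Sequential(
        torch.nn.Conv2d(1, 8, kernel_size=3, padding=1), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 2),
    )

scan = torch.rand(1, 1, 224, 224)  # placeholder grayscale OCT B-scan
print(classify_scan(scan, tiny_cnn(), tiny_cnn()))
```

Splitting the task this way lets each model specialize on a simpler binary decision, which is the usual motivation for such sequential designs.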