Article

Optical Identification of Diabetic Retinopathy Using Hyperspectral Imaging

1 Department of Ophthalmology, Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Chiayi 62247, Taiwan
2 Department of Mechanical Engineering, National Chung Cheng University, Chiayi 62102, Taiwan
3 Department of Ophthalmology, Kaohsiung Armed Forces General Hospital, Kaohsiung 80284, Taiwan
4 Director of Technology Development, Hitspectra Intelligent Technology Co., Ltd., Kaohsiung 80661, Taiwan
* Authors to whom correspondence should be addressed.
J. Pers. Med. 2023, 13(6), 939; https://doi.org/10.3390/jpm13060939
Submission received: 6 April 2023 / Revised: 23 May 2023 / Accepted: 30 May 2023 / Published: 1 June 2023

Abstract

The severity of diabetic retinopathy (DR) is directly correlated with changes in the oxygen utilization rate of retinal tissue and in the blood oxygen saturation of the retinal arteries and veins. The current stage of DR in a patient can therefore be identified by analyzing the oxygen content of blood vessels in fundus images, enabling medical professionals to make accurate and prompt judgments about the patient's condition. To apply this method in supplementary medical treatment, however, the blood vessels in fundus images must first be segmented, and arteries and veins must then be differentiated from one another. The study was therefore split into three parts. First, image processing was used to remove the background from the fundus images and segment the blood vessels. Second, hyperspectral imaging (HSI) was used to construct the spectral data, and the HSI algorithm was applied to analyze and simulate the overall reflection spectrum of the retinal image. Third, principal component analysis (PCA) was performed to simplify the data and obtain the principal component score plots for arteries and veins at each stage of retinopathy. Finally, arteries and veins in the original fundus images were separated using the principal component score plots of each stage. As retinopathy progresses, the difference in reflectance between arteries and veins gradually decreases, making the PCA results harder to separate in later stages and reducing precision and sensitivity. Consequently, the precision and sensitivity of the HSI method are highest in the normal stage and lowest in the proliferative DR (PDR) stage. The indicator values of the background DR (BDR) and pre-proliferative DR (PPDR) stages are comparable because the two stages exhibit similar clinical-pathological severity characteristics. The sensitivity values for arteries are 82.4%, 77.5%, 78.1%, and 72.9% in the normal, BDR, PPDR, and PDR stages, respectively, while those for veins are 88.5%, 85.4%, 81.4%, and 75.1%.

1. Introduction

All the anatomical structures of the retina need to be examined and understood for the prognosis of the relevant retinal diseases [1,2]. In particular, the classification of retinal vessels into veins and arteries serves as an important biomarker for various eyesight-related disorders [3]. Some cardiovascular diseases, diabetes, and various other diseases cause variations in the natural diameters of these veins and arteries in their early stages [4,5], and if these diseases are diagnosed early, the survival rate of patients can be drastically increased [6]. In early work on the classification of arteries and veins, traditional image processing techniques were used along with artificial neural networks (ANNs) [7,8,9,10]. After the introduction of deep learning, convolutional neural networks (CNNs) greatly increased the accuracy of the models [11,12,13]. In recent years, fully convolutional neural networks (FCNNs) have been employed with various architectures, such as U-Net, ResNet, and FusionNet, for the segmentation of arteries and veins in fundus imaging [14,15,16,17].
However, one optical imaging modality that has not been employed in fundus imaging is hyperspectral imaging (HSI). In HSI, the image data are processed not only in the three visible colors but across a wide band of the electromagnetic spectrum. HSI has previously been employed in agriculture [18], astronomy [19], the military [20], biosensors [21,22,23], air pollution detection [24,25], remote sensing [26], dental imaging [27], environmental monitoring [28], satellite photography [29], cancer detection [30,31,32], forestry monitoring [33], security [34], food security [35], natural resource surveying [36], vegetation observation [37], and geological mapping [38]. Diabetic retinopathy (DR) is one of the major complications of microvascular injury [39]. In its initial stages, DR is asymptomatic; in its later stages, however, it can cause complete loss of vision. Currently, almost 28 million people around the world have lost their vision due to DR, and by the year 2030, 51 million of the 155 million people affected by DR will have lost their vision [40]. DR is also one of the major causes of vision loss among working-age people between 20 and 60 years old [41]. Therefore, early diagnosis and treatment of DR are necessary to avoid vision loss. DR is usually classified into three stages: background DR (BDR), pre-proliferative DR (PPDR), and proliferative DR (PDR).
This study proposes a novel technique that uses hyperspectral imaging to collect a database of spectra, combined with a Gabor filter for vessel extraction; finally, principal component analysis is carried out to train the model to distinguish between the different stages of DR. The images are classified into four categories based on the recommendations of the International Council of Ophthalmology (ICO): normal, BDR, PPDR, and PDR [42,43].

2. Materials and Methods

In this study, a total of 126 fundus images were used: 28 BDR, 28 PPDR, and 35 PDR images from 91 patients, together with 35 images with normal vision (without associated systemic or ophthalmological pathology). Of the 126 images, 58 are from male patients (15 normal, 13 BDR, 10 PPDR, and 20 PDR) and 68 are from female patients (20 normal, 15 BDR, 18 PPDR, and 15 PDR). The major exclusion criterion was blurring or noise caused by the light source, which led to such images being omitted from the study. The second exclusion criterion was that patients who had undergone cataract-related or any other form of surgery were excluded. The major inclusion criterion was patients within the age range of 20 to 60 years treated for DR. All the patients had type II diabetes, which is relatively more common than type I [44]. The longest time a patient had been diagnosed with diabetes was 25 years and the shortest was 1 year, with a median of 7 years, thereby encompassing a wide range of times since diagnosis. The test set constituted thirty percent of the total data in the study. The fundus images were captured with a Zeiss fundus camera (Carl Zeiss AG, Jena, Germany); the principle of the retinal fundus camera is shown in Supplementary Materials Figure S1 [45]. Imaging, lighting, and observational systems are all contained within the fundus camera. The imaging system consists of three components: the eye objective lens, the imaging objective lens, and the negative film. These components were designed differently from the imaging mode of conventional cameras to prevent the influence of other factors, such as light reflected from the cornea. The lighting system comprises two light sources: the first illuminates the fundus while the camera is being focused, and the second, the flash, increases the brightness of the fundus lighting when the picture is taken. Other methods used to observe fundus images include fundus fluorescein angiography and optical coherence tomography, as described in Supplementary Materials Sections S2 and S3 [46].
The experimental flow diagram for this study can be found in Figure 1. First, the blood vessels in the fundus images were captured, and then the background was removed from the image so that the blood vessels could be seen more clearly. After that, spectral data were constructed with the help of the HSI technique. At the same time, PCA was utilized to obtain principal components score plots for retinopathy in arteries and veins at all stages. In the final step, arteries and veins in the original fundus images were separated using the principal components score plots for each stage.
In this study, Matlab (The MathWorks, Inc., Natick, MA, USA) was used for image pre-processing, while PCA training was performed in Python 3.11 (Python Software Foundation, Wilmington, DE, USA). To obtain the spectral data of the arteries and veins in the fundus image, the image must first be processed to obtain the distribution map of all retinal blood vessels. The experimental procedure can essentially be broken down into three distinct stages. The first is the retinal image processing algorithm: after the blood vessels in the fundus image have been pre-processed, they are segmented by removing the background. HSI technology is then used to obtain the spectral data in order to analyze and simulate the overall reflection spectrum of the retinal image. In the final stage, PCA applies a linear transformation to the reflection spectrum obtained in the second stage, reducing the original feature dimension while retaining as much of the original information as possible. After the arterial and venous principal component score plots of each stage of retinopathy were obtained from the differences in the original features, the arteries and veins in the original fundus image were differentiated using the score plots of each stage.
Figure 2 shows that the algorithm can be broken down into four distinct steps. In the first step, the green component of the RGB image was extracted for pre-processing, because it has the most obvious contrast with the other two components; the image was then binarized, and small noise was removed to emphasize the blood vessels. In the second step, a Gabor filter was applied to the image to enhance the blood vessel data so that the vessels could be seen more clearly. A schematic diagram of a two-dimensional Gabor filter is shown in Supplementary Materials Figure S2 [47]. The Gabor filter is a linear filter commonly used for edge detection. A two-dimensional Gabor filter is a Gaussian function modulated by a sinusoidal plane wave. Because its representation in the spatial and frequency domains resembles the specificity of human biological vision, it can intuitively describe structural information such as the spatial position, directional selectivity, and spatial frequency of an image.
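As a minimal sketch of the first step (the file name is an illustrative assumption), the green component can be extracted as follows:

```python
import cv2

img = cv2.imread("fundus.png")   # OpenCV loads color images in BGR order
green = img[:, :, 1]             # index 1 selects the green channel,
                                 # which offers the highest vessel contrast
```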
The mathematical equation of the Gabor filter can be expressed as Equations (1) and (2).
$$g(x,y)=\frac{1}{2\pi\sigma_x\sigma_y}\exp\left[-\frac{1}{2}\left(\frac{x^2}{\sigma_x^2}+\frac{y^2}{\sigma_y^2}\right)\right]\cos\left(2\pi f_0 x\right) \quad (1)$$

$$g(x,y)=\frac{1}{2\pi\sigma_x\sigma_y}\exp\left[-\frac{1}{2}\left(\frac{x^2}{\sigma_x^2}+\frac{y^2}{\sigma_y^2}\right)\right]\sin\left(2\pi f_0 x\right) \quad (2)$$
In particular, $g(x,y)$ is the response value of the image processed by the Gabor filter; $f_0$ is the frequency of the sinusoid; and $\sigma_x$ and $\sigma_y$ are the standard deviations in the $x$ and $y$ directions, respectively.
For the edge extraction of fundus images, a series of filter responses can be obtained through the Gabor filter at all angles in the range $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$. The coordinate transformation at different rotation angles is given in Equations (3) and (4).
$$x' = x\cos\theta + y\sin\theta \quad (3)$$

$$y' = -x\sin\theta + y\cos\theta \quad (4)$$
In these equations, $(x', y')$ are the coordinates after rotation by $\theta$. Various filter responses $G_{\theta}(x,y)$ can be obtained by applying Gabor filters $g(x,y)$ with different values of $\theta$ to the fundus image. Then, for each angle, the real and imaginary component responses are squared, summed, and the square root is taken, as shown in Equation (5).
$$G_{\theta} = \sqrt{G_{r\theta}^2 + G_{i\theta}^2} \quad (5)$$
In the equation, $G_{r\theta}$ is the real component response and $G_{i\theta}$ is the imaginary component response. To locate the vascular position more easily, only the maximum response $R(x,y)$ is retained at each pixel point $(x,y)$, as shown in Equation (6).
$$R(x,y)=\max_{\theta} G_{\theta}(x,y), \quad \theta=\left[-\frac{\pi}{2}:\frac{\pi}{180}:\frac{\pi}{2}\right] \quad (6)$$
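The following sketch implements Equations (1)-(6) directly with NumPy and SciPy; the kernel size and the values of sigma_x, sigma_y, and f0 are illustrative assumptions rather than the parameters used in this study.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(sigma_x, sigma_y, f0, theta, half_size=15):
    """Complex 2-D Gabor kernel: Eqs. (1)-(2) on the rotated coordinates (3)-(4)."""
    y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # Eq. (3)
    yr = -x * np.sin(theta) + y * np.cos(theta)   # Eq. (4)
    envelope = np.exp(-0.5 * (xr**2 / sigma_x**2 + yr**2 / sigma_y**2))
    envelope /= 2.0 * np.pi * sigma_x * sigma_y
    return envelope * np.exp(1j * 2.0 * np.pi * f0 * xr)  # cos + i*sin parts

def max_gabor_response(image, sigma_x=2.0, sigma_y=4.0, f0=0.1):
    """Eq. (6): keep the maximum magnitude response over theta in [-pi/2, pi/2)."""
    image = image.astype(float)
    best = np.zeros_like(image)
    for theta in np.arange(-np.pi / 2, np.pi / 2, np.pi / 180):  # 1-degree steps
        resp = fftconvolve(image, gabor_kernel(sigma_x, sigma_y, f0, theta),
                           mode="same")
        best = np.maximum(best, np.abs(resp))     # Eq. (5): sqrt(re^2 + im^2)
    return best
```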
In the third step, an iterative binarization technique was used to locate the image threshold and isolate the vascular regions. Because the grayscale image ranges from 0 to 255, a threshold must be selected that isolates the area containing the blood vessels while minimizing the influence of the background and any noise that may be present. If the pixels of the target area are distributed uniformly around one gray level and the background pixels around another, the gray value between these two peaks can be used as the threshold, and the iterative method finds this value automatically (Supplementary Section S5). When the image consists of a target whose gray values differ greatly from the rest of the background, the initial threshold is calculated by averaging the gray values of the entire image. The image is then divided by this threshold into two groups, the average gray value of each group is calculated separately, and these two averages are themselves averaged to obtain a new threshold. This step is repeated to continuously generate new thresholds, and the final threshold is reached when the difference between successive thresholds is smaller than a preset parameter. In the fourth step, some small white spots inevitably remain after thresholding because of uneven brightness or poor image quality; left in place, they would be assigned to the same region as the blood vessels and would affect the final identification. Therefore, connected objects containing fewer pixels than a preset count are deleted, with the pixel connectivity specified when the filtering function is called.
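A minimal NumPy/SciPy sketch of the iterative threshold and the small-object filter described above; the stopping tolerance and minimum object size are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def iterative_threshold(gray, eps=0.5):
    """Start from the mean gray value, then repeatedly average the means of
    the two groups it separates until successive thresholds converge."""
    t = gray.mean()
    while True:
        low, high = gray[gray <= t], gray[gray > t]
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < eps:   # stop when the change is below the preset eps
            return t_new
        t = t_new

def remove_small_objects(binary, min_pixels=50):
    """Delete connected components containing fewer than min_pixels pixels
    (8-connectivity is assumed here)."""
    labels, n = ndimage.label(binary, structure=np.ones((3, 3)))
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    keep_ids = np.flatnonzero(sizes >= min_pixels) + 1   # labels start at 1
    return np.isin(labels, keep_ids)

# Usage: vessels = remove_small_objects(response > iterative_threshold(response))
```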
After that, hyperspectral image processing was carried out in three distinct stages, as shown in Figure 3. First, to obtain the relationship matrix between the fundus camera and the spectrometer, the light source spectra of the two instruments and the reflection spectra of the light source and a standard 24-patch color checker in the visible band (380–780 nm) were collected; using the standard color checker as the measurement target increased the accuracy of the transformation. Second, spectral analysis was carried out, and PCA was used to simplify the data; to determine the eigenvalues, the eigenvectors of the first six groups were chosen to serve as the reference spectra. Linear regression was then applied to determine the relationship between the eigenvalues and the fundus images. Finally, a transformation matrix was constructed to simulate the spectrum of each pixel in the image.
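The sketch below illustrates this camera-to-spectrum calibration under stated assumptions: rgb_checker and spectra_checker are hypothetical arrays measured from the 24 color-checker patches, and a plain linear least-squares fit stands in for whatever regression model was actually used.

```python
import numpy as np

def fit_hsi_transform(rgb_checker, spectra_checker, n_components=6):
    """rgb_checker: (24, 3) camera responses; spectra_checker: (24, n_bands)
    reflectance spectra over 380-780 nm. Returns a matrix mapping RGB values
    to scores on the first six spectral eigenvectors."""
    mean_spec = spectra_checker.mean(axis=0)
    centered = spectra_checker - mean_spec
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
    basis = vt[:n_components]                  # reference spectra, (6, n_bands)
    scores = centered @ basis.T                # eigenvalue scores, (24, 6)
    A = np.hstack([rgb_checker, np.ones((len(rgb_checker), 1))])  # bias column
    M, *_ = np.linalg.lstsq(A, scores, rcond=None)  # linear regression step
    return M, basis, mean_spec

def rgb_to_spectrum(rgb, M, basis, mean_spec):
    """Simulate the reflectance spectrum of one pixel from its RGB value."""
    scores = np.append(rgb, 1.0) @ M
    return mean_spec + scores @ basis
```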
PCA is a method frequently employed in multivariate statistics. Although multivariate data undoubtedly provide a wealth of information, if each indicator is analyzed on its own, much of the useful information the data contain is typically lost and not fully utilized, which can lead to incorrect conclusions. The purpose of PCA is therefore to transform high-dimensional data into low-dimensional data, reducing the amount of computation required while preserving as much of the original information as possible.
After the procedures described above, the final step of the experiment was to differentiate arteries and veins using the principal component score plots. Once PCA had reduced the number of dimensions, a score plot of the first and second principal components of each stage was generated, as demonstrated in Figure 4. Manually classified images of arteries and veins were contrasted with the unclassified data points on the fundus image and the principal component score plot; veins were shown in blue and arteries in red so that the data points could be differentiated. After PCA, the data were normalized so that the distribution fell between zero and one, which simplified the subsequent differentiation of arteries and veins. Once a threshold was established on the x-axis as the distinguishing condition, the distribution points of arteries and veins could be separated. The arteries and veins were then differentiated according to this condition, and the results were presented on the original fundus image: blue portions represented veins, and red portions represented arteries.
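A compact sketch of this final step, assuming pixel_spectra holds the simulated spectra of the segmented vessel pixels; the 0.5 threshold and the artery/vein orientation along the first principal component are illustrative and would be set per stage from the score plots.

```python
import numpy as np
from sklearn.decomposition import PCA

def classify_vessels(pixel_spectra, threshold=0.5):
    """Project vessel-pixel spectra onto the first two principal components,
    normalize PC1 to [0, 1], and split arteries from veins on the x-axis."""
    scores = PCA(n_components=2).fit_transform(pixel_spectra)
    pc1 = scores[:, 0]
    pc1 = (pc1 - pc1.min()) / np.ptp(pc1)   # normalize the distribution to [0, 1]
    labels = np.where(pc1 < threshold, "artery", "vein")
    return labels, scores
```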

3. Results

Average Reflectance Spectrum

In this study, hyperspectral imaging technology was used to analyze the average reflection spectra of retinal arteries in patients at different stages of the lesion (normal, BDR, PPDR, and PDR), as shown in Figure 5a. The average reflectance spectrum of the retinal veins across the stages of DR can be seen in Figure 5b. The spectral reflectance intensity of the veins is relatively lower at all stages because of the difference in thickness between the arteries and veins, which causes the veins to appear darker in fundus images. Previous studies using other spectral measurement methods have observed that as DR becomes more severe, the venous oxygen saturation increases in proportion to the progression of the condition. As Figure 5b shows, the difference is most apparent at the PDR stage. It can be concluded that blood vessel blockage reduces the amount of oxygen delivered to the retinal tissue, which increases the blood oxygen concentration of the venous return. Consequently, the overall reflectance trend in the red-light band gradually increases as the severity of the lesion increases.
The principal component score plots of DR at each stage were used as the basis for distinguishing between arteries and veins, as shown in Figure 6. One fundus image of the lesions at each stage was selected to illustrate the artery/vein classification results, as shown in Figure 7. As the severity of the condition worsens, the difference in the PCA results between arteries and veins decreases correspondingly, making it challenging to determine a threshold that accurately differentiates the two. In addition, the comparison between the normal and PDR categories revealed significant differences. The classification results were evaluated using three indicators: sensitivity, precision, and F1-score.
Sensitivity represents the hit rate of the correct judgment, also known as true positive rate (TPR), which is defined in Equation (7).
$$\text{Sensitivity} = \frac{\text{number of pixels correctly judged as arteries/veins}}{\text{original number of artery/vein pixels in the fundus images}} \quad (7)$$
Precision, also known as positive predictive value (PPV), represents the proportion of pixels judged as arteries or veins whose locations are judged correctly. The computation method is shown in Equation (8).

$$\text{Precision} = \frac{\text{number of pixels correctly judged as arteries/veins}}{\text{number of all pixels judged as arteries/veins}} \quad (8)$$
F1-score is the harmonic mean of sensitivity and precision, and the computation method is shown in Equation (9).

$$F_1\text{-score} = \frac{2 \times \text{Sensitivity} \times \text{Precision}}{\text{Sensitivity} + \text{Precision}} \quad (9)$$
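A small sketch computing Equations (7)-(9) from boolean pixel masks; the mask names are illustrative.

```python
import numpy as np

def pixel_metrics(pred_mask, true_mask):
    """Sensitivity, precision, and F1-score for one vessel class (artery or vein)."""
    tp = np.logical_and(pred_mask, true_mask).sum()   # correctly judged pixels
    sensitivity = tp / true_mask.sum()                # Eq. (7): true positive rate
    precision = tp / pred_mask.sum()                  # Eq. (8): positive predictive value
    f1 = 2 * sensitivity * precision / (sensitivity + precision)  # Eq. (9)
    return sensitivity, precision, f1
```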
The findings of the investigation are presented in Table 1, with the analysis indicators computed separately for the normal, BDR, PPDR, and PDR levels of DR. The sensitivity values of the arteries are 82.4%, 77.5%, 78.1%, and 72.9% in the normal, BDR, PPDR, and PDR conditions, respectively, while the sensitivities of the veins are 88.5%, 85.4%, 81.4%, and 75.1%.

4. Discussion

In this study, the retinal images were classified into four categories (normal, BDR, PPDR, and PDR) using a novel HSI algorithm. The precision for both arteries and veins exceeded 80% in the normal and BDR categories, while in the PDR category, precision dropped to 73.4% for arteries and 74.6% for veins. There is a general trend toward a gradual decrease in the reflection spectrum as the degree of the lesions increases. This phenomenon can be attributed to the development of lesions in the retinal arteries caused by diabetes [48]. Long-term hyperglycemia in blood vessels causes hemoglobin and platelet aggregation to block blood vessels [49]. Additionally, blood vessels gradually lose pericytes, basement membranes gradually thicken, and endothelial cells are damaged and proliferate as a result of the condition [50]. Consequently, the general trend of spectral reflectance shows a significant decrease in the middle wavelength range (495–570 nm) as the severity of the lesion increases. In addition, the difference in the reflectance of PDR is most pronounced at the higher wavelengths (620–780 nm). Because the retinal artery remains in a high-blood-sugar environment for an extended period, the oxygenated hemoglobin in the blood coagulates, which lowers its oxygen-carrying capacity; meanwhile, the light absorption of non-oxygenated hemoglobin is enhanced in the red band. Therefore, as the severity of the disease progresses, both the blood glucose concentration in the blood vessels and the proportion of hypoxic hemoglobin increase, which decreases the spectral reflectance of the retinal arteries of PDR patients within the red band. As the severity of the retinopathy increases, the difference in reflectance between the arteries and veins gradually decreases, making it increasingly difficult to differentiate the two from the PCA results in the later stages, with both precision and sensitivity reduced. As a direct consequence, the precision and sensitivity of the HSI algorithm are highest for DR patients in the normal stage and lowest for those in the PDR stage. Meanwhile, the indicator values of the BDR and PPDR stages are comparable because the clinical-pathological severity characteristics of the two stages are similar. Within the same stage, the precision of arteries is typically higher than that of veins: veins contain more hypoxic hemoglobin, which gives them a relatively dark color in fundus images, whereas arteries contain less. Dark regions are easily influenced by the camera, which also results in poor quality in some fundus images. Consequently, some non-ideal dark areas that appear in arteries are misjudged as vein areas, which degrades the performance of the hyperspectral algorithm; because these misjudged pixels count against the vein class, the precision of the arteries is superior to that of the veins in nearly every stage. For the same reason, the number of pixels judged as veins within artery areas of the fundus images is greater than the number of pixels judged as arteries within vein areas.
Because of this, the sensitivity of arteries is significantly lower than that of veins. One limitation of this study is the number of images used to train the model: only 126 images in total were used. Expanding the dataset is an objective of future work, which should improve the model's sensitivity and precision.

5. Conclusions

In conclusion, the HSI method can be used to distinguish between arteries and veins in retinal images and to locate their distribution within the retina. This capability contributes to establishing a database of the shifts in blood oxygen concentration that occur across the stages of DR. According to the findings, the sensitivity values of the arteries are 82.4%, 77.5%, 78.1%, and 72.9% in the normal, BDR, PPDR, and PDR conditions, respectively, while those of the veins are 88.5%, 85.4%, 81.4%, and 75.1%.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jpm13060939/s1. Figure S1: schematic diagram of the Zeiss fundus camera (Xe, xenon flash; L1, concentrator; S1, light source of the observation system; L2–7, optical elements of the imaging system; BS, spectroscope; f, filter; M1–4, reflectors; F, negative film of the camera; R, aiming target; the dotted line is the input path of the light source, and the solid line is the observation path); Figure S2: schematic diagram of a two-dimensional Gabor filter; Section S1: the principle of the retinal fundus camera; Section S2: Gabor filter; Section S3: equations used for evaluating the metrics; Section S4: principal component analysis; Section S5: binarization.

Author Contributions

Conceptualization, C.-Y.W. and H.-C.W.; methodology, A.M.; Software, H.-C.W. and A.M.; validation, C.-Y.W., Y.-M.T. and H.-C.W.; formal analysis, C.-Y.W. and H.-C.W.; investigation, W.-S.F. and H.-C.W.; resources, Y.-M.T. and H.-C.W.; data curation, Y.-S.L., A.M. and H.-C.W.; writing—original draft preparation, A.M.; writing—review and editing, A.M. and F.-C.L.; supervision, W.-S.F., F.-C.L. and H.-C.W.; project administration, W.-S.F. and H.-C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Ministry of Science and Technology, the Republic of China, under grant MOST 111-2221-E-194-007. This work was also supported in part by the Kaohsiung Armed Forces General Hospital research project MND-MAB-110-141 and the Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation-National Chung Cheng University Joint Research Program DTCRD112-C-11 in Taiwan.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of Dalin Tzu Chi General Hospital (B11201035).

Informed Consent Statement

Written informed consent was waived in this study because of the retrospective, anonymized nature of the study design.

Data Availability Statement

The data presented in this study are available in this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kanski, J.J. Clinical Ophthalmology: A Systematic Approach; Elsevier Brasil: Amsterdam, The Netherlands, 2007. [Google Scholar]
  2. Abràmoff, M.D.; Garvin, M.K.; Sonka, M. Retinal imaging and image analysis. IEEE Rev. Biomed. Eng. 2010, 3, 169–208. [Google Scholar] [CrossRef] [PubMed]
  3. Galdran, A.; Meyer, M.; Costa, P.; Campilho, A. Uncertainty-aware artery/vein classification on retinal images. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 556–560. [Google Scholar]
  4. Girard, F.; Kavalec, C.; Cheriet, F. Joint segmentation and classification of retinal arteries/veins from fundus images. Artif. Intell. Med. 2019, 94, 96–109. [Google Scholar] [CrossRef] [PubMed]
  5. da Silva, A.V.B.; Gouvea, S.A.; da Silva, A.P.B.; Bortolon, S.; Rodrigues, A.N.; Abreu, G.R.; Herkenhoff, F.L. Changes in retinal microvascular diameter in patients with diabetes. Int. J. Gen. Med. 2015, 8, 267. [Google Scholar] [PubMed]
  6. Morano, J.; Hervella, Á.S.; Novo, J.; Rouco, J. Simultaneous segmentation and classification of the retinal arteries and veins from color fundus images. Artif. Intell. Med. 2021, 118, 102116. [Google Scholar] [CrossRef]
  7. Staal, J.; Abràmoff, M.D.; Niemeijer, M.; Viergever, M.A.; Van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509. [Google Scholar] [CrossRef]
  8. Jiang, X.; Mojon, D. Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 131–137. [Google Scholar] [CrossRef]
  9. Nain, D.; Yezzi, A.; Turk, G. Vessel segmentation using a shape driven flow. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Saint-Malo, France, 26–29 September 2004; pp. 51–59. [Google Scholar]
  10. Tolias, Y.A.; Panas, S.M. A fuzzy vessel tracking algorithm for retinal images based on fuzzy clustering. IEEE Trans. Med. Imaging 1998, 17, 263–273. [Google Scholar] [CrossRef]
  11. Sinthanayothin, C.; Boyce, J.F.; Cook, H.L.; Williamson, T.H. Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images. Br. J. Ophthalmol. 1999, 83, 902–910. [Google Scholar] [CrossRef]
  12. Marín, D.; Aquino, A.; Gegúndez-Arias, M.E.; Bravo, J.M. A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features. IEEE Trans. Med. Imaging 2010, 30, 146–158. [Google Scholar] [CrossRef]
  13. Liskowski, P.; Krawiec, K. Segmenting retinal blood vessels with deep neural networks. IEEE Trans. Med. Imaging 2016, 35, 2369–2380. [Google Scholar] [CrossRef]
  14. Fu, H.; Xu, Y.; Lin, S.; Kee Wong, D.W.; Liu, J. Deepvessel: Retinal vessel segmentation via deep learning and conditional random field. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; pp. 132–139. [Google Scholar]
  15. Fu, H.; Xu, Y.; Wong, D.W.K.; Liu, J. Retinal vessel segmentation via deep learning network and fully-connected conditional random fields. In Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 698–701. [Google Scholar]
  16. Dasgupta, A.; Singh, S. A fully convolutional neural network based structured prediction approach towards the retinal vessel segmentation. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, VIC, Australia, 18–21 April 2017; pp. 248–251. [Google Scholar]
  17. Jiang, Z.; Zhang, H.; Wang, Y.; Ko, S.-B. Retinal blood vessel segmentation using fully convolutional network with transfer learning. Comput. Med. Imaging Graph. 2018, 68, 1–15. [Google Scholar] [CrossRef]
  18. Lu, B.; Dao, P.D.; Liu, J.; He, Y.; Shang, J. Recent advances of hyperspectral imaging technology and applications in agriculture. Remote Sens. 2020, 12, 2659. [Google Scholar] [CrossRef]
  19. Mukundan, A.; Patel, A.; Saraswat, K.D.; Tomar, A.; Kuhn, T. Kalam Rover. In Proceedings of the AIAA SCITECH 2022 Forum, San Diego, CA, USA, 3–7 January 2022; p. 1047. [Google Scholar]
  20. Gross, W.; Queck, F.; Vögtli, M.; Schreiner, S.; Kuester, J.; Böhler, J.; Mispelhorn, J.; Kneubühler, M.; Middelmann, W. A multi-temporal hyperspectral target detection experiment: Evaluation of military setups. In Proceedings of the Target and Background Signatures VII, Online, Spain, 13–18 September 2021; pp. 38–48. [Google Scholar]
  21. Mukundan, A.; Feng, S.-W.; Weng, Y.-H.; Tsao, Y.-M.; Artemkina, S.B.; Fedorov, V.E.; Lin, Y.-S.; Huang, Y.-C.; Wang, H.-C. Optical and Material Characteristics of MoS2/Cu2O Sensor for Detection of Lung Cancer Cell Types in Hydroplegia. Int. J. Mol. Sci. 2022, 23, 4745. [Google Scholar] [CrossRef] [PubMed]
  22. Hsiao, Y.-P.; Mukundan, A.; Chen, W.-C.; Wu, M.-T.; Hsieh, S.-C.; Wang, H.-C. Design of a Lab-On-Chip for Cancer Cell Detection through Impedance and Photoelectrochemical Response Analysis. Biosensors 2022, 12, 405. [Google Scholar] [CrossRef]
  23. Mukundan, A.; Tsao, Y.-M.; Artemkina, S.B.; Fedorov, V.E.; Wang, H.-C. Growth Mechanism of Periodic-Structured MoS2 by Transmission Electron Microscopy. Nanomaterials 2021, 12, 135. [Google Scholar] [CrossRef] [PubMed]
  24. Chen, C.-W.; Tseng, Y.-S.; Mukundan, A.; Wang, H.-C. Air Pollution: Sensitive Detection of PM2.5 and PM10 Concentration Using Hyperspectral Imaging. Appl. Sci. 2021, 11, 4543. [Google Scholar] [CrossRef]
  25. Mukundan, A.; Huang, C.-C.; Men, T.-C.; Lin, F.-C.; Wang, H.-C. Air Pollution Detection Using a Novel Snap-Shot Hyperspectral Imaging Technique. Sensors 2022, 22, 6231. [Google Scholar] [CrossRef]
  26. Gerhards, M.; Schlerf, M.; Mallick, K.; Udelhoven, T. Challenges and future perspectives of multi-/Hyperspectral thermal infrared remote sensing for crop water-stress detection: A review. Remote Sens. 2019, 11, 1240. [Google Scholar] [CrossRef]
  27. Lee, C.-H.; Mukundan, A.; Chang, S.-C.; Wang, Y.-L.; Lu, S.-H.; Huang, Y.-C.; Wang, H.-C. Comparative Analysis of Stress and Deformation between One-Fenced and Three-Fenced Dental Implants Using Finite Element Analysis. J. Clin. Med. 2021, 10, 3986. [Google Scholar] [CrossRef]
  28. Stuart, M.B.; McGonigle, A.J.; Willmott, J.R. Hyperspectral imaging in environmental monitoring: A review of recent developments and technological advances in compact field deployable systems. Sensors 2019, 19, 3071. [Google Scholar] [CrossRef]
  29. Mukundan, A.; Wang, H.-C. Simplified Approach to Detect Satellite Maneuvers Using TLE Data and Simplified Perturbation Model Utilizing Orbital Element Variation. Appl. Sci. 2021, 11, 10181. [Google Scholar] [CrossRef]
  30. Tsai, C.-L.; Mukundan, A.; Chung, C.-S.; Chen, Y.-H.; Wang, Y.-K.; Chen, T.-H.; Tseng, Y.-S.; Huang, C.-W.; Wu, I.-C.; Wang, H.-C. Hyperspectral Imaging Combined with Artificial Intelligence in the Early Detection of Esophageal Cancer. Cancers 2021, 13, 4593. [Google Scholar] [CrossRef]
  31. Tsai, T.-J.; Mukundan, A.; Chi, Y.-S.; Tsao, Y.-M.; Wang, Y.-K.; Chen, T.-H.; Wu, I.-C.; Huang, C.-W.; Wang, H.-C. Intelligent Identification of Early Esophageal Cancer by Band-Selective Hyperspectral Imaging. Cancers 2022, 14, 4292. [Google Scholar] [CrossRef]
  32. Fang, Y.-J.; Mukundan, A.; Tsao, Y.-M.; Huang, C.-W.; Wang, H.-C. Identification of Early Esophageal Cancer by Semantic Segmentation. J. Pers. Med. 2022, 12, 1204. [Google Scholar] [CrossRef] [PubMed]
  33. Vangi, E.; D’Amico, G.; Francini, S.; Giannetti, F.; Lasserre, B.; Marchetti, M.; Chirici, G. The new hyperspectral satellite PRISMA: Imagery for forest types discrimination. Sensors 2021, 21, 1182. [Google Scholar] [CrossRef]
  34. Huang, S.-Y.; Mukundan, A.; Tsao, Y.-M.; Kim, Y.; Lin, F.-C.; Wang, H.-C. Recent Advances in Counterfeit Art, Document, Photo, Hologram, and Currency Detection Using Hyperspectral Imaging. Sensors 2022, 22, 7308. [Google Scholar] [CrossRef]
  35. Zhang, X.; Han, L.; Dong, Y.; Shi, Y.; Huang, W.; Han, L.; González-Moreno, P.; Ma, H.; Ye, H.; Sobeih, T. A deep learning-based approach for automated yellow rust disease detection from high-resolution hyperspectral UAV images. Remote Sens. 2019, 11, 1554. [Google Scholar] [CrossRef]
  36. Hennessy, A.; Clarke, K.; Lewis, M. Hyperspectral classification of plants: A review of waveband selection generalisability. Remote Sens. 2020, 12, 113. [Google Scholar] [CrossRef]
  37. Terentev, A.; Dolzhenko, V.; Fedotov, A.; Eremenko, D. Current State of Hyperspectral Remote Sensing for Early Plant Disease Detection: A Review. Sensors 2022, 22, 757. [Google Scholar] [CrossRef]
  38. De La Rosa, R.; Tolosana-Delgado, R.; Kirsch, M.; Gloaguen, R. Automated Multi-Scale and Multivariate Geological Logging from Drill-Core Hyperspectral Data. Remote Sens. 2022, 14, 2676. [Google Scholar] [CrossRef]
  39. Yao, H.-Y.; Tseng, K.-W.; Nguyen, H.-T.; Kuo, C.-T.; Wang, H.-C. Hyperspectral Ophthalmoscope Images for the Diagnosis of Diabetic Retinopathy Stage. J. Clin. Med. 2020, 9, 1613. [Google Scholar] [CrossRef] [PubMed]
  40. Whiting, D.R.; Guariguata, L.; Weil, C.; Shaw, J. IDF diabetes atlas: Global estimates of the prevalence of diabetes for 2011 and 2030. Diabetes Res. Clin. Pract. 2011, 94, 311–321. [Google Scholar] [CrossRef] [PubMed]
  41. Fong, D.S.; Aiello, L.P.; Ferris, F.L., III; Klein, R. Diabetic retinopathy. Diabetes Care 2004, 27, 2540–2553. [Google Scholar] [CrossRef]
  42. Wong, T.Y.; Sun, J.; Kawasaki, R.; Ruamviboonsuk, P.; Gupta, N.; Lansingh, V.C.; Maia, M.; Mathenge, W.; Moreker, S.; Muqit, M.M. Guidelines on diabetic eye care: The international council of ophthalmology recommendations for screening, follow-up, referral, and treatment based on resource settings. Ophthalmology 2018, 125, 1608–1622. [Google Scholar] [CrossRef] [PubMed]
  43. Solomon, S.D.; Chew, E.; Duh, E.J.; Sobrin, L.; Sun, J.K.; VanderBeek, B.L.; Wykoff, C.C.; Gardner, T.W. Diabetic retinopathy: A position statement by the American Diabetes Association. Diabetes Care 2017, 40, 412–418. [Google Scholar] [CrossRef] [PubMed]
  44. Simó, R.; Hernández, C. Advances in the medical treatment of diabetic retinopathy. Diabetes Care 2009, 32, 1556–1562. [Google Scholar] [CrossRef]
  45. Joussen, A.M.; Poulaki, V.; Le, M.L.; Koizumi, K.; Esser, C.; Janicki, H.; Schraermeyer, U.; Kociok, N.; Fauser, S.; Kirchhof, B. A central role for inflammation in the pathogenesis of diabetic retinopathy. FASEB J. 2004, 18, 1450–1452. [Google Scholar] [CrossRef]
  46. Li, N.; Liu, Y.; Yun, A.; Song, S. Correlation of Platelet Function with Postpartum Hemorrhage and Venous Thromboembolism in Patients with Gestational Hypertension Complicated with Diabetes. Comput. Math. Methods Med. 2022, 2022, 2423333. [Google Scholar] [CrossRef]
  47. Einer, S.G. Gradual painless visual loss: Retinal causes. Clin. Geriatr. Med. 1999, 15, 25–46. [Google Scholar] [CrossRef]
  48. Littmann, H. Die Zeiss Funduskamera. Ber. 59. Zusammenkunft Deutsch. Ophthalmolog. Gesellsch., Heidelberg 1955; Verlag Bergmann: München, Germany, 1995; p. 318. [Google Scholar]
  49. Morgner, U.; Drexler, W.; Kärtner, F.X.; Li, X.D.; Pitris, C.; Ippen, E.P.; Fujimoto, J.G. Spectroscopic optical coherence tomography. Opt. Lett. 2000, 25, 111–113. [Google Scholar] [CrossRef]
  50. Siddalingaswamy, P.C.; Prabhu, K.G. Automatic detection of multiple oriented blood vessels in retinal images. J. Biomed. Sci. Eng. 2010, 3, 101–107. [Google Scholar] [CrossRef]
Figure 1. Experimental flow diagram.
Figure 2. Flow diagram for retinal image processing.
Figure 3. Algorithm flow diagram of the hyperspectral fundus camera.
Figure 4. Flow diagram for distinguishing arteries and veins.
Figure 5. (a,b) The average reflectance spectra of arteries and veins, respectively, at the normal, BDR, PPDR, and PDR stages of DR.
Figure 6. Principal component score plots. (a–d) show the principal component score plots of the normal, BDR, PPDR, and PDR stages.
Figure 7. Classification results of blood vessels. (a–d) show the classification results of blood vessels in the normal, BDR, PPDR, and PDR categories.
Table 1. The analysis indicator results of the distinguishment test for arteries and veins.

Artery     Sensitivity (%)   Precision (%)   F1-Score (%)
Normal     82.4              88.4            85.3
BDR        77.5              83.5            80.3
PPDR       78.1              80.7            79.4
PDR        72.9              73.4            73.1

Veins      Sensitivity (%)   Precision (%)   F1-Score (%)
Normal     88.5              82.6            85.4
BDR        85.4              81.2            83.2
PPDR       81.4              79.2            80.3
PDR        75.1              74.6            74.8