Article

Exploiting the Generative Adversarial Network Approach to Create a Synthetic Topography Corneal Image

1 Computer Science Department, University of Raparin, Rania 46012, Iraq
2 Department of Natural and Mathematical Sciences, Engineering Faculty, Tarsus University, Tarsus 33402, Turkey
3 Biomedical Engineering Department, Al-Khawarezmi Eng. College, University of Baghdad, Baghdad 10011, Iraq
4 Computer Science and Engineering Department, University of Kurdistan Hewlêr, Erbil 44001, Iraq
5 Computer Science Department, Dijlah University College, Al-Dora, Baghdad 00964, Iraq
6 College of Engineering, University of Warith Al-Anbiyaa, Karbala 56001, Iraq
7 Faculty of Data Science & Information Technology, INTI International University, Persiaran Perdana BBN, Nilai 71800, Negeri Sembilan, Malaysia
* Authors to whom correspondence should be addressed.
Current address: Department of Communication Technology Engineering, College of Information Technology, Imam Ja’afar Al-Sadiq University, Baghdad 10011, Iraq.
Biomolecules 2022, 12(12), 1888; https://doi.org/10.3390/biom12121888
Submission received: 19 September 2022 / Revised: 4 November 2022 / Accepted: 12 November 2022 / Published: 16 December 2022
(This article belongs to the Special Issue Big Data Analysis in Human Disease)

Abstract

Corneal diseases are the most common eye disorders, and deep learning techniques are increasingly used to diagnose the cornea automatically. Deep learning networks, however, require large-scale annotated datasets, which is acknowledged as a weakness of deep learning. In this work, a method for synthesizing medical images using conditional generative adversarial networks (CGANs) is presented. It also illustrates how the produced medical images may be used to enrich medical data, improve clinical decisions, and boost the performance of convolutional neural networks (CNNs) for medical image diagnosis. The study uses corneal topography images captured with a Pentacam device from patients with corneal diseases. The dataset contains 3448 different corneal images. Furthermore, the work shows how an unbalanced dataset affects the performance of classifiers, and the data are balanced using a resampling approach. Finally, the results obtained from CNNs trained on the balanced dataset are compared to those obtained from CNNs trained on the imbalanced dataset. For performance, the system reports diagnostic accuracy, precision, and F1-score metrics. Lastly, some generated images were shown to an expert for evaluation, to see how well an expert could identify the type of image and its condition. The expert judged the generated images, which are based on real cases, to be useful for medical diagnosis and for determining the severity class according to their shapes and values, and noted that they could represent new intermediate stages of illness between healthy and unhealthy patients.

1. Introduction

The scarcity of medical image datasets is one of the most important problems facing researchers in the field of machine learning [1]. The limited amount of medical data stems from the difficulty of capturing it [2]. Beyond the problem of obtaining ethical approval, the acquisition and labelling of medical images are time-consuming, and considerable effort must be spent by both researchers and specialists [3,4]. Several studies have tried to overcome the dataset-scarcity challenge with a well-known computer-vision technique called data augmentation [5]. Classic data augmentation adds only limited extra variety, since it involves simple modifications such as rotation, translation, scaling, and flipping [6]. On the other hand, some researchers have employed more innovative data augmentation techniques to improve system training, based on synthesizing high-quality sample images with a generative model known as generative adversarial networks (GANs) [7,8,9].
A GAN comprises two networks; the first generates a realistic image from the input with the help of noise, and the other discriminates between real images and the fake images produced by the first network. This model has been used in many studies aiming to generate realistic images, especially for medical imaging applications such as image-to-image translation [10], image inpainting [11], segmentation-to-image translation [12], medical cross-modality translation [13], and label-to-segmentation translation [14].
Researchers have exploited GAN models to create cross-modality images, such as a PET scan generated from a CT scan of the abdomen to show the presence of liver lesions. The GAN model for image inpainting has served as inspiration for many studies. Costa et al. [15] used a fully convolutional network to learn retinal vessel segmentation images; the binary vessel tree was then translated into a new retinal image. Using chest X-ray images, Dai et al. [16] trained a GAN model to generate lung and heart segmentations. Xue et al. [17] trained a model to translate brain MRI images into binary segmentation maps for brain tumour images. Nie et al. [18] trained a patch-based GAN to translate between brain CT and MRI images and recommended an auto-context model as an image-refinement step. Schlegl et al. [19] trained a GAN model on normal retinal images; to detect anomalies in retinal images, the model was then tested on normal and abnormal data.
Based on the above, the scarcity of data needs to be resolved so that researchers can analyze such data more freely and produce results that serve the scientific process. This motivated the authors of this paper to use GAN models, with their ability to synthesize realistic images, to enlarge the existing data and overcome the problem of lacking data. In this work, high-quality corneal images based on GAN models are synthesized for the specific task of corneal disease diagnosis, to improve clinical decisions by introducing different stages and predicted shapes for images of illness. As an illustrative example of the imaging variability of the cornea, the different stages of keratoconus are, in most cases, unclear at their borderlines. From a clinical perspective, overlapping features between stages of keratoconus lead to a controversial approach to treatment. To decide the severity and the clinical or surgical procedure for each patient, considerable evidence is collected from several images per case before reaching a final approach. The possibility of studying the effect and weight of this evidence per case is attractive medical training for developing, in the trained physician, a refined final medical judgement and observation. In more detail, thinning in pachymetry images and its location, steepening in the inferior or superior position of the tangential map, the isolated island or tongue shape that may appear in the elevation front and back maps, and the astigmatism axis and obliqueness of the bowtie would all improve the effectiveness of the final diagnosis.
The cornea, which protects the eye from external substances and helps to control visual focus, is stiff but very sensitive to touch [20]. There are many corneal disorders, for instance bullous keratopathy, Cogan syndrome, corneal ulcer, herpes simplex keratitis, and herpes zoster ophthalmicus [21]. Any disorder of the cornea may cause tearing, discomfort, and diminished visual clarity and, finally, may lead to blindness. On the other hand, any action on the cornea, such as vision correction, requires a diagnosis of the cornea’s health before treatment [22]. Clinical decisions on the human cornea require reviewing numerous aspects, and ophthalmologists must handle this review. Corneal topographical parameters are so extensive that it is difficult for surgeons or ophthalmologists to remember them all when making decisions [23]. As a consequence, we also propose building a sophisticated deep learning-based medical system that uses the original and the generated (GAN-produced) images for diagnosing corneal cases, to aid clinicians in the interpretation of medical images and improve clinical decision-making.
Many researchers have used a variety of complex and diverse medical devices to collect data, as well as a variety of diagnostic approaches. Salih and Hussein (2018) used 732 submaps as inputs to a deep learning network; a VGG-16 network was utilized to predict corneal abnormality and normality [24]. The detection of keratoconus eyes and recognition of the normal cornea was the focus of a group of authors who used 145 normal cornea cases and 312 keratoconus cases from an image database. As classification tools, they used support vector machine (SVM) and multilayer perceptron methods; features were extracted from the input images and then passed to the classifiers [25]. Another group of researchers used a compilation of data from both Placido and Scheimpflug imaging as a feature vector. The prototype was tested with and without the posterior corneal surface, and it performed well in both situations. The thickness and posterior characteristics were found to be critical for predicting keratoconus and avoiding corneal ectasia surgery in patients with early corneal ectasia disease [26]. Other researchers employed machine learning techniques, such as ANN, SVM, regression analysis, and decision tree algorithms, to identify the disease. The information was gathered from a group of patients; in total, 23 cases of ectasia after LASIK were discovered, as well as 266 stable post-LASIK cases with over a year of follow-up. They concluded that this study method still needed to be validated [27]. Jameel et al. presented a method known as SWFT for diagnosing the corneal image by extracting features from it with a wavelet transform and classifying them with an SVM [28]. In 2021, Jameel and colleagues designed a local information pattern (LIP) algorithm to extract corneal image features and evaluated it using many classifiers; thus, they could train a system capable of automatically classifying corneal diseases [22]. We use deep learning techniques in the current study to diagnose corneal diseases. GAN networks are used as a tool to generate realistic corneal images. In addition, pre-trained convolutional neural networks (CNNs) [29,30,31], which have recently been used in many medical imaging studies and reported to improve performance for a broad range of medical tasks, are employed to diagnose corneal diseases.
This paper has made the following contributions:
(1) Using the GAN model to create high-quality corneal images from topographical images, addressing the scarcity of cornea datasets.
(2) Examining various transfer learning methods as baseline solutions for the corneal diagnosis task.
(3) Augmenting the dataset used to train the networks with the generated synthetic data, for improved clinical decisions.
(4) Addressing the time-consumption issue from which deep learning networks suffer.

2. Corneal Diseases Diagnosis

This section begins by describing the data and their features. The architecture of the GAN model for cornea image creation is then discussed. Because the quantity of data available for training the transfer learning networks is restricted, we present a method for augmenting the dataset with synthesized images.

2.1. Dataset

The dataset is made up of images taken by scanning the cornea with a Pentacam device, which generates various images and parameters known as corneal topography. Ophthalmologists use corneal topography to check eye conditions in clinics. Each patient’s eye data, which include four corneal maps (sagittal, corneal thickness (CT), elevation front (EF), and elevation back (EB) maps) with a set of parameters, are saved independently [32] (see Figure 1). The data were gathered using a Pentacam (OCULUS, Germany), a Scheimpflug imaging instrument. The camera scans the eye from many angles in a circular pattern, producing maps with information about the anterior and posterior parts of the cornea and a quick screening report. The Pentacam can be upgraded and adapted to meet the user’s requirements [33].
It is worth noting that the data were obtained from the Al-Amal center in Baghdad, Iraq, and were labelled with the help of eye specialists: Dr. Nebras H. Ghaeb, an Ophthalmic Consultant, and Dr. Sohaib A. Mohammed and Dr. Ali A. Al-Razaq, Senior Specialist Ophthalmologists. The images were categorized based on all four corneal maps, and each map was treated separately and labelled as normal or abnormal; as such, there are eight categories of cornea cases. The collected data contain 3448 images of the four maps, scientifically collected and classified. The number of images per class is 248 Normal_Sagittal, 460 Abnormal_Sagittal, 338 Normal_Corneal Thickness, 548 Abnormal_Corneal Thickness, 765 Normal_Elevation Front, 167 Abnormal_Elevation Front, 693 Normal_Elevation Back, and 229 Abnormal_Elevation Back maps.

2.2. Transfer Learning Models

Numerous common transfer learning models are available in computer vision and are typically used as tools for medical image classification. In this study, the MobileNetv2 [34], Resnet50 [35], Xception [36], Vision Transformer (ViT) [37], Co-scale conv-attentional image Transformer (CoaT) [38], and Swin Transformer (Swin-T) [39] models were used; they were trained on the original and synthesized images to evaluate the system’s effectiveness for diagnosing corneal instances. The models demonstrate the influence of synthesized and imbalanced datasets on the corneal diagnosis task; the data were manipulated, and varying numbers of samples were used for training and testing. To balance the data, they were processed with resampling methods (oversampling and undersampling). After training each transfer learning model, the results were compared with those of the other approaches (see Tables 2 and 3).
Resnet50 forecasts the delta required to get from one layer to the next and arrive at the final prediction. It addresses the vanishing gradient problem by enabling the gradient to flow through an additional shortcut path, and it allows the model to skip a CNN weight layer if it is not required, which helps to avoid overfitting the training set. ResNet50 is a 50-layer network [35]. MobileNetv2 is a convolutional architecture built for use on mobile or low-cost devices that minimizes network cost and size [40]. Segmentation, classification, and object recognition may all be performed with the MobileNetV2 model. In comparison with its predecessor, MobileNetV2 includes two new features: linear bottlenecks between layers and shortcut connections between the bottlenecks [41]. Xception is a deep convolutional neural network architecture based on depthwise separable convolutions, proposed by researchers at Google. Xception has three flows: entry, middle, and exit. The data first pass through the entry flow, then eight times through the middle flow, and finally through the exit flow. Batch normalization is applied to all convolution and separable convolution layers [36].
The authors of ref. [37] investigated the possibility of using transformers for straightforward image recognition. Apart from the initial patch-extraction step, this architecture has no image-specific inductive biases, which sets it apart from previous work leveraging self-attention in computer vision. Instead, refs. [37,42] employ a standard transformer encoder, as used in natural language processing, to process an image as a sequence of patches. With pre-training on massive datasets, this straightforward approach scales remarkably well; the Vision Transformer therefore competes with or outperforms the state-of-the-art on many image classification datasets, while requiring comparatively modest computational investment. CoaT, an image classifier based on the transformer, features cross-scale attention and efficient conv-attention operations and is presented in [38]. CoaT models achieve strong classification results on ImageNet, and their utility for downstream computer vision tasks, such as object detection and instance segmentation, has been established. In [39], a novel vision transformer called Swin-T is introduced; it produces a hierarchical feature representation and has computational complexity linear in the size of the input image.
For all models, corneal images were fed into the networks to train the models and extract the weights. We trained for 20 epochs with a batch size of 32. Moreover, we employed the Adam optimization approach, with a learning rate of 0.001, to iteratively update the network weights. Table 1 displays all of the parameter values used by the various classifiers.
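To make the training setup concrete, the following is a minimal, illustrative sketch of fine-tuning one of the pre-trained backbones (MobileNetV2) on the eight corneal classes with the hyperparameters listed above (Adam, learning rate 0.001, batch size 32, 20 epochs). The authors implemented their system in MATLAB; this PyTorch snippet and the `corneal_loader` data loader are hypothetical stand-ins used only to illustrate the transfer learning recipe, not the published implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 8          # 4 corneal maps x {normal, abnormal}
EPOCHS, LR = 20, 1e-3    # training settings reported in the paper

# Load an ImageNet-pretrained backbone and replace its classification head.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=LR)

def train(model, corneal_loader, device="cuda"):
    """Fine-tune the backbone on (image, label) corneal batches.

    `corneal_loader` is assumed to yield 224x224 RGB tensors and integer labels.
    """
    model.to(device).train()
    for epoch in range(EPOCHS):
        for images, labels in corneal_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```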

2.3. Generating Synthetic Cornea Images

A lack of data negatively affects the diagnostic rate [43], and this is the fundamental challenge in model training [44]. We synthesized new examples learnt from the existing data, using generative adversarial networks (GANs) to produce synthetic corneal images, in order to expand the training data and enhance diagnostic rates. GANs are deep CNN networks that generate new data, such as images, from previously seen training data [45]. For synthesizing labelled images of the cornea, we employed conditional GANs (CGANs) [46]. The CGAN model used in this work (see Figure 2) consists of two networks that compete against one another to achieve a common goal: learning the data distribution p_data from samples (images, in our work). The first network, called the generator G, produces an image G(z) from noise z drawn from a uniform distribution P_z, conditioned on the class label of the image to be generated; the label acts as an auxiliary input that helps the model synthesize images close to reality. The second network, called the discriminator D, tries to discern between real and fake images: its input is an image x and its output is D(x), and it compares the image created by the rival network with the real images. The loss function shown in Equation (1) is optimized to train the adversarial networks [47].
$$\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim P_z}\left[\log\left(1 - D(G(z))\right)\right] \qquad (1)$$
where D is trained to maximize D(x) for images drawn from the real data and to minimize D(G(z)) for images that are not real. The generator, on the other hand, seeks to trick the discriminator by producing an image G(z) that maximizes D(G(z)). The two networks remain in competition throughout the training phase, with the generator attempting to improve its performance to deceive the discriminator, while the latter learns to distinguish between the real and fake images.
The generator accepts a vector of 100 random numbers drawn from a uniform distribution, and this vector is projected and reshaped into a 4 × 4 × 1024 feature map. The architecture uses four deconvolution (transposed convolution) layers with 5 × 5 filters to up-sample the image, and the output is an image of size 64 × 64 × 3. Batch normalization and ReLU activation functions are used in every layer except the last. The discriminator takes a corneal image of size 64 × 64 × 3 and, using four convolutional layers with 5 × 5 filters, outputs a class label in addition to the real-or-fake decision. To reduce the spatial dimensionality, strided convolution is used in each layer. Batch normalization and ReLU were also applied in each layer (except the fully connected layer).
The CGAN was trained separately for every corneal image category, iterating between updates of the discriminator and the generator. The noise samples z_1, …, z_n were drawn from a uniform distribution in the range [−1, 1], with n = 100. The leaky ReLU slope was 0.2. Weights were initialized from a zero-centered normal distribution with a standard deviation of 0.02. Moreover, we used the Adam optimizer for 20 epochs, with a learning rate of 0.0001. Figure 2 illustrates the structure of the proposed system.
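As an illustration of the generator geometry described above (a 100-dimensional noise vector reshaped to 4 × 4 × 1024 and up-sampled through four transposed convolutions with 5 × 5 filters to a 64 × 64 × 3 image, with batch normalization and ReLU on all but the last layer), the sketch below shows one plausible PyTorch realization. The label conditioning here is done by concatenating an embedded class label to the noise vector, which is a common CGAN choice assumed for illustration; it is not the authors' exact MATLAB implementation.

```python
import torch
import torch.nn as nn

class CGANGenerator(nn.Module):
    """Conditional generator: 100-d noise + class label -> 64x64x3 corneal image."""
    def __init__(self, n_classes=8, z_dim=100, embed_dim=16):
        super().__init__()
        self.embed = nn.Embedding(n_classes, embed_dim)
        # Project noise + label embedding to a 4x4x1024 feature map.
        self.project = nn.Linear(z_dim + embed_dim, 4 * 4 * 1024)

        def up(cin, cout, last=False):
            # 5x5 transposed convolution that doubles the spatial size.
            layers = [nn.ConvTranspose2d(cin, cout, kernel_size=5, stride=2,
                                         padding=2, output_padding=1)]
            if not last:
                layers += [nn.BatchNorm2d(cout), nn.ReLU(inplace=True)]
            else:
                layers += [nn.Tanh()]  # last layer: no BN/ReLU, image in [-1, 1]
            return nn.Sequential(*layers)

        # Four 5x5 transposed-conv layers: 4 -> 8 -> 16 -> 32 -> 64 pixels.
        self.deconv = nn.Sequential(up(1024, 512), up(512, 256),
                                    up(256, 128), up(128, 3, last=True))

    def forward(self, z, labels):
        x = torch.cat([z, self.embed(labels)], dim=1)
        x = self.project(x).view(-1, 1024, 4, 4)
        return self.deconv(x)

# Example: synthesize one image for class index 1 (label indexing is assumed).
g = CGANGenerator()
noise = torch.rand(1, 100) * 2 - 1          # uniform noise in [-1, 1]
fake = g(noise, torch.tensor([1]))          # tensor of shape (1, 3, 64, 64)
```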

3. Results

The goal of this research, all of whose steps are outlined in Algorithm 1, is to find out to what extent generated data affect the diagnosis of corneal diseases and how well classifiers can classify them. The CGAN model was therefore trained to deal with data disparities; in other words, it was trained separately for each corneal class to generate high-quality topographical images, with fine-tuned parameters, to relieve the scarcity of the cornea dataset. To support clinical decisions, transfer learning methods were exploited, with the augmented dataset used to train the networks.
Algorithm 1. Algorithm of the proposed method.
Inputs: D: dataset; img: a cornea image selected from D
1: GI = Build a model M that generates images from noise, targeting D
2: for i = 1 to number of CNN classifiers   // MobilenetV2, Resnet50, Xception, ViT, CoaT, and Swin-T
3:     [accuracy, precision, recall, F1-score] = Calculate metrics [Accuracy, Precision, Recall, F1-score] on GI
4: end for
5: [SSIM, MSE, PSNR, FID] = Calculate [SSIM, MSE, PSNR, FID] between an image from GI and D
6: end
The results of diagnosing corneal diseases are reported using different types of transfer learning models, such as MobileNetv2, Resnet50, and Xception.
To detect the importance of data generation, as well as its effect on classification tasks, we used the original dataset to train and test each classifier with and without corneal-generated images.
On the other hand, to assess the strength of the synthesis model and its ability to synthesize data that are consistent within a particular category and distinct from other categories, each classifier was also trained on the synthesized data alone, without the original data. We employed eight-fold cross-validation with case separation at the patient level in all of our experiments and evaluations. The examples used covered all corneal cases (normal or abnormal for each corneal map).
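A minimal sketch of the patient-level split is shown below, using scikit-learn's GroupKFold so that all images from the same patient fall in the same fold; the file names and the `patient_ids` array are hypothetical bookkeeping details, since the paper does not describe its exact data layout.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# X: image array, y: class labels, patient_ids: one id per image (assumed layout).
X = np.load("corneal_images.npy")       # placeholder file names for illustration
y = np.load("labels.npy")
patient_ids = np.load("patient_ids.npy")

gkf = GroupKFold(n_splits=8)
for fold, (train_idx, test_idx) in enumerate(gkf.split(X, y, groups=patient_ids)):
    # No patient appears in both the training and the test split of a fold.
    X_train, y_train = X[train_idx], y[train_idx]
    X_test, y_test = X[test_idx], y[test_idx]
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test images")
```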
For each batch of data images, we trained the network and assessed the outcomes individually. The CGAN architecture was used to train each corneal-case class separately, using the same eight-fold cross-validation method and data split. Following training, the generator is capable of creating realistic corneal case images for each class from a vector of noise drawn from a uniform distribution (see Figure 3). Accordingly, the model synthesized eight different cases of corneal images: normal and abnormal cases for the sagittal, corneal thickness, elevation front, and elevation back maps.
We employed two main kinds of metrics in our research. First, we used classification metrics, namely accuracy, precision, recall, and F1-score (Equations (2), (3), (4), and (5), respectively). Second, we used Equations (6) and (7) to evaluate the quality of the synthesized images against the original images via the structural similarity index method (SSIM) [48] and the peak signal-to-noise ratio (PSNR) [49].
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (2)$$
$$\mathrm{Precision} = \frac{TP}{TP + FP} \qquad (3)$$
$$\mathrm{Recall} = \frac{TP}{TP + FN} \qquad (4)$$
$$\mathrm{F1\text{-}Score} = \frac{2\,TP}{2\,TP + FP + FN} \qquad (5)$$
where TP = true positives, TN = true negatives, FP = false positives, and FN = false negatives.
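For reference, Equations (2)-(5) can be computed directly from the confusion-matrix counts. The snippet below is a small illustrative helper for the binary case, not part of the authors' MATLAB pipeline; the example counts are made up.

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Accuracy, precision, recall, and F1-score from confusion-matrix counts,
    following Equations (2)-(5)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1_score": f1}

# Example with hypothetical counts.
print(classification_metrics(tp=80, tn=90, fp=10, fn=20))
```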
Structural similarity (SSIM) [48] is an image quality measure, computed via Equation (6), between the estimated image $y_e^L$ and the ground-truth image $y_t^L$.
$$\mathrm{SSIM}(y_t^L, y_e^L) = \frac{1}{M}\sum_{j=1}^{M} \frac{\left(2\mu_j^t \mu_j^e + c_1\right)\left(2\sigma_j^{te} + c_2\right)}{\left((\mu_j^t)^2 + (\mu_j^e)^2 + c_1\right)\left((\sigma_j^t)^2 + (\sigma_j^e)^2 + c_2\right)} \qquad (6)$$
In contrast, the peak signal-to-noise ratio (PSNR) [49], computed with Equation (7), is an objective assessment based on comparison against particular numerical criteria [50,51]; a higher PSNR value indicates better image quality, while image pairs with large numerical differences fall at the low end of the PSNR scale [52,53].
$$\mathrm{PSNR}(f, g) = 10 \log_{10}\!\left(\frac{255^2}{\mathrm{MSE}(f, g)}\right) \qquad (7)$$
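The image-quality metrics of Equations (6) and (7) are also available in common libraries; a short sketch using scikit-image is shown below. The authors report implementing their system in MATLAB R2020b, so this is purely illustrative, and the file names are placeholders.

```python
from skimage import io
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

# Placeholder file names: one real corneal map and one CGAN-synthesized counterpart.
real = io.imread("real_sagittal_map.png")
fake = io.imread("generated_sagittal_map.png")

# SSIM over the RGB channels (Equation (6)); data_range matches 8-bit images.
ssim = structural_similarity(real, fake, channel_axis=-1, data_range=255)

# PSNR with a peak value of 255 (Equation (7)); higher means closer to the original.
psnr = peak_signal_noise_ratio(real, fake, data_range=255)

print(f"SSIM = {ssim:.3f}, PSNR = {psnr:.3f} dB")
```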
MATLAB R2020b was used to implement the corneal diagnosis system. All training was performed on an NVIDIA GeForce GTX 1660 GPU.
Using the above-mentioned metrics for the different classifiers, poor results were recorded when no synthesized data were used; this might be due to overfitting caused by the small number of training images. Conversely, using the CGAN model, the results improved as the number of training instances grew (see Table 2).
Since our image data are unbalanced, we also examined how corneal diagnosis would be affected if a balanced dataset were available. We therefore used traditional data-balancing methods, resampling the data with both approaches to turn the imbalanced dataset into a balanced one. The first approach was undersampling (keeping all samples of the rare class and randomly selecting an equal number of samples from the abundant class); the second was oversampling (increasing the number of rare samples by repetition). These two approaches were applied to the data both before and after image generation.
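The two balancing strategies can be expressed compactly with scikit-learn's resample utility; the sketch below is an illustrative outline of the idea (per-class random undersampling to the rarest class size, or oversampling with repetition to the largest class size), under assumed array inputs, not the authors' exact procedure.

```python
import numpy as np
from sklearn.utils import resample

def balance(X, y, mode="oversample", seed=0):
    """Return a class-balanced copy of (X, y) by random resampling.

    mode="undersample": shrink every class to the size of the rarest class.
    mode="oversample":  grow every class (with repetition) to the largest class size.
    """
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max() if mode == "oversample" else counts.min()
    Xb, yb = [], []
    for c in classes:
        Xc = X[y == c]
        Xr = resample(Xc, replace=(mode == "oversample"),
                      n_samples=target, random_state=seed)
        Xb.append(Xr)
        yb.append(np.full(target, c))
    return np.concatenate(Xb), np.concatenate(yb)

# Example: balance the eight corneal classes before training the classifiers.
# X_bal, y_bal = balance(X_train, y_train, mode="oversample")
```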
The results show that, generally, when data resampling was applied to the original data (before using the CGAN model), the classifiers achieved better performance once the data were balanced. Moreover, for all classifiers, training on oversampled synthesized data outperformed training on undersampled synthesized data. On the other hand, applying oversampling to the generated data (after implementing the CGAN model) barely affected the classifier results, since the data were already large enough to train the models correctly. In contrast, undersampling negatively affected classifier performance because it reduced the data again (see Table 3).
The question of whether the set of generated images was sufficiently distinct to allow classification between the corneal case categories was investigated with the help of an expert ophthalmologist. We provided him with 500 randomly generated images from the various categories to classify and diagnose. Table 4 summarizes the findings, and Table 5 shows the average SSIM and PSNR for a random selection of 100 images.
The SSIM and PSNR were calculated before and after training the CGAN model on a random sample of 100 images. Table 5 shows that the model can generate synthetic images very close to the originals. Therefore, we can consider those images legitimate for training CNN models, and ophthalmologists can use them in clinical research.
The CNN classifiers were tested repeatedly in this work to time the testing process. According to Table 6, the suggested model can be applied in real time, since testing an image takes only a fraction of a second. While the CoaT model requires the longest average test time (ATT), ViT achieves a shorter ATT than all the other classifiers.
The high quality of the images can be seen in the images synthesized from the test images using the CGAN model, which are displayed in Figure 4. The stability of the structures and morphologies of the images is also noticeable.

4. Discussion

The objective of this work was to apply the CGAN model to generate synthetic medical images for data augmentation, in order to expand limited datasets and improve clinical decision-making for corneal diseases. Thus, we investigated the extent to which synthetic corneal images help another system perform better behind the scenes. The study used a small dataset comprising the sagittal, corneal thickness, elevation front, and elevation back corneal maps. Each class has its own distinct characteristics, although there is considerable intra-class variation. Our diagnosis was based on the four maps, each of which was examined to determine whether it was normal or diseased. To identify corneal disorders, a variety of transfer learning architectures were employed. We found that by utilizing the CGAN model to synthesize extra realistic images, we could increase the size of the training data, thereby improving the clinical decision. The diagnostic accuracy of the MobileNetV2, Resnet50, Xception, ViT, CoaT, and Swin-T classifiers improved from 75.2% to 88.6%, 77.13% to 90.5%, 78.9% to 90.7%, 71.2% to 88.7%, 65.6% to 69.3%, and 58.4% to 63.4%, respectively. The results in Table 2 show that the generated synthetic data samples can increase the variability of the input dataset, resulting in more accurate clinical decisions.
The scores demonstrate that the synthesized images are visually useful and, more crucially, contain useful characteristics that may be exploited in computer-aided diagnosis. The other aspect of this research was to test the effect of data balance on diagnostic results, for which we used the resampling method to balance the dataset. The results showed that balancing the dataset before generating new data is very important, especially in circumstances where data are scarce. By contrast, we did not notice a significant impact on classifier performance when applying resampling to the generated data, because those data were already sufficient and suitable for training the models without the need for balancing methods. This is clear evidence of the importance of the model proposed in this paper. In a final experiment, we compared the performance of the classifier-based systems employed in this study for clinical decision-making (Table 3). On synthesized data, the highest performance was achieved by the Xception classifier, whereas on balanced data the best performance came from Resnet50 with the oversampling approach and from ViT with the undersampling approach.
This work has several limitations. For example, the training complexity was increased by training a distinct GAN for each corneal case class; it might be useful to look into GAN designs that produce multi-class samples at the same time. Another type of GAN learning process might further increase the quality of the corneal images. It is also possible to do more research to improve the training loss function by adding regularization terms.
Because the human factor is critical in evaluating the proposed model’s outputs, an expert opinion was obtained after providing the expert with a set of generated corneal images containing a randomly selected mix of normal and abnormal cases. The expert’s opinion was as follows: “Creating a new template for the corneal topographical four refractive maps is considered an interesting subject, as it enriches the overall expected shapes that could be seen during the daily clinic. These new images, which were created based on real cases collected and diagnosed previously, remain inside the reality borderlines. They give good experience with the new shapes and help specify the further required steps of a diagnosis, other than the topographical maps, that could be specified in advance for predicted out-of-skim cases. In such a way, offline training for new ophthalmologists and improvement of their diagnostic skill, with preparation for new unseen cases, could be done.” In the future, we aim to develop this research further by exploiting other GANs that might benefit corneal image synthesis for better results.

5. Conclusions

In conclusion, we proposed a strategy for improving performance on a medical problem with little data by generating synthetic medical images for data augmentation. On a corneal disease diagnosis task, we found that synthetic data augmentation beat traditional data augmentation in accuracy by roughly 13%. Additionally, we investigated the performance of the classifiers under different conditions and found that, when working with cornea images to diagnose diseases, the Xception classifier is more responsive than the other classifiers used. We anticipate that synthetic augmentation can help with a variety of medical problems, and that the method we have outlined can lead to more powerful and reliable support systems.

Author Contributions

Conceptualization, S.K.J. and S.A.; methodology, S.K.J., S.A., N.H.G. and J.M.; software, S.K.J. and S.A.; validation, S.A., N.H.G. and J.M.; formal analysis, S.K.J. and S.A.; investigation, N.H.G. and J.M.; resources, S.K.J., S.A., N.H.G. and J.M.; data curation, S.K.J., S.A., N.H.G. and J.M.; writing—original draft preparation, S.K.J., S.A., N.H.G. and J.M.; writing—review and editing, T.A.R. and S.Q.S.; visualization, T.A.R.; supervision, T.A.R.; project administration; funding acquisition, S.Q.S. and P.S.J. All authors have read and agreed to the published version of the manuscript.

Funding

Dr. P. S. JosephNg, Faculty of Data Science & Information Technology, INTI International University, Persiaran Perdana BBN, 71800 Nilai, Negeri Sembilan, Malaysia.

Institutional Review Board Statement

The study was conducted in the ethical manner advised by the targeted journal.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data can be shared upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest to any party.

References

  1. Tsai, Y.Y.; Chen, P.Y.; Ho, T.Y. Transfer learning without knowing: Reprogramming black-box machine learning models with scarce data and limited resources. In International Conference on Machine Learning; PMLR; IBM: New York, NY, USA, 2020; pp. 9614–9624. [Google Scholar]
  2. Yaniv, G.; Moradi, M.; Bulu, H.; Guo, Y.; Compas, C.; Syeda-Mahmood, T. Towards an efficient way of building annotated medical image collections for big data studies. In Intravascular Imaging and Computer Assisted Stenting, and Large-Scale Annotation of Biomedical Data and Expert Label Synthesis; Springer: Cham, Switzerland, 2017; pp. 87–95. [Google Scholar]
  3. Minnema, J.; van Eijnatten, M.; Kouw, W.; Diblen, F.; Mendrik, A.; Wolff, J. CT image segmentation of bone for medical additive manufacturing using a convolutional neural network. Comput. Biol. Med. 2018, 103, 130–139. [Google Scholar] [CrossRef] [Green Version]
  4. Alvén, J. Improving Multi-Atlas Segmentation Methods for Medical Images. Master’s Thesis, Chalmers Tekniska Hogskola, Göteborg, Sweden, 2017. [Google Scholar]
  5. Zhong, Z.; Zheng, L.; Kang, G.; Li, S.; Yang, Y. Random erasing data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 13001–13008. [Google Scholar]
  6. Jain, S.; Seth, G.; Paruthi, A.; Soni, U.; Kumar, G. Synthetic data augmentation for surface defect detection and classification using deep learning. J. Intell. Manuf. 2020, 33, 1007–1020. [Google Scholar] [CrossRef]
  7. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, Montreal, QC, Canada, 8–13 December 2014. [Google Scholar]
  8. Zhao, J.; Mathieu, M.; LeCun, Y. Energy-based generative adversarial network. arXiv 2016, arXiv:1609.03126. [Google Scholar]
  9. Yi, X.; Walia, E.; Babyn, P. Generative adversarial network in medical imaging: A review. Med. Image Anal. 2019, 58, 101552. [Google Scholar] [CrossRef] [Green Version]
  10. Phillip, I.; Zhu, J.; Zhou, T.; Efros, A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134. [Google Scholar]
  11. Marcelo, B.; Sapiro, G.; Caselles, V.; Ballester, C. Image inpainting. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA, 23–28 July 2000; pp. 417–424. [Google Scholar]
  12. Yunjey, C.; Choi, M.; Kim, M.; Ha, J.; Kim, S.; Choo, J. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision And Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8789–8797. [Google Scholar]
  13. Yang, Q.; Li, N.; Zhao, Z.; Fan, X.; Eric, I.; Chang, C.; Xu, Y. MRI cross-modality image-to-image translation. Sci. Rep. 2020, 10, 3753. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Yuxi, W.; Zhang, Z.; Hao, W.; Song, C. Multi-Domain Image-to-Image Translation via a Unified Circular Framework. IEEE Trans. Image Process. 2020, 30, 670–684. [Google Scholar]
  15. Costa, P.; Galdran, A.; Meyer, M.I.; Abramoff, M.D.; Niemeijer, M.; Mendonca, A.M.; Campilho, A. Towards adversarial retinal image synthesis. arXiv 2017, arXiv:1701.08974. [Google Scholar]
  16. Dai, W.; Doyle, J.; Liang, X.; Zhang, H.; Dong, N.; Li, Y.; Xing, E.P. Scan: Structure correcting adversarial network for chest x-rays organ segmentation. arXiv 2017, arXiv:1703.08770. [Google Scholar]
  17. Xue, Y.; Xu, T.; Zhang, H.; Long, L.R.; Huang, X. Segan: Adversarial network with multi-scale L1 loss for medical image segmentation. Neuroinformatics 2018, 16, 383–392. [Google Scholar] [CrossRef] [Green Version]
  18. Dong, N.; Trullo, R.; Lian, J.; Petitjean, C.; Ruan, S.; Wang, Q.; Shen, D. Medical image synthesis with context-aware generative adversarial networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention, Proceedings of the 20th International Conference, Quebec City, QC, Canada, 11–13 September 2017; Springer: Cham, Switzerland, 2017; pp. 417–425. [Google Scholar]
  19. Thomas, S.; Seeböck, P.; Schmidt-Erfurth, S.M.W.U.; Langs, G. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In International Conference on Information Processing in Medical Imaging, Proceedings of the 25th International Conference, IPMI 2017, Boone, NC, USA, 25–30 June 2017; Springer: Cham, Switzerland, 2017; pp. 146–157. [Google Scholar]
  20. Jameel, S.K.; Aydin, S.; Ghaeb, N.H. Machine Learning Techniques for Corneal Diseases Diagnosis: A Survey. Int. J. Image Graph. 2021, 21, 2150016. [Google Scholar] [CrossRef]
  21. Ruchi, S.; Amador, C.; Tormanen, K.; Ghiam, S.; Saghizadeh, M.; Arumugaswami, V.; Kumar, A.; Kramerov, A.A.; Ljubimov, A.V. Systemic diseases and the cornea. Exp. Eye Res. 2021, 204, 108455. [Google Scholar]
  22. Jameel, S.K.; Aydin, S.; Ghaeb, N.H. Local information pattern descriptor for corneal diseases diagnosis. Int. J. Electr. Comput. Eng. 2021, 11, 4972–4981. [Google Scholar] [CrossRef]
  23. Shanthi, S.; Aruljyothi, L.; Balasundaram, M.B.; Janakiraman, A.; Nirmaladevi, K.; Pyingkodi, M. Artificial intelligence applications in different imaging modalities for corneal topography. Surv. Ophthalmol. 2021, 67, 801–816. [Google Scholar] [CrossRef]
  24. Nazar, S.; Hussein, N. Vector machine. Int. J. Curr. Res. 2018, 10, 75461–75467. [Google Scholar]
  25. Ikram, I.; Rozema, J.; Consejo, A. Corneal modeling and Keratoconus identification. Biomath Commun. Suppl. 2018, 5, 1. [Google Scholar]
  26. Arbelaez, M.C.; Versaci, F.; Vestri, G.; Barboni, P.; Savini, G. Use of a support vector machine for keratoconus and subclinical keratoconus detection by topographic and tomographic data. Ophthalmology 2012, 119, 2231–2238. [Google Scholar] [CrossRef] [PubMed]
  27. Lopes, B.T.; Ramos, I.C.; Dawson, D.G.; Belin, M.W.; Ambrósio, R., Jr. Detection of ectatic corneal diseases based on pentacam. Z. Med. Phys. 2016, 26, 136–142. [Google Scholar] [CrossRef]
  28. Jameel, S.K.; Aydin, S.; Ghaeb, N.H. SWFT: Subbands wavelet for local features transform descriptor for corneal diseases diagnosis. Turk. J. Electr. Eng. Comput. Sci. 2021, 29, 875–896. [Google Scholar]
  29. Albawi, S.; Mohammed, T.A.; Al-Zawi, S. Understanding of a convolutional neural network. In Proceedings of the 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 21–23 August 2017; IEEE: Bellevue, WA, USA, 2017; pp. 1–6. [Google Scholar]
  30. Xu, L.; Ren, J.S.; Liu, C.; Jia, J. Deep convolutional neural network for image deconvolution. Adv. Neural Inf. Process. Syst. 2014, 27, 1790–1798. [Google Scholar]
  31. Li, Q.; Cai, W.; Wang, X.; Zhou, Y.; Feng, D.D.; Chen, M. Medical image classification with convolutional neural network. In Proceedings of the 2014 13th International Conference on Control Automation Robotics & Vision (ICARCV), Singapore, 10–12 December 2014; IEEE: Bellevue, WA, USA, 2014; pp. 844–848. [Google Scholar]
  32. Sinjab, M.M. Corneal Tomography in Clinical Practice (Pentacam System): Basics & Clinical Interpretation; Jaypee Brothers Medical Publishers: New Delhi, India, 2018. [Google Scholar]
  33. Hashemi, H.; Mehravaran, S. Day to day clinically relevant corneal elevation, thickness, and curvature parameters using the orbscan II scanning slit topographer and the pentacam scheimpflug imaging device. Middle East Afr. J. Ophthalmol. 2010, 17, 44–55. [Google Scholar]
  34. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4510–4520. [Google Scholar]
  35. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  36. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
  37. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  38. Xu, W.; Xu, Y.; Chang, T.; Tu, Z. Co-scale conv-attentional image transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 9981–9990. [Google Scholar]
  39. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10012–10022. [Google Scholar]
  40. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  41. Toğaçar, M.; Cömert, Z.; Ergen, B. Intelligent skin cancer detection applying autoencoder, MobileNetV2 and spiking neural networks. Chaos Solitons Fractals 2021, 144, 110714. [Google Scholar] [CrossRef]
  42. Chen, C.F.R.; Fan, Q.; Panda, R. Crossvit: Cross-attention multi-scale vision transformer for image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 357–366. [Google Scholar]
  43. Eslam, M.; Sarin, S.K.; Wong, V.W.S.; Fan, J.G.; Kawaguchi, T.; Ahn, S.H.; Zheng, M.; Shiha, G.; Yilmaz, Y.; Gani, R.; et al. The Asian Pacific Association for the Study of the Liver clinical practice guidelines for the diagnosis and management of metabolic associated fatty liver disease. Hepatol. Int. 2020, 14, 889–919. [Google Scholar] [CrossRef]
  44. Jammel, S.K.; Majidpour, J. Generating Spectrum Images from Different Types—Visible, Thermal, and Infrared Based on Autoencoder Architecture (GVTI-AE). Int. J. Image Graph. 2021, 22, 2250005. [Google Scholar] [CrossRef]
  45. Sorin, V.; Barash, Y.; Konen, E.; Klang, E. Creating artificial images for radiology applications using generative adversarial networks (GANs)—A systematic review. Acad. Radiol. 2020, 27, 1175–1185. [Google Scholar] [CrossRef] [PubMed]
  46. Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784. [Google Scholar]
  47. Majidpour, J.; Jammel, S.K.; Qadir, J.A. Face Identification System Based on Synthesizing Realistic Image using Edge-Aided GANs. Comput. J. 2021. [Google Scholar] [CrossRef]
  48. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  49. Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; IEEE: Bellevue, WA, USA, 2010; pp. 2366–2369. [Google Scholar]
  50. Cadik, M.; Slavik, P. Evaluation of two principal approaches to objective image quality assessment. In Proceedings of the Eighth International Conference on Information Visualisation, IV 2004, London, UK, 14–16 July 2004; IEEE: Bellevue, WA, USA, 2004; pp. 513–518. [Google Scholar]
  51. Nguyen, T.B.; Ziou, D. Contextual and non-contextual performance evaluation of edge detectors. Pattern Recognit. Lett. 2000, 21, 805–816. [Google Scholar] [CrossRef]
  52. Elbadawy, O.; El-Sakka, M.R.; Kamel, M.S. An information theoretic image-quality measure. In Conference Proceedings, IEEE Canadian Conference on Electrical and Computer Engineering (Cat. No. 98TH8341), Toronto, ON, Canada, 25–28 May 1998; IEEE: Bellevue, WA, USA, 1998; Volume 1, pp. 169–172. [Google Scholar]
  53. Dosselmann, R.; Yang, X.D. Existing and emerging image quality metrics. In Proceedings of the Canadian Conference on Electrical and Computer Engineering, Saskatoon, SK, Canada, 1–4 May 2005; IEEE: Bellevue, WA, USA, 2005; pp. 1906–1913. [Google Scholar]
Figure 1. The four corneal maps: (a) sagittal, (b) elevation front, (c) corneal thickness, and (d) elevation back maps.
Figure 2. Structure of the proposed system.
Figure 3. Samples of the generated images for different corneal conditions using the conditional generative adversarial network (CGAN) model: (a) normal and abnormal corneal sagittal maps; (b) normal and abnormal corneal thickness maps; (c) normal and abnormal corneal elevation front maps; (d) normal and abnormal corneal elevation back maps.
Figure 4. Example of original and synthesized images.
Table 1. Values of the parameters used in the classifiers (million).

Method        Image Size   Parameters
MobilenetV2   224 × 224    3.5
Resnet50      224 × 224    25.6
Xception      299 × 299    22.9
ViT           128 × 128    36.3
CoaT          224 × 224    22
Swin-T        224 × 224    29
Table 2. Performance comparison for classification of corneal conditions among the models (%).

Classifier    Data         Accuracy  Precision  Recall  F1-Score
MobilenetV2   Original     75.2      72.4       73.2    72.3
              Synthesized  88.6      86.5       89.8    87.5
Resnet50      Original     77.13     74.6       74.6    74.3
              Synthesized  90.5      90         90.4    90.1
Xception      Original     78.9      75.6       75.7    75.1
              Synthesized  90.7      90         90.6    90.2
ViT           Original     71.2      68.2       68.1    67
              Synthesized  88.7      90.7       84.4    86.2
CoaT          Original     65.6      64.9       65.2    65.1
              Synthesized  69.3      68.1       68.4    68.2
Swin-T        Original     58.4      56.3       57.5    56.9
              Synthesized  63.4      62.5       62.7    62.6
Table 3. Performance comparison for classifying corneal conditions among the models after balancing the data (%).

                           Accuracy        Precision       Recall          F1-Score
Classifier    Data         OVS    UNS      OVS    UNS      OVS    UNS      OVS    UNS
MobilenetV2   Original     85.5   75.4     85.7   76.2     85.5   75.4     85.3   75.4
              Synthesized  88.5   81.1     88.8   81.6     88.4   81.1     88.4   81
Resnet50      Original     86.36  75.7     86.3   76       86.6   75.7     86.3   75.5
              Synthesized  90.2   82.8     90.8   83       90.8   82.8     90.8   82.7
Xception      Original     86     77.3     86.2   78       86     77.3     85.9   76.7
              Synthesized  90     82.7     90.3   82.7     90     82.6     90     82.6
ViT           Original     74.5   70.9     73.2   68.4     72.8   69.5     73     68.9
              Synthesized  89.8   86.1     88.2   85.5     88.9   85.6     88.5   85.6
CoaT          Original     69.4   63.7     69.1   62.6     68.9   62.8     69     62.7
              Synthesized  73.8   66.8     72.6   65.7     72.9   65.9     72.7   65.8
Swin-T        Original     60.2   56.9     59.8   56.5     58.9   56.5     59.3   56.5
              Synthesized  65.6   61.7     64.6   60.8     64     60       64.3   60.4
OVS: oversampling, UNS: undersampling.
Table 4. The results from the expert (%).

                          Sagittal Images  CT Images  EF and EB Images
Diagnosis by an Expert    0.94             0.98       0.93
Table 5. Average structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) for 100 random images.

SSIM    PSNR
0.872   33.221
Table 6. Convolutional neural network (CNN) classifiers’ average test time (ATT) (s).

MobilenetV2  Resnet50  Xception  ViT     CoaT    Swin-T
0.0258       0.0187    0.0152    0.0108  0.0342  0.0203
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
