Article

Hybrid Deep Convolutional Generative Adversarial Network (DCGAN) and Xtreme Gradient Boost for X-ray Image Augmentation and Detection

Faculty of Computing and Information Technology in Rabigh, King Abdulaziz University, Jeddah 21589, Saudi Arabia
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(23), 12725; https://doi.org/10.3390/app132312725
Submission received: 31 October 2023 / Revised: 12 November 2023 / Accepted: 23 November 2023 / Published: 27 November 2023
(This article belongs to the Special Issue Deep Learning and Edge Computing for Internet of Things)

Abstract

The COVID-19 pandemic has had a widespread global impact, leading numerous nations to prepare for the endemicity of COVID-19. The polymerase chain reaction (PCR) swab test has emerged as the prevailing technique for identifying viral infections during the pandemic. Chest X-ray imaging provides an alternative approach for evaluating the presence of viral infection. However, the quality of collected chest X-ray images must be further improved via additional data augmentation. The aim of this paper is to provide a technique for the automated analysis of X-ray images using server-side processing with a deep convolutional generative adversarial network (DCGAN). The proposed methodology improves the overall quality of X-ray scans. The DCGAN technique is integrated with deep learning and Xtreme Gradient Boosting to improve the quality of X-ray images processed on the server. The training model employed in this work is based on the Inception V3 learning model combined with XGradient Boost. The training produced strong results: an accuracy rate of 98.86%, a sensitivity score of 99.1%, and a recall rate of 98.7%.

1. Introduction

The respiratory infection known as COVID-19, colloquially referred to as the coronavirus, has affected a considerable portion of the global population. According to recent statistical data, the cumulative number of global infections has exceeded 243 million cases, with Saudi Arabia reporting over 801,000 cases. The fatality rate in Saudi Arabia stands at 1.15%, amounting to 9223 recorded deaths [1,2]. Despite the widespread administration of vaccinations to a majority of the population, the health authority emphasizes the continued importance of adhering to health precautions. COVID-19 manifests a range of symptoms, including elevated body temperature, emesis, and gastrointestinal distress in the form of diarrhea. X-ray imaging can be utilized as a comprehensive diagnostic tool to assess the impact of viral infections on the human lungs and their potential to spread to other organs. The possibility of analyzing COVID-19 in X-ray images via visual inspection presents an opportunity to employ image processing and deep learning techniques for the comprehensive investigation of infected areas and the prediction of future spread. The utilization of convolutional neural networks (CNNs) for deep learning has been widely acknowledged as a robust method for the detection and recognition of medical images [3,4]. Deep learning has substantially assisted in the classification of COVID-19 via the use of X-rays. Other researchers have proposed an automatic detection process using a deep learning method; their method applies adaptive median filtering and histogram equalization [5]. Neha Gianchandani et al. proposed a method for rapid COVID-19 diagnosis using ensemble deep transfer learning models from chest radiographic images, designing two deep learning models that utilize chest X-ray images to diagnose COVID-19 [6]. Abdullahi Umar Ibrahim et al. used a deep learning approach based on a pre-trained AlexNet model to classify COVID-19, pneumonia, and normal CXR scans obtained from different public databases [48]. Ebenezer Jangam et al. used a stacked ensemble of VGG 19 and DenseNet 169 models to detect COVID-19, achieving high accuracy and recall on five different datasets consisting of chest X-ray images and chest CT scans [7]. The advancement of technologies for diagnosing COVID-19 from X-ray images has several advantages, as shown in Table 1.

1.1. Motivation of Study

During the pandemic, chest X-rays and computed tomography (CT) showed potential for the clinical diagnosis and evaluation of COVID-19 patients. These approaches are useful for doctors performing diagnosis and therapy. Akyol and Şen used automatic detection with a bidirectional LSTM network and deep features [8]. They relied on the Bi-LSTM network over deep features and compared it with a deep neural network using a fivefold cross-validation approach. Their model was trained for 100 epochs with a batch size of 32. The fourth section of their model handled the activation function, with two neurons representing the classes COVID-19 and no-finding [8]. Radiological studies have established chest imaging as a diagnostic tool for COVID-19; it can also be used to measure the chronicity of the disease or complications in patients [9]. Most researchers rely on the visual interpretation of chest X-ray images, which requires an expert such as a radiologist or medical doctor. The densities of chest X-rays may vary depending on the severity of the disease. Figure 1 shows the varying densities of chest X-ray images.

1.2. State of the Art

The utilization of artificial intelligence (AI) equipped with deep learning capabilities has become prevalent throughout the pandemic for assessing chest X-ray images. The present study aims to develop a research design for the identification of anomalies in the lung images of patients during the initial screening process. Other researchers have also utilized artificial intelligence to identify COVID-19 based on cough sounds. These researchers directed their attention towards the analysis of cough sounds obtained from several datasets, with the objective of detecting and diagnosing respiratory diseases. This was achieved via a support vector machine (SVM) classifier and linear regression techniques. Artificial neural networks (ANNs) and the random forest (RF) classifier have also been used to analyze cough patterns and subsequently determine the severity of respiratory diseases in patients [10]. Respiratory conditions, such as asthma and cough, can be detected via the application of Wigner distribution methodologies within a low-noise setting [10].
Another study focused on the problem of classifying COVID-19 patients from healthy people using a limited public dataset. The authors used a two-step transfer learning model to train on small datasets [11]. In the first step, they pre-trained a deep residual network on a large pneumonia dataset, obtaining satisfactory results for COVID-19 detection in healthy and pneumonia-infected X-ray images. Their proposed model used two types of layers on top of the pre-trained ResNet. They applied a feature smoothing layer (FSL) based on 1 × 1 convolution to maintain the shape of the tensors, smoothing the pre-trained tensors to acquire the characteristics of the new image input. The FSL is then combined with residual blocks for further processing by a feature extraction layer (FEL).
The FEL has double channels and three operations to combine a feature map into a single dimension [11]. The input X-ray images are 224 × 224 pixels with three channels in ResNet34 and are fed into three residual blocks inside ResNet34. The FSL is inserted before and after the second residual block, while the FEL is placed after the third residual block; the final training uses a 512 × k layer (k refers to the number of classes) with a limited number of COVID-19 X-ray images. The images are cropped using a ratio of 0.8–1.0 and then rotated with a random rotation angle in [−20, 20]. The initial cross-entropy loss is relatively high at around 0.67, with an average loss of 0.73. A significant decrease in loss was observed at the 100th batch, reaching 0.47; after the 1635th batch, the cross-entropy loss was reduced to 0.072, a substantial drop given the initial loss of around 0.67. State-of-the-art COVID-19 automatic detection using chest X-rays varies depending on the data’s characteristics and the convolutional network’s structure. Table 2 summarizes related works on COVID-19 detection using chest X-ray images.

1.3. Contribution of the Proposed Work

The primary contributions of this study can be succinctly stated as follows: (1) the implementation of data augmentation using the deep convolutional generative adversarial network (DCGAN), resulting in enhanced dataset quality; (2) the utilization of a modified Inception V3 model, specifically designed to accommodate both the mixed DCGAN dataset and the original dataset, thereby achieving a greater accuracy rate than previous research efforts; and (3) the use of the XGradient Boost algorithm to enhance feature mapping. The remainder of the paper is outlined as follows: Section 1 examines the current state of knowledge and the underlying factors that motivate this research. Section 2 provides an overview of existing studies on the application of deep learning techniques to the detection of chest diseases. Section 3 covers the materials and proposed approach. Section 4 provides a comprehensive analysis of the experimental results and subsequent discussion, and evaluates the performance of the proposed model in comparison to earlier studies. Finally, Section 5 concludes the study.

2. Related Work

Researchers have observed that individuals with COVID-19 infection exhibit obvious ground-glass opacities in their lungs, which appear darker in comparison to the surrounding area when compared to individuals without COVID-19 [15,16]. Consequently, scientists place significant reliance on chest X-ray images to detect and analyze COVID-19, as well as to determine the subsequent course of action for patients. In practical contexts, the ability to analyze numerous cases simultaneously is necessary in order to address the limitations imposed by staff and testing kit availability. Hence, image-based detection techniques using X-ray images hold significant promise in assisting hospitals and medical personnel by facilitating the identification of infected patients. Radiography systems are widely accessible in most hospitals, and medical staff are generally better acquainted with interpreting X-ray images than with utilizing the latest testing kits [15,16]. Furthermore, the generative adversarial network (GAN) framework has gained significant popularity in the field of medical image processing. In their study, Zhao et al. introduced a framework that utilizes the VGG16 model and a DCGAN-based model for generating synthetic lung images; the generated images are then used for classification purposes, employing forward and backward GAN techniques [17]. The efficiency of generative adversarial networks (GANs) has also been demonstrated in the detection of anomalies in retinal images, specifically in distinguishing between healthy and unhealthy retinal images. The absence of a sufficient dataset has compelled researchers to further investigate the available data; consequently, the utilization of a GAN has become imperative in order to address the limitations imposed by the dataset. A recent study published in Radiology has demonstrated the higher performance of chest X-ray imaging compared to laboratory testing methods, such as PCR or rapid tests. As a result, numerous research studies have concluded that chest radiography is recommended as the primary screening method for the detection of COVID-19 infection. The integration of artificial intelligence (AI) with radiographic imaging has the potential to facilitate extensive detection capabilities and streamline the responsibilities of medical professionals, enabling them to allocate their attention towards providing effective treatment to patients who have tested positive.
Computers play a major role in diagnosing diseases. Globally, doctors have detected viruses in the chest by examining X-ray images of the patient’s chest [18]. X-rays are commonly used to diagnose pneumonia, but it remains difficult to distinguish between lung infections caused by COVID-19 and pneumococcal pneumonia using X-ray images [19]. It is difficult for radiologists to distinguish COVID-19 from viral pneumonia because images of various pneumonia viruses are similar and overlap with other lung diseases. Thus, to help diagnose COVID-19, artificial intelligence with deep learning that is able to provide fast, precise, and inexpensive results is needed [20]. With Mask_RCNN, diagnosing diseases using X-ray images becomes more accurate and efficient for understanding the type and severity of disease patterns [21]. Deep learning has been widely used in the field of biomedical image processing, where the results have proven effective in classifying various diseases such as pneumonia and respiratory disorders [12]. Abdul Qayyum et al. detected COVID-19 in lung X-ray images using a depth-wise multilevel feature concatenated deep neural network [22]. N. Kumar et al. [12] detected COVID-19 using a deep transfer learning-based ensemble model designed by integrating EfficientNet, GoogLeNet, and XceptionNet for the early diagnosis of COVID-19 infection. Faisal Muhammad Shah et al. [23] reviewed newly emerging AI-based models that can detect COVID-19 from X-ray or CT images of the lungs.
Sourabh Shastri et al. designed a nested ensemble model using deep learning methods based on a long short-term memory (LSTM) model, which evaluated confirmed intensive care COVID-19 and death cases in India [24]. Kemal Akyol and Baha Şen proposed deep learning using the Bi-LSTM network to detect COVID-19 and no-finding cases via chest X-ray images [8]. K. Shankar et al. developed a metaheuristic-based fusion model for COVID-19 diagnosis using chest X-ray images [25]. Medical image collection is a very expensive and tedious process that requires the participation of radiologists and researchers [26]. The ability of CNNs to perform such tasks is due to their large number of parameters and fine-tuning approach; thus, even with limited datasets, they can carry out effective detection and recognition [27,28,29]. Because collecting medical imaging datasets is difficult and expensive [30], and collecting chest X-ray datasets remained difficult during the pandemic, researchers have proposed data augmentation methods to generate synthetic datasets. Such methods are used to build a modified dataset for training purposes.
Previous researchers have used several approaches for data augmentation, such as image transformation, color adjustment, distortion, or polishing. Currently, advanced forms of data augmentation are more capable than conventional data amplification. One method used by previous researchers is the generative adversarial network (GAN). This approach is innovative in producing synthetic images without supervision via a min–max game algorithm. The GAN uses two disparate networks, G(z) and D(x), with different functions: G(z) generates a faithful copy of an image in order to fool the other network, the discriminator, whose output is computed by D(x) [30]. Patients infected with COVID-19 might show several symptoms, such as fever and a cough similar to the flu; however, serious cases involve organ failure, breathing difficulty, and even death [31,32]. As a result of the exponential growth of COVID-19 infections, many countries faced serious issues with their health systems, and many were at the point of collapse because their facilities could not handle a large number of patients at the same time. Stocks of testing kits and ventilators decreased rapidly; therefore, most countries employed lockdown policies, strictly banned gatherings, and asked everyone to stay at home. One of the critical steps in detecting infection in patients is the testing method. Primarily, COVID-19 testing used real-time reverse transcription polymerase chain reaction (rRT-PCR) tests [33,34], also known as swab tests. By taking some fluid from the nose or throat, a result can be obtained in a few hours or days. Alternatively, X-ray radiography images of the patient’s chest can be obtained [34].

3. Research Method and Materials

This section describes the dataset, DCGAN, Xtreme Gradient Boost (XGBoost), and evaluation metrics and measurement.

3.1. Dataset

Historically, researchers have directed their attention towards employing deep learning techniques for binary classification of chest X-ray images. The primary objective of our research was to apply the generative adversarial network (GAN) methodology in conjunction with convolutional neural networks (CNNs) and Xtreme Gradient Boosting (XGBoost). Multiple datasets consisting of both normal and pneumonia patient data were obtained from the Paul Cohen dataset, encompassing a range of resolutions [35]. An alternative version of the dataset provided by Joseph Paul Cohen, which includes samples of COVID-19, further supports the dataset that we used [36]. We chose these two samples as they are among the initial datasets collected for chest X-ray analysis. The dataset is subsequently used as input for the DCGAN network in order to generate synthetic images. Figure 2 illustrates examples of normal, pneumonia, and COVID-19 chest X-ray images.

3.2. Proposed Methodology

The generative adversarial network (GAN) can produce images or data samples that imitate the feature distribution of the original dataset. The GAN has two core components, the generator and the discriminator, which are trained simultaneously through an adversarial process. The discriminator learns to differentiate fake from authentic images, while the generator discovers how to produce images that look authentic [37]. The architecture of the GAN is depicted in Figure 3.
The generator utilizes latent space coordinates as an input to generate a novel image. Typically, a vector with 100 numerical values represents the latent space. During the training phase, the generator learns the process of mapping each individual point in order to generate an image. Once the model has been retrained, it will acquire new mapping capabilities.
Training a GAN involves simultaneously training both the discriminator and generator networks in real time, which is challenging and requires a min–max optimization between the discriminator and generator. This process is shown in Equation (1) [38].
$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))] \qquad (1)$$
$\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}$ is the expected value over real instances, while $\mathbb{E}_{z \sim p_z(z)}$ is the expected value over fake instances. $p_z(z)$ represents the distribution of the random noise variable (a standard normal distribution), $G(z)$ denotes the generator function that maps noise to the data space, $x$ denotes the original data, and $D(x)$ denotes the probability that $x$ comes from the original data rather than the generated data.
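For illustration, the two expectations in Equation (1) map directly onto binary cross-entropy losses over the discriminator's outputs. The following Python sketch (TensorFlow) is not the authors' published code; the non-saturating generator loss shown is a common substitution for directly minimizing $\log(1 - D(G(z)))$:

```python
import tensorflow as tf

# Binary cross-entropy on logits implements the expectations in Equation (1).
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_logits, fake_logits):
    # Maximizing E[log D(x)] + E[log(1 - D(G(z)))] equals minimizing this sum.
    real_loss = bce(tf.ones_like(real_logits), real_logits)   # -log D(x)
    fake_loss = bce(tf.zeros_like(fake_logits), fake_logits)  # -log(1 - D(G(z)))
    return real_loss + fake_loss

def generator_loss(fake_logits):
    # Non-saturating form: maximize log D(G(z)) instead of minimizing
    # log(1 - D(G(z))); both drive the game toward the same equilibrium.
    return bce(tf.ones_like(fake_logits), fake_logits)
```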
It is widely acknowledged that Inception V3 is a convolutional neural network design that belongs to the Inception family. This architecture incorporates various enhancements, including label smoothing, factorized 7 × 7 convolutions, and an auxiliary classifier. The original Inception module contains a 5 × 5 convolutional layer, which is factorized into two successive 3 × 3 convolutions to reduce computation. The auxiliary classifier is employed to improve the convergence of deep neural networks by mitigating the issue of vanishing gradients. Traditionally, maximum and average pooling have been employed to reduce the dimensions of the grid and feature map. The Inception V3 model reduces the grid size by dividing the current grid size by 2: for instance, a d × d grid with k filters is reduced to a d/2 × d/2 grid with 2k filters.
The core concept of this paper is to combine DCGAN with transfer learning (Inception_V3) to acquire better chest X-ray image results. The architecture of the Inception-V3 model is depicted in Figure 4, while the proposed methodology is portrayed in Figure 5.
Image data preprocessing is essential to the convolutional neural network (CNN) because it determines the classification result. The initial step resizes the image to 288 × 288 pixels for training purposes, followed by image normalization. The next step is image augmentation using the DCGAN, after which training proceeds using Inception V3.
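The hybrid pipeline can be sketched as follows. This is a hedged illustration rather than the exact implementation: the backbone pooling, XGBoost hyperparameters, and helper names (extract_features, train_images, train_labels) are our assumptions, since the paper does not publish its code.

```python
import tensorflow as tf
from xgboost import XGBClassifier

IMG_SIZE = (288, 288)  # resize target stated above

# Frozen InceptionV3 backbone with global average pooling as a feature extractor.
backbone = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(*IMG_SIZE, 3))
backbone.trainable = False

def extract_features(images):
    # images: float array of shape (n, 288, 288, 3) with values in [0, 255]
    x = tf.keras.applications.inception_v3.preprocess_input(images)
    return backbone.predict(x, verbose=0)  # (n, 2048) pooled feature vectors

# Feature mapping/classification with XGBoost over the three classes
# (normal, pneumonia, COVID-19); hyperparameters here are illustrative.
clf = XGBClassifier(objective="multi:softprob", n_estimators=300,
                    max_depth=6, learning_rate=0.1)
# clf.fit(extract_features(train_images), train_labels)   # labels in {0, 1, 2}
# probs = clf.predict_proba(extract_features(test_images))
```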

3.3. Xtreme Gradient Boost (XGBoost)

We have adopted the XGBoost algorithm from Chen, T. and Guestrin, C. [40]. XGBoost is designed for a given dataset with $n$ examples and $m$ features, $D = \{(x_i, y_i)\}$ ($|D| = n$, $x_i \in \mathbb{R}^m$, $y_i \in \mathbb{R}$). An ensemble of $K$ additive functions, Equation (2), is used to predict the output:
$$\hat{y}_i = \phi(x_i) = \sum_{k=1}^{K} f_k(x_i), \quad f_k \in \mathcal{F} \qquad (2)$$
where $\mathcal{F} = \{ f(x) = w_{q(x)} \}$ ($q : \mathbb{R}^m \to T$, $w \in \mathbb{R}^T$) is the space of regression trees, $q$ represents the tree structure, and $T$ represents the number of leaves in the tree. The total score is obtained from $w$ by adding up the weights of the corresponding leaves. The regularized objective for learning the set of functions is given by Equations (3) and (4).
$$\mathcal{L}(\phi) = \sum_i l(\hat{y}_i, y_i) + \sum_k \Omega(f_k) \qquad (3)$$
$$\text{where } \Omega(f) = \gamma T + \tfrac{1}{2} \lambda \lVert w \rVert^2 \qquad (4)$$
The tree ensemble objective in Equation (3) includes functions as parameters and cannot be optimized using traditional methods in Euclidean space; therefore, Chen, T. and Guestrin, C. [40] trained it in an additive manner, as shown in Equation (5). $\hat{y}_i^{(t)}$ represents the prediction of the $i$-th instance at the $t$-th iteration.
$$\mathcal{L}^{(t)} = \sum_{i=1}^{n} l\left(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)\right) + \Omega(f_t) \qquad (5)$$
Equation (5) can be improved further with a second-order approximation, as depicted in Equation (6):
$$\mathcal{L}^{(t)} \simeq \sum_{i=1}^{n} \left[ l\left(y_i, \hat{y}_i^{(t-1)}\right) + g_i f_t(x_i) + \tfrac{1}{2} h_i f_t^2(x_i) \right] + \Omega(f_t) \qquad (6)$$
where $g_i = \partial_{\hat{y}^{(t-1)}} l\left(y_i, \hat{y}^{(t-1)}\right)$ and $h_i = \partial^2_{\hat{y}^{(t-1)}} l\left(y_i, \hat{y}^{(t-1)}\right)$ are the first- and second-order gradient statistics of the loss at step $t$. Equation (6) can be simplified further by removing the constant term, which transforms it into Equation (7) [40].
$$\tilde{\mathcal{L}}^{(t)} = \sum_{i=1}^{n} \left[ g_i f_t(x_i) + \tfrac{1}{2} h_i f_t^2(x_i) \right] + \Omega(f_t) \qquad (7)$$
If we define $I_j = \{ i \mid q(x_i) = j \}$ as the instance set of leaf $j$, then Equation (7) can be rewritten as Equations (8) and (9) by expanding $\Omega$:
$$\tilde{\mathcal{L}}^{(t)} = \sum_{i=1}^{n} \left[ g_i f_t(x_i) + \tfrac{1}{2} h_i f_t^2(x_i) \right] + \gamma T + \tfrac{1}{2} \lambda \sum_{j=1}^{T} w_j^2 \qquad (8)$$
$$= \sum_{j=1}^{T} \left[ \left( \sum_{i \in I_j} g_i \right) w_j + \tfrac{1}{2} \left( \sum_{i \in I_j} h_i + \lambda \right) w_j^2 \right] + \gamma T \qquad (9)$$
For a fixed tree structure $q(x)$, the optimal weight $w_j^*$ of leaf $j$ and the corresponding optimal objective value are given by Equations (10) and (11).
$$w_j^* = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda} \qquad (10)$$
$$\tilde{\mathcal{L}}^{(t)}(q) = -\tfrac{1}{2} \sum_{j=1}^{T} \frac{\left( \sum_{i \in I_j} g_i \right)^2}{\sum_{i \in I_j} h_i + \lambda} + \gamma T \qquad (11)$$
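For concreteness, Equations (10) and (11) reduce to a few lines of arithmetic once the gradient statistics of a leaf are known. The sketch below is illustrative (the function name and input values are ours, not the paper's):

```python
import numpy as np

def leaf_weight_and_score(g, h, lam=1.0, gamma=1.0):
    """Optimal leaf weight (Equation (10)) and this leaf's contribution to
    the structure score (Equation (11)), given the first/second-order
    gradient statistics g_i, h_i of the examples routed to the leaf."""
    G, H = np.sum(g), np.sum(h)
    w_star = -G / (H + lam)                     # Equation (10)
    score = -0.5 * G ** 2 / (H + lam) + gamma   # one leaf's term of Equation (11)
    return w_star, score

# Example with squared-error loss l = (y - yhat)^2 / 2: g_i = yhat_i - y_i, h_i = 1.
g = np.array([0.4, -0.1, 0.3])  # residual gradients of three examples in one leaf
h = np.ones(3)
print(leaf_weight_and_score(g, h))  # w* = -0.6 / 4 = -0.15
```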
Algorithm 1 describes the proposed approach’s process flow:
Algorithm 1: Data Augmentation and XGradientBoost
Input: chest X-ray image $I(x, y)$
Step 1: Preprocessing: convert the image to grayscale, then apply normalization. $I : X \subseteq \mathbb{R}^n \to [\mathrm{Min}, \mathrm{Max}]$ becomes $I_N : X \subseteq \mathbb{R}^n \to [\mathrm{newMin}, \mathrm{newMax}]$, with
$$I_N = (\mathrm{newMax} - \mathrm{newMin}) \cdot \frac{1}{1 + e^{-(I - \beta)/\alpha}} + \mathrm{newMin}$$
Step 2: Basic image augmentation (scale, flip, rotate).
Scale: $x_{\mathrm{ratio}} = \frac{\mathrm{old\_image}.x}{\mathrm{new\_image}.x}$, $y_{\mathrm{ratio}} = \frac{\mathrm{old\_image}.y}{\mathrm{new\_image}.y}$, $I_{\mathrm{new}}(x, y) = I(\lfloor x \cdot x_{\mathrm{ratio}} \rfloor, \lfloor y \cdot y_{\mathrm{ratio}} \rfloor)$
Horizontal flip: $I_{\mathrm{new}}(x, y) = I(\mathrm{width} - x - 1, y)$
Rotation: $I_{\mathrm{new}} = (x_r, y_r)$, where
$x_r = (x - \mathrm{center}_x)\cos(\mathrm{angle}) - (y - \mathrm{center}_y)\sin(\mathrm{angle}) + \mathrm{center}_x$
$y_r = (x - \mathrm{center}_x)\sin(\mathrm{angle}) + (y - \mathrm{center}_y)\cos(\mathrm{angle}) + \mathrm{center}_y$
Step 3: Advanced image augmentation (DCGAN): compute the expected values of the real instances $\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}$ and the fake instances $\mathbb{E}_{z \sim p_z(z)}$ using Equation (1).
Step 4: Feature extraction: push the image $I_{\mathrm{new}}$ into the convolutional network, then calculate the output pixel value $V = \sum_{i=1}^{q} \sum_{j=1}^{q} f_{ij} d_{ij}$.
Step 5: Feature mapping with XGradient Boost: add the output to the XGBoost tree by predicting the optimal weight of leaf $j$, as described in Equation (10).
Output: augmented image $I_{\mathrm{new}} = (x_r, y_r)$
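Steps 1 and 2 of Algorithm 1 translate directly into array operations. The NumPy sketch below is an illustrative rendering under stated assumptions (the paper does not specify $\alpha$, $\beta$, or the interpolation scheme); img is a 2-D grayscale array:

```python
import numpy as np

def sigmoid_normalize(img, alpha=50.0, beta=None, new_min=0.0, new_max=1.0):
    # Step 1: I_N = (newMax - newMin) * 1/(1 + exp(-(I - beta)/alpha)) + newMin.
    # alpha (slope) and beta (centre) are unspecified in the paper; these
    # defaults are illustrative choices.
    beta = img.mean() if beta is None else beta
    return (new_max - new_min) / (1.0 + np.exp(-(img - beta) / alpha)) + new_min

def scale_nn(img, new_h, new_w):
    # Step 2 scaling: I_new(x, y) = I(floor(x * x_ratio), floor(y * y_ratio)).
    h, w = img.shape
    rows = np.floor(np.arange(new_h) * (h / new_h)).astype(int)
    cols = np.floor(np.arange(new_w) * (w / new_w)).astype(int)
    return img[rows[:, None], cols[None, :]]

def hflip(img):
    # Step 2 horizontal flip: I_new(x, y) = I(width - x - 1, y).
    return img[:, ::-1]

def rotate(img, angle_deg):
    # Step 2 rotation about the image centre, applied here as an inverse
    # mapping (each output pixel samples its rotated source) to avoid holes.
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    xr = np.round((xs - cx) * np.cos(a) - (ys - cy) * np.sin(a) + cx).astype(int)
    yr = np.round((xs - cx) * np.sin(a) + (ys - cy) * np.cos(a) + cy).astype(int)
    out = np.zeros_like(img)
    valid = (xr >= 0) & (xr < w) & (yr >= 0) & (yr < h)
    out[ys[valid], xs[valid]] = img[yr[valid], xr[valid]]
    return out
```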

3.4. Evaluation Metrics

We have followed the standard and most common evaluation metrics: precision, recall, and F1 score. Their standard definitions are given in Equations (12)–(14):
$$\mathrm{Precision} = \frac{TP}{TP + FP} \qquad (12)$$
$$\mathrm{Recall\ (true\ positive\ rate)} = \frac{TP}{TP + FN} \qquad (13)$$
where FP, FN, TP, and TN are the false-positive, false-negative, true-positive, and true-negative counts, respectively. The F1 score is used to measure the model’s accuracy, and it can be computed using Equation (14):
$$F1 = \frac{2 \times \mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} = \frac{2TP}{2TP + FP + FN} \qquad (14)$$
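These metrics follow mechanically from the confusion counts; a short transcription (with illustrative counts, not the paper's) is:

```python
def precision_recall_f1(tp, fp, fn):
    # Direct transcription of Equations (12)-(14).
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return precision, recall, f1

# Example with illustrative counts: 372 TP, 4 FP, 5 FN.
print(precision_recall_f1(372, 4, 5))
```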

4. Results and Discussion

This section describes the implementation details of the proposed DCGAN-Inception V3 and Xtreme Gradient Boost.

4.1. Generating the DCGAN Image

The DCGAN method is used to generate a new dataset based on the original data by considering factors from the original image. There are two training parts used here:
  • Part 1: the discriminator is trained to maximize the probability of correctly classifying a given input as either real or fake.
  • Part 2: the generator is trained by minimizing log(1 − D(G(z))) to generate better fake images (a minimal training-step sketch follows this list).
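The alternation can be sketched as a single TensorFlow training step; this is a hedged illustration in which the generator and discriminator are assumed to be Keras models producing logits, and the common non-saturating generator loss replaces directly minimizing log(1 − D(G(z))):

```python
import tensorflow as tf

LATENT_DIM = 100  # latent-vector length noted in Section 3.2
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

@tf.function
def train_step(real_images, generator, discriminator, g_opt, d_opt):
    # One alternating update implementing Parts 1 and 2 above.
    z = tf.random.normal([tf.shape(real_images)[0], LATENT_DIM])
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake = generator(z, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake, training=True)
        # Part 1: push D(x) toward 1 and D(G(z)) toward 0.
        d_loss = (bce(tf.ones_like(real_logits), real_logits) +
                  bce(tf.zeros_like(fake_logits), fake_logits))
        # Part 2: non-saturating surrogate for minimizing log(1 - D(G(z))).
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss
```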
The process starts by generating fake images with the DCGAN. Initially, the output looks like a blank canvas. By the 252nd iteration, the shadow of a chest X-ray gradually appears, and after the 504th iteration the chest portion becomes more precise. By the 1000th iteration, the produced image presents a recognizable chest X-ray. Finally, beyond 2000 iterations, the generated image reveals an appealing result, with clearly visible portions of the lung area, rib bones, and even the spine, as shown in Figure 6.
In Figure 7, during the image generation process, the generator (G) loss is relatively high initially and then gradually decreases over the iterations. In comparison, the discriminator loss fluctuates from the beginning of training; however, after roughly the 1000th iteration, it flattens and approaches zero.

4.2. Classification Results

We have used two primary sources of datasets (Cohen, J.P. [36] and Tabik, S. [41]) and one dataset generated by the DCGAN process, as explained in the previous subsection. During image generation with the DCGAN, GPU limitations restricted each run to 300 chest X-ray images. After repeating the process for a total of ten runs, we amassed 3000 generated chest X-ray images. The generated data were integrated with actual images from the primary sources for training; the DCGAN-generated portion accounts for around 39% of the entire dataset, with 1000 synthetic images for each class: normal lung conditions, pneumonia, and COVID-19.
We trained our proposed model on 7600 images across three classes: normal, pneumonia, and COVID-19. We then tested it on 89 normal images, 67 pneumonia images, and 45 COVID-19 images. The model was trained using Inception V3 for 100 epochs with a batch size of 128. The results show that the proposed model’s best training and validation accuracies are 98.86% and 98.49%, respectively, with a validation loss of 0.015; refer to the graph depicted in Figure 8. The graph exhibits fluctuations during the initial stages of training, which can be attributed to the limited availability of data; to address this and stabilize training, additional data were subsequently introduced. Such volatility is also prevalent in the majority of TensorFlow training processes.
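The training configuration described above (Inception V3 backbone, 100 epochs, batch size 128, three classes) corresponds roughly to the following hedged Keras sketch; the classification head, optimizer, and dataset wiring (train_ds, val_ds) are our assumptions, as the paper does not publish them:

```python
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(288, 288, 3))

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # map [0, 255] -> [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # normal, pneumonia, COVID-19
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds: assumed tf.data.Dataset of (image, label) pairs mixing
# real and DCGAN-generated chest X-rays, labels in {0, 1, 2}.
# history = model.fit(train_ds.batch(128),
#                     validation_data=val_ds.batch(128), epochs=100)
```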
The prediction results are encouraging, classifying the tested images correctly against the original dataset. Figure 9 depicts sample predictions of our proposed system. Figure 9A,B,D illustrate cases where the original image is pneumonia and it is correctly predicted as pneumonia, while Figure 9C shows an originally normal image correctly indicated as a normal case.

4.3. Performance Evaluation

The confusion matrix also shows promising results, with 192 cases observed as normal and around 372 as pneumonia (refer to Figure 10). The ROC curve also shows a positive result, with a score of 96.70%, as shown in Figure 11.
Table 3 presents the performance analysis of three methods: DCGAN, DCGAN+Inception V3, and DCGAN+XGradientBoost. Based on accuracy, precision, recall, and F1 score, the results show that DCGAN+XGradientBoost outperformed the other two methods, with an accuracy of 98.88%, a precision of 99.1%, a recall of 98.70%, and an F1 score of 99.3%.
Figure 12 compares the proposed approach with the other two techniques. It can be observed that the overall scores of DCGAN+XGradientBoost are consistently higher than those of the other techniques.

4.4. Analysis and Comparison

In this subsection, we present observations made during training and compare the proposed model with three previous studies that used different training strategies. The performance comparison of the proposed model is shown in Figure 13. It is clearly observed that the augmented images produced by the DCGAN approach with the Inception V3 model outperform the other approaches. Moreover, a performance comparison with the latest work is presented in Table 4.
According to the findings presented in Table 4, one researcher employed transfer learning to categorize chest X-ray images (98.51%), a level of accuracy comparable to that reached by another researcher using the CoviNet CNN technique (98.62%). The AlexNet model achieved an accuracy rate of 94.18% and a precision of around 93.4%. Among the GAN-based augmentation methods, PGGAN + SMANet achieves the highest accuracy (96.28%), with the Wasserstein GAN coming in second (95.34%). The approach we have presented demonstrates compelling outcomes, achieving an accuracy rate of 98.88%, a precision of 99.1%, a recall of 98.7%, and an F1 score of 98.60%.
This outcome is driven by the enhancement of chest X-ray image quality using the DCGAN technique: InceptionV3 blends the original and DCGAN-generated data seamlessly during training, and XGradientBoost contributes to the selection of feature mappings. As shown in Figure 14, our proposed solution demonstrates superior performance compared to the other four methods.

4.5. Observations about the Experiment

By employing a fusion of DCGAN data, the Inception V3 learning model, and XGradient Boost for feature mapping, it is possible to enhance both the recognition rate and the quality of chest X-ray data.
Figure 13 compares the accuracy of our DCGAN, DCGAN + Inception V3, and DCGAN + Inception V3 + XGradient Boost models, demonstrating superior performance compared to previous research findings.
Table 3 compares the proposed approach with other data augmentation configurations, such as the DCGAN alone. The proposed method delivers a promising improvement in the results.
The proposed methodology has demonstrated enhancements in accuracy, precision, recall, and F1 score ranging from 1.1% to 2.1% in a standard experimental setting. The observed enhancement can be attributed to the hybrid combination of the DCGAN and Inception V3, together with the incorporation of XGradient Boost. In certain one-to-one comparisons, the improvement reaches 4.6%.

5. Conclusions

In light of the conclusion of the COVID-19 pandemic, it is now recognized that the virus has transitioned into an endemic phase. These illnesses have become a significant cause of distress for individuals across various demographic groups. Consequently, a significant cohort of researchers continues to advance investigations in computer vision and artificial intelligence to address a diverse array of medical intricacies. The application of chest X-ray data to the prediction of COVID-19 has garnered significant interest within the academic community. The proposed methodology prioritizes the employment of artificially generated images, produced by a deep convolutional generative adversarial network (DCGAN), to enhance the overall quality of the dataset. The synthetic and actual datasets were combined at a ratio of 39:61. The enhancement of chest X-ray image quality can be achieved via the rotation, inspiration, and penetration (RIP) technique. Consequently, a hybrid approach was employed, combining the DCGAN framework, the Inception V3 learning model, and XGradient Boost to enhance prediction accuracy. The project utilizes three distinct methodologies: the deep convolutional generative adversarial network (DCGAN), DCGAN integrated with Inception V3, and DCGAN combined with Inception V3 and Xtreme Gradient Boost. The evaluation of the model’s performance rests on accuracy, precision, recall, and F1 score. The examination produced the following results: an accuracy rate of 98.88%, a sensitivity rate of 99.1%, a recall rate of 98.70%, and an F1 score of 98.60%. To realize better future performance on real-world case data, potential improvements may involve adjusting the proportion of mixed datasets and supplementing the artificially generated image dataset.

Author Contributions

Conceptualization, A.H.B.; formal analysis, A.H.B. and S.J.M.; methodology, A.H.B. and S.J.M.; writing—original draft, A.H.B.; writing—review and editing, A.H.B., S.J.M. and S.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education, in Saudi Arabia for funding this research work through project number “IFPRC-216-830-2020” and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author. The data are not publicly available as the present dataset necessitates compilation with additional datasets prior to its prospective online publication.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Health, M.O. COVID-19 Command and Control Center CCC, The National Health Emergency Operation Center NHEOC. 2021. Available online: https://covid19.moh.gov.sa (accessed on 23 October 2021).
  2. Worldometer. Available online: https://www.worldometers.info/coronavirus/country/saudi-arabia/ (accessed on 23 October 2021).
  3. Wang, S.; Sun, J.; Mehmood, I.; Pan, C.; Chen, Y.; Zhang, Y. Cerebral micro-bleeding identification based on a nine-layer convolutional neural network with stochastic pooling. Concurr. Comput. Pract. Exp. 2019, 32, e5130. [Google Scholar] [CrossRef]
  4. Wang, S.; Tang, C.; Sun, J.; Zhang, Y. Cerebral Micro-Bleeding Detection Based on Densely Connected Neural Network. Front. Neurosci. 2019, 13, 422. [Google Scholar] [CrossRef] [PubMed]
  5. Lafraxo, S.; Ansari, M.E. CoviNet: Automated COVID-19 Detection from X-rays using Deep Learning Techniques. In Proceedings of the 2020 6th IEEE Congress on Information Science and Technology (CiSt), Agadir, Morocco, 5–12 June 2021. [Google Scholar]
  6. Akram, T.; Attique, M.; Gul, S.; Shahzad, A.; Altaf, M.; Naqvi, S.S.R.; Damaševičius, R.; Maskeliūnas, R. A novel framework for rapid diagnosis of COVID-19 on computed tomography scans. Pattern Anal. Appl. 2021, 24, 951–964. [Google Scholar] [CrossRef] [PubMed]
  7. Jangam, E.; Barreto, A.A.D.; Annavarapu, C.S.R. Automatic detection of COVID-19 from chest CT scan and chest X-rays images using deep learning, transfer learning and stacking. Appl. Intell. 2021, 52, 2243–2259. [Google Scholar] [CrossRef] [PubMed]
  8. Akyol, K.; Şen, B. Automatic Detection of COVID-19 with Bidirectional LSTM Network Using Deep Features Extracted from Chest X-ray Images. Interdiscip. Sci. Comput. Life Sci. 2021, 14, 89–100. [Google Scholar] [CrossRef] [PubMed]
  9. Autee, P.; Bagwe, S.; Shah, V.; Srivastava, K. StackNet-DenVIS: A multi-layer perceptron stacked ensembling approach for COVID-19 detection using X-ray images. Phys. Eng. Sci. Med. 2020, 43, 1399–1414. [Google Scholar] [CrossRef] [PubMed]
  10. Alqudaihi, K.S.; Aslam, N.; Khan, I.U.; Almuhaideb, A.M.; Alsunaidi, S.J.; Ibrahim, N.M.A.R.; Alhaidari, F.A.; Shaikh, F.S.; Alsenbel, Y.M.; Alalharith, D.M.; et al. Cough Sound Detection and Diagnosis Using Artificial Intelligence Techniques: Challenges and Opportunities. IEEE Access 2021, 9, 102327–102344. [Google Scholar] [CrossRef]
  11. Zhang, R.; Guo, Z.; Sun, Y.; Lu, Q.; Xu, Z.; Yao, Z.; Duan, M.; Liu, S.; Ren, Y.; Huang, L.; et al. COVID19XrayNet: A Two-Step Transfer Learning Model for the COVID-19 Detecting Problem Based on a Limited Number of Chest X-ray Images. Interdiscip. Sci. Comput. Life Sci. 2020, 12, 555–565. [Google Scholar] [CrossRef]
  12. Kumar, N.; Gupta, M.; Gupta, D.; Tiwari, S. Novel deep transfer learning model for COVID-19 patient detection using X-ray chest images. J. Ambient. Intell. Humaniz. Comput. 2021, 14, 469–478. [Google Scholar] [CrossRef]
  13. Narin, A. Detection of COVID-19 Patients with Convolutional Neural Network Based Features on Multi-class X-ray Chest Images. In Proceedings of the 2020 Medical Technologies Congress (TIPTEKNO), Antalya, Turkey, 19–20 November 2020. [Google Scholar]
  14. Saif, A.F.M.; Imtiaz, T.; Rifat, S.; Shahnaz, C.; Zhu, W.P.; Ahmad, M.O. CapsCovNet: A Modified Capsule Network to Diagnose COVID-19 from Multimodal Medical Imaging. IEEE Trans. Artif. Intell. 2021, 2, 608–617. [Google Scholar] [CrossRef]
  15. Fang, Y.; Zhang, H.; Xie, J.; Lin, M.; Ying, L.; Pang, P.; Ji, W. Sensitivity of Chest CT for COVID-19: Comparison to RT-PCR. Radiology 2020, 296, E115–E117. [Google Scholar] [CrossRef] [PubMed]
  16. Xie, X.; Zhong, Z.; Zhao, W.; Zheng, C.; Wang, F.; Liu, J. Chest CT for Typical Coronavirus Disease 2019 (COVID-19) Pneumonia: Relationship to Negative RT-PCR Testing. Radiology 2020, 296, E41–E45. [Google Scholar] [CrossRef] [PubMed]
  17. Zhao, D.; Zhu, D.; Lu, J.; Luo, Y.; Zhang, G. Synthetic Medical Images Using F&BGAN for Improved Lung Nodules Classification by Multi-Scale VGG16. Symmetry 2018, 10, 519. [Google Scholar] [CrossRef]
  18. Thepade, S.D.; Jadhav, K. COVID-19 Identification from Chest X-ray Images using Local Binary Patterns with assorted Machine Learning Classifiers. In Proceedings of the 2020 IEEE Bombay Section Signature Conference (IBSSC), Mumbai, India, 4–6 December 2020. [Google Scholar]
  19. Thepade, S.D.; Chaudhari, P.R.; Dindorkar, M.R.; Bang, S.V. COVID-19 Identification using Machine Learning Classifiers with Histogram of Luminance Chroma Features of Chest X-ray images. In Proceedings of the 2020 IEEE Bombay Section Signature Conference (IBSSC), Mumbai, India, 4–6 December 2020. [Google Scholar]
  20. Qjidaa, M.; Mechbal, Y.; Ben-Fares, A.; Amakdouf, H.; Maaroufi, M.; Alami, B.; Qjidaa, H. Early detection of COVID19 by deep learning transfer Model for populations in isolated rural areas. In Proceedings of the 2020 International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 9–11 June 2020. [Google Scholar]
  21. Darapaneni, N.; Ranjane, S.; Satya US, P.; Reddy, M.H.; Paduri, A.R.; Adhi, A.K.; Madabhushanam, V. COVID-19 Severity of Pneumonia Analysis Using Chest X-rays. In Proceedings of the 2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS), Rupnagar, India, 26–28 November 2020. [Google Scholar]
  22. Qayyum, A.; Razzak, I.; Tanveer, M.; Kumar, A. Depth-wise dense neural network for automatic COVID19 infection detection and diagnosis. Ann. Oper. Res. 2021. ahead of print. [Google Scholar] [CrossRef] [PubMed]
  23. Shah, F.M.; Joy, S.K.S.; Ahmed, F.; Hossain, T.; Humaira, M.; Ami, A.S.; Paul, S.; Jim, A.R.K.; Ahmed, S. A Comprehensive Survey of COVID-19 Detection Using Medical Images. SN Comput. Sci. 2021, 2, 434. [Google Scholar] [CrossRef] [PubMed]
  24. Shastri, S.; Singh, K.; Kumar, S.; Kour, P.; Mansotra, V. Deep-LSTM ensemble framework to forecast COVID-19: An insight to the global pandemic. Int. J. Inf. Technol. 2021, 13, 1291–1301. [Google Scholar] [CrossRef] [PubMed]
  25. Shankar, K.; Perumal, E.; Tiwari, P.; Shorfuzzaman, M.; Gupta, D. Deep learning and evolutionary intelligence with fusion-based feature extraction for detection of COVID-19 from chest X-ray images. Multimed. Syst. 2021, 28, 1175–1187. [Google Scholar] [CrossRef]
  26. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef]
  27. Greenspan, H.; Ginneken, B.V.; Summers, R.M. Guest Editorial Deep Learning in Medical Imaging: Overview and Future Promise of an Exciting New Technique. IEEE Trans. Med. Imaging 2016, 35, 1153–1159. [Google Scholar] [CrossRef]
  28. Roth, H.R.; van Ginneken, B.; Summers, R.M. Improving Computer-Aided Detection Using Convolutional Neural Networks and Random View Aggregation. IEEE Trans. Med. Imaging 2016, 35, 1170–1181. [Google Scholar] [CrossRef]
  29. Tajbakhsh, N.; Lu, L.; Liu, J.; Yao, J.; Seff, A.; Cherry, K.; Kim, L.; Summers, R.M. Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning? IEEE Trans. Med. Imaging 2016, 35, 1299–1312. [Google Scholar] [CrossRef] [PubMed]
  30. Mikołajczyk, A.; Grochowski, M. Data augmentation for improving deep learning in image classification problem. In Proceedings of the 2018 International Interdisciplinary PhD Workshop (IIPhDW), Swinoujscie, Poland, 9–12 May 2018. [Google Scholar]
  31. Mahase, E. Coronavirus COVID-19 has killed more people than SARS and MERS combined, despite lower case fatality rate. Br. Med. J. 2020, 368, m641. [Google Scholar] [CrossRef]
  32. Wang, W.; Xu, Y.; Gao, R.; Lu, R.; Han, K.; Wu, G.; Tan, W. Detection of SARS-CoV-2 in Different Types of Clinical Specimens. J. Am. Med. Assoc. 2020, 323, 1843–1844. [Google Scholar] [CrossRef] [PubMed]
  33. Corman, V.M.; Landt, O.; Kaiser, M.; Molenkamp, R.; Meijer, A.; Chu, D.K.; Bleicker, T.; Brünink, S.; Schneider, J.; Schmidt, M.L.; et al. Detection of 2019 novel coronavirus (2019-nCoV) by real-time RT-PCR. Eurosurveillance 2020, 25, 2000045. [Google Scholar] [CrossRef] [PubMed]
  34. Lal, A.; Mishra, A.K.; Sahu, K.K. CT chest findings in coronavirus disease-19 (COVID-19). J. Formos. Med. Assoc. Taiwan Yi Zhi 2020, 119, 1000–1001. [Google Scholar] [CrossRef] [PubMed]
  35. Chest X-ray Pneumonia. Available online: https://github.com/ieee8023/covid-chestxray-dataset (accessed on 25 January 2022).
  36. Cohen, J.P.; Morrison, P.; Dao, L. COVID-19 image data collection. arXiv 2020, arXiv:2003.11597. Available online: https://github.com/ieee8023/covid-chestxray-dataset (accessed on 25 January 2022).
  37. Hitawala, S. Comparative study on generative adversarial networks. arXiv 2018, arXiv:1801.04271. [Google Scholar]
  38. Ahmadinejad, M.; Ahmadinejad, I.; Soltanian, A.; Mardasi, K.G.; Taherzade, N. Using new technicque in sigmoid volvulus surgery in patients affected by COVID-19. Ann. Med. Surg. 2021, 70, 102789. [Google Scholar] [CrossRef]
  39. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; Available online: https://arxiv.org/abs/1512.00567v3 (accessed on 4 August 2022).
  40. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. Available online: https://arxiv.org/pdf/1603.02754.pdf (accessed on 30 September 2023).
  41. Tabik, S.; Gómez-Ríos, A.; Martín-Rodríguez, J.L.; Sevillano-García, I.; Rey-Area, M.; Charte, D.; Guirado, E.; Suárez, J.L.; Luengo, J.; Valero-González, M.A.; et al. COVIDGR Dataset and COVID-SDNet Methodology for Predicting COVID-19 Based on Chest X-ray Images. IEEE J. Biomed. Health Inform. 2020, 24, 3595–3605. [Google Scholar] [CrossRef]
  42. Hussain, B.Z.; Andleeb, I.; Ansari, M.S.; Joshi, A.M.; Kanwal, N. Wasserstein GAN based Chest X-ray Dataset Augmentation for Deep Learning Models: COVID-19 Detection Use-Case. In Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, UK, 11–15 July 2022. [Google Scholar]
  43. Ciano, G.; Andreini, P.; Mazzierli, T.; Bianchini, M.; Scarselli, F. A Multi-Stage GAN for Multi-Organ Chest X-ray Image Generation and Segmentation. Mathematics 2021, 9, 2896. [Google Scholar] [CrossRef]
  44. Sundaram, S.; Hulkund, N. GAN-based Data Augmentation for Chest X-ray Classification. In Proceedings of the KDD DSHealth, Singapore, 14–18 August 2021. [Google Scholar]
  45. Motamed, S.; Rogalla, P.; Khalvati, F. Data augmentation using Generative Adversarial Networks (GANs) for GAN-based detection of Pneumonia and COVID-19 in chest X-ray images. Inform. Med. Unlocked 2021, 27, 100779. [Google Scholar] [CrossRef] [PubMed]
  46. Ohata, E.F.; Bezerra, G.M.; das Chagas, J.V.S.; Neto, A.V.L.; Albuquerque, A.B.; de Albuquerque, V.H.C.; Filho, P.P.R. Automatic detection of COVID-19 infection using chest X-ray images through transfer learning. IEEE/CAA J. Autom. Sin. 2021, 8, 239–248. [Google Scholar] [CrossRef]
  47. Al-Waisy, A.S.; Al-Fahdawi, S.; Mohammed, M.A.; Abdulkareem, K.H.; Mostafa, S.A.; Maashi, M.S.; Arif, M.; Garcia-Zapirain, B. COVID-CheXNet: Hybrid deep learning framework for identifying COVID-19 virus in chest X-rays images. Soft Comput. 2020, 27, 2657–2672. [Google Scholar] [CrossRef] [PubMed]
  48. Ibrahim, A.U.; Ozsoz, M.; Serte, S.; Al-Turjman, F.; Yakoi, P.S. Pneumonia Classification Using Deep Learning from Chest X-ray Images During COVID-19. Cogn. Comput. 2021. ahead of print. [Google Scholar] [CrossRef]
Figure 1. Chest X-ray images of patients with various diseases (COVID-19, pneumonia, and tuberculosis).
Figure 2. Sample of dataset: (A) normal, (B) pneumonia, and (C) COVID-19.
Figure 3. The architecture of the generative adversarial network (GAN) [37].
Figure 4. The architecture of Inception V3, Szegedy, C. et al. [39].
Figure 5. Proposed methodology for DCGAN-InceptionV3.
Figure 6. DCGAN iteration for image generation.
Figure 7. Generator and discriminator loss during DCGAN training.
Figure 8. Training and validation accuracy.
Figure 9. Prediction result samples.
Figure 10. Confusion matrix for the proposed approach.
Figure 11. ROC curve.
Figure 12. Performance metric illustration of DCGAN-XGradientBoost with DCGAN.
Figure 13. Accuracy rating comparison between DCGAN-XGradientBoost, DCGAN, and DCGAN + Inception V3.
Figure 14. Performance comparison with relevant works.
Table 1. Chest X-ray identification advantages.

Approach | Advantages
Chest X-ray diagnosis | Expeditious identification and assessment of a substantial patient population within a constrained timeframe.
Chest X-ray images | These data may be obtainable from the various hospitals and clinics responsible for their management; individuals can be diagnosed with COVID-19.
Light chest X-ray detection system | Implementing a light chest X-ray detection system can potentially mitigate widespread infection via timely diagnosis; the physician may request that the patient engage in self-isolation.
Disease complication | Chest X-rays are a valuable diagnostic tool for investigating and monitoring numerous diseases, including those associated with COVID-19.
Table 2. Summary of the previous related work.

Author | Proposed Model | Image Size | Pre-Trained | Achievement | Remarks
Zhang, R., et al. [11] | Two-step transfer learning using XRayNet(2) and XRayNet(3) | 224 × 224 | Yes | AUC score of 99.8% on the training dataset and 98.6% on the testing dataset; overall accuracy is 91.92%. | Small number of datasets
N. Kumar et al. [12] | Integration of previous pre-trained models (EfficientNet, GoogLeNet, and XceptionNet) | 224 × 224 | Yes | AUC score of 99.2% for COVID-19, 99.3% for normal, 99.01% for pneumonia, and 99.2% for tuberculosis. | Medium number of datasets
S. Lafraxo and M. El Ansari [5] | Integrated architecture using an adaptive median filter, convolutional neural network, and histogram equalization | 256 × 256 | No | The proposed system, known as CoviNet, achieves an accuracy rate of 98.6% for binary and 95.8% for multiclass classification. | Medium number of datasets
A. Narin [13] | Based on ResNet50, with support vector machines (SVMs): quadratic and cubic | 1024 × 1024 | Yes | The input images are fed to a convolutional neural network (ResNet) and then three SVM models are used (linear, quadratic, and cubic). SVM-Quadratic outperformed the others with 99% overall accuracy. | Medium number of datasets
Saif, A.F.M., et al. [14] | CapsCovNet capsule convolutional neural network with three blocks | 128 × 128 | Yes | Two input types: US images extracted from a US video dataset and chest X-ray images. Their method outperformed the state of the art on US images by around 3.12–20.2%. | Medium number of datasets
Table 3. Performance analysis with various approaches.

Approach | Accuracy | Precision | Recall | F1 Score
DCGAN | 96.53 | 95.78 | 94.57 | 95.4
DCGAN+Inception V3 | 97.86 | 97.6 | 97.43 | 97.2
DCGAN+XGradientBoost | 98.88 | 99.1 | 98.7 | 99.3
Table 4. Comparison to other methods for chest X-ray classification.

Related Work | Dataset | Method | Accuracy | Precision | Recall | F1
[5] | Chest X-ray | CoviNet System + CNN | 98.62 | 95.77 | 93.66 | 93.69
[42] | Chest X-ray + Wasserstein GAN | Wasserstein GAN | 95.34 | 99.1 | - | -
[43] | Chest X-ray + PGGAN | PGGAN + SMANet | 96.28 | - | - | -
[44] | Chest X-ray + GAN | DenseNet121 + GAN | 80.1 | 72.7 | 83.4 | 79.3
[45] | Chest X-ray + IAGAN | Inception + IAGAN | 82 | 84 | 69 | -
[46] | Chest X-ray | Transfer Learning | 98.51 | 94.1 | 98.46 | 98.46
[47] | Chest X-ray | ResNet34 & HRNets | 97.02 | 95.6 | 98.41 | 96.98
[48] | Chest X-ray | AlexNet model | 94.18 | 93.4 | 89.1 | 98.9
Proposed Method | Chest X-ray and augmented chest X-ray with DCGAN + XGradientBoost | Inception V3 + DCGAN | 98.88 | 99.1 | 98.70 | 98.60