Review

Role of Artificial Intelligence in COVID-19 Detection

1
Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
2
School of Science and Technology, Singapore University of Social Sciences, Singapore 599494, Singapore
3
Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur 50603, Malaysia
4
Department of Cardiovascular Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal 576104, India
5
Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia
6
Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia
7
School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Toowoomba, QLD 4350, Australia
8
Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
9
Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
10
Department of Medicine, Columbia University Medical Center, New York, NY 10032, USA
11
School of Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
12
Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
13
International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto 860-8555, Japan
*
Author to whom correspondence should be addressed.
Sensors 2021, 21(23), 8045; https://doi.org/10.3390/s21238045
Submission received: 5 November 2021 / Revised: 26 November 2021 / Accepted: 26 November 2021 / Published: 1 December 2021

Abstract:
The global pandemic of coronavirus disease (COVID-19) has caused millions of deaths and affected the livelihood of many more people. Early and rapid detection of COVID-19 is a challenging task for the medical community, but it is also crucial in stopping the spread of the SARS-CoV-2 virus. The proven utility of artificial intelligence (AI) in various fields of science has encouraged researchers to apply it to this problem. Medical imaging modalities including X-ray, computed tomography (CT), and ultrasound (US), combined with AI techniques, have greatly helped to curb the COVID-19 outbreak by assisting with early diagnosis. We carried out a systematic review of state-of-the-art AI techniques applied to X-ray, CT, and US images to detect COVID-19. In this paper, we discuss the approaches used by various authors and the significance of these research efforts, the potential challenges, and future trends related to the implementation of an AI system for disease detection during the COVID-19 pandemic.

1. Introduction

COVID-19 was first reported by the Wuhan Municipal Health Commission, China, in December 2019. It is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and is considered one of the deadliest global pandemics in history [1]. The World Health Organization (WHO) declared the COVID-19 outbreak a pandemic in March 2020, and there have been 203,944,144 cases and 4,312,902 deaths globally according to the WHO statistics of 12 August 2021 (available online: https://covid19.who.int/table (accessed on 12 August 2021)). The pandemic situation has caused worldwide distress by affecting people socially, medically, and economically. This infectious disease in severe form often leads to acute respiratory syndrome and the development of pneumonia. The outbreak was thought to be initiated via zoonotic spread from the seafood markets in Wuhan, China. Later, it was believed that transmission between humans was responsible for community spread of the infection throughout the world, and approximately 200 countries have been affected by this pandemic [2,3,4,5]. Although individuals of all ages are at risk of being infected, severe COVID-19 symptoms are more likely in people aged 60 and above, and individuals with comorbidities.
Once the SARS-CoV-2 virus enters the body via respiratory aerosol, it acts on the respiratory system, and affects patients with varying degrees of clinical severity. During the initial days of infection, the clinical presentation remains asymptomatic, although immune response is mediated in the body. Those persons affected are infectious at this phase, and the disease can be diagnosed by nasal swab [6,7,8]. Further migration of the virus from nasal epithelial cells into the upper respiratory tract results in symptoms of fever, dry cough, malaise, etc. The majority of infected patients do not progress beyond this phase, as the immune response from the host is sufficient to contain the disease from spreading to the lower respiratory tract and lungs [9] (refer to Figure 1).
Approximately one-fifth of infected cases develop lower respiratory tract infection, and these patients present with acute respiratory distress syndrome (ARDS). Histologically, this stage reveals lung sequestration along with host cell apoptosis. Persistent inflammation and diffuse alveolar damage are common histopathologic patterns observed among the infected patients exhibiting ARDS [5,10].
COVID-19 affects people in different ways. Asymptomatic patients will have positive nasal swab results and normal chest X-ray images. Patients with mild illness exhibit different commonly known symptoms such as fever, sore throat, dry cough, malaise and body aches or nausea, vomiting, abdominal pain, and loose stools. Patients with moderate illness show symptoms of pneumonia with no significant hypoxemia (persistent fever and cough). This group of infected patients also shows abnormal lesions on high-resolution chest computed tomography (CT). Severe illness is defined as patients who present with pneumonia and significant systemic hypoxemia (SpO2 < 92%). In cases of critical infection, the patients show life-threatening complications such as ARDS, along with shock, coagulation defects, encephalopathy, heart failure, and acute kidney injury [11,12,13,14,15,16,17].
Disease confirmation and the severity of the disease can be determined by nasal/throat swab, several serological tests, and imaging modalities. Reverse transcription polymerase chain reaction (RT-PCR) remains the best molecular method for diagnosis of the disease [18]. However, as with other diagnostic methods, RT-PCR is not error-free. It is time consuming, expensive, and requires manual effort. In order to judge the diagnostic test results accurately, expert clinicians are required to read them. The correct interpretation of the test results requires a high level of clinical expertise, as the data may vary significantly from person to person [19]. Moreover, the availability of test kits is limited, especially in rural geographical regions, and, if available, the quality of the kits may not be guaranteed. In addition, persons can experience discomfort (slight pain and irritation) during the nasal swab test. Using imaging modalities such as X-rays or CT scans, it is possible to obtain a quick result in critical situations, even before receiving RT-PCR test results.
In order to overcome these existing shortcomings, many computer-aided diagnostic tools (CADTs) using artificial intelligence (AI) and machine learning techniques have been utilized to support clinical findings from the imaging modalities [20]. These prediction techniques can precisely detect the disease, thereby aiding in the prevention and detection of epidemiologic risk [21]. These automated tools using cost-effective imaging modalities help address COVID-19 by preventing false negative reports, and can be used in the case of scarcity or non-availability of RT-PCR test kits in rural areas. Researchers have reported exhaustive studies using imaging modalities for the detection of COVID-19 [19,21,22,23,24,25,26,27,28,29]. Although these review papers have shown the significance of deep learning and machine learning algorithms for automated detection, this paper explores the following key points in association with detection:
  • The state-of-the-art AI techniques (deep neural network (DNN) and hand-crafted feature learning (HCFL) based models) used to detect COVID-19.
  • Analysis of the results of AI techniques with various imaging modalities.
  • The key challenges and future direction in the detection of COVID-19.
The structure of this paper is as follows. Section 2 describes the search criteria applied to accumulate and filter research articles from various sources. In Section 3, a consolidated review of extensively employed AI techniques using different medical imagery for COVID-19 detection is presented. The results using various datasets and methods are analyzed in Section 4. The key challenges, future scope, and recommendations are discussed in Section 5. Finally, the conclusions of our systematic review are presented in Section 6.

2. Search Criteria and Selection Process

The methodology from the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [30] was adopted in this study. The systematic search process was carried out using search queries on Scopus, Google Scholar, PubMed, and Springer. The following search terms were used: “COVID-19”, “Automated detection of COVID-19”, “Computer-aided diagnosis for COVID-19”, “Deep learning techniques for COVID-19” (using “Chest X-ray” OR “Chest computed tomography” OR “Lung Ultrasound”), and “Database for COVID-19”. In order to widen the search process, we also included AI algorithms with specific techniques, their subfields, and their applications (such as “CAD tools”, “Convolutional neural networks”, “Machine Learning”, “Classification”, and “Identification”). Related articles written in English from 2020 to 2021 were downloaded, not limited to the country or region of the author. The search process took 22 days to complete. A total of 1277 articles were found. The relevance of the downloaded articles to the main aims of this study was verified using a search string strategy. Articles related to detection of COVID-19 using clinical data, statistical analysis, and case studies with no data mining or deep learning techniques were excluded from the selection.
The relevance of a paper was based on its title, abstract, and materials and methods. An article was considered based on a voting scheme among the authors of the current study. The authors are well-versed in the field of deep learning and machine learning techniques using various imaging modalities. Low-quality and conference papers were removed from the database. A final total of 202 papers (184 articles with 18 review papers) were compiled and analyzed. The selection process is shown in Figure 2. To the best of our knowledge, we have considered the data mining and deep learning research publications reported to date for the identification of COVID-19 using various image modalities.

3. AI Techniques for COVID-19 Detection

Based on the state-of-the-art AI techniques to automatically detect COVID-19 using medical imagery, we categorized the methodologies as: (i) the DNN-based approach, (ii) the HCFL-based approach, and (iii) the hybrid approach. The input data consisted mainly of X-ray, CT, and US medical images of patients. In the DNN-based approach, convolutional neural networks (CNNs) are employed to automatically characterize the COVID-19 imagery. The DNN approach groups the feature extraction and classification components into an integrated neural network. In the HCFL-based approach, knowledge of feature extraction techniques is required, followed by feature selection/ranking and classification stages. The hybrid approach fuses methodologies from the DNN- and HCFL-based approaches to obtain promising results. Figure 3 illustrates the key components of a COVID-19 detection system.

3.1. COVID-19 Dataset: Medical Image

RT-PCR is the gold standard to diagnose COVID-19 using a nasal/throat swab. Sometimes the test results may not be available immediately, and the quality of the sample may cause a false negative result [31]. In such situations, chest imaging modalities such as X-ray, CT, and ultrasound (US) help confirm suspected COVID-19 cases [32]. The combination of AI techniques with these imaging modalities can help increase the efficiency of COVID-19 detection worldwide [32].
The development of an automated COVID-19 detection system based on chest X-ray imagery requires labeled images of normal and COVID-19 cases so as to train the system to differentiate healthy persons from COVID-19 patients. To test the system with an independent test dataset and to enhance its efficacy, it is necessary for these datasets to be made available publicly. With large datasets, it is possible for researchers to cross-verify existing AI models before installation in hospitals or testing centers. Hence, medical images such as chest X-ray, CT, and lung US images are essential for the development of an automated COVID-19 detection system. Many researchers, either independently or in collaboration with hospitals, have aggregated COVID-19 datasets from various imaging modalities and released them publicly to assist research communities. Figure 4 shows examples of several chest images from publicly available datasets.
The majority of the state-of-the-art AI techniques depend on publicly available datasets (refer to Table 1). The first dataset uses the X-ray as the imaging modality, and is very popular due to the huge dataset collected from nine different sources and made available in a single source (refer to the given source in Table 1). It is noted that there are only a few public sources available for US images, compared to X-ray and CT images. In addition to the public datasets mentioned in Table 1, there are also other sources which have not yet been as widely utilized as an X-ray image source (available in: https://public.roboflow.ai/classification/covid-19-and-pneumoniascans (accessed on 19 July 2021)), for CT images [33], https://www.kaggle.com/andrewmvd/covid19-ct-scans (accessed on 19 July 2021), and for US images [34], https://github.com/jannisborn/covid19_ultrasound (accessed on 19 July 2021).
The X-ray images collected from various researchers in different parts of the world are available in portable network graphics format with a size of 299 × 299 pixels (https://www.kaggle.com/tawsifurrahman/covid19-radiography-database (accessed on 21 August 2021)). In [35], COVID-19 CT images of various sizes were collected from medRxiv (https://www.medrxiv.org/, latest accessed on 29 November 2021) and bioRxiv (https://www.biorxiv.org/, latest accessed on 29 November 2021), posted between 19 January and 25 March 2020. The minimum, average, and maximum widths are 124, 383, and 1485 pixels, respectively; the minimum, average, and maximum heights are 153, 491, and 1853 pixels, respectively [35]. In [36], CT scans were collected from real patients in Sao Paulo Hospital, Brazil. CT images were also collected from municipal hospitals in Moscow, Russia; these are graded by severity, where CT1–CT4 indicate COVID-19-related findings. The number of cases in each category is: CT0—254; CT1—684; CT2—125; CT3—45; and CT4—2 [37]. The largest publicly available lung US dataset was released in [39]. In total, 261 recordings (202 videos and 59 images) were gathered from 216 patients using either convex or linear probes. In addition, the British Society of Thoracic Imaging has released a COVID-19 image database for teaching purposes (available online: https://www.bsti.org.uk/training-and-education/covid-19-bsti-imaging-database/ (accessed 19 July 2021)). Authors can use these underutilized datasets to enhance the heterogeneity of their own datasets. In addition, using the freely available datasets, researchers can initiate community-oriented research efforts to develop various models using AI techniques. Hence, it is also possible for researchers to generalize their systems across the various medical images.

3.2. Methodology

This section discusses the key processing stages covered by the different authors in the development of state-of-the-art COVID-19 detection systems.

3.2.1. Preprocessing/Segmentation

Preprocessing is the initial stage used to enhance image quality by improving contrast and standardizing image pixel intensity levels. This stage plays a major role in obtaining accurate results. Usually, image quality is greatly improved by employing the contrast limited adaptive histogram equalization (CLAHE) technique [40]. Denoising techniques such as the Kirsch filter [41], Wiener filter [42], and pixel intensity normalization are also implemented. Other preprocessing techniques such as edge detection using the Prewitt filter (PF) [42], histogram equalization (HE), and gamma correction (GC) [43] may be useful. The aforementioned techniques are used in several works and can significantly increase the accuracy of the results.
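As a concrete illustration of the contrast-enhancement step, the sketch below implements plain global histogram equalization in NumPy. It is a simplified stand-in for CLAHE, which additionally clips the histogram and operates on local tiles; the image size and intensity range used here are illustrative assumptions.

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image.

    A simplified stand-in for CLAHE: the core remapping via the
    cumulative distribution function (CDF) is the same, but CLAHE
    also clips the histogram and works on local tiles.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero CDF value
    # Map each intensity so the output histogram is roughly uniform.
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[img]

# Example: a low-contrast image confined to intensities 100-120
low_contrast = np.random.randint(100, 121, size=(64, 64), dtype=np.uint8)
enhanced = equalize_histogram(low_contrast)
```

After equalization, the narrow 100–120 intensity band is stretched over the full 0–255 range, which is the effect these preprocessing stages rely on.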
For CNN-based methods, a common set of preprocessing techniques is employed, including resizing and shuffling. Furthermore, images are converted to RGB and then input to a CNN. In order to visualize the image more distinctly, the image boundaries are smoothed by normalization using morphological filters and by applying various filtering and enhancement techniques. In addition, lung regions are extracted using segmentation techniques such as region growing [44], watershed [45], UNet [46], LinkNet [47] (a variant of UNet), and the variational data imputation method (VDI) [48].
In the process of training a deep learning model, there may sometimes be a shortage of data. In such situations, data augmentation techniques may be used to create additional data by slightly altering the existing data, thereby creating different versions of the original data. This acts as a regularizer and reduces overfitting while training the model. Data augmentation techniques such as rotation, cropping, flipping, and translation [49], Gaussian blur, and contrast adjustment have been used [50]. To address class imbalance, the synthetic minority oversampling technique (SMOTE) [51] has been employed by several authors. Synthetic images can also be created using a generative adversarial network (GAN) [52], conditional GAN [53], auxiliary classifier GAN (ACGAN) [54], and Keras’ ImageDataGenerator (https://keras.io/api/preprocessing/image/ (accessed on 16 September 2021)).
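The geometric augmentations mentioned above (rotation, flipping, translation) can be sketched as follows. This is a minimal NumPy illustration, not any particular study's pipeline; real pipelines typically use small random rotation angles and library routines such as Keras' ImageDataGenerator.

```python
import numpy as np

def augment(img: np.ndarray, seed: int = 0) -> list:
    """Generate simple augmented variants of a chest image.

    A minimal sketch of rotation, flipping, and translation;
    each variant is a slightly altered copy of the input.
    """
    rng = np.random.default_rng(seed)
    variants = [
        np.rot90(img),                               # 90-degree rotation
        np.fliplr(img),                              # horizontal flip
        np.flipud(img),                              # vertical flip
        np.roll(img, rng.integers(-5, 6), axis=1),   # small horizontal shift
    ]
    return variants

image = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)
augmented = augment(image)
```

Each call multiplies the effective dataset size, which is the regularizing effect described above.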

3.2.2. Feature Extraction

Feature extraction is mainly used to identify nonlinearities in lung imagery, thereby identifying lung abnormalities, if any. Several feature extraction techniques have been applied to detect COVID-19 more accurately. Handcrafted methods such as the discrete wavelet transform (DWT) [55], the gray-level co-occurrence matrix (GLCM), and Haralick texture features [56] are the most commonly used. In addition, features are also extracted with the two-dimensional (2D) curvelet transform (CTf) [57], residual exemplar local binary pattern (ResExLBP) [58], first-order statistical features (FOSF) [50], histogram of oriented gradients (HOG) [59], dual-tree complex contourlet transform (DTCT) [60], local directional number pattern (LDN) [61], Pillow library [62], fractional multichannel exponent moments (FrMEMs) [63], local binary pattern (LBP) [64], and multichannel fractional-order Legendre Fourier moments (MFrLFM) [65] to characterize textural information.
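As an illustration of one of these handcrafted descriptors, the sketch below computes a GLCM and two Haralick-style texture features (contrast and homogeneity) for a single horizontal pixel offset; this is a simplified version under stated assumptions (8 gray levels, one offset), whereas full implementations such as scikit-image's graycomatrix support multiple distances and angles.

```python
import numpy as np

def glcm_features(img: np.ndarray, levels: int = 8):
    """Gray-level co-occurrence matrix (GLCM) features, simplified.

    Considers only the horizontal neighbor offset (dx=1, dy=0).
    """
    # Quantize 8-bit intensities to a small number of gray levels.
    q = (img.astype(np.float64) / 256 * levels).astype(np.int64)
    glcm = np.zeros((levels, levels), dtype=np.float64)
    # Count horizontally adjacent gray-level pairs.
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()  # normalize counts to probabilities
    i, j = np.indices((levels, levels))
    contrast = np.sum(glcm * (i - j) ** 2)               # local intensity variation
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))   # closeness to the diagonal
    return contrast, homogeneity

uniform = np.full((32, 32), 128, dtype=np.uint8)  # texture-free test image
c, h = glcm_features(uniform)
```

A texture-free image yields zero contrast and maximal homogeneity; abnormal lung texture shifts both measures, which is what these descriptors exploit.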
Similarly, features have also been extracted using CNN-based approaches. Here, base architectures such as ResNet101 [66], AlexNet [67], DenseNet-201 [68], VGG16 [69], GoogLeNet [70], MobileNetv2 [71], Inceptionv3 [72], SqueezeNet [73], VGG19 [74], and Xception [75] have been adapted for feature learning and extraction. Transfer learning (TL) has been applied to cope with the scarcity of freely accessible labeled medical images. In addition to TL, methods such as the multilayer perceptron convolutional neural network (MLP-CNN) have been used to handle mixed data types consisting of numerical/categorical and image data [76]. Similarly, a high-resolution network (HRNet) has been used to extract detailed features [77]. Authors have also developed customized CNN models to improve system performance.

3.2.3. Feature Selection/Optimization

Feature selection is employed to reduce redundant content while preserving significant information. The sequential feature selection algorithm (SFS) [78], chaotic salp swarm algorithm (CSSA) [79], advanced squirrel search optimization algorithm (ASSOA) [80], and harmony search (HS) [81] algorithm are extensively utilized to reduce redundant information in feature representations. Similarly, ReliefF and neighborhood component analysis (NCA) are combined to select optimal features, i.e., RFINCA [82]. In addition, methods such as binary gray wolf optimization (GWO) [83] and hybrid social group optimization (HSGO) [84] have proven their efficacy in providing the best optimized features. Researchers have also applied the fractional-order marine predators algorithm (FO-MPA) [85], minimum redundancy maximum relevance (mRMR) [86], and manta ray foraging optimization (MRFO) [63] to select the most significant features. Feature dimensionality reduction has been undertaken using the t-distributed stochastic neighbor embedding (t-SNE) technique [87] and principal component analysis (PCA) [88]. Apart from these methods, feature selection using mutual information (MI) [89], Relief-F [90], the dragonfly algorithm (DA) [91], and the guided whale optimization algorithm (Guided WOA) [92] has also been employed. In addition, feature selection has been performed using maximum entropy and the ANOVA test [93].
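As an illustration of the dimensionality-reduction step, the sketch below implements PCA via the singular value decomposition; the data shapes are illustrative assumptions, and production code would normally use a library implementation such as sklearn.decomposition.PCA.

```python
import numpy as np

def pca_reduce(X: np.ndarray, n_components: int) -> np.ndarray:
    """Project feature vectors onto their top principal components.

    Centers the data, takes the SVD, and keeps the directions of
    greatest variance; a minimal version of the PCA step above.
    """
    Xc = X - X.mean(axis=0)  # center each feature
    # Rows of Vt are the principal axes, ordered by singular value.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(42)
features = rng.normal(size=(100, 50))  # 100 samples, 50 extracted features
reduced = pca_reduce(features, n_components=10)
```

Here a 50-dimensional feature vector per image is compressed to 10 components before classification, which is the redundancy-reduction effect described above.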
Because optimizers are a crucial part of neural network training, the most commonly used algorithms in DNN approaches are stochastic gradient descent, the adaptive learning rate optimization algorithm [94], and root mean square propagation [95], which are used to update the network weights. CNN with GWO, and whale optimization with the BAT algorithm, have been employed to tune hyperparameters [96,97]. Furthermore, biogeography-based optimization [98] and the multi-objective differential evolution (MODE) parameter tuning method [99] have been used to optimize parameters.

3.2.4. Classification

In the classification stage, a decision is made on test images by predicting their labels. In order to categorize COVID-19 infections, highly accurate classifiers play an important role. Techniques such as random forest (RF) [100], the support vector machine (SVM) [101], and the bagging tree classifier [102] have proven their efficacy in multiclass classification. In addition, the k-nearest neighbor (k-NN) [103], decision tree (DT) [104], Naïve Bayes (NB) [105], artificial neural network (ANN) [106], generalized regression neural network (GRNN) [107], MLP neural network [108], probabilistic neural network (PNN) [109], and extreme learning machine (ELM) [110] classifiers are also used by the research community. Moreover, adaptive boosting (AdaBoost) [111], eXtreme Gradient Boosting (XGBoost) [112], and logistic regression (LR) [113] have been incorporated by various investigators. In general, the authors selected classifiers based on the best results achieved for the extracted features. Table 2, Table 3, Table 4 and Table 5 summarize the state-of-the-art techniques used in the automated detection of COVID-19 with various image modalities.
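As a minimal illustration of one of these classifiers, the sketch below implements k-NN with majority voting. The two synthetic clusters stand in for extracted feature vectors of hypothetical "normal" and "COVID-19" classes; it is not any particular study's pipeline, and libraries such as scikit-learn provide optimized equivalents.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k: int = 3):
    """k-nearest neighbor classification by majority vote,
    using Euclidean distance in feature space."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)  # distance to every training sample
        nearest = y_train[np.argsort(dists)[:k]]     # labels of the k closest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])        # majority vote
    return np.array(preds)

# Two well-separated clusters standing in for feature vectors of
# "normal" (label 0) and "COVID-19" (label 1) images.
X_train = np.vstack([np.zeros((10, 4)), np.ones((10, 4)) * 5])
y_train = np.array([0] * 10 + [1] * 10)
X_test = np.array([[0.1, 0.1, 0.1, 0.1], [4.9, 5.0, 5.1, 4.8]])
pred = knn_predict(X_train, y_train, X_test)
```

The same vote-over-neighbors decision applies whether the features come from handcrafted descriptors or from a CNN's penultimate layer.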

4. Results

From our extensive literature review, it was observed that many CAD tools in various medical fields have used accuracy, sensitivity or recall, specificity, positive predictive value (PPV) or precision, F-measure or F-score, and area under the curve (AUC) to evaluate system performance [274,275,276]. Similarly, the performance of CAD tools for the identification of COVID-19 was evaluated using the same performance parameters. Let TP, TN, FP, and FN indicate true positive, true negative, false positive, and false negative counts, respectively. The measures are given by the following equations:
Accuracy = (TP + TN)/(TP + TN + FP + FN)
Sensitivity = TP/(TP + FN)
Specificity = TN/(TN + FP)
PPV = TP/(TP + FP)
F1-score = 2TP/(2TP + FP + FN)
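The measures above can be computed directly from confusion-matrix counts, for example (the specific counts below are illustrative):

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Performance measures from confusion-matrix counts,
    following the equations above."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # also called recall
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),           # also called precision
        "f1": 2 * tp / (2 * tp + fp + fn),
    }

# Example: 90 true positives, 85 true negatives, 5 FP, 10 FN
m = classification_metrics(tp=90, tn=85, fp=5, fn=10)
# sensitivity = 90/100 = 0.90, specificity = 85/90 ≈ 0.944
```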
In all performance measures, the higher the value, the better the performance of the model. The developed AI models for COVID-19 detection using various medical images, such as X-ray, CT, and US, can be categorized into 2, 3, 4, and 5 classes per imaging modality, as shown in Figure 5.
Figure 5 reveals that two-class classification (Healthy vs. COVID-19 or COVID-19 vs. NonCOVID-19) was the most frequently reported among the different imaging modalities. Combinations of different class categorizations were also observed in CADTs which used X-ray images. Table 6 conveys the average performance outcomes of the systems considered in the present review irrespective of the number of cases. Many of the studies used publicly available datasets and achieved comparable results.
It is observed from Table 6 that the systems developed with X-ray and CT images included five-class classification, achieving a Cvd.Acc (avg.) of 92.41% using X-ray images. It is also observed that two-class models are no longer valid when other diseases with similar symptoms are presented [178]. It is noted from Table 2, Table 3, Table 4 and Table 5 that few studies have performed four-class (normal vs. COVID-19 vs. viral pneumonia (VP) vs. bacterial pneumonia (BP)) classification [114,118,138,154,161,179,189,194,264]. These obtained a Cvd.Acc (avg.) of 89.91%. Hence, for further analysis we considered models which can categorize three or more classes. Box plot analysis was carried out to obtain the overall performance of the three-class classification systems used in COVID-19 detection. Figure 6 shows the box plots for Cvd.Acc, Cvd.Sen, Cvd.Spe, F1-score, and AUC values of the reported AI methods in the three-class classification scenario. Box plots represent the distribution characteristics of performance measures based on the minimum, first quartile, median, third quartile, and maximum.
It is noted from Figure 6 that AI techniques using X-ray imagery showed acceptable performance when compared to other medical images. For the three-class scenario, considering all state-of-the-art techniques, the methods achieved Cvd.Acc (avg.) of 94.78%, 94.55%, and 94.99% using X-ray, CT, and both X-ray and CT, respectively. Further, we also analyzed the systems which can categorize three or more classes. It is observed from Table 2 that ResNet50 with DWT and GLCM [114], customized CNNs [118,154,179,189], GoogLeNet [138], InceptionNet [141], AlexNet [160], a combination of DenseNet103 and ResNet18 [148], an ensemble of models such as InceptionResNetV2, ResNet152V2, VGG16, and DenseNet201 [153], and a grouping of MobileNet and InceptionV3 [161] were effectively used for four-class classification using X-ray images. Authors also used CNN models for five-class classification using X-ray images [129,168,177]. From Table 2, it is also noted that only RF [114], SVM [179], and an ensemble of classifiers [194] achieved comparable results for four-class categorization. Herein, the RF classifier showed its suitability for multiclass categorization by achieving a Cvd.Acc of 98.48%. From Table 3, it is observed that a grouping of ResNet152V2, DenseNet201, and VGG16 [212], a deep learning model [216], and PSSPNN [232] were used to categorize four-class CT images. The combination of various DNN models achieved a Cvd.Acc of 98.83% [212]. From Table 4, it is noted that minimal work has been reported using lung US imagery. In [258], an autoencoder and a modified DenseNet201 are used for four-class classification, achieving a result over 17% better than the traditional DenseNet. In [260,264], the systems were tested with X-ray and CT modalities and achieved better classification for four classes. The use of VGG19 [260] and VGG16 [264] has shown significance in four-class classification, as noted in Table 5.
In [265], a combination of DenseNet103 with Haralick textural features and the ResNet101 model also showed promising performance. It is furthermore observed that, across all modalities, only the VGG19 model was used for three-class categorization [273]. It achieved better results for US images when compared to X-ray and CT.

5. Discussion

Investigators have developed many models to detect COVID-19 during the past two years and have shown that there is a role for AI in detecting COVID-19 [19,21,22,23,24,25,26,27,28,29,277,278,279,280,281]. The 184 technical papers reviewed in this study provide up-to-date knowledge on the usage of AI techniques in detecting COVID-19. The developed models were categorized based on DNN, HCFL, and hybrid methodologies. The number of articles based on the three methodologies are highlighted in Figure 7.
It is observed from Figure 7 that 70% of the papers reported the use of a DNN-based approach, which included pre-trained networks and customized CNNs. Very few models were developed to quantify the severity of COVID-19 [282,283,284,285,286]. It is also noted that the computational cost of various deep learning approaches is high [287,288]. From Figure 5, 40%, 78.26%, and 50% of the papers using X-ray, CT, and all modalities, respectively, reported only two-class classification. However, it is difficult to demonstrate the significance of such models in real-time settings that require categorizing multiple classes with similar symptoms. It is also observed from Table 6 that, for four-class classification, the Cvd.Sen and Cvd.Spe of the methods increased by 4.5% and 1.66%, respectively, using CT images compared to X-ray images. In most cases, CNNs were able to successfully extract significant information from lung tissue with pneumonia (i.e., BP and VP). Pre-trained networks such as ResNet, DenseNet, and VGG were successfully used in all modalities for more than three-class categorization. However, the comparison of pre-trained networks for binary classification may not be as useful, since such models may fail to distinguish diseases with symptoms similar to COVID-19.
In short, it is very difficult to make a comprehensive comparison of methodologies at present because the methods were evaluated using various datasets of different sizes. Hence, general conclusions about the relative merits of the algorithms are limited. Few investigators performed k-fold cross-validation; in most cases, the hold-out method was used. Therefore, it is difficult to assess the consistency of the developed models.
Although several models have been developed to detect COVID-19, there are many factors involved in the analysis of COVID-19 imagery, which are listed as follows:
Implementation of multiclass categorization models: Many of the studies implemented two-class categorization; however, these are restricted to only understanding the features of normal and COVID-19 images. For disease symptoms similar to COVID-19, there is a need for algorithms which can discriminate among various classes, such as normal, COVID-19, pneumonia, BP, VP, tuberculosis, and lung opacity. Hence, there is a need for models which can understand the inherent characteristics of various diseases and predict the severity level. Investigators should therefore concentrate on the generalization aspects of the developed models by considering all image modalities.
Implementation aspects: State-of-the-art techniques have trained models using a transfer learning approach. Although the results are promising, the underlying architectures were developed to handle natural color images. Hence, there is a need for DNNs trained from scratch on real medical images. In addition, the selection of appropriate hyperparameters to obtain improved accuracy will play a significant role in training future networks. The discriminative power of AI techniques can be improved by training the system with multiple views of medical images, which, however, requires extra time. Hence, compact feature representations of COVID-19 and similar diseases are needed to handle huge datasets.
CADTs to analyze the prognosis of COVID-19: Researchers should exploit hybrid methodologies to help medical doctors understand treatment outcomes for COVID-19. It is also important to develop models that assess the health condition of post-COVID-19 patients for better health management.
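On the hyperparameter selection raised above, a simple starting point is an exhaustive grid search over a small search space. The sketch below uses a stand-in scoring function and illustrative parameter values, both assumptions for the example; a real setup would train a network for each configuration and return its validation accuracy:

```python
import itertools

def grid_search(score_fn, space):
    """Evaluate every hyperparameter combination in `space` and return
    the configuration with the largest score."""
    names = list(space)
    best_cfg, best_score = None, float("-inf")
    for values in itertools.product(*(space[n] for n in names)):
        cfg = dict(zip(names, values))
        score = score_fn(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Illustrative search space (values are assumptions, not recommendations).
space = {"lr": [1e-4, 1e-3, 1e-2], "batch_size": [16, 32, 64], "dropout": [0.2, 0.5]}

# Stand-in for a validation score; peaks at lr=1e-3, dropout=0.5.
def mock_score(cfg):
    return -abs(cfg["lr"] - 1e-3) - abs(cfg["dropout"] - 0.5)

best, best_score = grid_search(mock_score, space)
```

For larger spaces, random search or Bayesian optimization scales better than exhaustive enumeration, since the number of combinations grows multiplicatively with each added hyperparameter.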

5.1. Future Trends

Since the onset of the COVID-19 pandemic, home isolation and quarantine have been implemented by governments across the world to control the spread of the pandemic [289,290]. In addition, risk factors such as fever, weakness, heart disease, and dry cough are the most critical contributors to patient mortality [291]. A person who has tested positive for COVID-19, or who has been in close contact with a confirmed case, has to undergo a period of quarantine. In cases where home quarantine is required, especially in rural areas of developing countries, the hospital may require frequent health updates from the patient. This can be done via smartphone: the patient monitors his/her own temperature and/or SpO2 level and reports the results to the medical doctor. In this way, the doctor is able to monitor patient health remotely and provide suitable prescriptions or medications when required. There is also a chance that an antigen rapid self-test kit gives a negative result despite the patient showing symptoms of COVID-19. In addition, there may be other issues, such as people with disabilities and elderly people who depend on caregivers. Considering all of these issues, the best solution would be to monitor the patient remotely without the need for frequent visits to the hospital.
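Such self-reported readings could feed a simple rule-based triage step on the clinician's side. The thresholds in the sketch below are illustrative placeholders only, not clinical guidance:

```python
def triage(temp_c, spo2):
    """Classify a home-quarantined patient's self-reported temperature (deg C)
    and SpO2 (%) into a follow-up level. Thresholds are illustrative
    placeholders, not medical advice."""
    if spo2 < 94 or temp_c >= 39.0:
        return "alert"       # notify the doctor immediately
    if spo2 < 96 or temp_c >= 38.0:
        return "follow-up"   # request another reading soon
    return "routine"         # continue normal daily reporting
```

A deployed system would combine several readings over time rather than a single measurement, and would let the doctor adjust thresholds per patient.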
Recent advancements in the Internet of Things (IoT) have paved the way for improved healthcare support services [292]. In the future, a cloud-based wireless healthcare system could support the epidemiological monitoring of COVID-19, as shown in Figure 8. X-ray images of the patient's chest can be taken at selected rural hospitals. X-ray imaging is a fast, inexpensive, and minimally invasive procedure, and X-ray units are available in most rural hospitals. Before collecting the data, the institutional ethics committee's approval should be granted, and the imaging data should be collected after obtaining written consent from the patients. The collected data are stored on a secure cloud-based server with a unique identification number for each patient. X-ray images are then analyzed using a cloud-based system, and observations are sent to the medical doctors. On close examination of the imagery, the doctor provides suitable advice to the patient along with prescriptions and treatment instructions. Hence, medical doctors and their patients can interact remotely for any further treatment, even in rural communities.
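The workflow just described (unique patient identifiers, cloud-side analysis, and observations routed to the doctor) can be sketched minimally as follows; the record fields and the classifier stub are assumptions for illustration, not part of any reviewed system:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    """Minimal record stored on the cloud server: each patient receives a
    unique identifier, a reference to the X-ray image, and the system's
    observation for the reviewing doctor."""
    hospital: str
    image_path: str
    patient_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    observation: str = ""

def analyze_and_route(record, classify):
    """Run the (assumed) cloud-side classifier on the stored image reference
    and attach its observation so the doctor can review it remotely."""
    record.observation = classify(record.image_path)
    return record

# Stand-in classifier; a real system would invoke the trained model here.
rec = analyze_and_route(
    PatientRecord("rural-clinic-07", "xrays/p001.png"),
    classify=lambda path: "suspected COVID-19; radiologist review advised",
)
```

In production, the identifier would be the only patient reference leaving the hospital, keeping personal data and images decoupled for privacy.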

5.2. Limitations of the Review

1. This review considered only manuscripts written in English.
2. In this review process, many databases were explored using different search queries; thus, a few relevant works may have been missed. The review was based on technical papers on COVID-19 detection rather than on clinical studies.
3. The present work provides a systematic review of AI techniques, their analysis, and their advancement. However, the transformation of these techniques before and after COVID-19 is not given great importance in this study.
The scope of this review was the comprehension of AI techniques using different imaging modalities. The CT scan, a fast and feasible method, has proven to be a more sensitive tool for diagnosing COVID-19 than the RT-PCR test [293]. However, the technique involves a high dose of radiation and is not available in the rural healthcare sectors of developing countries [294,295]. In contrast, the chest X-ray is a universally available technique with 30–70 times lower radiation exposure, and the test is performed during the initial investigation of COVID-19 [296]. Lung US is an alternative modality that produces results similar to those of chest CT and is considered superior to the chest X-ray in diagnosing lung pathology in COVID-19 infection. Nonetheless, this modality is not useful when the pleura is spared from the pneumonic pathology during the early course of the disease [297]. Recent developments in the diagnosis of COVID-19 using signals such as respiratory sounds, speech signals, and coughing sounds have also attracted many researchers [298,299]. In the future, these signals can be combined with imaging modalities to enhance system performance using various deep learning approaches.
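One simple way to combine imaging and signal modalities is late fusion: each per-modality model produces class probabilities, which are then averaged. The sketch below uses hypothetical probability vectors as an illustration:

```python
def late_fusion(prob_lists, weights=None):
    """Combine class-probability vectors from several modalities
    (e.g., X-ray and cough audio) by weighted averaging; returns the
    fused probabilities and the index of the winning class."""
    n_classes = len(prob_lists[0])
    if weights is None:
        weights = [1.0 / len(prob_lists)] * len(prob_lists)  # equal weights
    fused = [sum(w * p[c] for w, p in zip(weights, prob_lists))
             for c in range(n_classes)]
    return fused, max(range(n_classes), key=fused.__getitem__)

# Hypothetical per-modality outputs over [normal, COVID-19, pneumonia].
xray  = [0.20, 0.70, 0.10]
cough = [0.40, 0.45, 0.15]
fused, label = late_fusion([xray, cough])
```

Unequal weights allow the more reliable modality (typically imaging) to dominate; learned fusion layers are a natural deep learning extension of this averaging scheme.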

6. Conclusions

AI techniques are not a substitute for medical doctors and expert radiologists. However, they can efficiently support and automate the analysis of medical imagery. The development of CAD tools to detect COVID-19 has grown significantly in recent years, contributing to the body of clinical and medical research. The early detection of COVID-19 using AI techniques would help prevent the progression of the pandemic by enabling rapid decision-making. This study aimed to observe and analyze the growth and improvement of AI techniques for the detection of COVID-19. In this review, 184 papers were selected and summarized. The results showed that DNN, HCFL, and hybrid approaches all have a high potential to predict COVID-19 cases. The classification, segmentation, and quantification of the severity level of COVID-19 on heterogeneous datasets can be improved if medical experts play a significant role in building the framework for AI techniques, providing significant knowledge of image features and real-world requirements.

Author Contributions

Conceptualization, A.G., U.R. and U.R.A.; Methodology, A.G. and U.R.; Validation and review, C.P.O., W.Y.C., N.A.K., K.H., P.D.B. and S.C.; Review and editing, C.P.O., E.J.C. and U.R.A.; Original draft preparation, A.G., U.R., S.N., M.R.G., C.D. and J.S.; Visualization, U.R.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was funded by the Ministry of Higher Education, Malaysia (grant number MRUN2019-3D).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhu, N.; Zhang, D.; Wang, W.; Li, X.; Yang, B.; Song, J.; Zhao, X.; Huang, B.; Shi, W.; Lu, R.; et al. A novel coronavirus from patients with pneumonia in China, 2019. N. Engl. J. Med. 2020, 382, 727–733. [Google Scholar] [CrossRef]
  2. Li, Q.; Guan, X.; Wu, P.; Wang, X.; Zhou, L.; Tong, Y.; Ren, R.; Leung, K.S.M.; Lau, E.H.Y.; Wong, J.Y.; et al. Early Transmission Dynamics in Wuhan, China, of Novel Coronavirus–Infected Pneumonia. N. Engl. J. Med. 2020, 382, 1199–1207. [Google Scholar] [CrossRef]
  3. Zheng, M.; Gao, Y.; Wang, G.; Song, G.; Liu, S.; Sun, D.; Xu, Y.; Tian, Z. Functional exhaustion of antiviral lymphocytes in COVID-19 patients. Cell. Mol. Immunol. 2020, 17, 533–535. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Zhang, J.; Litvinova, M.; Wang, W.; Wang, Y.; Deng, X.; Chen, X.; Li, M.; Zheng, W.; Yi, L.; Chen, X.; et al. Evolving epidemiology and transmission dynamics of coronavirus disease 2019 outside Hubei province, China: A descriptive and modelling study. Lancet Infect. Dis. 2020, 20, 793–802. [Google Scholar] [CrossRef]
  5. Cascella, M.; Rajnik, M.; Aleem, A.; Dulebohn, S.C.; Di Napoli, R. Features, Evaluation and Treatment Coronavirus (COVID-19); Stat Pearls Publishing: Treasure Island, FL, USA, 2020. [Google Scholar]
  6. Wan, Y.; Shang, J.; Graham, R.; Baric, R.S.; Li, F. Receptor recognition by novel coronavirus from Wuhan: An analysis based on decade-long structural studies of SARS. J. Virol. 2020, 94, e00127-20. [Google Scholar] [CrossRef] [Green Version]
  7. Hoffmann, M.; Kleine-Weber, H.; Schroeder, S.; Krüger, N.; Herrler, T.; Erichsen, S.; Schiergens, T.S.; Herrler, G.; Wu, N.H.; Nitsche, A.; et al. SARS-CoV-2 cell entry depends on ACE2 and TMPRSS2 and is blocked by a clinically proven protease inhibitor. Cell 2020, 181, 271–280. [Google Scholar] [CrossRef]
  8. Sims, A.C.; Baric, R.S.; Yount, B.; Burkett, S.E.; Collins, P.L.; Pickles, R.J. Severe acute respiratory syndrome coronavirus infection of human ciliated airway epithelia: Role of ciliated cells in viral spread in the conducting airways of the lungs. J. Virol. 2005, 79, 15511–15524. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Tang, N.L.-S.; Chan, P.K.-S.; Wong, C.-K.; To, K.-F.; Wu, A.K.-L.; Sung, Y.-M.; Hui, D.S.-C.; Sung, J.J.-Y.; Lam, C.W.-K. Early Enhanced Expression of Interferon-Inducible Protein-10 (CXCL-10) and Other Chemokines Predicts Adverse Outcome in Severe Acute Respiratory Syndrome. Clin. Chem. 2005, 51, 2333–2340. [Google Scholar] [CrossRef] [Green Version]
  10. Xu, Z.; Shi, L.; Wang, Y.; Zhang, J.; Huang, L.; Zhang, C.; Liu, S.; Zhao, P.; Liu, H.; Zhu, L.; et al. Pathological findings of COVID-19 associated with acute respiratory distress syndrome. Lancet Respir. Med. 2020, 8, 420–422. [Google Scholar] [CrossRef]
  11. Yuki, K.; Fujiogi, M.; Koutsogiannaki, S. COVID-19 pathophysiology: A review. Clin. Immunol. 2020, 215, 108427. [Google Scholar] [CrossRef]
  12. Donnelly, C.A.; Ghani, A.C.; Leung, G.M.; Hedley, A.J.; Fraser, C.; Riley, S.; Abu-Raddad, L.J.; Ho, L.M.; Thach, T.Q.; Chau, P.; et al. Epidemiological determinants of spread of causal agent of severe acute respiratory syndrome in Hong Kong. Lancet 2003, 361, 1761–1766. [Google Scholar] [CrossRef] [Green Version]
  13. Goyal, P.; Choi, J.J.; Pinheiro, L.C.; Schenck, E.J.; Chen, R.; Jabri, A.; Satlin, M.J.; Campion, T.R., Jr.; Nahid, M.; Ringel, J.B.; et al. Clinical Characteristics of Covid-19 in New York City. N. Engl. J. Med. 2020, 382, 2372–2374. [Google Scholar] [CrossRef]
  14. Guan, W.-J.; Ni, Z.-Y.; Hu, Y.; Liang, W.-H.; Ou, C.-Q.; He, J.-X.; Liu, L.; Shan, H.; Lei, C.-L.; Hui, D.S.C.; et al. Clinical Characteristics of Coronavirus Disease 2019 in China. N. Engl. J. Med. 2020, 382, 1708–1720. [Google Scholar] [CrossRef] [PubMed]
  15. Young, B.E.; Ong, S.; Kalimuddin, S.; Low, J.G.; Tan, S.Y.; Loh, J.; Ng, O.T.; Marimuthu, K.; Ang, L.W.; Mak, T.M.; et al. Epidemiologic features and clinical course of patients infected with SARS-CoV-2 in Singapore. JAMA 2020, 323, 1488–1494. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Cheung, K.S.; Hung, I.; Chan, P.; Lung, K.C.; Tso, E.; Liu, R.; Ng, Y.Y.; Chu, M.Y.; Chung, T.; Tam, A.R.; et al. Gastrointestinal manifestations of SARS-CoV-2 infection and virus load in fecal samples from the Hong Kong cohort and systematic review and meta-analysis. Gastroenterology 2020, 159, 81–95. [Google Scholar] [CrossRef] [PubMed]
  17. Available online: https://www.webmd.com/lung/COVID19-digestive-symptoms (accessed on 15 June 2020).
  18. Liu, R.; Han, H.; Liu, F.; Lv, Z.; Wu, K.; Liu, Y.; Feng, Y.; Zhu, C. Positive rate of RT–PCR detection of SARS-CoV-2 infection in 4880 cases from one hospital in Wuhan, China, from Jan to Feb 2020. Clin. Chim. Acta 2020, 505, 172–175. [Google Scholar] [CrossRef] [PubMed]
  19. Kugunavar, S.; Prabhakar, C.J. Convolutional neural networks for the diagnosis and prognosis of the coronavirus disease pandemic. Vis. Comput. Ind. Biomed. Art 2021, 4, 12. [Google Scholar] [CrossRef]
  20. Shuja, J.; Alanazi, E.; Alasmary, W.; Alashaikh, A. COVID-19 open source data sets: A comprehensive survey. Appl. Intell. 2021, 51, 1296–1325. [Google Scholar] [CrossRef]
  21. Rehman, A.; Iqbal, M.; Xing, H.; Ahmed, I. COVID-19 Detection Empowered with Machine Learning and Deep Learning Techniques: A Systematic Review. Appl. Sci. 2021, 11, 3414. [Google Scholar] [CrossRef]
  22. Low, W.C.S.; Chuah, J.H.; Tee, C.A.T.H.; Anis, S.; Shoaib, M.A.; Faisal, A.; Khalil, A.; Lai, K.W. An Overview of Deep Learning Techniques on Chest X-ray and CT Scan Identification of COVID-19. Comput. Math. Methods Med. 2021, 2021, 5528144. [Google Scholar] [CrossRef]
  23. Ghaderzadeh, M.; Asadi, F. Deep Learning in the Detection and Diagnosis of COVID-19 Using Radiology Modalities: A Systematic Review. J. Health Eng. 2021, 2021, 6677314. [Google Scholar] [CrossRef]
  24. Ozsahin, I.; Sekeroglu, B.; Musa, M.S.; Mustapha, M.T.; Ozsahin, D.U. Review on Diagnosis of COVID-19 from Chest CT Images Using Artificial Intelligence. Comput. Math. Methods Med. 2020, 2020, 9756518. [Google Scholar] [CrossRef]
  25. Aishwarya, T.; Kumar, V.R. Machine Learning and Deep Learning Approaches to Analyze and Detect COVID-19: A Review. SN Comput. Sci. 2021, 2, 1–9. [Google Scholar] [CrossRef] [PubMed]
  26. Nayak, J.; Naik, B.; Dinesh, P.; Vakula, K.; Dash, P.B.; Pelusi, D. Significance of deep learning for Covid-19: State-of-the-art review. Res. Biomed. Eng. 2021, 1–24. [Google Scholar] [CrossRef]
  27. Alballa, N.; Al-Turaiki, I. Machine Learning Approaches in COVID-19 Diagnosis, Mortality, and Severity Risk Prediction: A Review. Inform. Med. Unlocked 2021, 24, 100564. [Google Scholar] [CrossRef] [PubMed]
  28. Bhattacharya, S.; Maddikunta, P.K.R.; Pham, Q.-V.; Gadekallu, T.R.; Chowdhary, C.L.; Alazab, M.; Piran, J. Deep learning and medical image processing for coronavirus (COVID-19) pandemic: A survey. Sustain. Cities Soc. 2021, 65, 102589. [Google Scholar] [CrossRef]
  29. Tayarani, M.H. Applications of artificial intelligence in battling against covid-19: A literature review. Chaos Solitons Fractals 2021, 142, 110338. [Google Scholar] [CrossRef] [PubMed]
  30. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Int. J. Surg. 2010, 8, 336–341. [Google Scholar] [CrossRef] [Green Version]
  31. Huang, P.; Liu, T.; Huang, L.; Liu, H.; Lei, M.; Xu, W.; Hu, X.; Chen, J.; Liu, B. Use of Chest CT in Combination with Negative RT-PCR Assay for the 2019 Novel Coronavirus but High Clinical Suspicion. Radiology 2020, 295, 22–23. [Google Scholar] [CrossRef]
  32. Cellina, M.; Martinenghi, C.; Marino, P.; Oliva, G. COVID-19 pneumonia—ultrasound, radiographic, and computed tomography findings: A comprehensive pictorial essay. Emerg. Radiol. 2021, 28, 519–526. [Google Scholar] [CrossRef]
  33. Yan, T. Ccap: A Chest Ct Dataset. 2020. Available online: https://ieee-dataport.org/authors/tao-yan (accessed on 29 November 2021).
  34. Available online: https://iclus-web.bluetensor.ai/ (accessed on 5 October 2020).
  35. Zhao, J.; Zhang, Y.; He, X.; Xie, P. COVID-CT-Dataset: A CT scan dataset about COVID-19. arXiv 2020, arXiv:2003.13865. [Google Scholar]
  36. Soares, E.; Angelov, P.; Biaso, S.; Froes, M.H.; Abe, D.K. SARS-CoV-2 CT-scan dataset: A large dataset of real patients CT scans for SARS-CoV-2 identification. MedRxiv 2020, 20078584. [Google Scholar] [CrossRef]
  37. Morozov, S.P.; Andreychenko, A.E.; Pavlov, N.A.; Vladzymyrskyy, A.V.; Ledikhova, N.V.; Gombolevskiy, V.A.; Blokhin, I.A.; Gelezhe, P.B.; Gonchar, A.V.; Chernina, V.Y. MosMedData: Chest CT scans with COVID-19 related findings dataset. arXiv 2020, arXiv:2005.06465. [Google Scholar]
  38. Ma, J.; Cheng, G.; Wang, Y.; An, X.; Gao, J.; Yu, Z.; Zhang, M.; Liu, X.; Deng, X.; Cao, S.; et al. COVID-19 CT Lung and Infection Segmentation Dataset. Zenodo 2020, 20, 3757476. [Google Scholar] [CrossRef]
  39. Born, J.; Wiedemann, N.; Cossio, M.; Buhre, C.; Brändle, G.; Leidermann, K.; Aujayeb, A.; Moor, M.; Rieck, B.; Borgwardt, K. Accelerating Detection of Lung Pathologies with Explainable Ultrasound Image Analysis. Appl. Sci. 2021, 11, 672. [Google Scholar] [CrossRef]
  40. Zuiderveld, K. Contrast Limited Adaptive Histogram Equalization. In Graphics Gems IV; Academic Press Professional: San Diego, CA, USA, 1994; pp. 474–485. [Google Scholar]
  41. Kirsch, R. Computer determination of the constituent structure of biological images. Comput. Biomed. Res. 1971, 4, 315–328. [Google Scholar] [CrossRef]
  42. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 2nd ed.; Prentice Hall: Englewood Cliffs, NJ, USA, 2002. [Google Scholar]
  43. Poynton, C. Digital Video and HDTV: Algorithms and Interfaces; Morgan Kaufmann Publishers: San Francisco, CA, USA, 2003. [Google Scholar]
  44. Haralick, R.M.; Shapiro, L.G. Image segmentation techniques. Comput. Vis. Graph. Image Process. 1985, 29, 100–132. [Google Scholar] [CrossRef]
  45. Meyer, F. Topographic distance and watershed lines. Signal Process. 1994, 38, 113–125. [Google Scholar] [CrossRef]
  46. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
  47. Chaurasia, A.; Culurciello, E. LinkNet: Exploiting encoder representations for efficient semantic segmentation. In Proceedings of the 2017 IEEE Visual Communications and Image Processing (VCIP), Saint Petersburg, FL, USA, 10–13 December 2017; pp. 1–4. [Google Scholar]
  48. Selvan, R.; Dam, E.B.; Detlefsen, N.S.; Rischel, S.; Sheng, K.; Nielsen, M.; Pai, A. Lung segmentation from chest X-rays using variational data imputation. arXiv 2020, arXiv:2005.10052. [Google Scholar]
  49. Rahman, S.; Sarker, S.; Al Miraj, A.; Nihal, R.A.; Haque, A.K.M.N.; Al Noman, A. Deep Learning–Driven Automated Detection of COVID-19 from Radiography Images: A Comparative Analysis. Cogn. Comput. 2021, 1–30. [Google Scholar] [CrossRef] [PubMed]
  50. Chandra, T.B.; Verma, K.; Singh, B.K.; Jain, D.; Netam, S.S. Coronavirus disease (COVID-19) detection in Chest X-ray images using majority voting based classifier ensemble. Expert Syst. Appl. 2021, 165, 113909. [Google Scholar] [CrossRef]
  51. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic Minority Over-sampling Technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
  52. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. In Proceedings of the 4th International Conference on Learning Representations, ICLR 2016—Conference Track Proceedings, San Juan, PR, USA, 2–4 May 2016. [Google Scholar]
  53. Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784. [Google Scholar]
  54. Waheed, A.; Goyal, M.; Gupta, D.; Khanna, A.; Al-Turjman, F.; Pinheiro, P.R. CovidGAN: Data Augmentation Using Auxiliary Classifier GAN for Improved Covid-19 Detection. IEEE Access 2020, 8, 91916–91923. [Google Scholar] [CrossRef]
  55. Mallat, S.G. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 674–693. [Google Scholar] [CrossRef] [Green Version]
  56. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern 1973, SMC-3, 610–621. [Google Scholar] [CrossRef] [Green Version]
  57. Górszczyk, A.; Malinowski, M.; Bellefleur, G. Enhancing 3D post-stack seismic data acquired in hardrock environment using 2D curvelet transform. Geophys Prospect. 2015, 63, 903–918. [Google Scholar] [CrossRef]
  58. Tuncer, T.; Dogan, S.; Ozyurt, F. An automated Residual Exemplar Local Binary Pattern and iterative ReliefF based COVID-19 detection method using chest X-ray image. Chemom. Intell. Lab. Syst. 2020, 203, 104054. [Google Scholar] [CrossRef]
  59. Dalal, N.; Triggs, B. Histograms of Oriented Gradients for Human Detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005. [Google Scholar] [CrossRef] [Green Version]
  60. Yin, M.; Liu, W.; Zhao, X.; Guo, Q.-W.; Bai, R.-F. Image denoising using trivariate prior model in nonsubsampled dual-tree complex contourlet transform domain and non-local means filter in spatial domain. Optik 2013, 124, 6896–6904. [Google Scholar] [CrossRef]
  61. Ranjbarzadeh, R.; Ghoushchi, S.J.; Bendechache, M.; Amirabadi, A.; Ab Rahman, M.N.; Saadi, S.B.; Aghamohammadi, A.; Forooshani, M.K. Lung Infection Segmentation for COVID-19 Pneumonia Based on a Cascade Convolutional Network from CT Images. BioMed Res. Int. 2021, 2021, 5544742. [Google Scholar] [CrossRef]
  62. Available online: https://pillow.readthedocs.io/en/3.1.x/reference/Image.html (accessed on 20 May 2020).
  63. Elaziz, M.A.; Hosny, K.M.; Salah, A.; Darwish, M.M.; Lu, S.; Sahlol, A.T. New machine learning method for image-based diagnosis of COVID-19. PLoS ONE 2020, 15, e0235187. [Google Scholar] [CrossRef]
  64. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  65. Hosny, K.M.; Darwish, M.M.; Li, K.; Salah, A. COVID-19 diagnosis from CT scans and chest X-ray images using low-cost Raspberry Pi. PLoS ONE 2021, 16, e0250688. [Google Scholar] [CrossRef] [PubMed]
  66. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef] [Green Version]
  67. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Net Works. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  68. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  69. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  70. Zhou, B.; Lapedriza, A.; Torralba, A.; Oliva, A. Places: An Image Database for Deep Scene Understanding. arXiv 2016, arXiv:1610.02055. [Google Scholar] [CrossRef]
  71. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  72. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  73. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
  74. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. arXiv 2015, arXiv:1409.0575. [Google Scholar] [CrossRef] [Green Version]
  75. Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807. [Google Scholar] [CrossRef] [Green Version]
  76. Ahsan, M.; Alam, T.E.; Trafalis, T.; Huebner, P. Deep MLP-CNN Model Using Mixed-Data to Distinguish between COVID-19 and Non-COVID-19 Patients. Symmetry 2020, 12, 1526. [Google Scholar] [CrossRef]
  77. Ahmed, S.; Hossain, T.; Hoque, O.B.; Sarker, S.; Rahman, S.; Shah, F.M. Automated COVID-19 Detection from Chest X-ray Images: A High-Resolution Network (HRNet) Approach. SN Comput. Sci. 2021, 2, 1–17. [Google Scholar] [CrossRef]
  78. Guyon, I.; Elisseeff, A. An introduction to variable and feature selection. J. Mach. Learn. Res. 2003, 3, 1157–1182. [Google Scholar]
  79. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  80. El-Kenawy, E.-S.M.; Mirjalili, S.; Ibrahim, A.; Alrahmawy, M.; El-Said, M.; Zaki, R.M.; Eid, M.M. Advanced Meta-Heuristics, Convolutional Neural Networks, and Feature Selectors for Efficient COVID-19 X-ray Chest Image Classification. IEEE Access 2021, 9, 36019–36037. [Google Scholar] [CrossRef]
  81. Geem, Z.W.; Kim, J.H.; Loganathan, G. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  82. Tuncer, T.; Akbal, E.; Dogan, S. An automated snoring sound classification method based on local dual octal pattern and iterative hybrid feature selector. Biomed. Signal Process. Control 2021, 63, 102173. [Google Scholar] [CrossRef] [PubMed]
  83. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  84. Singh, A.K.; Kumar, A.; Mahmud, M.; Kaiser, M.S.; Kishore, A. COVID-19 Infection Detection from Chest X-ray Images Using Hybrid Social Group Optimization and Support Vector Classifier. Cogn. Comput. 2021, 1–13. [Google Scholar] [CrossRef] [PubMed]
  85. Sahlol, A.T.; Yousri, D.; Ewees, A.A.; Al-Qaness, M.A.A.; Damasevicius, R.; Elaziz, M.A. COVID-19 image classification using deep features and fractional-order marine predators algorithm. Sci. Rep. 2020, 10, 1–15. [Google Scholar] [CrossRef]
  86. Peng, H.; Long, F.; Ding, C. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1226–1238. [Google Scholar] [CrossRef] [PubMed]
  87. van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  88. Krzanowski, W.J. Principles of Multivariate Analysis; Oxford University Press: Oxford, UK, 1988. [Google Scholar]
  89. Kojadinovic, I. Relevance measures for subset variable selection in regression problems based on k-additive mutual information. Comput. Stat. Data Anal. 2005, 49, 1205–1227. [Google Scholar] [CrossRef]
  90. Spolaor, N.; Cherman, E.A.; Monard, M.C.; Lee, H.D. ReliefF for Multi-label Feature Selection. In Proceedings of the 2013 Brazilian Conference on Intelligent Systems, Fortaleza, Brazil, 19–24 October 2013; pp. 6–11. [Google Scholar]
  91. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073. [Google Scholar] [CrossRef]
  92. El-Kenawy, E.-S.M.; Ibrahim, A.; Mirjalili, S.; Eid, M.M.; Hussein, S.E. Novel Feature Selection and Voting Classifier Algorithms for COVID-19 Classification in CT Images. IEEE Access 2020, 8, 179317–179335. [Google Scholar] [CrossRef]
  93. Hasan, A.M.; Al-Jawad, M.M.; Jalab, H.A.; Shaiba, H.; Ibrahim, R.W.; Al-Shamasneh, A.R. Classification of Covid-19 Coronavirus, Pneumonia and Healthy Lungs in CT Scans Using Q-Deformed Entropy and Deep Learning Features. Entropy 2020, 22, 517. [Google Scholar] [CrossRef]
  94. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the International Conference Learn Represent (ICLR), San Diego, CA, USA, 5–8 May 2015. [Google Scholar]
  95. Tieleman, T.; Hinton, G. Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning. Tech. Rep. 2012, 4, 26–31. [Google Scholar]
  96. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  97. Pathan, S.; Siddalingaswamy, P.; Ali, T. Automated Detection of Covid-19 from Chest X-ray scans using an optimized CNN architecture. Appl. Soft Comput. 2021, 104, 107238. [Google Scholar] [CrossRef]
  98. Simon, D. Biogeography-Based Optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef] [Green Version]
  99. Singh, D.; Kumar, V.; Kaur, M. Classification of COVID-19 patients from chest CT images using multi-objective differential evolution–based convolutional neural networks. Eur. J. Clin. Microbiol. Infect. Dis. 2020, 39, 1379–1389. [Google Scholar] [CrossRef] [PubMed]
  100. Ho, T.K. The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 832–844. [Google Scholar] [CrossRef] [Green Version]
  101. Sain, S.R.; Vapnik, V.N. The Nature of Statistical Learning Theory. Technometrics 1996, 38, 409. [Google Scholar] [CrossRef]
  102. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef] [Green Version]
  103. Larose, D.T. Data Mining Methods and Models; Wiley: Hoboken, NJ, USA, 2005. [Google Scholar]
  104. Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Chapman & Hall: Boca Raton, FL, USA, 1984. [Google Scholar]
  105. Schutze, H.; Manning, C.D.; Raghavan, P. Introduction to Information Retrieval; Cambridge University Press: Cambridge, UK, 2008. [Google Scholar]
  106. Hopfield, J.J. Artificial neural networks. IEEE Circuits Devices Mag. 1988, 4, 3–10. [Google Scholar] [CrossRef]
  107. Wasserman, P.D. Advanced Methods in Neural Computing; Van Nostrand Reinhold: New York, NY, USA, 1993. [Google Scholar]
  108. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Springer: New York, NY, USA, 2009. [Google Scholar]
  109. Specht, D.F. Probabilistic neural networks. Neural Netw. 1990, 3, 109–118. [Google Scholar]
  110. Huang, G.-B.; Zhou, H.; Ding, X.; Zhang, R. Extreme Learning Machine for Regression and Multiclass Classification. IEEE Trans. Syst Man Cybern B Cybern 2012, 42, 513–529. [Google Scholar] [CrossRef] [Green Version]
  111. Freund, Y.; Schapire, R.E. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 1997, 55, 119–139. [Google Scholar] [CrossRef] [Green Version]
  112. Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd Acm Sigkdd International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
  113. Wright, R. Logistic regression. In Reading & Understanding Multivariate Statistics; Grimm, L.C., Yarnold, P.R., Eds.; American Psychological Association: Washington, DC, USA, 1995. [Google Scholar]
  114. Mostafiz, R.; Uddin, M.S.; Nur-A-Alam, R.M.; Rahman, M.M. Covid-19 detection in chest X-ray through random forest classifier using a hybridization of deep CNN and DWT optimized features. J. King Saud Univ.-Comput. Inf. Sci. 2020. [Google Scholar] [CrossRef]
  115. Brunese, L.; Martinelli, F.; Mercaldo, F.; Santone, A. Machine learning for coronavirus covid-19 detection from chest X-rays. Procedia Comput. Sci. 2020, 176, 2212–2221. [Google Scholar] [CrossRef] [PubMed]
  116. Islam, Z.; Islam, M.; Asraf, A. A combined deep CNN-LSTM network for the detection of novel coronavirus (COVID-19) using X-ray images. Inform. Med. Unlocked 2020, 20, 100412. [Google Scholar] [CrossRef]
  117. Rahimzadeh, M.; Attar, A. A modified deep convolutional neural network for detecting COVID-19 and pneumonia from chest X-ray images based on the concatenation of Xception and ResNet50V2. Inform. Med. Unlocked 2020, 19, 100360. [Google Scholar] [CrossRef] [PubMed]
  118. Khan, A.I.; Shah, J.L.; Bhat, M.M. CoroNet: A deep neural network for detection and diagnosis of COVID-19 from chest X-ray images. Comput. Methods Programs Biomed. 2020, 196, 105581. [Google Scholar] [CrossRef] [PubMed]
  119. Dorr, F.; Chaves, H.; Serra, M.M.; Ramirez, A.; Costa, M.E.; Seia, J.; Cejas, C.; Castro, M.; Eyheremendy, E.; Slezak, D.F.; et al. COVID-19 pneumonia accurately detected on chest radiographs with artificial intelligence. Intell. Med. 2020, 3, 100014. [Google Scholar] [CrossRef] [PubMed]
  120. Ozturk, T.; Talo, M.; Yildirim, E.A.; Baloglu, U.B.; Yildirim, O.; Acharya, U.R. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput. Biol. Med. 2020, 121, 103792. [Google Scholar] [CrossRef] [PubMed]
  121. Altan, A.; Karasu, S. Recognition of COVID-19 disease from X-ray images by hybrid model consisting of 2D curvelet transform, chaotic salp swarm algorithm and deep learning technique. Chaos Solitons Fractals 2020, 140, 110071. [Google Scholar] [CrossRef]
  122. Brunese, L.; Mercaldo, F.; Reginelli, A.; Santone, A. Explainable Deep Learning for Pulmonary Disease and Coronavirus COVID-19 Detection from X-rays. Comput. Methods Programs Biomed. 2020, 196, 105608. [Google Scholar] [CrossRef] [PubMed]
  123. Jain, G.; Mittal, D.; Thakur, D.; Mittal, M.K. A deep learning approach to detect Covid-19 coronavirus with X-ray images. Biocybern. Biomed. Eng. 2020, 40, 1391–1405. [Google Scholar] [CrossRef]
  124. Heidari, M.; Mirniaharikandehei, S.; Khuzani, A.Z.; Danala, G.; Qiu, Y.; Zheng, B. Improving the performance of CNN to predict the likelihood of COVID-19 using chest X-ray images with preprocessing algorithms. Int. J. Med. Inform. 2020, 144, 104284. [Google Scholar] [CrossRef]
  125. Minaee, S.; Kafieh, R.; Sonka, M.; Yazdani, S.; Soufi, G.J. Deep-COVID: Predicting COVID-19 from chest X-ray images using deep transfer learning. Med. Image Anal. 2020, 65, 101794. [Google Scholar] [CrossRef]
  126. Afshar, P.; Heidarian, S.; Naderkhani, F.; Oikonomou, A.; Plataniotis, K.N.; Mohammadi, A. COVID-CAPS: A capsule network-based framework for identification of COVID-19 cases from X-ray images. Pattern Recognit. Lett. 2020, 138, 638–643. [Google Scholar] [CrossRef]
  127. Panwar, H.; Gupta, P.; Siddiqui, M.K.; Morales-Menendez, R.; Singh, V. Application of deep learning for fast detection of COVID-19 in X-rays using nCOVnet. Chaos Solitons Fractals 2020, 138, 109944. [Google Scholar] [CrossRef]
  128. Azemin, M.Z.C.; Hassan, R.; Tamrin, M.I.M.; Ali, M.A.M. COVID-19 Deep Learning Prediction Model Using Publicly Available Radiologist-Adjudicated Chest X-ray Images as Training Data: Preliminary Findings. Int. J. Biomed. Imaging 2020, 2020, 8828855. [Google Scholar] [CrossRef]
  129. Sharma, A.; Rani, S.; Gupta, D. Artificial Intelligence-Based Classification of Chest X-ray Images into COVID-19 and Other Infectious Diseases. Int. J. Biomed. Imaging 2020, 2020, 8889023. [Google Scholar] [CrossRef]
  130. Haque, K.F.; Abdelgawad, A. A Deep Learning Approach to Detect COVID-19 Patients from Chest X-ray Images. AI 2020, 1, 418–435. [Google Scholar] [CrossRef]
  131. Civit-Masot, J.; Luna-Perejón, F.; Morales, M.D.; Civit, A. Deep Learning System for COVID-19 Diagnosis Aid Using X-ray Pulmonary Images. Appl. Sci. 2020, 10, 4640. [Google Scholar] [CrossRef]
  132. Duran-Lopez, L.; Dominguez-Morales, J.; Corral-Jaime, J.; Vicente-Diaz, S.; Linares-Barranco, A. COVID-XNet: A Custom Deep Learning System to Diagnose and Locate COVID-19 in Chest X-ray Images. Appl. Sci. 2020, 10, 5683. [Google Scholar] [CrossRef]
  133. Misra, S.; Jeon, S.; Lee, S.; Managuli, R.; Jang, I.-S.; Kim, C. Multi-Channel Transfer Learning of Chest X-ray Images for Screening of COVID-19. Electronics 2020, 9, 1388. [Google Scholar] [CrossRef]
  134. Tartaglione, E.; Barbano, C.; Berzovini, C.; Calandri, M.; Grangetto, M. Unveiling COVID-19 from CHEST X-ray with Deep Learning: A Hurdles Race with Small Data. Int. J. Environ. Res. Public Health 2020, 17, 6933. [Google Scholar] [CrossRef] [PubMed]
  135. Khan, I.; Aslam, N. A Deep-Learning-Based Framework for Automated Diagnosis of COVID-19 Using X-ray Images. Information 2020, 11, 419. [Google Scholar] [CrossRef]
  136. Lee, K.-S.; Kim, J.; Jeon, E.-T.; Choi, W.; Kim, N.; Lee, K. Evaluation of Scalability and Degree of Fine-Tuning of Deep Convolutional Neural Networks for COVID-19 Screening on Chest X-ray Images Using Explainable Deep-Learning Algorithm. J. Pers. Med. 2020, 10, 213. [Google Scholar] [CrossRef]
  137. De Moura, J.; Ramos, J.J.D.M.; Vidal, P.L.; Novo, J.; Ortega, A.M. Analysis of Separability of COVID-19 and Pneumonia in Chest X-ray Images by Means of Convolutional Neural Networks. Proceedings 2020, 54, 31. [Google Scholar] [CrossRef]
  138. Loey, M.; Smarandache, F.; Khalifa, N.E.M. Within the Lack of Chest COVID-19 X-ray Dataset: A Novel Detection Model Based on GAN and Deep Transfer Learning. Symmetry 2020, 12, 651. [Google Scholar] [CrossRef] [Green Version]
  139. Zulkifley, M.A.; Abdani, S.R.; Zulkifley, N.H. COVID-19 Screening Using a Lightweight Convolutional Neural Network with Generative Adversarial Network Data Augmentation. Symmetry 2020, 12, 1530. [Google Scholar] [CrossRef]
  140. Apostolopoulos, I.D.; Mpesiana, T.A. Covid-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Phys. Eng. Sci. Med. 2020, 43, 635–640. [Google Scholar] [CrossRef] [Green Version]
  141. Das, D.; Santosh, K.C.; Pal, U. Truncated inception net: COVID-19 outbreak screening using chest X-rays. Phys. Eng. Sci. Med. 2020, 43, 915–925. [Google Scholar] [CrossRef]
142. Pratiwi, N.C.; Ibrahim, N.; Fu'adah, Y.N.; Masykuroh, K. Computer-Aided Detection (CAD) for COVID-19 based on Chest X-ray Images using Convolutional Neural Network. IOP Conf. Ser. Mater. Sci. Eng. 2020, 982, 012004. [Google Scholar] [CrossRef]
  143. Hirano, H.; Koga, K.; Takemoto, K. Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks. PLoS ONE 2020, 15, e0243963. [Google Scholar] [CrossRef]
  144. Wang, L.; Wong, A. COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. arXiv 2020, arXiv:2003.09871. [Google Scholar] [CrossRef]
  145. Wang, D.; Mo, J.; Zhou, G.; Xu, L.; Liu, Y. An efficient mixture of deep and machine learning models for COVID-19 diagnosis in chest X-ray images. PLoS ONE 2020, 15, e0242535. [Google Scholar] [CrossRef]
  146. Li, T.; Han, Z.; Wei, B.; Zheng, Y.; Hong, Y.; Cong, J. Robust screening of covid-19 from chest X-ray via discriminative cost-sensitive learning. arXiv 2020, arXiv:2004.12592. [Google Scholar]
  147. Sekeroglu, B.; Ozsahin, I. Detection of COVID-19 from Chest X-ray Images Using Convolutional Neural Networks. SLAS Technol. Transl. Life Sci. Innov. 2020, 25, 553–565. [Google Scholar] [CrossRef]
  148. Oh, Y.; Park, S.; Ye, J.C. Deep Learning COVID-19 Features on CXR Using Limited Training Data Sets. IEEE Trans. Med. Imaging 2020, 39, 2688–2700. [Google Scholar] [CrossRef] [PubMed]
  149. Sahinbas, K.; Catak, F.O. Transfer learning-based convolutional neural network for COVID-19 detection with X-ray images. In Data Science for COVID-19; Elsevier: Amsterdam, The Netherlands, 2021; pp. 451–466. [Google Scholar]
  150. Chowdhury, M.E.H.; Rahman, T.; Khandakar, A.; Mazhar, R.; Kadir, M.A.; Bin Mahbub, Z.; Islam, K.R.; Khan, M.S.; Iqbal, A.; Al Emadi, N.; et al. Can AI Help in Screening Viral and COVID-19 Pneumonia? IEEE Access 2020, 8, 132665–132676. [Google Scholar] [CrossRef]
  151. Sakib, S.; Tazrin, T.; Fouda, M.M.; Fadlullah, Z.M.; Guizani, M. DL-CRC: Deep Learning-Based Chest Radiograph Classification for COVID-19 Detection: A Novel Approach. IEEE Access 2020, 8, 171575–171589. [Google Scholar] [CrossRef]
  152. Shah, P.M.; Ullah, F.; Shah, D.; Gani, A.; Maple, C.; Wang, Y.; Shahid, A.; Abrar, M.; Islam, S.U. Deep GRU-CNN model for COVID-19 detection from chest X-rays data. IEEE Access 2021. [Google Scholar] [CrossRef]
  153. Iskanderani, A.I.; Mehedi, I.M.; Aljohani, A.J.; Shorfuzzaman, M.; Akther, F.; Palaniswamy, T.; Latif, S.A.; Latif, A.; Alam, A. Artificial Intelligence and Medical Internet of Things Framework for Diagnosis of Coronavirus Suspected Cases. J. Health Eng. 2021, 2021, 3277988. [Google Scholar] [CrossRef]
  154. Wang, W.; Li, Y.; Li, J.; Zhang, P.; Wang, X. Detecting COVID-19 in Chest X-ray Images via MCFF-Net. Comput. Intell. Neurosci. 2021, 2021, 3604900. [Google Scholar] [CrossRef]
  155. Shorfuzzaman, M.; Masud, M.; Alhumyani, H.; Anand, D.; Singh, A. Artificial Neural Network-Based Deep Learning Model for COVID-19 Patient Detection Using X-ray Chest Images. J. Health Eng. 2021, 2021, 5513679. [Google Scholar] [CrossRef] [PubMed]
156. Reshi, A.A.; Rustam, F.; Mehmood, A.; Alhossan, A.; Alrabiah, Z.; Ahmad, A.; Alsuwailem, H.; Choi, G.S. An Efficient CNN Model for COVID-19 Disease Detection Based on X-ray Image Classification. Complexity 2021, 2021, 6621607. [Google Scholar] [CrossRef]
  157. Alruwaili, M.; Shehab, A.; El-Ghany, S.A. COVID-19 Diagnosis Using an Enhanced Inception-ResNetV2 Deep Learning Model in CXR Images. J. Health Eng. 2021, 2021, 6658058. [Google Scholar] [CrossRef] [PubMed]
  158. Shukla, P.K.; Sandhu, J.K.; Ahirwar, A.; Ghai, D.; Maheshwary, P.; Shukla, P.K. Multiobjective Genetic Algorithm and Convolutional Neural Network Based COVID-19 Identification in Chest X-ray Images. Math. Probl. Eng. 2021, 2021, 7804540. [Google Scholar] [CrossRef]
  159. Taresh, M.M.; Zhu, N.; Ali, T.A.A.; Hameed, A.S.; Mutar, M.L. Transfer Learning to Detect COVID-19 Automatically from X-ray Images Using Convolutional Neural Networks. Int. J. Biomed. Imaging 2021, 2021, 8828404. [Google Scholar] [CrossRef]
  160. Kaur, M.; Kumar, V.; Yadav, V.; Singh, D.; Kumar, N.; Das, N.N. Metaheuristic-based Deep COVID-19 Screening Model from Chest X-ray Images. J. Health Eng. 2021, 2021, 8829829. [Google Scholar] [CrossRef]
  161. Ahmad, F.; Farooq, A.; Ghani, M.U. Deep Ensemble Model for Classification of Novel Coronavirus in Chest X-ray Images. Comput. Intell. Neurosci. 2021, 2021, 8890226. [Google Scholar] [CrossRef] [PubMed]
  162. Hasan, M.D.K.; Ahmed, S.; Abdullah, Z.M.E.; Khan, M.M.; Anand, D.; Singh, A.; AlZain, M.; Masud, M. Deep Learning Approaches for Detecting Pneumonia in COVID-19 Patients by Analyzing Chest X-ray Images. Math. Probl. Eng. 2021, 2021, 9929274. [Google Scholar] [CrossRef]
  163. Muhammad, Y.; Alshehri, M.D.; Alenazy, W.M.; Hoang, T.V.; Alturki, R. Identification of Pneumonia Disease Applying an Intelligent Computational Framework Based on Deep Learning and Machine Learning Techniques. Mob. Inf. Syst. 2021, 2021, 9989237. [Google Scholar] [CrossRef]
  164. Pandit, M.; Banday, S.; Naaz, R.; Chishti, M. Automatic detection of COVID-19 from chest radiographs using deep learning. Radiography 2021, 27, 483–489. [Google Scholar] [CrossRef] [PubMed]
  165. Kedia, P.; Katarya, R. CoVNet-19: A Deep Learning model for the detection and analysis of COVID-19 patients. Appl. Soft Comput. 2021, 104, 107184. [Google Scholar] [CrossRef]
  166. Saha, P.; Sadi, M.S.; Islam, M. EMCNet: Automated COVID-19 diagnosis from X-ray images using convolutional neural network and ensemble of machine learning classifiers. Inform. Med. Unlocked 2021, 22, 100505. [Google Scholar] [CrossRef] [PubMed]
  167. Panahi, A.H.; Rafiei, A.; Rezaee, A. FCOD: Fast COVID-19 Detector based on deep learning techniques. Inform. Med. Unlocked 2021, 22, 100506. [Google Scholar] [CrossRef]
  168. Xu, Y.; Lam, H.-K.; Jia, G. MANet: A two-stage deep learning method for classification of COVID-19 from Chest X-ray images. Neurocomputing 2021, 443, 96–105. [Google Scholar] [CrossRef]
  169. Karakanis, S.; Leontidis, G. Lightweight deep learning models for detecting COVID-19 from chest X-ray images. Comput. Biol. Med. 2021, 130, 104181. [Google Scholar] [CrossRef] [PubMed]
  170. Alorf, A. The Practicality of Deep Learning Algorithms in COVID-19 Detection: Application to Chest X-ray Images. Algorithms 2021, 14, 183. [Google Scholar] [CrossRef]
  171. Ben Jabra, M.; Koubaa, A.; Benjdira, B.; Ammar, A.; Hamam, H. COVID-19 Diagnosis in Chest X-rays Using Deep Learning and Majority Voting. Appl. Sci. 2021, 11, 2884. [Google Scholar] [CrossRef]
  172. Ahrabi, S.S.; Scarpiniti, M.; Baccarelli, E.; Momenzadeh, A. An Accuracy vs. Complexity Comparison of Deep Learning Architectures for the Detection of COVID-19 Disease. Computation 2021, 9, 3. [Google Scholar] [CrossRef]
  173. Luján-García, J.; Villuendas-Rey, Y.; López-Yáñez, I.; Camacho-Nieto, O.; Yáñez-Márquez, C. NanoChest-Net: A Simple Convolutional Network for Radiological Studies Classification. Diagnostics 2021, 11, 775. [Google Scholar] [CrossRef]
  174. Karbhari, Y.; Basu, A.; Geem, Z.W.; Han, G.-T.; Sarkar, R. Generation of Synthetic Chest X-ray Images and Detection of COVID-19: A Deep Learning Based Approach. Diagnostics 2021, 11, 895. [Google Scholar] [CrossRef] [PubMed]
  175. Khishe, M.; Caraffini, F.; Kuhn, S. Evolving Deep Learning Convolutional Neural Networks for Early COVID-19 Detection in Chest X-ray Images. Mathematics 2021, 9, 1002. [Google Scholar] [CrossRef]
176. Alam, N.A.; Ahsan, M.; Based, M.A.; Haider, J.; Kowalski, M. COVID-19 Detection from Chest X-ray Images Using Feature Fusion and Deep Learning. Sensors 2021, 21, 1480. [Google Scholar] [CrossRef]
  177. Vantaggiato, E.; Paladini, E.; Bougourzi, F.; Distante, C.; Hadid, A.; Taleb-Ahmed, A. COVID-19 Recognition Using Ensemble-CNNs in Two New Chest X-ray Databases. Sensors 2021, 21, 1742. [Google Scholar] [CrossRef]
  178. Muñoz-Saavedra, L.; Civit-Masot, J.; Luna-Perejón, F.; Domínguez-Morales, M.; Civit, A. Does Two-Class Training Extract Real Features? A COVID-19 Case Study. Appl. Sci. 2021, 11, 1424. [Google Scholar] [CrossRef]
  179. Barua, P.; Gowdh, N.M.; Rahmat, K.; Ramli, N.; Ng, W.; Chan, W.; Kuluozturk, M.; Dogan, S.; Baygin, M.; Yaman, O.; et al. Automatic COVID-19 Detection Using Exemplar Hybrid Deep Features with X-ray Images. Int. J. Environ. Res. Public Health 2021, 18, 8052. [Google Scholar] [CrossRef] [PubMed]
  180. Abbas, A.; Abdelsamea, M.M.; Gaber, M.M. Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network. Appl. Intell. 2021, 51, 854–864. [Google Scholar] [CrossRef]
  181. Gaur, L.; Bhatia, U.; Jhanjhi, N.Z.; Muhammad, G.; Masud, M. Medical image-based detection of COVID-19 using Deep Convolution Neural Networks. Multimed. Syst. 2021, 1–10. [Google Scholar] [CrossRef]
  182. Luz, E.; Silva, P.; Silva, R.; Silva, L.; Guimarães, J.; Miozzo, G.; Moreira, G.; Menotti, D. Towards an effective and efficient deep learning model for COVID-19 patterns detection in X-ray images. Res. Biomed. Eng. 2021, 1–14. [Google Scholar] [CrossRef]
  183. Narin, A.; Kaya, C.; Pamuk, Z. Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. Pattern Anal. Appl. 2021, 24, 1207–1220. [Google Scholar] [CrossRef] [PubMed]
  184. Rajagopal, R. Comparative Analysis of COVID-19 X-ray Images Classification Using Convolutional Neural Network, Transfer Learning, and Machine Learning Classifiers Using Deep Features. Pattern Recognit. Image Anal. 2021, 31, 313–322. [Google Scholar] [CrossRef]
  185. Castiglioni, I.; Ippolito, D.; Interlenghi, M.; Monti, C.B.; Salvatore, C.; Schiaffino, S.; Polidori, A.; Gandola, D.; Messa, C.; Sardanelli, F. Machine learning applied on chest X-ray can aid in the diagnosis of COVID-19: A first experience from Lombardy, Italy. Eur. Radiol. Exp. 2021, 5, 1–10. [Google Scholar] [CrossRef]
  186. Sarkar, A.; Vandenhirtz, J.; Nagy, J.; Bacsa, D.; Riley, M. Identification of Images of COVID-19 from Chest X-rays Using Deep Learning: Comparing COGNEX VisionPro Deep Learning 1.0™ Software with Open Source Convolutional Neural Networks. SN Comput. Sci. 2021, 2, 1–16. [Google Scholar] [CrossRef] [PubMed]
  187. De Sousa, P.M.; Carneiro, P.C.; Oliveira, M.M.; Pereira, G.M.; Junior, C.A.D.C.; De Moura, L.V.; Mattjie, C.; Da Silva, A.M.M.; Patrocinio, A.C. COVID-19 classification in X-ray chest images using a new convolutional neural network: CNN-COVID. Res. Biomed. Eng. 2021, 1–11. [Google Scholar] [CrossRef]
  188. Turkoglu, M. COVIDetectioNet: COVID-19 diagnosis system based on X-ray images using features selected from pre-learned deep features ensemble. Appl. Intell. 2021, 51, 1213–1226. [Google Scholar] [CrossRef] [PubMed]
  189. Umer, M.; Ashraf, I.; Ullah, S.; Mehmood, A.; Choi, G.S. COVINet: A convolutional neural network approach for predicting COVID-19 from chest X-ray images. J. Ambient. Intell. Humaniz. Comput. 2021, 1–13. [Google Scholar] [CrossRef]
  190. Madaan, V.; Roy, A.; Gupta, C.; Agrawal, P.; Sharma, A.; Bologa, C.; Prodan, R. XCOVNet: Chest X-ray Image Classification for COVID-19 Early Detection Using Convolutional Neural Networks. New Gener. Comput. 2021, 39, 583–597. [Google Scholar] [CrossRef] [PubMed]
  191. Das, A.K.; Ghosh, S.; Thunder, S.; Dutta, R.; Agarwal, S.; Chakrabarti, A. Automatic COVID-19 detection from X-ray images using ensemble learning with convolutional neural network. Pattern Anal. Appl. 2021, 24, 1111–1124. [Google Scholar] [CrossRef]
  192. Agrawal, T.; Choudhary, P. FocusCovid: Automated COVID-19 detection using deep learning with chest X-ray images. Evol. Syst. 2021, 1–15. [Google Scholar] [CrossRef]
  193. Swapnarekha, H.; Behera, H.S.; Roy, D.; Das, S.; Nayak, J. Competitive Deep Learning Methods for COVID-19 Detection using X-ray Images. J. Inst. Eng. Ser. B 2021, 102, 1177–1190. [Google Scholar] [CrossRef]
194. Aradhya, V.N.M.; Mahmud, M.; Guru, D.S.; Agarwal, B.; Kaiser, M.S. One-shot Cluster-Based Approach for the Detection of COVID-19 from Chest X-ray Images. Cogn. Comput. 2021, 13, 873–881. [Google Scholar] [CrossRef] [PubMed]
  195. Jain, R.; Gupta, M.; Taneja, S.; Hemanth, D.J. Deep learning based detection and analysis of COVID-19 on chest X-ray images. Appl. Intell. 2021, 51, 1690–1700. [Google Scholar] [CrossRef]
  196. Maior, C.B.S.; Santana, J.M.M.; Lins, I.D.; Moura, M.J.C. Convolutional neural network model based on radiological images to support COVID-19 diagnosis: Evaluating database biases. PLoS ONE 2021, 16, e0247839. [Google Scholar] [CrossRef] [PubMed]
  197. Salam, M.A.; Taha, S.; Ramadan, M. COVID-19 detection using federated machine learning. PLoS ONE 2021, 16, e0252573. [Google Scholar] [CrossRef]
  198. Ahsan, M.; Ahad, T.; Soma, F.A.; Paul, S.; Chowdhury, A.; Luna, S.A.; Yazdan, M.M.S.; Rahman, A.; Siddique, Z.; Huebner, P. Detecting SARS-CoV-2 From Chest X-ray Using Artificial Intelligence. IEEE Access 2021, 9, 35501–35513. [Google Scholar] [CrossRef]
  199. Zhou, J.; Jing, B.; Wang, Z.; Xin, H.; Tong, H. SODA: Detecting COVID-19 in Chest X-rays with Semi-supervised Open Set Domain Adaptation. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021, 1. [Google Scholar] [CrossRef]
  200. Mohagheghi, S.; Alizadeh, M.; Safavi, S.M.; Foruzan, A.H.; Chen, Y.-W. Integration of CNN, CBMIR, and Visualization Techniques for Diagnosis and Quantification of Covid-19 Disease. IEEE J. Biomed. Health Inform. 2021, 25, 1873–1880. [Google Scholar] [CrossRef]
  201. Oyelade, O.N.; Ezugwu, A.E.-S.; Chiroma, H. CovFrameNet: An Enhanced Deep Learning Framework for COVID-19 Detection. IEEE Access 2021, 9, 77905–77919. [Google Scholar] [CrossRef]
  202. Abbas, A.; Abdelsamea, M.M.; Gaber, M.M. 4S-DT: Self-Supervised Super Sample Decomposition for Transfer Learning With Application to COVID-19 Detection. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 2798–2808. [Google Scholar] [CrossRef]
  203. Zhou, C.; Song, J.; Zhou, S.; Zhang, Z.; Xing, J. COVID-19 Detection Based on Image Regrouping and Resnet-SVM Using Chest X-ray Images. IEEE Access 2021, 9, 81902–81912. [Google Scholar] [CrossRef] [PubMed]
  204. Dong, S.; Yang, Q.; Fu, Y.; Tian, M.; Zhuo, C. RCoNet: Deformable Mutual Information Maximization and High-Order Uncertainty-Aware Learning for Robust COVID-19 Detection. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 3401–3411. [Google Scholar] [CrossRef] [PubMed]
  205. Ardakani, A.A.; Kanafi, A.R.; Acharya, U.R.; Khadem, N.; Mohammadi, A. Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks. Comput. Biol. Med. 2020, 121, 103795. [Google Scholar] [CrossRef]
  206. Silva, P.; Luz, E.; Silva, G.; Moreira, G.; Silva, R.; Lucio, D.; Menotti, D. COVID-19 detection in CT images with deep learning: A voting-based scheme and cross-datasets analysis. Inform. Med. Unlocked 2020, 20, 100427. [Google Scholar] [CrossRef]
  207. Li, T.; Wei, W.; Cheng, L.; Zhao, S.; Xu, C.; Zhang, X.; Zeng, Y.; Gu, J. Computer-Aided Diagnosis of COVID-19 CT Scans Based on Spatiotemporal Information Fusion. J. Health Eng. 2021, 2021, 6649591. [Google Scholar] [CrossRef] [PubMed]
  208. Zhang, P.; Zhong, Y.; Deng, Y.; Tang, X.; Li, X. CoSinGAN: Learning COVID-19 Infection Segmentation from a Single Radiological Image. Diagnostics 2020, 10, 901. [Google Scholar] [CrossRef]
  209. Wan, Y.; Zhou, H.; Zhang, X. An Interpretation Architecture for Deep Learning Models with the Application of COVID-19 Diagnosis. Entropy 2021, 23, 204. [Google Scholar] [CrossRef] [PubMed]
  210. Gifani, P.; Shalbaf, A.; Vafaeezadeh, M. Automated detection of COVID-19 using ensemble of transfer learning with deep convolutional neural network based on CT scans. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 115–123. [Google Scholar] [CrossRef]
  211. Loey, M.; Manogaran, G.; Khalifa, N.E.M. A deep transfer learning model with classical data augmentation and CGAN to detect COVID-19 from chest CT radiography digital images. Neural Comput. Appl. 2020, 1–13. [Google Scholar] [CrossRef]
  212. Singh, D.; Kumar, V.; Kaur, M. Densely connected convolutional networks-based COVID-19 screening model. Appl. Intell. 2021, 51, 3044–3051. [Google Scholar] [CrossRef]
  213. Fan, D.-P.; Zhou, T.; Ji, G.-P.; Zhou, Y.; Chen, G.; Fu, H.; Shen, J.; Shao, L. Inf-Net: Automatic COVID-19 Lung Infection Segmentation From CT Images. IEEE Trans. Med. Imaging 2020, 39, 2626–2637. [Google Scholar] [CrossRef] [PubMed]
  214. Ouyang, X.; Huo, J.; Xia, L.; Shan, F.; Liu, J.; Mo, Z.; Yan, F.; Ding, Z.; Yang, Q.; Song, B.; et al. Dual-Sampling Attention Network for Diagnosis of COVID-19 From Community Acquired Pneumonia. IEEE Trans. Med. Imaging 2020, 39, 2595–2605. [Google Scholar] [CrossRef]
  215. Zhang, Y.-D.; Satapathy, S.C.; Zhu, L.-Y.; Gorriz, J.M.; Wang, S.-H. A seven-layer convolutional neural network for chest CT based COVID-19 diagnosis using stochastic pooling. IEEE Sensors J. 2020, 1. [Google Scholar] [CrossRef]
  216. Qian, X.; Fu, H.; Shi, W.; Chen, T.; Fu, Y.; Shan, F.; Xue, X. M3Lung-Sys: A Deep Learning System for Multi-Class Lung Pneumonia Screening From CT Imaging. IEEE J. Biomed. Health Inform. 2020, 24, 3539–3550. [Google Scholar] [CrossRef]
  217. Ozyurt, F.; Tuncer, T.; Subasi, A. An automated COVID-19 detection based on fused dynamic exemplar pyramid feature extraction and hybrid feature selection using deep learning. Comput. Biol. Med. 2021, 132, 104356. [Google Scholar] [CrossRef]
  218. Rohila, V.S.; Gupta, N.; Kaul, A.; Sharma, D.K. Deep learning assisted COVID-19 detection using full CT-scans. Internet Things 2021, 14, 100377. [Google Scholar] [CrossRef]
  219. Shiri, I.; Sorouri, M.; Geramifar, P.; Nazari, M.; Abdollahi, M.; Salimi, Y.; Khosravi, B.; Askari, D.; Aghaghazvini, L.; Hajianfar, G.; et al. Machine learning-based prognostic modeling using clinical data and quantitative radiomic features from chest CT images in COVID-19 patients. Comput. Biol. Med. 2021, 132, 104304. [Google Scholar] [CrossRef]
  220. Priya, C.; Fathima, S.S.S.; Kirubanandasarathy, N.; Valanarasid, A.; Begam, M.S.; Aiswarya, N. Automatic Optimized CNN Based COVID-19 Lung Infection Segmentation from CT Image; Elsevier BV: Amsterdam, The Netherlands, 2021. [Google Scholar]
  221. Wu, Z.; Li, L.; Jin, R.; Liang, L.; Hu, Z.; Tao, L.; Han, Y.; Feng, W.; Zhou, D.; Li, W.; et al. Texture feature-based machine learning classifier could assist in the diagnosis of COVID-19. Eur. J. Radiol. 2021, 137, 109602. [Google Scholar] [CrossRef]
  222. Yu, X.; Lu, S.; Guo, L.; Wang, S.-H.; Zhang, Y.-D. ResGNet-C: A graph convolutional neural network for detection of COVID-19. Neurocomputing 2021, 452, 592–605. [Google Scholar] [CrossRef]
  223. Turkoglu, M. COVID-19 Detection System Using Chest CT Images and Multiple Kernels-Extreme Learning Machine Based on Deep Neural Network. IRBM 2021, 42, 207–214. [Google Scholar] [CrossRef]
  224. He, K.; Zhao, W.; Xie, X.; Ji, W.; Liu, M.; Tang, Z.; Shi, Y.; Shi, F.; Gao, Y.; Liu, J.; et al. Synergistic learning of lung lobe segmentation and hierarchical multi-instance classification for automated severity assessment of COVID-19 in CT images. Pattern Recognit. 2021, 113, 107828. [Google Scholar] [CrossRef]
  225. Gao, K.; Su, J.; Jiang, Z.; Zeng, L.-L.; Feng, Z.; Shen, H.; Rong, P.; Xu, X.; Qin, J.; Yang, Y.; et al. Dual-branch combination network (DCN): Towards accurate diagnosis and lesion segmentation of COVID-19 using CT images. Med. Image Anal. 2021, 67, 101836. [Google Scholar] [CrossRef]
  226. Serte, S.; Demirel, H. Deep learning for diagnosis of COVID-19 using 3D CT scans. Comput. Biol. Med. 2021, 132, 104306. [Google Scholar] [CrossRef] [PubMed]
  227. Perumal, V.; Narayanan, V.; Rajasekar, S.J.S. Prediction of COVID-19 with Computed Tomography Images using Hybrid Learning Techniques. Dis. Markers 2021, 2021, 5522729. [Google Scholar] [CrossRef] [PubMed]
  228. Helwan, A.; Ma’Aitah, M.K.S.; Hamdan, H.; Ozsahin, D.U.; Tuncyurek, O. Radiologists versus Deep Convolutional Neural Networks: A Comparative Study for Diagnosing COVID-19. Comput. Math. Methods Med. 2021, 2021, 5527271. [Google Scholar] [CrossRef]
  229. Lawton, S.; Viriri, S. Detection of COVID-19 from CT Lung Scans Using Transfer Learning. Comput. Intell. Neurosci. 2021, 2021, 5527923. [Google Scholar] [CrossRef] [PubMed]
  230. Li, X.; Tan, W.; Liu, P.; Zhou, Q.; Yang, J. Classification of COVID-19 Chest CT Images Based on Ensemble Deep Learning. J. Health Eng. 2021, 2021, 5528441. [Google Scholar] [CrossRef] [PubMed]
  231. Salamh, A.B.S.; Salamah, A.A.; Akyüz, H.I. A Study of a New Technique of the CT Scan View and Disease Classification Protocol Based on Level Challenges in Cases of Coronavirus Disease. Radiol. Res. Pract. 2021, 2021, 5554408. [Google Scholar] [CrossRef] [PubMed]
  232. Wang, S.-H.; Zhang, Y.; Cheng, X.; Zhang, X.; Zhang, Y.-D. PSSPNN: PatchShuffle Stochastic Pooling Neural Network for an Explainable Diagnosis of COVID-19 with Multiple-Way Data Augmentation. Comput. Math. Methods Med. 2021, 2021, 6633755. [Google Scholar] [CrossRef]
  233. Zheng, B.; Cai, Y.; Zeng, F.; Lin, M.; Zheng, J.; Chen, W.; Qin, G.; Guo, Y. An Interpretable Model-Based Prediction of Severity and Crucial Factors in Patients with COVID-19. BioMed Res. Int. 2021, 2021, 8840835. [Google Scholar] [CrossRef]
  234. Zheng, R.; Zheng, Y.; Dong-Ye, C. Improved 3D U-Net for COVID-19 Chest CT Image Segmentation. Sci. Program. 2021, 2021, 9999368. [Google Scholar] [CrossRef]
  235. Oluwasanmi, A.; Aftab, M.U.; Qin, Z.; Ngo, S.T.; Van Doan, T.; Nguyen, S.B. Transfer Learning and Semisupervised Adversarial Detection and Classification of COVID-19 in CT Images. Complexity 2021, 2021, 6680455. [Google Scholar] [CrossRef]
  236. Manzo, M.; Pellino, S. Fighting Together against the Pandemic: Learning Multiple Models on Tomography Images for COVID-19 Diagnosis. AI 2021, 2, 261–273. [Google Scholar] [CrossRef]
  237. Singh, V.K.; Abdel-Nasser, M.; Pandey, N.; Puig, D. LungINFseg: Segmenting COVID-19 Infected Regions in Lung CT Images Based on a Receptive-Field-Aware Deep Learning Framework. Diagnostics 2021, 11, 158. [Google Scholar] [CrossRef]
  238. Qiblawey, Y.; Tahir, A.; Chowdhury, M.; Khandakar, A.; Kiranyaz, S.; Rahman, T.; Ibtehaz, N.; Mahmud, S.; Maadeed, S.; Musharavati, F.; et al. Detection and Severity Classification of COVID-19 in CT Images Using Deep Learning. Diagnostics 2021, 11, 893. [Google Scholar] [CrossRef]
  239. Chatzitofis, A.; Cancian, P.; Gkitsas, V.; Carlucci, A.; Stalidis, P.; Albanis, G.; Karakottas, A.; Semertzidis, T.; Daras, P.; Giannitto, C.; et al. Volume-of-Interest Aware Deep Neural Networks for Rapid Chest CT-Based COVID-19 Patient Risk Assessment. Int. J. Environ. Res. Public Health 2021, 18, 2842. [Google Scholar] [CrossRef]
  240. Alshazly, H.; Linse, C.; Barth, E.; Martinetz, T. Explainable COVID-19 Detection Using Chest CT Scans and Deep Learning. Sensors 2021, 21, 455. [Google Scholar] [CrossRef]
  241. Voulodimos, A.; Protopapadakis, E.; Katsamenis, I.; Doulamis, A.; Doulamis, N. A Few-Shot U-Net Deep Learning Model for COVID-19 Infected Area Segmentation in CT Images. Sensors 2021, 21, 2215. [Google Scholar] [CrossRef] [PubMed]
  242. Ahuja, S.; Panigrahi, B.K.; Dey, N.; Rajinikanth, V.; Gandhi, T.K. Deep transfer learning-based automated detection of COVID-19 from lung CT scan slices. Appl. Intell. 2021, 51, 571–585. [Google Scholar] [CrossRef]
  243. Garain, A.; Basu, A.; Giampaolo, F.; Velasquez, J.D.; Sarkar, R. Detection of COVID-19 from CT scan images: A spiking neural network-based approach. Neural Comput. Appl. 2021, 33, 12591–12604. [Google Scholar] [CrossRef]
  244. Li, D.; Fu, Z.; Xu, J. Stacked-autoencoder-based model for COVID-19 diagnosis on CT images. Appl. Intell. 2021, 51, 2805–2817. [Google Scholar] [CrossRef]
  245. Rezaeijo, S.M.; Ghorvei, M.; Abedi-Firouzjah, R.; Mojtahedi, H.; Zarch, H.E. Detecting COVID-19 in chest images based on deep transfer learning and machine learning algorithms. Egypt. J. Radiol. Nucl. Med. 2021, 52, 1–12. [Google Scholar] [CrossRef]
  246. Sen, S.; Saha, S.; Chatterjee, S.; Mirjalili, S.; Sarkar, R. A bi-stage feature selection approach for COVID-19 prediction using chest CT images. Appl. Intell. 2021, 1–16. [Google Scholar] [CrossRef] [PubMed]
247. Shah, V.; Keniya, R.; Shridharani, A.; Punjabi, M.; Shah, J.; Mehendale, N. Diagnosis of COVID-19 using CT scan images and deep learning techniques. Emerg. Radiol. 2021, 28, 497–505. [Google Scholar] [CrossRef]
  248. Tan, W.; Liu, P.; Li, X.; Liu, Y.; Zhou, Q.; Chen, C.; Gong, Z.; Yin, X.; Zhang, Y. Classification of COVID-19 pneumonia from chest CT images based on reconstructed super-resolution images and VGG neural network. Health Inf. Sci. Syst. 2021, 9, 1–12. [Google Scholar] [CrossRef] [PubMed]
  249. Ardakani, A.A.; Acharya, U.R.; Habibollahi, S.; Mohammadi, A. COVIDiag: A clinical CAD system to diagnose COVID-19 pneumonia based on CT findings. Eur. Radiol. 2021, 31, 121–130. [Google Scholar] [CrossRef]
  250. Yasar, H.; Ceylan, M. A novel comparative study for detection of Covid-19 on CT lung images using texture analysis, machine learning, and deep learning methods. Multimed. Tools Appl. 2021, 80, 5423–5447. [Google Scholar] [CrossRef] [PubMed]
  251. Elmuogy, S.; Hikal, N.A.; Hassan, E. An efficient technique for CT scan images classification of COVID-19. J. Intell. Fuzzy Syst. 2021, 40, 5225–5238. [Google Scholar] [CrossRef]
  252. Elaziz, M.A.; Al-Qaness, M.A.A.; Zaid, E.O.A.; Lu, S.; Ibrahim, R.A.; Ewees, A.A. Automatic clustering method to segment COVID-19 CT images. PLoS ONE 2021, 16, e0244416. [Google Scholar] [CrossRef]
  253. Han, C.H.; Kim, M.; Kwak, J.T. Semi-supervised learning for an improved diagnosis of COVID-19 in CT images. PLoS ONE 2021, 16, e0249450. [Google Scholar] [CrossRef]
  254. Yousefzadeh, M.; Esfahanian, P.; Movahed, S.M.S.; Gorgin, S.; Rahmati, D.; Abedini, A.; Nadji, S.A.; Haseli, S.; Karam, M.B.; Kiani, A.; et al. ai-corona: Radiologist-assistant deep learning framework for COVID-19 diagnosis in chest CT scans. PLoS ONE 2021, 16, e0250952. [Google Scholar] [CrossRef]
  255. Yan, Q.; Wang, B.; Gong, D.; Luo, C.; Zhao, W.; Shen, J.; Ai, J.; Shi, Q.; Zhang, Y.; Jin, S.; et al. COVID-19 Chest CT Image Segmentation Network by Multi-Scale Fusion and Enhancement Operations. IEEE Trans. Big Data 2021, 7, 13–24. [Google Scholar] [CrossRef]
  256. Paluru, N.; Dayal, A.; Jenssen, H.B.; Sakinis, T.; Cenkeramaddi, L.R.; Prakash, J.; Yalavarthy, P.K. Anam-Net: Anamorphic Depth Embedding-Based Lightweight CNN for Segmentation of Anomalies in COVID-19 Chest CT Images. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 932–946. [Google Scholar] [CrossRef]
  257. Muhammad, G.; Hossain, M.S. COVID-19 and Non-COVID-19 Classification using Multi-layers Fusion From Lung Ultrasound Images. Inf. Fusion 2021, 72, 80–88. [Google Scholar] [CrossRef]
  258. Dastider, A.G.; Sadik, F.; Fattah, S.A. An integrated autoencoder-based hybrid CNN-LSTM model for COVID-19 severity prediction from lung ultrasound. Comput. Biol. Med. 2021, 132, 104296. [Google Scholar] [CrossRef]
  259. Panwar, H.; Gupta, P.; Siddiqui, M.K.; Morales-Menendez, R.; Bhardwaj, P.; Singh, V. A deep learning and grad-CAM based color visualization approach for fast detection of COVID-19 cases using chest X-ray and CT-Scan images. Chaos Solitons Fractals 2020, 140, 110190. [Google Scholar] [CrossRef]
  260. Ibrahim, D.M.; Elshennawy, N.M.; Sarhan, A.M. Deep-chest: Multi-classification deep learning model for diagnosing COVID-19, pneumonia, and lung cancer chest diseases. Comput. Biol. Med. 2021, 132, 104348. [Google Scholar] [CrossRef] [PubMed]
  261. Elpeltagy, M.; Sallam, H. Automatic prediction of COVID-19 from chest images using modified ResNet50. Multimed. Tools Appl. 2021, 1–13. [Google Scholar] [CrossRef]
  262. Gilanie, G.; Bajwa, U.I.; Waraich, M.M.; Asghar, M.; Kousar, R.; Kashif, A.; Aslam, R.S.; Qasim, M.M.; Rafique, H. Coronavirus (COVID-19) detection from chest radiology images using convolutional neural networks. Biomed. Signal Process. Control. 2021, 66, 102490. [Google Scholar] [CrossRef] [PubMed]
  263. Saha, P.; Mukherjee, D.; Singh, P.K.; Ahmadian, A.; Ferrara, M.; Sarkar, R. GraphCovidNet: A graph neural network based model for detecting COVID-19 from CT scans and X-rays of chest. Sci. Rep. 2021, 11, 1–16. [Google Scholar] [CrossRef]
  264. Perumal, V.; Narayanan, V.; Rajasekar, S.J.S. Detection of COVID-19 using CXR and CT images using Transfer Learning and Haralick features. Appl. Intell. 2021, 51, 341–358. [Google Scholar] [CrossRef]
  265. Lascu, M.-R. Deep Learning in Classification of Covid-19 Coronavirus, Pneumonia and Healthy Lungs on CXR and CT Images. J. Med. Biol. Eng. 2021, 10, 1–9. [Google Scholar] [CrossRef]
  266. Kassania, S.H.; Kassanib, P.H.; Wesolowskic, M.J.; Schneidera, K.A.; Detersa, R. Automatic Detection of Coronavirus Disease (COVID-19) in X-ray and CT Images: A Machine Learning Based Approach. Biocybern. Biomed. Eng. 2021, 41, 867–879. [Google Scholar] [CrossRef] [PubMed]
  267. Li, J.; Zhao, G.; Tao, Y.; Zhai, P.; Chen, H.; He, H.; Cai, T. Multi-task contrastive learning for automatic CT and X-ray diagnosis of COVID-19. Pattern Recognit. 2021, 114, 107848. [Google Scholar] [CrossRef]
  268. Imani, M. Automatic diagnosis of coronavirus (COVID-19) using shape and texture characteristics extracted from X-ray and CT-Scan images. Biomed. Signal Process. Control. 2021, 68, 102602. [Google Scholar] [CrossRef] [PubMed]
  269. Mukherjee, H.; Ghosh, S.; Dhar, A.; Obaidullah, S.M.; Santosh, K.C.; Roy, K. Deep neural network to detect COVID-19: One architecture for both CT Scans and Chest X-rays. Appl. Intell. 2021, 51, 2777–2789. [Google Scholar] [CrossRef] [PubMed]
  270. Pal, B.; Gupta, D.; Rashed-Al-Mahfuz, M.; Alyami, S.; Moni, M. Vulnerability in Deep Transfer Learning Models to Adversarial Fast Gradient Sign Attack for COVID-19 Prediction from Chest Radiography Images. Appl. Sci. 2021, 11, 4233. [Google Scholar] [CrossRef]
  271. Aminu, M.; Ahmad, N.A.; Noor, M.H.M. Covid-19 detection via deep neural network and occlusion sensitivity maps. Alex. Eng. J. 2021, 60, 4829–4855. [Google Scholar] [CrossRef]
  272. Mittal, H.; Pandey, A.C.; Pal, R.; Tripathi, A. A new clustering method for the diagnosis of CoVID19 using medical images. Appl. Intell. 2021, 51, 2988–3011. [Google Scholar] [CrossRef]
  273. Horry, M.J.; Chakraborty, S.; Paul, M.; Ulhaq, A.; Pradhan, B.; Saha, M.; Shukla, N. COVID-19 Detection Through Transfer Learning Using Multimodal Imaging Data. IEEE Access 2020, 8, 149808–149824. [Google Scholar] [CrossRef]
  274. Gudigar, A.; Raghavendra, U.; Hegde, A.; Menon, G.; Molinari, F.; Ciaccio, E.; Acharya, U. Automated Detection and Screening of Traumatic Brain Injury (TBI) Using Computed Tomography Images: A Comprehensive Review and Future Perspectives. Int. J. Environ. Res. Public Health 2021, 18, 6499. [Google Scholar] [CrossRef]
  275. Raghavendra, U.; Gudigar, A.; Rao, N.; Ciaccio, E.J.; Ng, E.Y.K.; Acharya, U.R. Computer-aided diagnosis for the identification of breast cancer using thermogram images: A comprehensive review. Infrared Phys. Technol. 2019, 102, 103041. [Google Scholar] [CrossRef]
  276. Gudigar, A.; Raghavendra, U.; Hegde, A.; Kalyani, M.; Ciaccio, E.J.; Acharya, U.R. Brain pathology identification using computer aided diagnostic tool: A systematic review. Comput. Methods Programs Biomed. 2020, 187, 105205. [Google Scholar] [CrossRef] [PubMed]
  277. Abilash, V.; Geoffrey, V.; Krishna, S.B.R. Coronavirus Pandemic Analysis Using Deep Learning Techniques A Study. In Proceedings of the 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 3–5 June 2021; pp. 875–880. [Google Scholar]
  278. Sri, R.S.; Pushpa, A.M. Systematic Study on Diagnosis of Lung Disorders using Machine Learning and Deep Learning Algorithms. In Proceedings of the 2021 Seventh International conference on Bio Signals, Images, and Instrumentation (ICBSII), Kalavakkam, Tamil Nadu, India, 25–27 March 2021; pp. 1–8. [Google Scholar]
  279. Mohammad-Rahimi, H.; Nadimi, M.; Ghalyanchi-Langeroudi, A.; Taheri, M.; Ghafouri-Fard, S. Application of Machine Learning in Diagnosis of COVID-19 Through X-ray and CT Images: A Scoping Review. Front. Cardiovasc. Med. 2021, 8, 185. [Google Scholar] [CrossRef] [PubMed]
  280. Desai, S.B.; Pareek, A.; Lungren, M.P. Deep learning and its role in COVID-19 medical imaging. Intell. Med. 2020, 3, 100013. [Google Scholar] [CrossRef]
  281. Gazzah, S.; Bencharef, O. A Survey on how computer vision can response to urgent need to contribute in COVID-19 pandemics. In Proceedings of the 2020 International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 9–11 June 2020; pp. 1–5. [Google Scholar]
  282. Signoroni, A.; Savardi, M.; Benini, S.; Adami, N.; Leonardi, R.; Gibellini, P.; Vaccher, F.; Ravanelli, M.; Borghesi, A.; Maroldi, R.; et al. BS-Net: Learning COVID-19 pneumonia severity on a large chest X-ray dataset. Med. Image Anal. 2021, 71, 102046. [Google Scholar] [CrossRef]
  283. Zhang, H.-T.; Zhang, J.-S.; Nan, Y.-D.; Zhao, Y.; Fu, E.-Q.; Xie, Y.-H.; Liu, W.; Li, W.-P.; Zhang, H.-J.; Jiang, H.; et al. Automated detection and quantification of COVID-19 pneumonia: CT imaging analysis by a deep learning-based software. Eur. J. Nucl. Med. Mol. Imaging 2020, 47, 2525–2532. [Google Scholar] [CrossRef]
  284. Goncharov, M.; Pisov, M.; Shevtsov, A.; Shirokikh, B.; Kurmukov, A.; Blokhin, I.; Chernina, V.; Solovev, A.; Gombolevskiy, V.; Morozov, S.; et al. CT-Based COVID-19 triage: Deep multitask learning improves joint identification and severity quantification. Med. Image Anal. 2021, 71, 102054. [Google Scholar] [CrossRef] [PubMed]
  285. Oulefki, A.; Agaian, S.; Trongtirakul, T.; Laouar, A.K. Automatic COVID-19 lung infected region segmentation and measurement using CT-scans images. Pattern Recognit. 2021, 114, 107747. [Google Scholar] [CrossRef]
  286. Giordano, F.; Ippolito, E.; Quattrocchi, C.; Greco, C.; Mallio, C.; Santo, B.; D’Alessio, P.; Crucitti, P.; Fiore, M.; Zobel, B.; et al. Radiation-Induced Pneumonitis in the Era of the COVID-19 Pandemic: Artificial Intelligence for Differential Diagnosis. Cancers 2021, 13, 1960. [Google Scholar] [CrossRef]
  287. Shoeibi, A.; Khodatars, M.; Alizadehsani, R.; Ghassemi, N.; Jafari, M.; Moridian, P.; Khadem, A.; Sadeghi, D.; Hussain, S.; Zare, A.; et al. Automated detection and forecasting of covid-19 using deep learning techniques: A review. arXiv 2020, arXiv:2007.10785. [Google Scholar]
  288. Sharifrazi, D.; Alizadehsani, R.; Roshanzamir, M.; Joloudari, J.H.; Shoeibi, A.; Jafari, M.; Hussain, S.; Sani, Z.A.; Hasanzadeh, F.; Khozeimeh, F.; et al. Fusion of convolution neural network, support vector machine and Sobel filter for accurate detection of COVID-19 patients using X-ray images. Biomed. Signal Process. Control. 2021, 68, 102622. [Google Scholar] [CrossRef]
  289. Jahmunah, V.; Sudarshan, V.K.; Oh, S.L.; Gururajan, R.; Gururajan, R.; Zhou, X.; Tao, X.; Faust, O.; Ciaccio, E.J.; Ng, K.H.; et al. Future IoT tools for COVID-19 contact tracing and prediction: A review of the state-of-the-science. Int. J. Imaging Syst. Technol. 2021, 31, 455–471. [Google Scholar] [CrossRef] [PubMed]
  290. Basiri, M.E.; Nemati, S.; Abdar, M.; Asadi, S.; Acharrya, U.R. A novel fusion-based deep learning model for sentiment analysis of COVID-19 tweets. Knowl.-Based Syst. 2021, 228, 107242. [Google Scholar] [CrossRef]
  291. Alizadehsani, R.; Sani, Z.A.; Behjati, M.; Roshanzamir, Z.; Hussain, S.; Abedini, N.; Hasanzadeh, F.; Khosravi, A.; Shoeibi, A.; Roshanzamir, M.; et al. Risk factors prediction, clinical outcomes, and mortality in COVID-19 patients. J. Med. Virol. 2021, 93, 2307–2320. [Google Scholar] [CrossRef]
  292. Taiwo, O.; Ezugwu, A.E. Smart healthcare support for remote patient monitoring during covid-19 quarantine. Inform. Med. Unlocked 2020, 20, 100428. [Google Scholar] [CrossRef]
  293. Ardakani, A.A.; Kwee, R.M.; Mirza-Aghazadeh-Attari, M.; Castro, H.M.; Kuzan, T.Y.; Altintoprak, K.M.; Besutti, G.; Monelli, F.; Faeghi, F.; Acharya, U.R.; et al. A practical artificial intelligence system to diagnose COVID-19 using computed tomography: A multinational external validation study. Pattern Recognit. Lett. 2021, 152, 42–49. [Google Scholar] [CrossRef] [PubMed]
  294. Ai, T.; Yang, Z.; Hou, H.; Zhan, C.; Chen, C.; Lv, W.; Tao, Q.; Sun, Z.; Xia, L. Correlation of Chest CT and RT-PCR Testing for Coronavirus Disease 2019 (COVID-19) in China: A Report of 1014 Cases. Radiology 2020, 296, E32–E40. [Google Scholar] [CrossRef] [PubMed]
  295. Fang, Y.; Zhang, H.; Xie, J.; Lin, M.; Ying, L.; Pang, P.; Ji, W. Sensitivity of Chest CT for COVID-19: Comparison to RT-PCR. Radiology 2020, 296, E115–E117. [Google Scholar] [CrossRef]
  296. Lin, E.C. Radiation Risk from Medical Imaging. Mayo Clin. Proc. 2010, 85, 1142–1146. [Google Scholar] [CrossRef] [PubMed]
  297. Peng, Q.-Y.; Wang, X.-T.; Zhang, L.-N.; Chinese Critical Care Ultrasound Study Group (CCUSG). Findings of lung ultrasonography of novel corona virus pneumonia during the 2019–2020 epidemic. Intensive Care Med. 2020, 46, 849–850. [Google Scholar] [CrossRef]
  298. Despotovic, V.; Ismael, M.; Cornil, M.; Mc Call, R.; Fagherazzi, G. Detection of COVID-19 from voice, cough and breathing patterns: Dataset and preliminary results. Comput. Biol. Med. 2021, 138, 104944. [Google Scholar] [CrossRef]
  299. Kobat, M.A.; Kivrak, T.; Barua, P.D.; Tuncer, T.; Dogan, S.; Tan, R.-S.; Ciaccio, E.J.; Acharya, U.R. Automated COVID-19 and Heart Failure Detection Using DNA Pattern Technique with Cough Sounds. Diagnostics 2021, 11, 1962. [Google Scholar] [CrossRef]
Figure 1. Pictorial representation of normal and COVID-19 affected lungs.
Figure 2. Overview of the selection process for relevant articles.
Figure 3. The complete framework to detect COVID-19 using various approaches.
Figure 4. Sample images using various medical image modalities.
Figure 5. Percentage of various classes in the assessment of COVID-19 by imaging modalities (X-ray, CT, and X-ray and CT).
Figure 6. Comparison of Cvd.Acc, Cvd.Sen, Cvd.Spe, F1-Score, and AUC of AI techniques to detect COVID-19 using box plots.
Figure 7. Various methodologies adopted by state-of-the-art techniques using different modalities.
Figure 8. IoT-based smart healthcare system to detect COVID-19.
Table 1. Summary of frequently used publicly available datasets for the detection of COVID-19.
S.No. | Paper/Source | Imaging Modality | Total Number of Images
1 | https://www.kaggle.com/tawsifurrahman/covid19-radiography-database (accessed on 21 August 2021) | X-ray | Normal: 10,192; COVID: 3616; Viral pneumonia: 1345; Lung opacity: 6012
2 | https://www.kaggle.com/prashant268/chest-xray-covid19-pneumonia (accessed on 21 August 2021) | X-ray | Normal: 1583; COVID: 576; Pneumonia: 4273
3 | [35]/https://github.com/UCSD-AI4H/COVID-CT (accessed on 21 August 2021) | CT | COVID: 349; Non-COVID: 397
4 | [36]/https://www.kaggle.com/plameneduardo/sarscov2-ctscan-dataset (accessed on 21 August 2021) | CT | COVID: 1252; Non-COVID: 1230
5 | [37]/https://mosmed.ai/datasets/covid19_1110 (accessed on 21 August 2021) | CT | 1110 patients with severity grading (CT-0 to CT-4)
6 | [38]/https://zenodo.org/record/3757476#.YPUTnugzbIU (accessed on 21 August 2021) | CT | 20 labeled COVID-19 CT scans (1800+ annotated slices)
7 | [39]/https://github.com/BorgwardtLab/covid19_ultrasound (accessed on 21 August 2021) | US | Videos and images (Healthy: 90; COVID-19: 92; Bacterial pneumonia: 73; Viral pneumonia: 6)
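The class counts in Table 1 are strongly imbalanced, which matters when interpreting the accuracy figures in later tables. As a minimal sketch (pure Python; counts copied from dataset 1 of Table 1), this is how raw class counts translate into the percentage shares visualized in Figure 5:

```python
# Sketch: per-class share of images for dataset 1 of Table 1.
# Counts come from the table; the function itself is generic.

def class_percentages(counts):
    """Return {class_name: percentage of total images}, rounded to 1 decimal."""
    total = sum(counts.values())
    return {name: round(100 * n / total, 1) for name, n in counts.items()}

dataset1 = {
    "Normal": 10192,
    "COVID": 3616,
    "Viral pneumonia": 1345,
    "Lung opacity": 6012,
}

print(class_percentages(dataset1))
# Normal dominates with roughly 48% of the 21,165 images,
# while viral pneumonia contributes only about 6%.
```

With such skew, a classifier predicting "Normal" for every image would already reach about 48% accuracy, which is why the tables also report sensitivity, specificity, and F1-score.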
Table 2. State-of-the-art AI techniques to detect COVID-19 using chest X-ray imagery.
Paper | Method Used: Preprocessing + Segmentation + Feature Extraction + Feature Selection + Classification, or CNN + Classification | Result Obtained | Dataset Used (Most Are Public) | No. of Classes
[114]Image enhancement + WS + deep CNN (ResNet50) and DWT and GLCM + mRMR + RF. 2 classes (C-19, NC): Cvd.Acc: 99.45, Cvd.Sen: 99.17, Cvd.Pre: 97.51, F1-Score: 0.9833; 4 classes: Cvd.Acc: 98.48, Cvd.Sen: 98.72, Cvd.Pre: 97.89, F1-Score: 0.9829. N: 1500, C-19: 790, BP: 1304, VP: 1215 (after data augmentation)
[115]Color layout descriptor + k-NN Cvd.Sen: 96.5, Cvd.Pre: 96.5Total:86
[116]CNN model + Long short-term memory (LSTM)Cvd.Acc: 99.4, Cvd.Sen: 99.3, Cvd.Spe: 99.2, F1-Score: 98.9, AUC: 99.9N: 1525, C-19: 1525,P: 15253
[117]Concatenation of the Xception and ResNet50V2 Cvd.Acc (avg.): 91.4N: 8851,C-19: 180,P: 60543
[118]CNN model. 3 classes (N, C-19, P): Cvd.Acc: 95, Cvd.Sen: 96.9, Cvd.Spe: 97.5, Cvd.Pre: 95, F-measure: 95.6; 4 classes: Cvd.Acc: 89.6, Cvd.Sen: 89.92, Cvd.Spe: 96.4, Cvd.Pre: 90, F-measure: 96.4. N: 310, C-19: 284, BP: 330, VP: 327
[119]CNN modelAUROC: 0.96Pvt. + Public Dataset3
[120]DarkNet-based CNN model. 2 classes (N, C-19): Cvd.Acc (avg.): 98.08, Cvd.Sen (avg.): 95.13, Cvd.Spe (avg.): 95.3, Cvd.Pre (avg.): 98.03, F1-Score (avg.): 96.51; 3 classes: Cvd.Acc (avg.): 87.02, Cvd.Sen (avg.): 85.35, Cvd.Spe (avg.): 92.18, Cvd.Pre (avg.): 89.96, F1-Score (avg.): 87.37. N: 500, C-19: 127, P: 500
[121]2D-CTf + CSSA+ EfficientNet-B0 Cvd.Acc: 99.69, Cvd.Sen: 99.44, Cvd.Spe: 99.81, Cvd.Pre: 99.62, F-measure: 99.53N: 1281,C-19: 159,VP: 1285 3
[122]VGG-16 modelCvd.Acc(avg.): 97N: 3520,C-19: 250,P: 27533
[123]ResNet50 + ResNet101Cvd.Acc: 97.77, Cvd.Sen: 97.14, Cvd.Pre: 97.14N: 315,C-19: 250, BP: 300,VP: 3502(C-19,O)
[58]ResExLBP + Relief-F+ SVMCvd.Acc: 99.69, Cvd.Sen: 98.85, Cvd.Spe: 100N: 234, C-19: 872
[124]VGG16 modelCvd.Acc: 98.1N: 2880, C-19: 415, P: 51792(C-19,NC)
Cvd.Acc: 94.53
[125]ResNet18, ResNet50, SqueezeNet,& DenseNet121Cvd.Sen: 98, Cvd.Spe(avg.): 90C-19: 200, NC:50002
[126]Capsule Network-based architectureCvd.Acc: 95.7, Cvd.Sen: 90, Cvd.Spe: 95.8, AUC: 0.97 2(C-19,O)
[127]VGG16 modelCvd.Sen: 97.62, Cvd.Spe: 78.57N:142, C-19: 1422
[128]ResNet101Cvd.Acc: 71.9, Cvd.Sen: 77.3, Cvd.Spe: 71.8C-19: 154, NC: 5828 (test data)2
[129]Deep learning modelCvd. Acc C-19: 100,P: 93.75,N: 100N: 66, C-19: 51,NC: 21,P: 160,TB: 545
[130]Sequential CNN modelCvd.Acc: 98.3, Cvd.Sen: 100, Cvd.Pre: 96.72, F1-Score: 98.3,ROC area: 0.983N: 659, C-19: 2952
[131]HE +VGG16-based modelCvd.Acc (avg.): 86, Cvd.Sen (avg.): 86, Cvd.Spe(avg.): 93, Cvd.Pre(avg.):86,F1-Score: 86N: 132, C-19: 132,P: 1323
[132]Histogram matching and autoencoder and CLAHE + custom CNN model. Cvd.Acc (avg.): 94.43, Cvd.Sen (avg.): 92.53, Cvd.Spe: 96.33, Cvd.Pre (avg.): 93.76, F1-Score (avg.): 93.14, AUC (avg.): 0.988. N: 4337, C-19: 2589. 2 classes
[133]Ensemble of ResNet-18 ModelCvd.Acc: 95.5, Cvd.Sen: 100, Cvd.Pre: 94N: 1579,C-19: 184,P: 42453
[134]HE + lung segmentation using UNet + various deep models are analyzed.
[135]4 models analyzed (Best: VGG16 and VGG19)Cvd.Acc: 99.38, Cvd.Sen: 100, Cvd.Spe: 99.33N: 802, C-19: 7902
[136]CLAHE+VGG16 and VGG19 used (Best: VGG16)Cvd.Acc: 95.9, Cvd.Sen: 92.5, Cvd.Spe: 97.5,AUC: 0.950 (max. only for C-19)N: 607,C-19: 607,P: 6073
[137]CNN model to separate COVID-19 and pneumonia
[138]AlexNet, GoogLeNet, and ResNet18 are used (GoogLeNet best for 4 classes). Cvd.Acc: 80.56, Cvd.Sen: 80.56, Cvd.Pre: 84.17, F1-Score: 82.32. N: 79, C-19: 69, BP: 79, VP: 79. 4 classes
[76]MLP-CNNCvd.Acc: 95.4, Cvd.Sen: 95, Cvd.Pre: 92.5, F1-Score: 93.6C-19: 112, NC: 302
[139]LightCovidNetCvd.Acc (avg.): 96.97N: 1341,C-19: 446,P: 13453
[140]MobileNet v2Cvd.Acc: 96.78, Cvd.Sen: 98.66, Cvd.Spe: 96.46N: 504, C-19: 224, P: 714 2(C-19,O)
Cvd.Acc: 94.723(N,C-19,P)
[141]Truncated InceptionNet. Cvd.Acc (avg.): 98.77, Cvd.Sen (avg.): 95, Cvd.Spe (avg.): 99, Cvd.Pre (avg.): 99, F1-Score (avg.): 0.97, AUC (avg.): 0.99. N: 2003, C-19: 162, P: 4280, TB: 400. 4 classes
[142]CNN modelCvd. Prec (avg.), Cvd. Sen (avg.), F1-score (avg.): 100C-19: 500, P: 5002
[143]CNN modelCvd.Acc (testing): 94.4N:8066, C-19:183,P: 55513
[144]COVID-Net modelCvd.Acc: 93.3Total: 13,975 from 13,870 patients3(N,C-19,P)
[85]CNN model (Inception) + FO-MPA + k-NNCvd.Acc: 98.7, F-score: 98.2DS1: C-19 +ve: 200, C-19 -ve: 16752
Cvd.Acc: 99.6, F-score: 99DS2: C-19 +ve: 219, C-19 -ve: 1341
[63]FrMEMs + MRFO + k-NNCvd.Acc: 96.09, Cvd.Sen: 98.75, Cvd.Pre: 98.75DS1: C-19 +ve: 216,C-19 -ve: 16752
Cvd.Acc: 98.09, Cvd.Sen: 98.91, Cvd.Pre: 98.91DS2: C-19 +ve: 219,C-19 -ve: 1341
[145]Xception model + SVMCvd.Acc: 99.33, Cvd.Sen: 99.27, Cvd.Spe: 99.38, Cvd.Pre: 99.27, F1-score:99.27,AUC: 99.32N: 565,C-19: 5372
[146]Discriminative cost sensitive learning approachCvd.Acc: 97.01, Cvd.Pre: 97, Cvd.Sen: 97.09,F1-score: 96.98N: 1000,C-19: 239,P: 10003
[147]CNN model. 2 classes (N: 1583, C-19: 225): Cvd.Sen (avg.): 91.05, Cvd.Spe (avg.): 99.61, Cvd.Acc (avg.): 98.34, ROC-AUC (avg.): 95.33; 2 classes (C-19: 225, P: 4292): Cvd.Sen (avg.): 92.88, Cvd.Spe (avg.): 99.79, Cvd.Acc (avg.): 99.44, ROC-AUC (avg.): 96.33; 3 classes (N: 1583, C-19: 225, P: 4292): F1-score (avg.): 94.10
[148]HE and GC + DenseNet103 + ResNet18. Cvd.Acc: 91.9. N: 191, C-19: 180, BP: 54, VP: 20, TB: 57. 4 classes (N, BP, VP, TB)
[149]VGG16 modelCvd.Acc, Cvd.Sen, Cvd. Prec, F-score: 80C-19: 70, NC: 702
[54]ACGAN based model (CovidGAN)Cvd.Acc: 95.00N: 403, C-19: 7212(N, C-19)
[150]CNN modelCvd.Acc: 99.70, Cvd.Pre: 99.70, Cvd.Sen: 99.70, Cvd.Spe: 99.55N: 1579, C-19: 423,VP:14852(N,C-19VP)
[151]Deep learning modelCvd.Acc: 97.25, Cvd.Pre: 97.24,F1-score: 97.21N: 27,228, C-19: 209, P: 57943
[152]CNN + gated recurrent unit (GRU) Cvd.Sen: 96, Cvd.Pre: 96, F1-score: 95N: 141, C-19: 142, P: 1413
[153]Ensemble of deep CNN model (InceptionResNetV2 + ResNet152V2 + VGG16+ DenseNet201)Cvd.Acc: 99.2, Cvd.Sen: 99.12, Cvd.Spe: 99.07, F-score: 99.17,AUC: 99.21N:2039, C-19:1663,P: 401,TB:3944
[154]MCFF-Net66-Conv1-GAPCvd.Acc: 94.66N:1500,C-19:942, BP:1802,VP:17974
[155]ResNet50V2 + t-SNECvd.Acc: 95.49, Cvd.Sen: 99.19, Cvd.Pre:96.19, F1-score: 98.0, AUC: 95.49N: 616, C-19: 616,P: 6163
[156]CNN modelCvd.Acc:100, Cvd.Sen:100, Cvd.Spe:100, Cvd.Prec:100, F1-score:100, AUC:100N:42, C-19:1362
[157]Enhanced Inception-ResNetV2 modelCvd.Acc(avg.): 98.80, Cvd.Sen(avg.): 99.11, Cvd.Prec(avg.): 98.61,F1 score(avg.): 98.86N:1341,C-19:219,VP: 13453
[158]CNN model and GoogLeNetCvd.Acc: 97.62, Cvd.Sen: 98.29, Cvd.Spe: 97.64, F-score: 98.30,AUC: 97.96N: 1421,C-19: 13322
[159]VGG16 ModelCvd.Acc: 98.72, Cvd.Sen: 98.78, Cvd.Spe: 98.70, Cvd.Prec: 96.43, F1-score: 97.59N:1341,C-19:1200,VP:13453
[160]AlexNet Cvd.Acc: 99.13, Cvd.Sen: 99.4, Cvd.Spe: 99.15,F-score: 99.49,AUC: 99.31Consists: N,C-19,P,TB4
[161]Ensemble of MobileNet and InceptionV3Cvd.Acc: 96.49, Cvd.Prec: 93.01, Cvd.Sen: 92.97,F-score: 92.97N:1050,C-19:1050,BP:1050,VP:10504
[162]VGG16 modelCvd.Acc(avg.): 91.69, Cvd.Sen(avg): 95.92, Cvd.Spe(avg.): 100Total: 77203(N, C-19,P)
[163]CLAHE + InceptionV3 + ANNCvd.Acc: 97.19N: 1583,P: 42732
[97]CNN with various optimization algorithmCvd.Acc:96, Cvd.Sen:100, Cvd.Spe:99, Cvd.Pre:96, F1-Score:0.98N: 1583, C-19: 576, VP:42733
[164]VGG16 modelCvd.Acc: 96, Cvd.Sen: 92.64, Cvd.Spe: 97.27N: 504, C-19: 2242
Cvd.Acc: 92.53, Cvd.Sen: 86.7, Cvd.Spe: 95.1N:504, C-19: 224, P: 7003
[50]FOSF and GLCM and HOG + GWO + Ensemble of classifiers Cvd.Acc: 98.06, Cvd.Sen: 98.83, Cvd.Spe: 96.51, Cvd.Pre: 98.26,F-measure: 98.55 AUC:0.97N: 782, C-19: 782, P: 7822 (N,AB)
Cvd.Acc: 91.32, Cvd.Sen: 96.51, Cvd.Spe: 86.2, Cvd.Pre:87.36,F-measure: 91.71,AUC: 0.912(C-19,P)
[165]Ensemble of deep CNN model (VGG19 + DenseNet121) + SVMCvd.Acc: 99.71N:2341, C-19: 798,P: 23452 (C-19,NC)
Cvd.Acc: 98.28, Cvd.Sen (avg), Cvd.Pre(avg.),F1-Score (avg.): 98.333
[166]CNN model + Ensemble of classifiers Cvd.Acc: 98.91, Cvd.Sen: 97.82, Cvd.Pre: 100,F1-Score: 98.89N: 2300,C-19: 23002
[167]Deep learning model (Inception architecture)Cvd.Acc: 96, Cvd.Sen: 93, Cvd.Spe: 97, Cvd.Pre: 97, F1-Score: 0.96C-19: 435,NC: 5052
[168]UNet with ResNet + CNN modelCvd.Acc (avg.): 96.32N:1840,C-19:433,BP:2780,VP:1345,TB: 3945
[169]Two separate CNN models for binary and ternary classification Cvd.Acc: 98.7, Cvd.Sen: 100, Cvd.Spe: 98.3N:145,C-19: 145, BP: 1452(N, C-19)
Cvd.Acc: 98.3, Cvd.Sen: 99.3, Cvd.Spe: 98.13
[170]VGG16 and Xception model (Best: Xception)Cvd.Sen: 100, Cvd.Spe: 97.6, F1-Score: 97.7N: 400, C-19: 402,P:200,I: 352
[171]Various DNN + Majority voting schemeCvd.Acc: 99.31N: 1338, C-19: 237, VP: 13363
[172]Customized CNN Model Cvd.Acc: 92.95, Cvd.Sen (avg.): 90.72, Cvd.Pre(avg.): 94.04,F1-Score(avg.): 0.9204N: 1341, C-19: 744 (Independent set)2
[173]NanoChest-net modelAnalyzed with various datasets.
[174]VGG16+ HS + k-NNCvd.Acc, Cvd.Sen, Cvd.Pre,F1-Score, AUC:100 N: 480,C-19: 2802
[175]OptiDCNN modelCvd.Acc: 99.11N: 5000, C-19: 1842
[176]HOG and CNN(VGG19) + ME + CNN classifier + WSCvd.Acc: 99.49, Cvd.Sen: 93.65, Cvd.Spe: 95.7C-19 +ve: 1979, C-19 -ve: 31112
[177]Ensemble-CNNs (based on ResNeXt-50, Inception-v3, and DenseNet-161). 3 classes (N, C-19, P): Cvd.Acc: 75.23 ± 3.40, Cvd.Sen: 75.20, Cvd.Spe: 87.60, Cvd.Pre: 78.28, F1-Score: 73.43, AUC: 0.8140; 5 classes: Cvd.Acc: 81.00 ± 2.39, Cvd.Sen: 82.96, Cvd.Spe: 85.24, Cvd.Pre: 82.99, F1-Score: 81.49, AUC: 0.8810. N: 711, C-19: 711, P: 711, BP: 711, VP: 711, Lung opacity not pneumonia: 711 (public + Pvt.)
[178]Showed, through various experiments, that two-class models are not valid for diseases with similar symptoms
[179]Exemplar COVID-19FclNet9 + SVMCvd.Acc: 99.64N: 150,C-19:1272
Cvd.Acc: 98.84N: 4000,C-19: 3616, P: 13453
Cvd.Acc: 97.60N: 234,C-19:125,BP:242,VP:1484
[180]Decompose, Transfer, and Compose (DeTraC)+PCACvd.Acc: 93.1, Cvd.Sen:100N: 80, C-19:105,SARS: 113
[77]UNet + HRNet Cvd.Acc: 99.26, Cvd.Sen:98.53, Cvd.Spe: 98.82 Total: 2722
[181]Various CNN model used (Best:EfficientNetB0)Cvd.Acc:92.93, Cvd.Sen: 90, Cvd.Spe: 95, Cvd. Prec: 88.3,F1- score: 0.88N: 1341, C-19: 420, P: 1345 3
[182]EfficientNet B3-XCvd.Acc: 93.9, Cvd.Sen: 96.8, Cvd.PPV: 100N:7966+100, C-19: 152+31 P: 5421+1003
[183]Various pre-trained CNN models (Best: ResNet50)Cvd.Acc: 96.1 (N,C-19), Cvd.Acc: 99.5(C-19,VP), Cvd.Acc: 99.7(C-19,BP)N: 2800, C-19: 341, BP: 2772, VP: 14932
[184]CNN model + SVMCvd.Acc (avg.): 95.81, Cvd. Prec(avg.): 95.27, F1 score(avg.): 94.94N:1266 +317, C-19:460 + 116 P:3418 + 855 (Pvt.)3
[185]ResNet50 + SVM. Cvd.Sen: 80, Cvd.Spe: 81, AUC: 0.81. Training and validation: C-19: 250, NC: 250; independent test set: C-19: 74, NC: 36 (Pvt.). 2 classes
[186]COGNEX's VisionPro Deep Learning™. F-score: 95.3 (for segmented lung). N: 7966 + 100, C-19: 258 + 100, P: 5451 + 100. 3 classes
[84]Pillow library + HSGO + SVM Cvd.Acc:99.65C-19: 371, NC: 13412
[187]CNN model. Cvd.Acc (avg.): 98.03, Cvd.Sen (avg.): 98.83, Cvd.Spe (avg.): 97. DS1: C-19: 217, NC: 1126; DS2: C-19: 2025, NC: 2025. 2 classes
[188]AlexNet + Relief + SVMCvd.Acc: 99.18N:1583, C-19: 219, P:42903
[189]RGB to YUV and YUV to RGB + CNNCvd.Acc: 84.76, Cvd.Sen: 98.99, Cvd.Spe: 92.19, F-score: 0.9389,AUC: 0.5948N:28,C-19:78,P: 79(each for BP and VP)4
[190]CNN modelCvd.Acc: 98.44Total: 392, C-19: 1962
[191]Deep CNN modelCvd.Acc(avg.): 91.62, AUC:91.71C-19 +ve: 538, C-19 –ve: 4682
[192]Deep CNN model. 2 classes (N, C-19: 2484 each): Cvd.Acc (avg.): 99.2, Cvd.Sen (avg.): 99.2, F1-score: 0.992; 3 classes (N, C-19, P: 3829 each): Cvd.Acc (avg.): 95.2, Cvd.Sen (avg.): 95.2, F1-score: 0.952
[193]MobileNetV2 Cvd.Acc: 92.91, Cvd.Pre: 92N: 234, C-19: 390 2
[49]DenseNet201 model+ Quadratic SVM Cvd.Acc: 98.16, Cvd.Sen: 98.93, Cvd.Spe: 98.77N: 2924, C-19: 683,P: 42723
[194]Cluster-based learning + Ensemble of classifiers Cvd.Acc (avg.):100N:79,C-19: 69, BP:79, VP:79 2(N,C-19)
Cvd.Acc(avg.): 85.233(N,C-19,BP)
Cvd.Acc(avg.): 74.054
[195]Various deep CNN models are compared (best: XceptionNet). F1-score: 0.97. N: 1345 + 238, C-19: 490 + 86, P: 3632 + 641 (train + test). 3 classes
[196]CNN modelCvd.Acc: 98.19N: 10,456, C-19: 573, P: 11,673 (Pvt.)2(C-19,P)
Cvd.Acc: 91.213
[197]Federated learning model Cvd.Acc: 98.72N: 1266, C-19: 460,P: 3418 (Pvt.)2(C-19,P)
Cvd.Acc: 95.963
[80]ResNet50 + ASSOA + MLPCvd.Acc: 99.70Total: 58632(C-19+ve, C-19-ve)
[198]Several CNN models are analyzed (Best: VGG16)Cvd.Acc: 91N:1341, C-19:219,P:13453
[199]Semi-supervised open set domain adversarial network (SODA)Avg. AUC-ROC Score: 0.9006(C-19), 0.9082(P)With different domain target dataset
[200]VGG16 modelCvd.Acc: 97, Cvd.Sen: 99, Cvd.Spe: 99, Cvd.Pre: 97, F-score: 98N:1400, C-19: 210, P: 14003
[201]CovFrameNet (deep learning architecture). Cvd.Acc: 100, Cvd.Sen: 85, Cvd.Spe: 100, Cvd.Pre: 85, F-score: 90, AUC: 50. Using two different datasets
[202]Self-supervised super sample decomposition for transfer learning (4S-DT) modelCvd.Acc: 97.54, Cvd.Sen: 97.88, Cvd.Spe: 97.15DS1: N: 296, C-19: 388, SARS: 413(N, C-19, SARS)
Cvd.Acc: 99.80, Cvd.Sen: 99.70, Cvd.Spe: 100DS2: N: 1583,C-19: 576,P: 42733 (N,C-19,P)
[203]VDI + Residual encoder + SVMCvd.Acc: 93.60, Cvd.Sen: 88, Cvd.Pre: 100, F1-score: 93.60C-19: 315, NC: 3572
[204]RCoNetksCvd.Acc (avg.):97.89, Cvd.Sen(avg.):97.76, Cvd.Spe(avg.):98.24, Cvd.PPV(avg.):97.93, F1-score(avg.):97.63N: 8851, C-19: 238, P: 60453
Cvd.Acc (%): COVID accuracy, Cvd.Sen(%): COVID sensitivity, Cvd.Spe(%): COVID specificity, Cvd.Pre(%): COVID precision, Normal: N, COVID-19: C-19, Pneumonia: P, Bacterial pneumonia: BP, Viral pneumonia: VP, Tuberculosis: TB, Non-COVID: NC, Others: O, Abnormal: AB, Private: Pvt., DS: dataset, Severe: S, Non-severe: NS, Mild: M, Moderate: mod, Critical: cr, Infected/Infection: I, Not infected: NI, Community acquired pneumonia (CAP): P, Lung cancer: LC.
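The Cvd.Acc, Cvd.Sen, Cvd.Spe, Cvd.Pre, and F1-score figures defined above are all standard confusion-matrix statistics with COVID-19 treated as the positive class. As a minimal sketch of how they are computed (the tp/fp/tn/fn counts below are illustrative, not taken from any cited study):

```python
# Sketch: per-class metrics as reported in Tables 2 and 3, derived from
# binary confusion-matrix counts with COVID-19 as the positive class.

def covid_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), specificity, precision, and F1-score, in %."""
    acc = 100 * (tp + tn) / (tp + fp + tn + fn)
    sen = 100 * tp / (tp + fn)           # Cvd.Sen: true-positive rate
    spe = 100 * tn / (tn + fp)           # Cvd.Spe: true-negative rate
    pre = 100 * tp / (tp + fp)           # Cvd.Pre (also reported as PPV)
    f1 = 2 * pre * sen / (pre + sen)     # harmonic mean of precision and recall
    return {"Cvd.Acc": acc, "Cvd.Sen": sen, "Cvd.Spe": spe, "Cvd.Pre": pre, "F1": f1}

# Illustrative counts: 90 true positives, 5 false positives,
# 95 true negatives, 10 false negatives.
print(covid_metrics(tp=90, fp=5, tn=95, fn=10))
```

Note that with the imbalanced datasets listed in the tables, a high Cvd.Acc can coexist with a low Cvd.Sen, which is why most studies report several of these metrics together.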
Table 3. State-of-the-art AI techniques to detect COVID-19 using CT scans.
Paper | Method Used: Preprocessing + Segmentation + Feature Extraction + Feature Selection + Classification, or CNN + Classification | Result Obtained | Dataset (Most Are Public) | No. of Classes
[205]Various deep models are analyzed (Best: ResNet101)Cvd.Acc: 99.51, Cvd.Sen: 100, Cvd.Spe: 99.02, AUC: 0.994C-19: 108,NC: 86,Total: 1020 slice, (Pvt.)2
[206]EfficientNet-family-based architecture. DS1 (NC: 1230, C-19: 1252): Cvd.Acc: 98.99, Cvd.Sen: 98.80, Cvd.PPV: 99.20; train on DS1, test on DS2 (NC: 463, C-19: 349): Cvd.Acc: 56.16, Cvd.Sen: 53.06, Cvd.PPV: 54.74. 2 classes
[207]LinkNet + DenseNet + DTCvd.Acc(avg.): 94.4, Cvd.Pre(avg.): 96.7, Cvd.Rec(avg.): 95.2, F1-score(avg.): 96.0C-19:445,NC:2332
[208]Novel conditional generative model (CoSinGAN). Independent testing is done using 50 CT cases (for lung segmentation and infection learning)
[93]Intensity normalization and segmentation + Q-deformed entropy + ANOVA+ LSTMCvd.Acc: 99.68N: 107,C-19: 118,P: 963
[209]Modified Alexnet modelCvd.Acc: 94.75, Cvd.Sen: 93.22, Cvd.Spe: 96.69, Cvd.PPV:97.27C-19:3482,NC:2751 (Pvt.)2
[210]Ensemble various models using majority voting schemeCvd.Acc: 85.2, Cvd.Sen: 85.4, Cvd.Pre: 85.7,F-score: 0.852,AUC: 0.91C-19 + ve: 349,C-19 -ve: 3972
[211]ResNet50Cvd.Acc: 82.91, Cvd.Sen: 77.66, Cvd.Spe: 87.62C-19:345,NC:3972
[99]CNN model with MODECvd.Acc: outperforms competitive models by 1.9789% 2
[212]Ensemble built using ResNet152V2, DenseNet201, and VGG16. Cvd.Acc: 98.83, Cvd.Sen: 98.83, Cvd.Spe: 98.82, F-measure: 98.30, AUC: 98.28. N: 3038, C-19: 2373, P: 2890, TB: 3193. 4 classes
[36]eXplainable Deep Learning approach (xDNN). F1-score: 97.31. SARS-CoV-2: 1252, non-SARS-CoV-2: 1230. 2 classes
[35]Multi-task and self-supervised learningCvd.Acc: 89, F1- score: 0.90, AUC: 0.98C-19:349,NC: 4632
[213]Semi-Inf-NetCvd.Sen: 0.725, Cvd.Spe: 0.960, Dice: 0.739100 images from 19 patients (Pvt)C-19 lung Seg.
[214]3D CNN modelCvd.Acc: 87.50, Cvd.Sen: 86.90, Cvd.Spe: 90.10,F1-score: 82,AUC: 94.40Train: 2186, Test: 2796 (Pvt.)2 (CAP,C-19)
[215]CNN modelCvd.Acc (avg): 94.03, Cvd.Sen(avg.): 94.44, Cvd.Spe (avg.): 93.63N: 320, C-19: 320 (Pvt.)2
[92]AlexNet + Guided WOACvd.Acc: 87.50, AUC: 99.50C-19: 334, NC-19: 7942
[216]Multi-task multi-slice deep learning system. Cvd.Acc: 95.21. N: 251, C-19: 245, H1N1: 105, CAP: 123 (Pvt.). 4 classes
[217]LBP and statistical features + ReliefF and NCA + DNNCvd.Acc: 95.84N: 397,C-19: 3492
[218]Region growing + deep CNN model (ResNet101 as its backbone)Cvd.Acc: 94.9Total: 1110 patients with 5 classes5
[219]Radiomic features + mRMR + XGBoostAUC: 0.95 ± 0.02Total: 152 Patients
[220] Segmentation of infected lung regions using ResNet50 as the backbone
[221]DTCT and GLCM + RFCvd.Acc (avg.): 72.2, Cvd.Sen(avg.): 77, Cvd.Spe(avg.): 68,AUROC (avg.): 0.8C-19: 291, P: 279 (Pvt.)2
[222]ResGNet (Graphs are generated using ResNet101-C features)Cvd.Acc (avg.): 96.62, Cvd.Sen(avg.): 97.33, Cvd.Spe(avg.): 95.91, Cvd.Pre(avg.): 96.21,F1-Score(avg.): 0.9665N:148,C-19: 148 (Pvt.)2
[223]CNN model (DenseNet201) + ELM. Cvd.Acc: 98.36, Cvd.Sen: 98.28, Cvd.Spe: 98.44, Cvd.Pre: 98.22, F1-Score: 98.25, AUC: 98.36. C-19: 349, NC: 397. 2 classes
[224]M²UNet (multi-task multi-instance deep network). Cvd.Acc (avg.): 98.5, Cvd.Sen (avg.): 95.2, Cvd.Pre (avg.): 97.5, F1-Score (avg.): 0.963, AUC (avg.): 0.991. S: 51, NS: 191 (Pvt.). 2 classes
[225]Dual-branch combination network (using UNet + ResNet50)Cvd.Acc: 96.74, Cvd.Sen: 97.91, Cvd.Spe: 96.00,AUC: 0.9864N: 75 scans, C-19: 48 scans (Pvt.)2
[226]Majority voting scheme with ResNet50Cvd.Acc: 96, Cvd.Sen:100, Cvd.Spe: 96,AUC: 0.90Two public datasets are used2
[227]HE + WF + AlexNet + SVMCvd.Acc: 96.69, Cvd.Sen: 96, Cvd.Spe: 98N:500,C-19:488, P:5003
[228]DenseNet-201Cvd.Acc: 97.8, Cvd.Sen: 98.1, Cvd.Spe: 97.3, Cvd.Pre: 98.4, F1-score: 98.25C-19: 1500, NC: 15002
[229]CLAHE + VGG-19 modelCvd.Acc: 95.75, Cvd.Sen: 97.13,F1- score: 95.75, ROC-AUC: 99.30C-19 +ve: 1252, C-19 -ve: 12302
[230]VGG16 model and ensemble learningCvd.Acc: 93.57, Cvd.Sen: 94.21, Cvd.Spe: 93.93, Cvd.Pre: 89.4,F1-score: 91.74N: 243,C-19: 790,P: 3843
[61]Z-score normalization and KF+CNN + fuzzy c-means + LDNCvd.Pre: 96, Cvd.Sen: 97, F-score: 97 and volume overlap error (VOE) of 5.6 ± 1:2%.
[231]Golden Key Tool + VGG modelCvd.Acc: 100DS1- N: 55, C-19: 3492
Cvd.Acc: 93.478, Cvd.Pre: 97.33, F1-score: 87.5DS2- N: 55, C-19: 349, NC: 203
Cvd.Acc: 90.12, Cvd.Pre: 90.6DS3- C-19: 349, NC: 3962
[232]PatchShuffle Stochastic Pooling Neural Network (PSSPNN)F1-score(avg.): 95.79Total:5214(N,C-19, P, TB)
[233]Clinical information and chest CT features + XGBoostCvd.Sen: 90.91, Cvd.Spec: 97.96, AUC: 0.924Total: 1982 (M,S)
[234]3D CU-NetDSC: 0.960, 0.963, 0.771, Cvd.Sen: 0.969, 0.966, 0.837, Cvd.Spe: 0.998, 0.998, 0.998C-19: 70 for detecting C-19 infection
[235]Tensor + COVID-19-Net (VGG16) + Transfer-Net (ResNet50)Cvd.Acc: 94, Cvd.Sen: 96, Cvd.Spe: 92N: 700, C-19: 7002
[236]Ensemble model (using Resnet18, Densenet201, Mobilenetv2 and Shufflenet)Cvd.Acc: 96.51, Cvd.Sen: 96.96, Cvd.Spe: 96.00,F1-Score: 0.97,AUC: 0.99C-19: 349,NC: 3972
[237]LungINFseg, model for segmentationCvd.Acc (avg.): 98.92, Cvd.Sen(avg.): 83.10, Cvd.Spe(avg.): 99.52, DSC(avg.):80.34
intersection over union (IoU) (avg.): 0.6877
20 labeled COVID-19 CT scans (1800 + annotated
Slices)
[238]Feature Pyramid Network(FPN) DenseNet201 for detectionCvd.Sen: 98.3 (m), Cvd.Sen: 71.2(mod), Cvd.Sen: 77.8(s), Cvd.Sen: 100(cr)1110 subjects Severity classification
[239]Volume of interest based DenseNet-201Cvd.Acc: 88.88, Cvd.Sen:89.77, Cvd.Spe: 94.73, F1-Score: 88.88C-19: -moderate risk:40
severe risk:40 extreme risk:40
3
[240]Various deep network architectures are analyzed using publicly available two COVID-19 CT datasets2
[241]UNetF1-Score, improvement of 5.394 ± 3.015%.+ve:492. -ve: 447
[242]Stationary wavelets + CNN model (Best: ResNet18)Cvd.Acc: 99.4, Cvd.Sen: 100, Cvd.Spe: 98.6,AUC: 0.9965C-19:349, NC:3972
[243]Gabor filter + convolution and pooling layers + RFF1 score: 0.99C-19: 349,NC: 3972
[244]Stacked autoencoder detector modelCvd.Acc(avg.):94.7, Cvd.Sen(avg.):94.1, Cvd.Pre(avg.):96.54, F1-score (avg.):94.8C-19: 275,NC: 1952
[245]DenseNet201 model + k-NNCvd.Acc, Cvd.Sen, Cvd.Pre, & F1-score:100C-19:2740,Suspected Cases: 2740 (Private)2
[246]CNN model + MI and Relief-F and DA +SVMCvd.Acc: 98.39, Cvd.Sen: 97.78, Cvd.Pre: 98.21, F1-score: 0.98, AUC: 0.9952SARS-CoV-2: 1252
Non SARS-CoV-2: 1230
2
Cvd.Acc: 90.0, Cvd.Sen: 84.06, Cvd.Pre: 93.55,F1-score: 0.8855, AUC: 0.9414C-19:349, NC: 463
[247]VGG19 modelCvd.Acc: 94.52C-19: 349,NC: 4632
[248]VGG16 modelCvd.Acc: 98.0, Cvd.Sen: 99.0, Cvd.Spe: 94.9N: 275, C-19: 1952
[249]Radiological features + Chi-square test + Ensemble classifierCvd.Acc: 91.94, Cvd.Sen: 93.54, Cvd.Spe: 90.32,AUC: 0.965C-19: 306,non-COVID-19 pneumonia: 306 (Pvt.)2
[250]Various CNN and texture based approachesCvd.Acc (avg.): 95.99, Cvd.Sen(avg.): 94.04, Cvd.Spe(avg.): 99.01,F1-score(avg.): 0.9284, AUC (avg.): 0.9903COVID-19: 386, NC: 10102
[251]Worried deep neural network + pre-trained models (InceptionV3, ResNet50, and VGG19)Cvd.Acc: 99.04, Cvd.Prec: 98.68, Cvd.Rec: 99.11,F-score: 98.90Total: 2623 (Pvt.)2(I,NI)
[252]Density peak clustering approachStructural similarity index (SSIM): 89Total images: 12 (Pvt.)C-19 Seg.
[253]EfficientNet-b0 modelCvd.Acc: 99.83, Cvd.Sen: 92.86, Cvd.Spe: 98.32, Cvd.PPV:91.92Total images: 107,675 (Pvt.)2(C-19,NC)
Cvd.Acc: 97.32, Cvd.Sen: 99.71, Cvd.Spe: 95.98, Cvd.PPV: 93.262 (C-19,P)
[254]EfficientNetB3Cvd.Sen: 97.2, Cvd.Spe: 96.8,F1-score: 0.970, AUC: 0.997N:105,C-19:143,P:147 (Pvt.)3
Cvd.Sen: 92.4, Cvd.Spe: 98.3,F1-score: 0.953,AUC: 0.989N: 121,C-19: 119, P: 117(Pvt.)3
Cvd.Sen: 93.9, Cvd.Spe: 83.1,AUC: 0.954C-19: 856,Non-P: 254 (Pvt.)2
[255]COVID SegnetFor COVID-19 segmentation: Dice Score: 0.726, Cvd.Sen.: 0.751, Cvd.Pre.: 0.726Train: 731 Test: 130 patients (Pvt.)Lung and infected regions seg.
For lung segmentation: Dice Score: 0.987, Cvd.Sen.: 0.986, Cvd.Pre.: 0.990
[256]Anam-NetDice Score: 0.956, Cvd.Acc.: 98.5, Cvd.Sen.: 92.7, Cvd.Spe.: 99.8N:929, AB:880Anomalies seg.
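Several of the segmentation entries above (e.g., Semi-Inf-Net [213], LungINFseg [237], COVID Segnet [255]) score the overlap between predicted and ground-truth infection masks with the Dice coefficient and intersection over union (IoU). A minimal sketch of both measures on flat binary masks; the toy data below is illustrative and not taken from any cited study:

```python
def dice_and_iou(pred, truth):
    """Dice coefficient and IoU for two equal-length binary (0/1) masks."""
    assert len(pred) == len(truth)
    intersection = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)          # |A| + |B|
    union = total - intersection            # |A ∪ B|
    dice = 2 * intersection / total if total else 1.0
    iou = intersection / union if union else 1.0
    return dice, iou

# Toy example: the prediction overlaps 3 of 4 ground-truth pixels.
pred = [1, 1, 1, 0, 1, 0]
truth = [1, 1, 1, 1, 0, 0]
dice, iou = dice_and_iou(pred, truth)   # dice = 0.75, iou = 0.6
```

The two measures are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why many of the surveyed papers report only one of them.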
Table 4. State-of-the-art AI techniques to detect COVID-19 using lung US imagery.

| Paper | Method Used: Preprocessing + Segmentation + Feature Extraction + Feature Selection + Classification or CNN + Classification | Result Obtained | Dataset (Most Are Public) | No. of Classes |
|---|---|---|---|---|
| [257] | Features from various layers of a deep CNN model are fused | Cvd.Acc (avg.): 92.5, Cvd.Sen (avg.): 93.2, Cvd.Pre (avg.): 91.8 | N: 53 + 15, C-19: 45 + 18, BP: 23 + 7 | 3 |
| [258] | Autoencoder network and separable convolutional branches attached to a modified DenseNet201 | Accuracy 17% higher than the traditional DenseNet | Convex: 38, Linear: 20; Score 0 (healthy) to Score 3 (worst case) | 4 |
| [39] | Frame- and video-based CNN models (best: VGG) | Cvd.Sen: 0.90 ± 0.08, Cvd.Spe: 0.96 ± 0.04 | N: 90, C-19: 92, BP: 73, VP: 6 (includes videos and images) | 3 |
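The classification metrics reported throughout these tables (Cvd.Acc, Cvd.Sen, Cvd.Spe, Cvd.Pre, F1-score) all derive from the same confusion-matrix counts, with COVID-19 treated as the positive class. A minimal sketch of the standard definitions, using illustrative counts rather than values from any cited study:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # recall / true-positive rate (Cvd.Sen)
    specificity = tn / (tn + fp)      # true-negative rate (Cvd.Spe)
    precision = tp / (tp + fp)        # positive predictive value (Cvd.Pre / PPV)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"acc": accuracy, "sen": sensitivity,
            "spe": specificity, "pre": precision, "f1": f1}

# Hypothetical test set: 90 TP, 5 FP, 95 TN, 10 FN.
m = classification_metrics(tp=90, fp=5, tn=95, fn=10)
# m["acc"] = 0.925, m["sen"] = 0.9, m["spe"] = 0.95
```

Note that sensitivity and specificity are insensitive to class imbalance, whereas accuracy is not; this is one reason the surveyed studies usually report several metrics side by side.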
Table 5. State-of-the-art AI techniques to detect COVID-19 using X-ray and CT scans.

| Paper | Method Used: Preprocessing + Segmentation + Feature Extraction + Feature Selection + Classification or CNN + Classification | Result Obtained | Dataset (Most Are Public) | No. of Classes |
|---|---|---|---|---|
| [259] | VGG19 model | Cvd.Acc: 89.47, Cvd.Sen: 76.19, Cvd.Spe: 97.22 | X-ray: 673 radiology images of 342 patients | 2 (N, C-19) |
|  |  | Cvd.Acc: 95.61, Cvd.Sen: 96.55, Cvd.Spe: 95.29 | SARS-CoV-2 CT: C-19: 1252, NC: 1230 | 2 (C-19, P) |
|  |  | Cvd.Acc: 95, Cvd.Sen: 94.04, Cvd.Spe: 95.86 | X-ray: 5856 images | 2 (C-19, NC) |
| [260] | VGG19 + CNN model | Cvd.Acc: 98.05, Cvd.Spe: 99.5, Cvd.Rec: 98.05, Cvd.Pre: 98.43, F1-score: 98.24, AUC: 99.66 | Total images: 33,676 | 4 (N, C-19, P, LC) |
| [65] | LBP and MFrLFM + SFS | Cvd.Acc: 99.3 ± 0.2, F1-score: 93.1 ± 0.2, AUC: 94.9 ± 0.1 | Chest X-ray: 1926 | 2 (C-19, NC) |
|  |  | Cvd.Acc: 93.2 ± 0.3, F1-score: 92.1 ± 0.3, AUC: 93.2 ± 0.3 | CT scan: 2482 |  |
| [261] | COVID-ResNet53 | Cvd.Acc: 97.1, Cvd.Sen: 98.9, Cvd.Spe: 95.7, Cvd.Pre: 94.5 | X-ray: C-19: 4045, NC: 5500 | 2 (C-19, NC) |
|  |  | Cvd.Acc: 97.7, Cvd.Sen: 98.7, Cvd.Spe: 95.6, Cvd.Pre: 97.9 | CT: C-19: 5427, NC: 2628 |  |
| [262] | CNN model | Cvd.Acc: 96.68, Cvd.Sen: 96.24, Cvd.Spe: 95.65 | N: 7021, C-19: 1066, P: 7021 | 3 (N, C-19, P) |
| [263] | PF + GraphCovidNet | Cvd.Acc, Cvd.Pre, Cvd.Sen, F1-score: 100 | SARS-CoV-2 CT: N: 1229, C-19: 1252 | 2 |
|  |  | Cvd.Acc, Cvd.Pre, Cvd.Sen, F1-score: 100 | CT: N: 407, C-19: 349 | 2 |
|  |  | Cvd.Acc, Cvd.Pre, Cvd.Sen, F1-score: 99.84 | X-ray: N: 1592, C-19: 504, P: 4343 | 3 |
| [264] | HE and WF + Haralick texture features and VGG16 model | Cvd.Acc: 93, Cvd.Sen: 90, Cvd.Pre: 91 | N: 1349, C-19: 407, BP: 2538, VP: 1345 | 4 |
| [265] | HE and WF + DenseNet103 + Haralick texture features and ResNet101 model | Cvd.Acc: 94.9, Cvd.Sen: 93, Cvd.Pre: 93 | Total images: 12,520; N: 4100, C-19: 220, P: 4100, lung opacity: 4100 | 4 |
| [266] | DenseNet121 + bagging tree classifier | Cvd.Acc: 99 | Total images: 274 | 2 (N, C-19) |
| [267] | Contrastive multi-task convolutional neural network (CMT-CNN); CNN model: EfficientNet | Cvd.Acc (avg.): 93.46, Cvd.Sen (avg.): 90.57, Cvd.Spe (avg.): 90.84, AUC (avg.): 89.33 (2-class); Cvd.Acc (avg.): 91.45 (3-class) | CT scan: N: 1164, C-19: 1980, P: 1614 | 2 (C-19, O); 3 (N, C-19, P) |
|  |  | Cvd.Acc (avg.): 97.23, Cvd.Sen (avg.): 92.97, Cvd.Spe (avg.): 91.91, AUC (avg.): 92.13 (2-class); Cvd.Acc (avg.): 93.49 (3-class) | X-ray: N: 1583, C-19: 231, P: 4007 |  |
| [268] | Contextual features reduced by convolutional filters (CFRCF) | Cvd.Acc: 94.23 | CT: C-19: 349, NC: 397; X-ray: C-19: 187, NC: 73 | 2 (C-19, NC) |
| [269] | CNN model | Cvd.Sen: 97.92, Cvd.Spe: 94.64, Cvd.Pre: 94.81, AUC: 0.9808 | Total images: 672 (X-ray: 336, CT: 336) | 2 (C-19, NC) |
| [270] | VGG16 + InceptionV3 models | Cvd.Sen: 100, Cvd.Pre: 0.97, F1-score: 0.98 | CT: 746, X-ray: 268 | 2 (N, C-19) |
| [271] | CovidNet model | Cvd.Acc: 100, Cvd.Sen: 100 | CT: C-19: 1252, NC: 1230 | 2 |
|  |  | Cvd.Acc: 96.84, Cvd.Sen: 92.19 | X-ray: N: 445, C-19: 321, P: 500 | 3 |

Using all X-ray, CT, and US imageries:

| Paper | Method Used | Result Obtained | Dataset (Most Are Public) | No. of Classes |
|---|---|---|---|---|
| [272] | Pre-trained deep learning models: DenseNet-161, ResNet-34, VGG-16, and MobileNet-V2 | Cvd.Sen: 97.91, Cvd.Spe: 99.57, Cvd.Pre: 99.57, F1-score: 98.73 | X-ray: C-19: 234, NC: 234 | 2 |
|  |  | Cvd.Acc: 64.41, Cvd.Sen: 66.28, Cvd.Spe: 62.93, Cvd.Pre: 58.67, F1-score: 0.6225 | CT: C-19: 392, NC: 392 |  |
|  |  | Cvd.Acc: 99.36, Cvd.Sen: 98.74, Cvd.Spe: 100, Cvd.Pre: 100, F1-score: 0.9973 | US: C-19: 19, NC: 14 |  |
| [273] | VGG19 model | Cvd.Pre: 86 | X-ray: N: 60,361, C-19: 140, P: 322 | 3 |
|  |  | Cvd.Pre: 84 | CT: C-19: 349, NC: 397 | 2 |
|  |  | Cvd.Pre: 100 | US: N: 235, C-19: 399, P: 277 | 3 |
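Several of the best-performing systems in Tables 3 and 5 ([226], [230], [236], [272]) combine multiple CNNs by ensembling. A minimal sketch of the simplest variant, hard majority voting over per-model class predictions; the model outputs and class labels below are hypothetical:

```python
from collections import Counter

def majority_vote(predictions):
    """Hard-voting ensemble: return the class predicted by most base models.

    predictions: list of class labels, one per base model.
    Ties resolve to the label encountered first (Counter.most_common order).
    """
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical base models vote on a single scan.
votes = ["covid", "covid", "non-covid"]
label = majority_vote(votes)   # "covid"
```

Soft voting (averaging per-class probabilities before taking the argmax) is a common alternative when the base models expose calibrated scores, as in the weighted ensembles surveyed above.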
Table 6. Average (avg.) performance of COVID-19 detection systems.

X-ray

| Class | Cvd.Acc (%) | Cvd.Sen (%) | Cvd.Spe (%) | F1-score (%) | AUC (%) |
|---|---|---|---|---|---|
| 2 | 97.05 | 95.37,086 | 94.79 | 96.11 | 95.45 |
| 3 | 94.78 | 95.63,542 | 97.10 | 85.71 | 93.55 |
| 4 | 91.69 | 94.335 | 97.16 | 83.32 | 64.74 |
| 5 | 92.41 | 82.96 | 95.24 | 81.49 | 88.1 |

CT

| Class | Cvd.Acc (%) | Cvd.Sen (%) | Cvd.Spe (%) | F1-score (%) | AUC (%) |
|---|---|---|---|---|---|
| 2 | 92.99 | 92.61,897 | 93.28 | 94.57 | 91.40 |
| 3 | 94.55 | 95.016 | 95.55 | 92.08 | 99.3 |
| 4 | 97.02 | 98.83 | 98.82 | 97.9 | 98.28 |
| 5 | -- | -- | -- | -- | 94.9 |

X-ray and CT

| Class | Cvd.Acc (%) | Cvd.Sen (%) | Cvd.Spe (%) | F1-score (%) | AUC (%) |
|---|---|---|---|---|---|
| 2 | 96.54 | 94.35 | 95.81 | 97.38 | 93.87 |
| 3 | 94.99 | 94.21 | 95.65 | 99.84 | -- |
| 4 | 95.52 | 94.75 | -- | 98.24 | 99.66 |
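The averages in Table 6 are taken over studies that do not all report every metric (for instance, only AUC is available for the 5-class CT systems), so missing cells must be skipped rather than counted as zero. A sketch of such column-wise averaging over hypothetical study records, with `None` marking an unreported metric:

```python
def column_averages(rows):
    """Average each metric across studies, ignoring missing (None) entries."""
    out = {}
    for key in rows[0]:
        vals = [r[key] for r in rows if r.get(key) is not None]
        out[key] = sum(vals) / len(vals) if vals else None
    return out

# Hypothetical 2-class studies; the second reports no AUC.
studies = [
    {"acc": 97.0, "sen": 95.0, "auc": 95.5},
    {"acc": 97.1, "sen": 95.7, "auc": None},
]
avg = column_averages(studies)   # avg["acc"] = 97.05; avg["auc"] = 95.5
```

Because each cell may average a different subset of studies, adjacent cells in such a table are not directly comparable sample-for-sample, which is worth bearing in mind when reading Table 6.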
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Gudigar, A.; Raghavendra, U.; Nayak, S.; Ooi, C.P.; Chan, W.Y.; Gangavarapu, M.R.; Dharmik, C.; Samanth, J.; Kadri, N.A.; Hasikin, K.; et al. Role of Artificial Intelligence in COVID-19 Detection. Sensors 2021, 21, 8045. https://doi.org/10.3390/s21238045


