Review

Critical Analysis of the Current Medical Image-Based Processing Techniques for Automatic Disease Evaluation: Systematic Literature Review

by Baidaa Mutasher Rashed and Nirvana Popescu *
Computer Science Department, University Politehnica of Bucharest, 060042 Bucharest, Romania
* Author to whom correspondence should be addressed.
Sensors 2022, 22(18), 7065; https://doi.org/10.3390/s22187065
Submission received: 20 August 2022 / Revised: 6 September 2022 / Accepted: 14 September 2022 / Published: 18 September 2022
(This article belongs to the Collection Biomedical Imaging and Sensing)

Abstract

Medical image processing and analysis techniques play a significant role in diagnosing diseases. Thus, during the last decade, several noteworthy improvements in medical diagnostics have been made based on medical image processing techniques. In this article, we reviewed articles published in the most important journals and conferences that used or proposed medical image analysis techniques to diagnose diseases. Starting from four scientific databases, we applied the PRISMA technique to efficiently process and refine articles until we obtained forty research articles published in the last five years (2017–2021) aimed at answering our research questions. The medical image processing and analysis approaches were identified, examined, and discussed, including preprocessing, segmentation, feature extraction, classification, evaluation metrics, and diagnosis techniques. This article also sheds light on machine learning and deep learning approaches. We also focused on the most important medical image processing techniques used in these articles in order to establish the best methodologies for future approaches, discussing the most efficient ones and thereby proposing a comprehensive reference source of medical image processing and analysis methods that can be very useful in future medical diagnosis systems.

1. Introduction

Classification methods have increased in importance and now play a significant role in image processing. Their importance stems from their applications in various fields, particularly in medicine. Given the importance of classification in medicine, new and sophisticated classification tools and methods are needed to diagnose and classify medical images efficiently [1]. There are many classification algorithms covering hundreds of different classification problems, and no single classification method can successfully and efficiently address all of them. As a result, answering the question of which classification approach is best for a particular study is challenging. The fast growth of medical data and imagery in recent years has necessitated the adoption of new methodologies based on big data technology, artificial intelligence, and machine learning in health care, making it an important research area [2]. Given the importance of classification in the medical field, new approaches for rapidly identifying and evaluating medical images are required. As a result, this research aims to compare existing and conventional methods for medical image classification and, based on these findings, to suggest a novel algorithm for medical image classification [3].
The field of medical image processing and analysis has contributed to substantial medical achievements. A correct diagnosis requires the precise identification of each disease, which is achieved by integrating methods and techniques that support more effective clinical diagnosis based on images obtained from various imaging modalities, which have been used increasingly widely and successfully to detect illnesses [4]. This study aims to describe the process of medical image analysis, identify the techniques used in the analysis, and give a comprehensive literature review on illness identification based on medical imaging across various diseases, fields, and applications. This study searched for works related to the topic of the systematic literature review (SLR) and provides information about the process applied to article selection. In the final step, we kept only the forty most relevant articles that answered our research questions related to medical image processing and analysis techniques for diagnosis.
The study is organized as follows. Section 2 describes the research methodology applied in the SLR. Section 3 provides a detailed explanation of medical imaging modalities and the medical image analysis processes in the surveyed studies: the image processing methods developed by the researchers for disease diagnosis are described, various filtering and image improvement methods are discussed, popular segmentation methods are presented, feature extraction methods are introduced, and the classification methods utilized for human disease diagnosis and their evaluation metrics are discussed. Section 4 and Section 5 cover machine learning and deep learning techniques, respectively, Section 6 describes the general disease diagnosis system, Section 7 discusses the outcomes and future work, and Section 8 concludes the study.

2. Research Methodology

This section describes the protocol utilized to locate, collect, and assess the state-of-the-art techniques under study. It is divided into four phases: research questions, research strategy, article selection criteria, and research results.

2.1. Research Questions

The SLR (Systematic Literature Review) aims to address the research questions by finding all relevant research outcomes from previous studies. The research questions are divided into five sub-questions:
Q1: What are the modalities of medical imaging?
Q2: What is the task of medical image processing and analysis?
Q3: Which medical image processing methods are most used in diagnostic systems?
Q4: What diagnostic techniques have been adopted and developed?
Q5: Is the system that has been adopted or designed capable of producing good results?
We searched various databases such as Elsevier, IEEE Xplore, Springer, and Google Scholar. We included the relevant studies that mainly focus on two or more of our research questions.

2.2. Research Strategy

Our systematic literature review collected many studies related to our search topic over the last five years, between 2017 and 2021, from the following databases: Elsevier, IEEE Xplore, Springer, and Google Scholar. We used combinations of keywords and terms, including “Medical Image Analysis”, “Medical Image Processing”, “Medical Image Processing Techniques”, “Medical Image Processing Techniques” AND “Disease Diagnosis”, and “Diagnostic Techniques” AND “Medical Image Processing”, and obtained about 3204 articles matching the searched keywords. After removing 206 duplicate articles, the remaining set consisted of 2998 articles. After examining these studies by their titles and abstracts, a new set of keywords was applied, including “Diseases Classification”, “Machine Learning”, “Deep Learning”, “Neural Networks”, and “Hybrid Diagnosis System”, and 2900 articles were excluded. Of the 98 remaining articles, 58 were excluded after in-depth reading based on the exclusion criteria, since our focus was on articles that used machine learning and deep learning methods to create a hybrid diagnostic system, either by merging two methods or by modifying and improving a common method, with the aim of proposing possible areas for future research. These articles, which reached the fourth stage of our systematic exclusion technique, dealt with different topics and methods related to machine learning (ML) methods, found in [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32], deep learning (DL) strategies, found in [33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54], and convolutional neural network (CNN) approaches, found in [55,56,57,58,59,60,61,62]. Finally, 40 studies fulfilling our research criteria were obtained and deeply analyzed. In this way, the most suitable articles were selected based on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) technique, as shown in Figure 1. By applying the PRISMA technique, which is appropriate for any systematic literature review, we kept only the most relevant articles from large databases. In the final step, as shown in Figure 1, the final article set was not only a result of the automatic selection based on keyword combinations but also represented the answers to our research questions, which are discussed in Section 2.3.

2.3. Criteria for Article Selection

The following criteria were determined to choose articles:
  • Articles using the most up-to-date techniques for analyzing medical images.
  • Articles that were written in the English language.
  • Articles published in the last five years (2017–2021).
  • Studies that were published in peer-reviewed conferences or journals.
Following the definition of the inclusion criteria, the following exclusion criteria were determined:
  • Duplicate references from the various electronic archives that were searched.
  • Articles with a page count of less than four.
  • Articles that fail to respond to any of the research questions.
  • Articles that were written in a language other than English.
  • Articles that did not address the study’s goals.

2.4. Research Results

After searching the scientific databases and identifying the research results, the most relevant articles related to the research aims were found, which included articles that explored different medical image processing techniques and focused on the techniques of diagnosing diseases. The selected articles were carefully read, and the extracted results were analyzed and assessed to summarize the existing research, identify the most useful techniques, and propose possible areas for future research.

3. Results of Systematic Review

This section is divided into two parts: the first deals with medical imaging modalities, and the second deals with the analysis of medical images. Each revolves around the main objectives of the systematic review.

3.1. Medical Imaging Modalities

Medical images play a critical role in assisting health care workers in reaching a diagnosis and treatment for patients. Medical image processing is a set of procedures for extracting clinically useful data from various imaging modalities for diagnosis [63]. Numerous medical imaging modalities use ionizing radiation, magnetic resonance, nuclear medicine, optical methods, or ultrasound as the medium. Each modality has unique characteristics and responds differently to the structures of the human body [64]. These modalities serve various purposes, such as obtaining images of the inside of the human body or image samples of parts that cannot be seen with the naked eye [65]. The classification of medical imaging modalities and the main types of imaging methods addressed in the surveyed studies are illustrated in Figure 2.
The distribution of the forty chosen studies that used different modalities is illustrated in Table 1; this table shows the detailed distribution of publication references, imaging modality, type of disease, and medical databases.

3.2. Medical Image Analysis

This section is divided into subsections that introduce the major medical image analysis methods used in the studies reviewed, including image preprocessing, image segmentation, feature extraction, classification, and evaluation metrics.

3.2.1. Medical Image Preprocessing

Image preprocessing is a method for enhancing the quality of an image by eliminating irrelevant image data. Medical images include many irrelevant and unwanted segments, and some preprocessing methods are required to remove them. Image preprocessing aims to improve the quality of the images contained in the dataset, which improves the results of segmentation and feature extraction methods [106]. This section presents the preprocessing methods of the studies surveyed. One of the most important techniques to improve medical images and remove noise is filtering. One of the most widely used filters is the median filter [69,73,87,91,92,95,99], whose major benefit is that it preserves edges while removing noise. In [71], the authors used the synthetic minority over-sampling technique to generate synthetic samples from minor classes rather than simply replicating them, and they used the changing perspective of images technique to expand the dataset. To generate new images, computer vision techniques such as gray scaling, blurring, enhancing contrast, changing the color channel, sharpening, minimizing noise, and smoothing were used. In [73], the authors described how to utilize pixel-wise interpolation and the modified quadratic transform-based Radon transform algorithm to improve the contrast of the images. In [73,85,100], the authors applied the contrast limited adaptive histogram equalization (CLAHE) approach and morphological operations to remove noise and improve the images. In [76,81], the authors used a fast adaptive median filter to eliminate the noise from medical images and maintain accurate information. In [77], binarization and thinning techniques were applied to the selected images to produce better images. In [80,88,89], the authors improved the noise removal process for medical images to obtain clear images using a Wiener filter for contrast adjustment. In [84], the authors developed two methods to convert the input color images to grayscale, denoise, enhance contrast, and normalize the histogram. The first method utilized only the green channel, while the second method computed the grayscale as a weighted average of the red, green, and blue channels.
In [86,96,102], the authors used the normalization method to enhance the visual quality of images. In [87], the authors used noise removal techniques such as the max-min filter, midpoint filter, quantum noise filter, alpha-trimmed mean filter, impulse noise filter, and wavelet thresholding methods to remove noise from the images and smooth them to obtain better images. In [91], the authors employed linear and non-linear image processing filters such as mean, Gaussian, Log, and fuzzy filters to eliminate noise from images. In [92], the authors presented the BBHE (Brightness Preserving Bi-Histogram Equalization) approach, a method that decomposes the original image into two sub-images to overcome the drawback of histogram equalization; this is accomplished by applying gray-level histogram equalization to each sub-image. The Gradwrap algorithm, addressed in [93], improves the image appearance by removing image distortions. The B1 non-uniformity algorithm corrects the image color and intensity information, and the N3 bias field correction is then applied after the Gradwrap and B1 non-uniformity algorithms to correct intensity distortion. In [94], the authors used unsharp masking, a popular image sharpening method, to improve the contrast of histopathological images. In [97], the Wang–Mendel algorithm was employed to remove noise from the images; this algorithm is one of the most effective approaches for eliminating noise from medical images due to its high speed.
In [101], the authors preprocessed fundus images through image scaling and R, G, and B channel selection; the preprocessed green channel components were then analyzed by the 2D-VMD technique, a non-stationary, non-recursive, and completely adaptive decomposition technique for signal and image analysis. In [103], the authors used image normalization and data over-sampling techniques with data augmentation to enlarge the dataset so that it could be used for deep learning tasks. Data augmentation approaches apply a variety of operations to images, including scaling, geometric deformation, noise addition, alterations to the lighting, and image flipping. The authors also employed pseudo-labeling, a straightforward and efficient semi-supervised learning technique, to improve the performance of deep neural network models.
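To make the preprocessing step concrete, the sketch below chains a median filter with CLAHE, two of the most frequently reported operations in this subsection. It is a minimal illustration assuming OpenCV and a hypothetical grayscale input file, not the pipeline of any particular surveyed study.

```python
# Minimal preprocessing sketch (assumptions: OpenCV installed,
# a grayscale image stored at the hypothetical path "scan.png").
import cv2

# Load the image as a single-channel grayscale array.
image = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)

# Median filtering with a 5x5 kernel removes impulse noise while
# preserving edges, the property highlighted for the median filter above.
denoised = cv2.medianBlur(image, 5)

# CLAHE boosts local contrast while clipping the histogram to limit
# noise amplification; clipLimit and tileGridSize are tunable parameters.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(denoised)

cv2.imwrite("scan_preprocessed.png", enhanced)
```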

3.2.2. Segmentation Techniques

Image segmentation is responsible for recognizing and outlining items of interest in input images. Automatic medical image segmentation aids in the diagnosis of diseases and the identification of pathogens. Image segmentation methods can be divided into machine learning methods, such as supervised and unsupervised machine learning, and classical segmentation methods, such as threshold-based, edge-based, and region-based methods [107]. This section presents the segmentation methods used by the studies surveyed. In [69,73,74,76,77,81], the authors used thresholding; in this method, the image is partitioned into its foreground and background depending on the threshold value. In [69,96], the authors applied the contour technique; the active contour is a segmentation model that separates the pixels of interest from an image using energy forces and constraints. In [79,91], the authors suggested using watershed segmentation techniques; this approach is used to separate different objects. The studies [82,91] employed Otsu's threshold approach to extract the object; this approach finds an optimum threshold automatically and, because of its simple calculation, is one of the most successful techniques for image thresholding. The authors in [84] used the Gaussian matched filter approach with binarization realized with local entropy thresholding; this method allowed them to acquire more exact results than using only one binarization threshold. Additionally, the authors employed the crossing number algorithm for minutiae detection to extract minutiae and then automatically count their number.
The study [95] adopted a color space-based method for mammography image segmentation, followed by mathematical morphology. Post-processing with mathematical morphology, including filling, closing, and other operations, improved performance.
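As a concrete illustration of threshold-based segmentation, the sketch below applies Otsu's automatic threshold followed by simple morphological post-processing. It assumes scikit-image and SciPy and reads the hypothetical preprocessed file from the previous sketch; it is not the exact color space-based method of [95].

```python
# Otsu thresholding plus morphological post-processing (illustrative only).
import numpy as np
from skimage import io, filters, morphology
from scipy import ndimage

# Read the preprocessed grayscale image (hypothetical file name).
image = io.imread("scan_preprocessed.png", as_gray=True)

# Otsu automatically selects the threshold that best separates
# foreground from background intensities.
threshold = filters.threshold_otsu(image)
mask = image > threshold

# Morphological closing bridges small gaps along object borders,
# and hole filling produces a solid region of interest.
mask = morphology.binary_closing(mask, morphology.disk(3))
mask = ndimage.binary_fill_holes(mask)

io.imsave("lesion_mask.png", (mask * 255).astype(np.uint8))
```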

3.2.3. Feature Extraction Techniques

Feature extraction is the process of extracting meaningful data from raw data. It is crucial in image processing as it enhances the image’s quality by reducing the dimensionality of the image, extracting the unique features, and transforming the input data into a set of features that are used for classification purposes. There are various features such as color, texture, and shape, and each type has several methods to extract features from medical images [108]. The distribution of the chosen studies that used different feature extraction and reduction methods is illustrated in Table 2; this table comprises the detailed distribution of publication references, the type of features, and the method used.
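For illustration, the sketch below extracts a small set of gray-level co-occurrence matrix (GLCM) texture descriptors, one of the feature families listed in Table 2. It assumes scikit-image and a hypothetical 8-bit grayscale region-of-interest image; the chosen distances, angles, and properties are illustrative defaults, not settings from any surveyed study.

```python
# GLCM texture feature extraction sketch (assumptions: scikit-image,
# a uint8 grayscale ROI patch stored at the hypothetical path "roi_patch.png").
import numpy as np
from skimage import io
from skimage.feature import graycomatrix, graycoprops

roi = io.imread("roi_patch.png")  # expected to be a 2D uint8 array

# Co-occurrence matrix over one pixel distance and four directions.
glcm = graycomatrix(roi, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# Haralick-style properties averaged over the four directions form
# a compact texture feature vector for the classification stage.
features = [graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")]
print(features)
```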

3.2.4. Classification Techniques

Image classification is a difficult task in image analysis. The primary purpose of medical image classification is to accurately establish which parts of the human body are infected with the disease [109]. This study reviews the chosen studies that used the newest classification techniques for medical image classification. The study [67] proposed a new classifier based on KPCA and SVM. The support vector machine with kernel principal component analysis (SVM-KPCA) approach was designed to classify the images, and the proposed method achieved effective results, with 100% accuracy, 100% sensitivity, and 100% specificity. In the study [68], the authors suggested a hybrid approach consisting of a radial basis function neural network (RBFNN) to classify brain MRI images; the accuracy of this model varied from 80% to 90%.
The study [70] developed a neural-based detection system by employing skin imaging for two different skin disorders. The ANN was trained using the non-dominated sorting genetic algorithm-II, a well-known multi-objective optimization technique (NN-NSGA-II). The proposed model was compared to two well-known metaheuristic-based classifiers, NN-PSO (ANN trained with PSO) and NN-CS (ANN trained with Cuckoo Search). The proposed bag-of-features-enabled NN-NSGA-II model obtained 90.56% accuracy, 88.26% precision, 93.64% recall, and a 90.87% F-measure on the experimental data, indicating its superiority over the other models. The study [71] applied multiple artificial intelligence (AI) techniques, such as the convolutional neural network and support vector machine, which were combined with image processing tools to construct a superior structure; the accuracy attained after training with CNN alone was approximately 91%, which was raised to approximately 95.3% when combined with the SVM. In [79], the authors developed a new SVM-FA (support vector machine optimized with the firefly technique) classifier for diagnosing lung cancer in CT images, where the SVM classifier, optimized with the firefly technique, was applied to the preprocessed data. A comparative analysis was conducted between the proposed work, traditional work, and the SVM classifier to assess the competence of the proposed SVM-FA technique. The suggested work was successful and efficient, achieving an accuracy of 96% and a specificity of 83.3%.
The study [86] applied a deep neural network (DNN) with the rectified Adam optimizer to detect Alzheimer's disease in MRI images. The experimental outcomes showed that the DNN with the rectified Adam optimizer outperformed the existing work, landmark-based features with the SVM classifier, by 16% in classification accuracy. In the study [92], the authors addressed a combination of MLC (maximum likelihood classifier) and SVM (support vector machine) classifiers for the classification and diagnosis of DR (diabetic retinopathy) disease in fundus images. The researchers utilized these classifiers to raise the performance level, and the suggested approach demonstrated high accuracy (98.60%), sensitivity (99%), and specificity (99%). In [95], the authors introduced an individual classifier, a feed-forward ANN (FF-ANN), and two hybrid classifiers, random subspace with random forest (RSwithRF) and random subspace with Bayesian network (RSwithBN), for the classification of MRI brain images. The proposed system showed that ANN and hybrid classification approaches are the most appropriate for classification because of their high accuracy rates, achieving classification accuracies of 95.83%, 97.14%, and 95.71%, respectively.
In [100], the authors used various techniques, such as machine learning and different deep learning models, to predict various ophthalmic diseases; the study showed that the efficiency of each strategy varied depending on the input dataset and on the many symptoms of each disease.
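As a hedged illustration of the hybrid classifiers discussed above, the sketch below chains kernel PCA with an SVM, in the spirit of the SVM-KPCA approach of [67] but not its actual implementation. It uses scikit-learn and a built-in tabular dataset as a stand-in for extracted image features; the kernels and parameters are illustrative assumptions only.

```python
# SVM preceded by kernel PCA (illustrative sketch, not the pipeline of [67]).
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import KernelPCA
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in feature matrix and labels (replacing real image features).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

model = make_pipeline(
    StandardScaler(),                          # scale features before the RBF kernels
    KernelPCA(n_components=10, kernel="rbf"),  # non-linear dimensionality reduction
    SVC(kernel="rbf", C=1.0))                  # SVM classifier on the reduced features
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```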

3.2.5. Metric Evaluation

Accuracy, specificity, and sensitivity are the most prevalent metrics utilized in most disease diagnosis applications for humans. The number of true positive (TP), true negative (TN), false positive (FP), and false-negative (FN) samples are used to calculate these measures [110]. The sensitivity determines how many positive samples were identified (TP), whereas the specificity determines how many negative samples were identified (TN). The ratio of correctly recognized samples to the total number of samples is used to calculate classification accuracy [110]. Other metrics such as precision, recall, F1 score, and AUC must be used in conjunction with these metrics [111]. Table 3 illustrates the number of times each performance metric was utilized in each study.
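The sketch below shows how these metrics follow directly from the TP, TN, FP, and FN counts of a binary confusion matrix; it assumes scikit-learn, and the label vectors are placeholder values used only to make the snippet runnable.

```python
# Deriving the common evaluation metrics from the confusion matrix.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # placeholder ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # placeholder classifier predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy    = (tp + tn) / (tp + tn + fp + fn)   # correctly classified fraction
sensitivity = tp / (tp + fn)                    # recall on positive samples
specificity = tn / (tn + fp)                    # recall on negative samples
precision   = tp / (tp + fp)
f1_score    = 2 * precision * sensitivity / (precision + sensitivity)

print(accuracy, sensitivity, specificity, precision, f1_score)
```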

4. Machine Learning Techniques

This study classified machine learning (ML) segmentation and classification techniques as supervised and unsupervised machine learning. Supervised learning algorithms generate mathematical models using a set of labeled data (images) utilized for training; examples of supervised machine learning algorithms are K-nearest neighbors (K-NN), support vector machines (SVM), decision trees (DT), linear regression, logistic regression, random forest (RF), artificial neural networks (ANN), gradient boosting, and naïve Bayes models. Unsupervised learning algorithms create mathematical models based on a set of data that only contains inputs and no required output labels; the algorithms identify patterns in the data and classify them. Examples of unsupervised machine learning algorithms are K-means clustering, hierarchical clustering, the Apriori algorithm, principal component analysis (PCA), and fuzzy C-means (FCM) [112].
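The short sketch below contrasts the two families on the same synthetic feature matrix, training a supervised K-NN classifier on labeled samples and an unsupervised K-means clustering without labels. It assumes scikit-learn; the data and parameter choices are purely illustrative.

```python
# Supervised vs. unsupervised learning on the same synthetic data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised: K-NN learns from labeled training samples.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("K-NN test accuracy:", knn.score(X_test, y_test))

# Unsupervised: K-means groups the same samples without using any labels.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("K-means cluster assignments:", kmeans.labels_[:10])
```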
In the surveyed studies, several machine learning (ML) techniques were deployed for segmentation and classification. The study [85] compared the IPFCM (intuitionistic possibilistic fuzzy C-means) method, a hybridization of the negative function of the intuitionistic approach and the negative function of the possibilistic approach, with conventional segmentation methods for mammogram image segmentation, such as the Otsu algorithm, FCM (fuzzy C-means) clustering, IFCM (intuitionistic fuzzy C-means) clustering, and PFCM (possibilistic fuzzy C-means), and found that the proposed method was the best approach. In the studies [88,91,98], the authors used the fuzzy clustering method. Fuzzy clustering is the most extensively used method for image segmentation because of its benefits compared with traditional clustering methods, which include making regions more homogeneous, decreasing the number of erroneous spots, reducing noise sensitivity, and removing noisy regions [88]. In [92], the authors introduced the MSW-FCM technique (modified spatial weighted fuzzy C-means) for accurately segmenting blood vessels in retinal fundus images.
The distribution of the chosen studies that used the most popular machine learning techniques to diagnose human body disease is illustrated in Table 4. This table comprises the detailed distribution of publication references, techniques used, classified tasks, and classification accuracy results.

5. Deep Learning

Deep learning (DL) is well-known for its performance in image segmentation and classification models [113]. Convolutional neural networks (CNN) are the most extensively utilized deep learning approach in the articles reviewed. CNN is an important image processing approach that allows aberrant and normal samples to be classified accurately [114]. CNN employs a layered perceptron-driven architecture composed of fully connected networks in which every neuron in one layer is coupled to all neurons in the subsequent layers. The input images, an in-depth feature extractor, and a classifier are the three main components of a CNN [111]. There are three kinds of layers in a CNN, each of which performs a different function: (1) convolutional, (2) pooling, and (3) fully connected. The convolutional layer extracts the characteristics of the structure, the pooling layer is responsible for shrinking the feature maps and the number of network parameters, and the fully connected layer then decides which class the current input belongs to, depending on the retrieved features. Transfer learning algorithms can increase CNN performance in the case of limited input data [115]. A CNN can be created from scratch, an existing pre-trained network can be used without retraining, or a pre-trained network can be fine-tuned on a target dataset [111].
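For illustration, the sketch below assembles the three layer types into a small binary classifier using the Keras API; the input size, filter counts, and layer depths are assumptions for demonstration, not an architecture taken from any surveyed study.

```python
# Minimal CNN sketch: convolution, pooling, and fully connected layers
# for a binary task on 128x128 grayscale inputs (illustrative sizes).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),   # feature extraction
    tf.keras.layers.MaxPooling2D((2, 2)),                    # shrink feature maps
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),             # fully connected
    tf.keras.layers.Dense(1, activation="sigmoid"),           # normal vs. abnormal
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```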
According to this review, some research publications used distinct deep neural network architectures; Table 5 summarizes studies that used deep learning in disease diagnosis.
The study [66] suggested a new classification structure that depends on a combination of 2D CNNs and recurrent neural networks (RNN), which learn the properties of 3D PET images by decomposing them into a series of 2D slices. The intra-slice characteristics are captured using hierarchical 2D CNNs, while the inter-slice features are extracted using the gated recurrent unit (GRU) of an RNN for the final classification. The experimental outcomes showed that the suggested method has promising performance for Alzheimer's disease (AD) diagnosis. The authors in [80] applied a CNN-based algorithm to a chest X-ray dataset to detect pneumonia. Three approaches were examined: a linear support vector machine classifier with local rotation- and orientation-free features; transfer learning on two CNN models, VGG16 (Visual Geometry Group) and InceptionV3; and a capsule network trained from scratch. Data augmentation, a data preprocessing technique, was applied to all three approaches, and the outcomes revealed that it is an effective technique for improving the performance and efficiency of all three algorithms. In [86], the authors proposed a successful approach for predicting the probability of brain cancers in MRI images using CNNs and the Adam optimizer algorithm. The adaptive moment estimation (Adam) optimizer was introduced to expedite training of the network and evaluate the model to attain maximum accuracy. The study [93] introduced a new deep learning-based CNN model created by the Bayesian optimization algorithm for classifying Alzheimer's disease, mild cognitive impairment (MCI), and cognitively normal (CN) subjects in MRI images. The proposed method gives extraordinary outcomes compared to the existing techniques. The study [97] introduced a new optimized version of CNN based on a new, improved metaheuristic named the advanced thermal exchange optimizer for the detection of breast cancer and compared it with three different techniques, including multilayer perceptron (MLP), multiple instance (MI) learning, and transfer learning (TL), on the MIAS mammography database to demonstrate the superiority of the proposed method.
In the study [102], the authors suggested using the ML approach in the training process to create a prediction model by training a CNN algorithm, together with the “Adam” optimizer from the Python Keras optimization library with an initial learning rate of 0.001. The evaluation was carried out after partitioning the data into 80% for training and 20% for evaluation to compute the classification accuracy and model loss over a set number of 10 epochs, and the proposed algorithm gave good results. In the study [103], the authors developed a transfer learning-based technique to determine the severity of diabetic retinopathy; the proposed model was a deep learning model that combined multiple pre-trained image classification CNN models with the global average pooling (GAP) technique. The model attained an accuracy of 82.4% in terms of the quadratic weighted kappa (QWK) metric. In [104], the authors proposed a diagnosis system based on deep neural networks and an image retrieval method. Transfer learning and hashing functions increased the performance of the CNN and the image retrieval algorithms. The proposed system attained an accuracy of 97% for the CNN and the content-based medical image retrieval (CBMIR) method. The study [105] compared the effectiveness of modern CNN models for the task of modality classification and reported the superiority of a deep learning-based method over classic feature engineering approaches based on multi-label learning algorithms. The experimental results demonstrated that deep learning is more efficient than traditional methods and produced better and more robust feature representations than handcrafted feature extraction approaches. The findings showed that deep transfer learning techniques work well in the medical field, where data is scarce. The Google Inception-v3 model performed the best when classifying medical image modalities; except for VGG-16 and ResNet-50, the other models behaved similarly to Inception-v3.
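A hedged sketch of the transfer-learning pattern described in [102,103] is shown below: a frozen ImageNet-pretrained backbone, global average pooling, and the Adam optimizer with a 0.001 learning rate. The backbone choice, number of output classes, and input size are illustrative assumptions rather than the exact configuration of those studies.

```python
# Transfer-learning sketch: frozen pre-trained backbone + GAP + Adam (lr=0.001).
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # keep the pre-trained convolutional features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),        # GAP instead of flattening
    tf.keras.layers.Dense(5, activation="softmax"),  # e.g., 5 severity grades (assumed)
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"])

# model.fit(train_ds, validation_data=val_ds, epochs=10)  # e.g., an 80/20 split as in [102]
```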
Generally, the studies utilized accuracy, specificity, sensitivity, precision, recall, F1 score, and AUC as evaluation metrics. According to the conclusions of the studies, deep learning algorithms achieved good results in most of the evaluation metrics (as in Table 5).

6. Diseases Diagnosis System

Image processing has been extensively employed in various illness diagnosis procedures (human, animal, and plant), helping professionals select the appropriate treatment. In diagnosing human diseases, image processing techniques play an essential role. They can be used to identify disease signs (on the skin, for example) or in molecular research using microscope images that show the anatomy of the tissues [116]. A disease diagnosis system comprises several stages and methods. The first stage is image collection from different sources, either from a database available online or from other sources such as patient images available on the internet or provided by a specific hospital. In the preprocessing stage, different filtering methods are applied to improve the image; the segmentation stage then isolates the regions of interest, and the feature extraction stage extracts the important features. At the end of the process, the input image is classified, and metrics for evaluating the effectiveness of the disease diagnosis technique are applied. The general system stages for the diagnosis of any disease through image processing are illustrated in Figure 3. This figure shows a conceptual map that explains the concepts linked to the disease diagnosis steps and briefly describes the major evaluation metrics.
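To summarize the pipeline in code form, the sketch below chains the stages of Figure 3 into a single function. Every stage implementation here is a deliberately trivial placeholder, not a method from any surveyed study; it only shows how the stages connect.

```python
# Schematic end-to-end diagnosis pipeline with placeholder stage implementations.
import numpy as np

def preprocess(image):
    return np.clip(image, 0, 255)                 # stand-in for filtering / enhancement

def segment(image):
    return image > image.mean()                   # stand-in for threshold segmentation

def extract_features(mask):
    return [mask.mean(), mask.sum()]              # stand-in for texture / shape features

def classify(features):
    return "abnormal" if features[0] > 0.5 else "normal"  # stand-in classifier

def diagnose(image):
    """Chain the four generic stages of the diagnosis system."""
    return classify(extract_features(segment(preprocess(image))))

print(diagnose(np.random.randint(0, 256, (64, 64))))
```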
In this study, several disease diagnosis methods that used different image processing and classification strategies for diagnosing human diseases were studied in order to examine the new and important methods addressed in these studies.

7. Discussion and Future Directions

The growing interest in employing medical image processing approaches and AI techniques, such as ML and DL, may reduce doctors' workload and the repetitive, monotonous procedures required to diagnose and analyze patient data and images [117]. In this section, we discuss the state-of-the-art approaches and datasets used to diagnose human diseases through answers to the research questions, along with future directions.

7.1. Answer to Research Questions

The data extracted from the studied articles were explained and clarified to answer the research questions.
“Q1: What are the modalities of medical imaging?” The medical imaging modalities were identified in the articles, where many modern imaging methods have been used that help in diagnosing diseases in the early stages. The MRI modality was used in fourteen articles for the diagnosis of brain diseases, lung diseases, cardiovascular diseases, and heart attacks; the CT modality was used in eight articles for the diagnosis of lung diseases, liver tumors, and bone diseases. The X-ray modality was utilized in seven articles to diagnose osteoarthritis, pneumonia, and breast cancer. Five articles used retinal fundus images to diagnose eye illnesses. PET imaging was utilized in four articles to diagnose Alzheimer's disease, and dermoscopy skin imaging was used to diagnose skin problems.
“Q2: What is the task of medical image processing and analysis?” From the analysis of the articles, it was identified that the task of medical image processing and analysis is to diagnose diseases in different parts of the human body, assisting the patient in detecting the disease and treating it at an early stage.
“Q3: Which medical image processing methods are most used in diagnostic systems?” Through the analysis of the articles, we discovered that by using filtering techniques, we may improve the initial image and obtain a more exact detection of the ROI borders. Noise can be reduced by smoothing with a low pass or median filter, while edge sharpening and increased contrast can help to preserve the ROIs' borders. Color normalization, such as histogram equalization or specification, can boost contrast. A variety of transforms might be used to extract important features; the Fourier and wavelet transforms can be utilized to locate conditions in a domain other than the spatial one. Because the borders of the objects or regions containing the crucial information for the diagnosis must be identified correctly, image segmentation is one of the most essential phases in a disease diagnosis procedure. The simplest yet effective method is gray-level segmentation utilizing a single threshold, multiple thresholds, or a more advanced (Otsu) criterion. Several human illness diagnosis approaches have used active contour detection and its modifications, such as snakes. Several clustering approaches, such as K-means and fuzzy C-means, were applied in the segmentation process to separate the ROIs. The articles used many methods to extract the important features from objects, and we found that texture features were the most used in the analyzed articles. It was also obvious from the analysis that the most popular classification methods were neural networks of all types and SVM, used alone or with another technique as a hybrid system for the classification of diseases. The rest of the classification techniques examined in this review include K-means, K-NN, decision trees, naïve Bayes, random forest, logistic regression, and gradient boosting. All the classification or clustering methods can be exploited in the last stage of the diagnosis application to create a new diagnosis system to identify disease.
“Q4: What diagnostic techniques have been adopted and developed?” Analyzing the articles, we found that some of them adopted new hybrid methods for the classification and diagnosis of diseases, such as using the SVM method with KPCA, with FA, with MLC, as well as with fuzzy methods. Moreover, the combination of random forest (RF) with random subspace (RS), as well as CNN combined with RNN and with optimizers, was used to create systems for disease diagnosis.
“Q5: Is the system that has been adopted or designed capable of producing good results?” The main classification methods discussed in the articles achieved efficient and good results in diagnosing diseases. The accuracy of SVM ranged between 88% and 100% in the several human disease diagnosis applications examined, naïve Bayes achieved accuracy between 78% and 94%, decision trees ranged between 82% and 99%, and random forest ranged between 80% and 99%. The different kinds of neural networks achieved accuracy between 73% and 97%, and the accuracy of CNN ranged between 86% and 99.3%. Finally, the accuracy of the K-nearest neighbor ranged between 73% and 95.5% in the referenced approaches.

7.2. Future Directions

Improving the performance of disease classification and diagnosis using multiple medical image processing methods takes great effort. In future work, we will try to present new research directions that can be further exploited in disease diagnosis through medical image processing techniques. We will compare the most common methods used in diagnosis systems and select the most effective ones, i.e., those that introduce higher accuracy to the diagnosis, for the medical image database that we will use to build a new diagnostic system for diseases. The future research directions are briefly discussed as follows:
A. Study and analysis of the best and most common classification and diagnosis methods.
B. Experimenting with a set of medical data to compute the classification accuracy of the methods used.
C. Comparison of classification accuracy of these methods to identify the methods that are highly accurate.
D. Building a new diagnostic method by deriving a hybrid method, modifying a previous method, or combining previous methods.
The current study improved our knowledge of the best techniques that can be beneficial to our future research. We hope that this systematic literature review will also be useful to other researchers in their endeavor to improve disease diagnosis through medical image processing techniques. Through our comparisons, the most common methods used in diagnosis systems have been discussed, making it easier to select effective methods that introduce higher accuracy to diagnosis methods using medical databases.

8. Conclusions

Medical imaging plays a critical role in inspecting and diagnosing human diseases. For diagnosis, many algorithms based on diverse methodologies have been created. As a result, disease detection has become an important topic in medical image processing and medical imaging research.
This review studied the articles on disease diagnosis published between 2017 and 2021. Overall, forty articles from specialized academic repositories were analyzed. The review focused on the factors used to build and evaluate the disease diagnostic models: the datasets used, the various medical imaging modalities, image preprocessing techniques, image segmentation techniques, feature extraction techniques, classification approaches, and performance metrics. In addition, this systematic review highlighted a variety of AI techniques and presented a comprehensive study by exploring new diagnosis techniques and disease diagnosis issues, providing a variety of insightful information (such as the use of ML and DL), and giving an evaluation of each study. We aimed to address our study questions about effective diagnosis methodologies and to discover the solutions proposed by many researchers for the diagnosis of diseases. On this basis, it was discovered that developing a new disease diagnosis method is quite important.
According to the findings of this SLR, researchers have adopted various methods to classify medical images associated with multiple disease diagnoses. These methods have shown promising results in terms of accuracy, cost, and detection speed. After analyzing the forty articles, we found that the best preprocessing technique was the median filter, which was used in many studies, as it has proven its ability to reduce noise and preserve the boundaries of the object. Regarding segmentation approaches, threshold techniques were the most used to extract the lesion from an image. Threshold-based approaches are the most often utilized among all the traditional methods, according to classic review publications, due to their applicability to numerous segmentation issues in medical images [118]. The methods for extracting features from medical images depend on the images used; by analyzing the articles, we found that extracting texture features gave the best results. As for the applied classification methods, we found that the support vector machine method gave the best result in classification, as its accuracy in [67] reached 100% when it was used with the kernel principal component analysis (SVM-KPCA) approach. Thus, this method achieved the best accuracy and the best performance in diagnosis. Among the machine learning-based approaches whose associated works and analyses are presented, the supervised learning methods, notably the neural network-based methods, were the most widely employed, with different kinds of neural networks used to identify various diseases.
We also discovered that deep learning utilizing the CNN network has unique capabilities and advantages in recognizing and classifying medical images, particularly those connected to breast, lung, and brain cancer. Other common classification approaches include fuzzy clustering, K-NN, K-means, decision trees, random forests, and other prominent classification algorithms. The most utilized measures were accuracy, sensitivity, and specificity.
This study aimed to propose future research directions by focusing on the imaging modalities, techniques, and procedures used in the reviewed articles. Furthermore, this publication will aid in the development of new research that assesses and compares various medical image processing and analysis techniques. The findings provided in this SLR reveal that tremendous progress has been made in medical image processing over the last five years. In addition, the goal of this SLR was to undertake a detailed analysis of the research achievements linked to the usage of medical image processing techniques applied to medical databases in order to identify the current state-of-the-art techniques.

Author Contributions

The concept of the article was proposed by N.P.; the data resources and validation were contributed by B.M.R.; the formal analysis, investigation, and draft preparation were performed by B.M.R. The supervision and review of the study were headed by N.P. The final writing was critically revised by N.P. and finally approved by the authors. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The study does not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

CT—Computed Tomography; MRI—Magnetic Resonance Imaging; US—Ultrasound; PET—Positron Emission Tomography; GI—Gastrointestinal Diseases; FCM—Fuzzy C-Mean; IFCM—Intuitionistic Fuzzy C-Mean; PFCM—Possibilistic Fuzzy C-Mean; IPFCM—Intuitionistic Possibilistic Fuzzy C-Mean; MSW-FCM—Modified Spatial Weighted Fuzzy C-Mean; DWT—Discrete Wavelet Transform; PCA—Principal Component Analysis; KPCA—Kernel Principal Component Analysis; SFTA—Segmentation-based Fractal Textural Analysis; SIFT—Scale Invariant Feature Transform; GLCM—Gray Level Co-occurrence Matrix; MI—Moment Invariant; WHT—Walsh Hadamard Transform; HOG—Histogram of Oriented Gradients; SVD—Singular Value Decomposition; CCSA—Chaotic Crow Search Algorithm; 2D-FFT—Two-Dimensional Fourier Features Transform; 2D-DWT—Two-Dimensional Wavelet Features Transform; CM—Color Moment; SVM—Support Vector Machine; RBFNN—Radial Basis Function Neural Network; ANN—Artificial Neural Network; SVM-FA—Support Vector Machine with Firefly Technique; DNN—Deep Neural Network; MLC—Maximum Likelihood Classifier; FF-ANN—Feed-Forward Artificial Neural Network; K-NN—K-Nearest Neighbor; LDA—Linear Discriminant Analysis; PNN—Probabilistic Neural Network; RSDA—Rough Set Data Analysis; FSVM—Fuzzy Support Vector Machine; CNN—Convolutional Neural Network; RF—Random Forest; DT—Decision Trees; ROI—Region of Interest; RNN—Recurrent Neural Network.

References

  1. Ansari, Z.; Mateenuddin, Q.; Abdullah, A. Performance research on medical data classification using traditional and soft computing techniques. Int. J. Recent Technol. Eng. (IJRTE) 2019, 8, 990–995. [Google Scholar]
  2. Kermany, D.S.; Goldbaum, M.; Cai, W.; Valentim, C.C.S.; Liang, H.; Baxter, S.L.; McKeown, A.; Yang, G.; Wu, X.; Yan, F.; et al. Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning. Cell 2018, 172, 1122–1131.e9. [Google Scholar] [CrossRef] [PubMed]
  3. Sudheer Kumar, E.; Shoba Bindu, C. Medical image analysis using deep learning: A systematic literature review. In Proceedings of the International Conference on Emerging Technologies in Computer Engineering, Jaipur, India, 4–5 February 2019; pp. 81–97. [Google Scholar]
  4. Murtaza, G.; Shuib, L.; Abdul Wahab, A.W.; Mujtaba, G.; Nweke, H.F.; Al-garadi, M.A.; Zulfiqar, F.; Raza, G.; Azmi, N.A. Deep learning-based breast cancer classification through medical imaging modalities: State of the art and research challenges. Artif. Intell. Rev. 2020, 53, 1655–1720. [Google Scholar] [CrossRef]
  5. Myszczynska, M.A.; Ojamies, P.N.; Lacoste, A.; Neil, D.; Saffari, A.; Mead, R.; Hautbergue, G.M.; Holbrook, J.D.; Ferraiuolo, L. Applications of machine learning to diagnosis and treatment of neurodegenerative diseases. Nat. Rev. Neurol. 2020, 16, 440–456. [Google Scholar] [CrossRef]
  6. Ker, J.; Bai, Y.; Lee, H.Y.; Rao, J.; Wang, L. Automated brain histology classification using machine learning. J. Clin. Neurosci. 2019, 66, 239–245. [Google Scholar] [CrossRef]
  7. Amrane, M.; Oukid, S.; Gagaoua, I.; Ensari, T. Breast cancer classification using machine learning. In Proceedings of the 2018 Electric Electronics, Computer Science, Biomedical Engineerings’ Meeting (EBBT), Istanbul, Turkey, 18–19 April 2018; pp. 1–4. [Google Scholar]
  8. Vijayvargiya, A.; Kumar, R.; Dey, N.; Tavares, J.M.R. Comparative analysis of machine learning techniques for the classification of knee abnormality. In Proceedings of the 2020 IEEE 5th International Conference on Computing Communication and Automation (ICCCA), Greater Noida, India, 30–31 October 2020; pp. 1–6. [Google Scholar]
  9. Barstugan, M.; Ozkaya, U.; Ozturk, S. Coronavirus (COVID-19) classification using CT images by machine learning methods. arXiv 2020, arXiv:2003.09424. [Google Scholar]
  10. Kwekha-Rashid, A.S.; Abduljabbar, H.N.; Alhayani, B. Coronavirus disease (COVID-19) cases analysis using machine-learning applications. Appl. Nanosci. 2021, 11, 1–13. [Google Scholar] [CrossRef]
  11. Pahar, M.; Klopper, M.; Warren, R.; Niesler, T. COVID-19 cough classification using machine learning and global smartphone recordings. Comput. Biol. Med. 2021, 135, 104572. [Google Scholar] [CrossRef]
  12. Abdulkareem, N.M.; Abdulazeez, A.M.; Zeebaree, D.Q.; Hasan, D.A. COVID-19 world vaccination progress using machine learning classification algorithms. Qubahan Acad. J. 2021, 1, 100–105. [Google Scholar] [CrossRef]
  13. Ballı, S. Data analysis of COVID-19 pandemic and short-term cumulative case forecasting using machine learning time series methods. Chaos Solitons Fractals 2021, 142, 110512. [Google Scholar] [CrossRef]
  14. Sivaranjani, S.; Ananya, S.; Aravinth, J.; Karthika, R. Diabetes prediction using machine learning algorithms with feature selection and dimensionality reduction. In Proceedings of the 2021 7th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 19–20 March 2021; pp. 141–146. [Google Scholar]
  15. Jindal, H.; Agrawal, S.; Khera, R.; Jain, R.; Nagrath, P. Heart disease prediction using machine learning algorithms. In Proceedings of the IOP Conference Series: Materials Science and Engineering, Gorakhpur, India, 14–15 February 2020; p. 012072. [Google Scholar]
  16. Fernandez Escamez, C.S.; Martin Giral, E.; Perucho Martinez, S.; Toledano Fernandez, N. High interpretable machine learning classifier for early glaucoma diagnosis. Int. J. Ophthalmol. 2021, 14, 393–398. [Google Scholar] [CrossRef] [PubMed]
  17. Mijwil, M.M. Implementation of Machine Learning Techniques for the Classification of Lung X-Ray Images Used to Detect COVID-19 in Humans. Iraqi J. Sci. 2021, 62, 2099–2109. [Google Scholar] [CrossRef]
  18. Devi, R.L.; Kalaivani, V. Machine learning and IoT-based cardiac arrhythmia diagnosis using statistical and dynamic features of ECG. J. Supercomput. 2020, 76, 6533–6544. [Google Scholar] [CrossRef]
  19. Khanday, A.M.U.D.; Rabani, S.T.; Khan, Q.R.; Rouf, N.; Mohi Ud Din, M. Machine learning based approaches for detecting COVID-19 using clinical text data. Int. J. Inf. Technol. 2020, 12, 731–739. [Google Scholar] [CrossRef] [PubMed]
  20. Erickson, B.J.; Korfiatis, P.; Akkus, Z.; Kline, T.L. Machine learning for medical imaging. Radiographics 2017, 37, 505. [Google Scholar] [CrossRef]
  21. Palimkar, P.; Shaw, R.N.; Ghosh, A. Machine learning technique to prognosis diabetes disease: Random forest classifier approach. In Advanced Computing and Intelligent Technologies; Springer: Berlin/Heidelberg, Germany, 2022; pp. 219–244. [Google Scholar]
  22. Zoabi, Y.; Deri-Rozov, S.; Shomron, N. Machine learning-based prediction of COVID-19 diagnosis based on symptoms. Npj Digit. Med. 2021, 4, 3. [Google Scholar] [CrossRef]
  23. Raihan, M.M.S.; Shams, A.B.; Preo, R.B. Multi-class electrogastrogram (EGG) signal classification using machine learning algorithms. In Proceedings of the 2020 23rd International Conference on Computer and Information Technology (ICCIT), Dhaka, Bangladesh, 19–21 December 2020; pp. 1–6. [Google Scholar]
  24. Arumugam, K.; Naved, M.; Shinde, P.P.; Leiva-Chauca, O.; Huaman-Osorio, A.; Gonzales-Yanac, T. Multiple disease prediction using Machine learning algorithms. Mater. Today Proc. 2021, in press. [CrossRef]
  25. Saygılı, A. A new approach for computer-aided detection of coronavirus (COVID-19) from CT and X-ray images using machine learning methods. Appl. Soft Comput. 2021, 105, 107323. [Google Scholar] [CrossRef]
  26. Ahmad, A.; Garhwal, S.; Ray, S.K.; Kumar, G.; Malebary, S.J.; Barukab, O.M. The number of confirmed cases of COVID-19 by using machine learning: Methods and challenges. Arch. Comput. Methods Eng. 2021, 28, 2645–2653. [Google Scholar] [CrossRef]
  27. Wroge, T.J.; Özkanca, Y.; Demiroglu, C.; Si, D.; Atkins, D.C.; Ghomi, R.H. Parkinson’s Disease Diagnosis Using Machine Learning and Voice. In Proceedings of the 2018 IEEE Signal Processing in Medicine and Biology Symposium (SPMB), Philadelphia, PA, USA, 1 December 2018; pp. 1–7. [Google Scholar]
  28. Sun, W.; Zhang, P.; Wang, Z.; Li, D. Prediction of cardiovascular diseases based on machine learning. ASP Trans. Internet Things 2021, 1, 30–35. [Google Scholar] [CrossRef]
  29. Chittora, P.; Chaurasia, S.; Chakrabarti, P.; Kumawat, G.; Chakrabarti, T.; Leonowicz, Z.; Jasiński, M.; Jasiński, Ł.; Gono, R.; Jasińska, E. Prediction of chronic kidney disease-a machine learning perspective. IEEE Access 2021, 9, 17312–17334. [Google Scholar] [CrossRef]
  30. Wu, C.-C.; Yeh, W.-C.; Hsu, W.-D.; Islam, M.M.; Nguyen, P.A.A.; Poly, T.N.; Wang, Y.-C.; Yang, H.-C.; Li, Y.-C.J. Prediction of fatty liver disease using machine learning algorithms. Comput. Methods Programs Biomed. 2019, 170, 23–29. [Google Scholar] [CrossRef] [PubMed]
  31. Subramani, P.; BD, P. Prediction of muscular paralysis disease based on hybrid feature extraction with machine learning technique for COVID-19 and post-COVID-19 patients. Pers. Ubiquitous Comput. 2021, 25, 1–14. [Google Scholar] [CrossRef] [PubMed]
  32. Jena, L.; Patra, B.; Nayak, S.; Mishra, S.; Tripathy, S. Risk prediction of kidney disease using machine learning strategies. In Intelligent and Cloud Computing; Springer: Berlin/Heidelberg, Germany, 2021; pp. 485–494. [Google Scholar]
  33. Ardakani, A.A.; Kanafi, A.R.; Acharya, U.R.; Khadem, N.; Mohammadi, A. Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks. Comput. Biol. Med. 2020, 121, 103795. [Google Scholar] [CrossRef] [PubMed]
  34. Han, S.S.; Kim, M.S.; Lim, W.; Park, G.H.; Park, I.; Chang, S.E. Classification of the Clinical Images for Benign and Malignant Cutaneous Tumors Using a Deep Learning Algorithm. J. Investig. Dermatol. 2018, 138, 1529–1538. [Google Scholar] [CrossRef]
  35. Kavitha, M.; Jayasankar, T.; Venkatesh, P.M.; Mani, G.; Bharatiraja, C.; Twala, B. COVID-19 disease diagnosis using smart deep learning techniques. J. Appl. Sci. Eng. 2021, 24, 271–277. [Google Scholar]
  36. Klang, E.; Barash, Y.; Margalit, R.Y.; Soffer, S.; Shimon, O.; Albshesh, A.; Ben-Horin, S.; Amitai, M.M.; Eliakim, R.; Kopylov, U. Deep learning algorithms for automated detection of Crohn’s disease ulcers by video capsule endoscopy. Gastrointest. Endosc. 2020, 91, 606–613.e602. [Google Scholar] [CrossRef]
  37. Oh, S.L.; Hagiwara, Y.; Raghavendra, U.; Yuvaraj, R.; Arunkumar, N.; Murugappan, M.; Acharya, U.R. A deep learning approach for Parkinson’s disease diagnosis from EEG signals. Neural Comput. Appl. 2020, 32, 10927–10933. [Google Scholar] [CrossRef]
  38. Bychkov, D.; Linder, N.; Turkki, R.; Nordling, S.; Kovanen, P.E.; Verrill, C.; Walliander, M.; Lundin, M.; Haglund, C.; Lundin, J. Deep learning based tissue analysis predicts outcome in colorectal cancer. Sci. Rep. 2018, 8, 3395. [Google Scholar] [CrossRef]
  39. Vorontsov, E.; Cerny, M.; Régnier, P.; Di Jorio, L.; Pal, C.J.; Lapointe, R.; Vandenbroucke-Menu, F.; Turcotte, S.; Kadoury, S.; Tang, A. Deep Learning for Automated Segmentation of Liver Lesions at CT in Patients with Colorectal Cancer Liver Metastases. Radiol. Artif. Intell. 2019, 1, 180014. [Google Scholar] [CrossRef]
  40. Morris, S.A.; Lopez, K.N. Deep learning for detecting congenital heart disease in the fetus. Nat. Med. 2021, 27, 764–765. [Google Scholar] [CrossRef] [PubMed]
  41. Cong, L.; Feng, W.; Yao, Z.; Zhou, X.; Xiao, W. Deep Learning Model as a New Trend in Computer-aided Diagnosis of Tumor Pathology for Lung Cancer. J. Cancer 2020, 11, 3615–3622. [Google Scholar] [CrossRef] [PubMed]
  42. Noreen, N.; Palaniappan, S.; Qayyum, A.; Ahmad, I.; Imran, M.; Shoaib, M. A Deep Learning Model Based on Concatenation Approach for the Diagnosis of Brain Tumor. IEEE Access 2020, 8, 55135–55144. [Google Scholar] [CrossRef]
  43. Liu, Y.; Jain, A.; Eng, C.; Way, D.H.; Lee, K.; Bui, P.; Kanada, K.; de Oliveira Marinho, G.; Gallegos, J.; Gabriele, S.; et al. A deep learning system for differential diagnosis of skin diseases. Nat. Med. 2020, 26, 900–908. [Google Scholar] [CrossRef]
  44. Xu, X.; Jiang, X.; Ma, C.; Du, P.; Li, X.; Lv, S.; Yu, L.; Ni, Q.; Chen, Y.; Su, J.; et al. A Deep Learning System to Screen Novel Coronavirus Disease 2019 Pneumonia. Engineering 2020, 6, 1122–1129. [Google Scholar] [CrossRef]
  45. Sun, C.; Xu, A.; Liu, D.; Xiong, Z.; Zhao, F.; Ding, W. Deep Learning-Based Classification of Liver Cancer Histopathology Images Using Only Global Labels. IEEE J. Biomed. Health Inform. 2020, 24, 1643–1651. [Google Scholar] [CrossRef]
  46. Ayan, E.; Ünver, H.M. Diagnosis of Pneumonia from Chest X-Ray Images Using Deep Learning. In Proceedings of the 2019 Scientific Meeting on Electrical-Electronics & Biomedical Engineering and Computer Science (EBBT), Istanbul, Turkey, 24–26 April 2019; pp. 1–5. [Google Scholar]
  47. Goceri, E. Diagnosis of skin diseases in the era of deep learning and mobile technology. Comput. Biol. Med. 2021, 134, 104458. [Google Scholar] [CrossRef]
  48. Doppalapudi, S.; Qiu, R.G.; Badr, Y. Lung cancer survival period prediction and understanding: Deep learning approaches. Int. J. Med. Inform. 2021, 148, 104371. [Google Scholar] [CrossRef]
  49. Jojoa Acosta, M.F.; Caballero Tovar, L.Y.; Garcia-Zapirain, M.B.; Percybrooks, W.S. Melanoma diagnosis using deep learning techniques on dermatoscopic images. BMC Med. Imaging 2021, 21, 6. [Google Scholar] [CrossRef]
  50. Saratxaga, C.L.; Moya, I.; Picón, A.; Acosta, M.; Moreno-Fernandez-de-Leceta, A.; Garrote, E.; Bereciartua-Perez, A. MRI Deep Learning-Based Solution for Alzheimer’s Disease Prediction. J. Pers. Med. 2021, 11, 902. [Google Scholar] [CrossRef]
  51. Placido, D.; Yuan, B.; Hjaltelin, J.X.; Haue, A.D.; Chmura, P.J.; Yuan, C.; Kim, J.; Umeton, R.; Antell, G.; Chowdhury, A. Pancreatic cancer risk predicted from disease trajectories using deep learning. bioRxiv 2022. [Google Scholar] [CrossRef]
  52. He, X.; Yang, X.; Zhang, S.; Zhao, J.; Zhang, Y.; Xing, E.; Xie, P. Sample-efficient deep learning for COVID-19 diagnosis based on CT scans. medRxiv 2020. [Google Scholar] [CrossRef]
  53. Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Skin Cancer Classification using Deep Learning and Transfer Learning. In Proceedings of the 2018 9th Cairo International Biomedical Engineering Conference (CIBEC), Cairo, Egypt, 20–22 December 2018; pp. 90–93. [Google Scholar]
  54. Mohammed, M.; Mwambi, H.; Mboya, I.B.; Elbashir, M.K.; Omolo, B. A stacking ensemble deep learning approach to cancer type classification based on TCGA data. Sci. Rep. 2021, 11, 15626. [Google Scholar] [CrossRef] [PubMed]
  55. Avanzato, R.; Beritelli, F. Automatic ECG Diagnosis Using Convolutional Neural Network. Electronics 2020, 9, 951. [Google Scholar] [CrossRef]
  56. Alanazi, S.A.; Kamruzzaman, M.M.; Islam Sarker, M.N.; Alruwaili, M.; Alhwaiti, Y.; Alshammari, N.; Siddiqi, M.H. Boosting Breast Cancer Detection Using Convolutional Neural Network. J. Healthc. Eng. 2021, 2021, 5528622. [Google Scholar] [CrossRef]
  57. Saranya, N.; Karthika Renuka, D.; Kanthan, J.N. Brain Tumor Classification Using Convolution Neural Network. J. Phys. Conf. Ser. 2021, 1916, 012206. [Google Scholar] [CrossRef]
  58. Badža, M.M.; Barjaktarović, M.Č. Classification of Brain Tumors from MRI Images Using a Convolutional Neural Network. Appl. Sci. 2020, 10, 1999. [Google Scholar] [CrossRef]
  59. Li, L.; Chen, Y.; Shen, Z.; Zhang, X.; Sang, J.; Ding, Y.; Yang, X.; Li, J.; Chen, M.; Jin, C.; et al. Convolutional neural network for the diagnosis of early gastric cancer based on magnifying narrow band imaging. Gastric Cancer 2020, 23, 126–132. [Google Scholar] [CrossRef]
  60. Sathiyamoorthi, V.; Ilavarasi, A.K.; Murugeswari, K.; Thouheed Ahmed, S.; Aruna Devi, B.; Kalipindi, M. A deep convolutional neural network based computer aided diagnosis system for the prediction of Alzheimer’s disease in MRI images. Measurement 2021, 171, 108838. [Google Scholar] [CrossRef]
  61. Sekaran, K.; Chandana, P.; Krishna, N.M.; Kadry, S. Deep learning convolutional neural network (CNN) With Gaussian mixture model for predicting pancreatic cancer. Multimed. Tools Appl. 2020, 79, 10233–10247. [Google Scholar] [CrossRef]
  62. Subramanian, R.R.; Achuth, D.; Kumar, P.S.; Reddy, K.N.k.; Amara, S.; Chowdary, A.S. Skin cancer classification using Convolutional neural networks. In Proceedings of the 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, 28–29 January 2021; pp. 13–19. [Google Scholar]
  63. Patil, D.D.; Deore, S.G. Medical image segmentation: A review. Int. J. Comput. Sci. Mob. Comput. 2013, 2, 22–27. [Google Scholar]
  64. Miranda, E.; Aryuni, M.; Irwansyah, E. A Survey of Medical Image Classification Techniques. In Proceedings of the 2016 international conference on information management and technology (ICIMTech), Bandung, Indonesia, 16–18 November 2016. [Google Scholar] [CrossRef]
  65. Nisa, S.Q.; Ismail, A.R.; Ali, M.A.B.M.; Khan, M.S. Medical Image Analysis using Deep Learning: A Review. In Proceedings of the 2020 IEEE 7th International Conference on Engineering Technologies and Applied Sciences (ICETAS), Kuala Lumpur, Malaysia, 18–20 December 2020; pp. 1–3. [Google Scholar]
  66. Cheng, D.; Liu, M. Combining convolutional and recurrent neural networks for Alzheimer’s disease diagnosis using PET images. In Proceedings of the 2017 IEEE International Conference on Imaging Systems and Techniques (IST), Beijing, China, 18–20 October 2017; pp. 1–5. [Google Scholar]
  67. Neffati, S.; Taouali, O. An MR brain images classification technique via the Gaussian radial basis kernel and SVM. In Proceedings of the 2017 18th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA), Monastir, Tunisia, 21–23 December 2017; pp. 611–616. [Google Scholar]
  68. Varun Jain, S.G. Analysis of Brain MRI Tumor Detection and Classification using Hybrid Approach. Int. J. Comput. Sci. Commun. IJCSC 2017, 8, 42–47. [Google Scholar]
  69. Edwin, D.; Hariharan, S. Classification of Liver Tumor using Modified SFTA based Multi Class Support Vector Machine. In Proceedings of the 2017 International Conference on Current Trends in Computer, Electrical, Electronics and Communication (CTCEEC), Mysore, India, 8–9 September 2017; pp. 854–859. [Google Scholar]
  70. Chakraborty, S.; Mali, K.; Chatterjee, S.; Anand, S.; Basu, A.; Banerjee, S.; Das, M.; Bhattacharya, A. Image based skin disease detection using hybrid neural network coupled bag-of-features. In Proceedings of the 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON), New York, NY, USA, 19–21 October 2017; pp. 242–246. [Google Scholar]
  71. Hasija, Y.; Garg, N.; Sourav, S. Automated detection of dermatological disorders through image-processing and machine learning. In Proceedings of the 2017 International Conference on Intelligent Sustainable Systems (ICISS), Palladam, India, 7–8 December 2017; pp. 1047–1051. [Google Scholar]
  72. Lodha, P.; Talele, A.; Degaonkar, K. Diagnosis of Alzheimer’s Disease Using Machine Learning. In Proceedings of the 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 16–18 August 2018; pp. 1–4. [Google Scholar]
  73. Keerthana, T.; Xavier, S.B. An Intelligent System for Early Assessment and Classification of Brain Tumor. In Proceedings of the 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT), Coimbatore, India, 20–21 April 2018; pp. 1265–1268. [Google Scholar]
  74. Thamke, L.A.; Vaidya, M.V. Classification of Lung Diseases Using a Combination of Texture, Shape and Pixel Value by K-NN Classifier. In Proceedings of the 2018 2nd International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Palladam, India, 30–31 August 2018; pp. 235–240. [Google Scholar]
  75. Sarwar, A.; Ali, M.; Manhas, J.; Sharma, V. Diagnosis of diabetes type-II using hybrid machine learning based ensemble model. Int. J. Inf. Technol. 2020, 12, 419–428. [Google Scholar] [CrossRef]
  76. Pallavi, B.; Keshvamurthy. A Hybrid Diagnosis System for Malignant Melanoma Detection in Dermoscopic Images. In Proceedings of the 2019 4th International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT), Bangalore, India, 17–18 May 2019; pp. 1471–1476. [Google Scholar]
  77. Neeraj Kumar, V.S. A Hybrid Classification and Prediction Methodology for the Diagonosis of Osteoporosis. Int. J. Innov. Technol. Explor. Eng. 2019, 8, 4648–4653. [Google Scholar] [CrossRef]
  78. Rabi, B.; Attallah, O.; Zaghlool, M.S.; Sharkas, M.A. Automatic Classification of Gastrointestinal Diseases Based on Machine Learning Techniques. In Proceedings of the 2019 29th International Conference on Computer Theory and Applications (ICCTA), Alexandria, Egypt, 29–31 October 2019; pp. 85–89. [Google Scholar]
  79. Sakshi Sharma, M.S.; Baljeet, N. CDCT: CT Scan Images based on Mechanism for Lung Cancer Detection. Int. J. Recent Technol. Eng. IJRTE 2019, 8, 931–935. [Google Scholar]
  80. Yadav, S.S.; Jadhav, S.M. Deep convolutional neural network based medical image classification for disease diagnosis. J. Big Data 2019, 6, 113. [Google Scholar] [CrossRef]
  81. Sannasi Chakravarthy, S.R.; Rajaguru, H. Lung Cancer Detection using Probabilistic Neural Network with modified Crow-Search Algorithm. Asian Pac. J. Cancer Prev. 2019, 20, 2159–2166. [Google Scholar] [CrossRef]
  82. Aamir Yousuf Bhat, A.S. Normal And Abnormal Detection For Knee Osteoarthritis Using Machine Learning Techniques. Int. J. Recent Technol. Eng. 2019, 8, 6026–6033. [Google Scholar] [CrossRef]
  83. Aledhari, M.; Joji, S.; Hefeida, M.; Saeed, F. Optimized CNN-based diagnosis system to detect the pneumonia from chest radiographs. In Proceedings of the 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), San Diego, CA, USA, 18–21 November 2019; pp. 2405–2412. [Google Scholar]
  84. Szymkowski, M.; Saeed, E.; Omieljanowicz, M.; Omieljanowicz, A.; Saeed, K.; Mariak, Z. A Novelty Approach to Retina Diagnosing Using Biometric Techniques With SVM and Clustering Algorithms. IEEE Access 2020, 8, 125849–125862. [Google Scholar] [CrossRef]
  85. Awasthi, S.; Kapoor, E.; Srivastava, A.P.; Sanyal, G. A New Alzheimer’s Disease Classification Technique from Brain MRI images. In Proceedings of the 2020 International Conference on Computation, Automation and Knowledge Management (ICCAKM), Dubai, United Arab Emirates, 9–10 January 2020; pp. 515–520. [Google Scholar]
  86. Suresha, H.S.; Parthasarathy, S.S. Alzheimer Disease Detection Based on Deep Neural Network with Rectified Adam Optimization Technique using MRI Analysis. In Proceedings of the 2020 Third International Conference on Advances in Electronics, Computers and Communications (ICAECC), Bengaluru, India, 11–12 December 2020; pp. 1–6. [Google Scholar]
  87. Chowdhary, C.L.; Mittal, M.; Pattanaik, P.A.; Marszalek, Z. An Efficient Segmentation and Classification System in Medical Images Using Intuitionist Possibilistic Fuzzy C-Mean Clustering and Fuzzy SVM Algorithm. Sensors 2020, 20, 3903. [Google Scholar] [CrossRef]
  88. Gholami, F. Improved fuzzy clustering with swarm intelligence for medical image analysis. In Proceedings of the 2020 6th Iranian Conference on Signal Processing and Intelligent Systems (ICSPIS), Mashhad, Iran, 23–24 December 2020; pp. 1–5. [Google Scholar]
  89. Kaur, J.; Singh, M.; Singla, S.K. Study of Fusion of medical images and classification comparison using different kernels of SVM and K-NN classifiers. In Proceedings of the 2020 First IEEE International Conference on Measurement, Instrumentation, Control and Automation (ICMICA), Kurukshetra, India, 24–26 June 2020; pp. 1–6. [Google Scholar]
  90. Erkal, B.; Başak, S.; Çiloğlu, A.; Şener, D.D. Multiclass Classification of Brain Cancer with Machine Learning Algorithms. In Proceedings of the 2020 Medical Technologies Congress (TIPTEKNO), Antalya, Turkey, 19–20 November 2020; pp. 1–4. [Google Scholar]
  91. Kumar, G.U.S.; Kanth, T.V.R.; Raju, S.V.; Malyala, S. Advanced Analysis of Cardiac Image Processing Using Hybrid Approach. In Proceedings of the 2021 International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies (ICAECT), Bhilai, India, 19–20 February 2021; pp. 1–6. [Google Scholar]
  92. Gharaibeh, N.; Al-hazaimeh, O.M.; Abu-Ein, A.; Nahar, K.M. A hybrid svm naïve-bayes classifier for bright lesions recognition in eye fundus images. Int. J. Electr. Eng. Inf. 2021, 13, 530–545. [Google Scholar] [CrossRef]
  93. Zubair, L.; Irtaza, S.A.; Nida, N.; Haq, N.U. Alzheimer and Mild Cognitive disease Recognition Using Automated Deep Learning Techniques. In Proceedings of the 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST), Islamabad, Pakistan, 12–16 January 2021; pp. 310–315. [Google Scholar]
  94. Masud, M.; Sikder, N.; Nahid, A.-A.; Bairagi, A.K.; AlZain, M.A. A Machine Learning Approach to Diagnosing Lung and Colon Cancer Using a Deep Learning-Based Classification Framework. Sensors 2021, 21, 748. [Google Scholar] [CrossRef] [PubMed]
  95. Assam, M.; Kanwal, H.; Farooq, U.; Shah, S.K.; Mehmood, A.; Choi, G.S. An Efficient Classification of MRI Brain Images. IEEE Access 2021, 9, 33313–33322. [Google Scholar] [CrossRef]
  96. Hashan, A.M.; Agbozo, E.; Al-Saeedi, A.A.K.; Saha, S.; Haidari, A.; Rabi, M.N.F. Brain Tumor Detection in MRI Images Using Image Processing Techniques. In Proceedings of the 2021 4th International Symposium on Agents, Multi-Agent Systems and Robotics (ISAMSR), Batu Pahat, Malaysia, 6–8 September 2021; pp. 24–28. [Google Scholar]
  97. Cai, X.; Li, X.; Razmjooy, N.; Ghadimi, N. Breast Cancer Diagnosis by Convolutional Neural Network and Advanced Thermal Exchange Optimization Algorithm. Comput. Math. Methods Med. 2021, 2021, 5595180. [Google Scholar] [CrossRef] [PubMed]
  98. Kumar, M.S.; Rao, K.V.; Kumar, G.A. MRI Image Based Classification Model for Lung Tumor Detection Using Convolutional Neural Networks. Traitement Du Signal 2021, 38, 1837–1842. [Google Scholar] [CrossRef]
  99. Riajuliislam, M.; Rahim, K.Z.; Mahmud, A. Prediction of Thyroid Disease (Hypothyroid) in Early Stage Using Feature Selection and Classification Techniques. In Proceedings of the 2021 International Conference on Information and Communication Technology for Sustainable Development (ICICT4SD), Dhaka, Bangladesh, 27–28 February 2021; pp. 60–64. [Google Scholar]
  100. Patankar, A.M.; Thorat, S.S. Diagnosis of Ophthalmic Diseases in Fundus Image Using various Machine Learning Techniques. In Proceedings of the 2021 6th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 8–10 July 2021; pp. 1114–1118. [Google Scholar]
  101. Parashar, D.R.; Agarwal, D.K. SVM based Supervised Machine Learning Framework for Glaucoma Classification using Retinal Fundus Images. In Proceedings of the 2021 10th IEEE International Conference on Communication Systems and Network Technologies (CSNT), Bhopal, India, 18–19 June 2021; pp. 660–663. [Google Scholar]
  102. Alshammari, M.; Mezher, M. A Modified Convolutional Neural Networks For MRI-based Images For Detection and Stage Classification Of Alzheimer Disease. In Proceedings of the 2021 National Computing Colleges Conference (NCCC), Taif, Saudi Arabia, 27–28 March 2021; pp. 1–7. [Google Scholar]
  103. Al-Smadi, M.; Hammad, M.; Baker, Q.B.; Sa’ad, A. A transfer learning with deep neural network approach for diabetic retinopathy classification. Int. J. Electr. Comput. Eng. 2021, 11, 3492. [Google Scholar] [CrossRef]
  104. Mohagheghi, S.; Alizadeh, M.; Safavi, S.M.; Foruzan, A.H.; Chen, Y.-W. Integration of CNN, CBMIR, and visualization techniques for diagnosis and quantification of covid-19 disease. IEEE J. Biomed. Health Inform. 2021, 25, 1873–1880. [Google Scholar] [CrossRef]
  105. Singh, S.; Ho-Shon, K.; Karimi, S.; Hamey, L. Modality classification and concept detection in medical images using deep transfer learning. In Proceedings of the 2018 International conference on image and vision computing New Zealand (IVCNZ), Auckland, New Zealand, 19–21 November 2018; pp. 1–9. [Google Scholar]
  106. Perumal, S.; Thambusamy, V. Preprocessing by contrast enhancement techniques for medical images. Int. J. Pure Appl. Math. 2018, 118, 3681–3688. [Google Scholar]
  107. Kulwa, F.; Li, C.; Zhao, X.; Cai, B.; Xu, N.; Qi, S.; Chen, S.; Teng, Y. A State-of-the-Art Survey for Microorganism Image Segmentation Methods and Future Potential. IEEE Access 2019, 7, 100243–100269. [Google Scholar] [CrossRef]
  108. Kumar, K.K.; Chaduvula, K.; Markapudi, B. A Detailed Survey On Feature Extraction Techniques In Image Processing For Medical Image Analysis. Clin. Med. 2020, 7, 2020. [Google Scholar]
  109. Lai, Z.; Deng, H. Medical Image Classification Based on Deep Features Extracted by Deep Model and Statistic Feature Fusion with Multilayer Perceptron. Comput. Intell. Neurosci. 2018, 2018, 2061516. [Google Scholar] [CrossRef] [PubMed]
  110. Elaziz, M.A.; Hosny, K.M.; Salah, A.; Darwish, M.M.; Lu, S.; Sahlol, A.T. New machine learning method for image-based diagnosis of COVID-19. PLoS ONE 2020, 15, e0235187. [Google Scholar] [CrossRef] [PubMed]
  111. Khan, W.; Zaki, N.; Ali, L. Intelligent Pneumonia Identification From Chest X-Rays: A Systematic Literature Review. IEEE Access 2021, 9, 51747–51771. [Google Scholar] [CrossRef]
  112. Alloghani, M.; Al-Jumeily, D.; Mustafina, J.; Hussain, A.; Aljaaf, A.J. A Systematic Review on Supervised and unsupervised Machine learning Algorithms for Data Science. In Supervised and Unsupervised Learning for Data Science; Berry, M., Mohamed, A., Yap, B., Eds.; Springer: Cham, Switzerland, 2019; pp. 3–21. [Google Scholar] [CrossRef]
  113. Houssein, E.H.; Emam, M.M.; Ali, A.A.; Suganthan, P.N. Deep and machine learning techniques for medical imaging-based breast cancer: A comprehensive review. Expert Syst. Appl. 2021, 167, 114161. [Google Scholar] [CrossRef]
  114. Yap, M.H.; Pons, G.; Marti, J.; Ganau, S.; Sentis, M.; Zwiggelaar, R.; Davison, A.K.; Marti, R. Automated breast ultrasound lesions detection using convolutional neural networks. IEEE J. Biomed. Health Inform. 2017, 22, 1218–1226. [Google Scholar] [CrossRef]
  115. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  116. Petrellis, N. A Review of Image Processing Techniques Common in Human and Plant Disease Diagnosis. Symmetry 2018, 10, 270. [Google Scholar] [CrossRef]
  117. D’hooge, J.; Fraser, A.G. Learning About Machine Learning to Create a Self-Driving Echocardiographic Laboratory: Technical Considerations. Circulation 2018, 138, 1636–1638. [Google Scholar] [CrossRef]
  118. Buettner, R.; Bilo, M.; Bay, N.; Zubac, T. A Systematic Literature Review of Medical Image Analysis Using Deep Learning. In Proceedings of the 2020 IEEE Symposium on Industrial Electronics & Applications (ISIEA), TBD, Malaysia, 17–18 July 2020; pp. 1–4. [Google Scholar]
Figure 1. Flowchart of the process for article selection.
Figure 2. Classification of medical imaging modalities.
Figure 3. Conceptual map of disease diagnosis system.
Table 1. Distribution of studies for different medical imaging modalities.

| Studies (Author (Year) [Ref]) | Imaging Modality | Type of Disease | Medical Database |
| --- | --- | --- | --- |
| Danni Cheng et al. (2017) [66] | PET | Alzheimer's disease | Alzheimer's Disease Neuroimaging Initiative (ADNI) database (https://adni.loni.usc.edu/) (accessed on 18 June 2022). The dataset contained 339 brain images (93 AD, 146 MCI, 100 NC subjects). |
| Syrine Neffati et al. (2017) [67] | MRI | Brain disease | Harvard Medical School website and Open Access Series of Imaging Studies (OASIS) website. The dataset contained normal brains and seven types of pathological brains, with a total of 226 images (38 normal brains and 188 pathological brains). |
| Varun Jain et al. (2017) [68] | MRI | Brain tumor | SICAS Medical Image Repository dataset contained 25 MRI brain images (20 benign, 5 malignant). |
| Anjukrishna M et al. (2017) [69] | CT | Liver cancer tumor | Travancore scan center, Thiruvananthapuram (www.liveratlas.org) (accessed on 19 June 2022). The dataset contained 80 abdominal CT images, 20 of a normal liver and the rest of various liver diseases. |
| Shouvik Chakraborty et al. (2017) [70] | Dermoscopy skin imaging | Skin cancers | International Skin Imaging Collaboration (ISIC) dataset contained images of two classes (skin angioma and basal cell carcinoma). |
| Soumya Sourav et al. (2017) [71] | Dermoscopy skin imaging | Dermatological diseases | Dermatology Online Atlas (www.dermis.net), http://homepages.inf.ed.ac.uk/rbf/DERMOFIT/ (accessed on 19 June 2022). The dataset contained 3000 images of four types (psoriasis, herpes, eczema, and melanoma). |
| Priyanka Lodha et al. (2018) [72] | MRI, PET | Alzheimer's disease | ADNI database (http://adni.loni.usc.edu) (accessed on 18 June 2022). The ADNI study was applied to people between the ages of 55 and 90. |
| Keerthana T K et al. (2018) [73] | MRI | Brain tumor | Brain MRI medical image dataset, which contained normal, benign, and malignant images. |
| Latika A. Thamke et al. (2018) [74] | CT | Lung diseases | CT scan image dataset collected from patients (ages ranging from 35 to 75). The dataset contained 400 images (100 normal, 100 pleural effusion, 100 bronchitis, and 100 emphysema). |
| Abid Sarwar et al. (2018) [75] | Medical data (non-image) | Diabetes type-II | The authors prepared a rich database that included two classes (diabetic and non-diabetic) of 400 people from a large geographical area (ages ranging from 5 to 75). |
| Pallavi B. et al. (2019) [76] | Dermoscopy imaging | Malignant melanoma skin cancer | The authors gathered specimen images of diseased samples, then trained on them and stored them in a database containing normal and abnormal images. |
| Neeraj Kumar et al. (2019) [77] | CT | Bone disease (osteoporosis) | The NCBI dataset associated with osteoporosis and the authenticated medical center MedPix NLM website. The database contained two classes with features (plane, modality, age, fracture, gender, weight, and history). |
| Bahaa Rabi et al. (2019) [78] | Endoscopic images | Gastrointestinal (GI) diseases | The KVASIR dataset consisted of 4000 images covering 8 classes of GI diseases. Some of the supplied image classes feature a green image depicting the location and form of the endoscope within the intestine. |
| Sakshi Sharma et al. (2019) [79] | CT | Lung cancer disease | Database gathered from the IMBA web page contained normal and abnormal CT images of cancers for both males and females. |
| Samir S. Yadav et al. (2019) [80] | X-ray | Pneumonia disease | The dataset was based on previous literature and contained 5856 images (normal, bacterial, and viral). |
| Sannasi Chakravarthy et al. (2019) [81] | CT | Lung cancer disease | Lung Image Database Consortium (LIDC). The dataset was composed of diagnostic and cancer-screening thoracic CT examinations with marked-up interpretations. |
| Aamir Bhat et al. (2019) [82] | X-ray | Osteoarthritis disease | The datasets were gathered from numerous hospitals and contained 126 knee-joint X-ray images. |
| Mohammed Aledhari et al. (2019) [83] | Chest X-ray radiographs | Pneumonia disease | National Institutes of Health (NIH) dataset contained 1431 labeled X-ray images (normal and pneumonia). |
| Maciej Szymkowski et al. (2020) [84] | Retina color images | Retina disease diagnosis | Medical University of Bialystok (MUB) Clinic Hospital, together with the publicly available DRIVE, STARE, and Kaggle datasets. The database contained 500 images (250 healthy samples and 250 pathological samples). |
| Shashank Awasthi et al. (2020) [85] | MRI | Alzheimer's disease | The publicly available OASIS dataset contained MRI images of normal and AD patients. |
| Halebeedu Suresha et al. (2020) [86] | MRI | Alzheimer's disease | The National Institute of Mental Health and Neurosciences (NIMHANS) dataset contained 800 images for 99 people (60 normal and 39 AD, ages ranging from 55 to 87 years). The ADNI dataset contained 819 subjects (229 normal, 192 AD, and 398 MCI). |
| Chiranji Lal Chowdhary et al. (2020) [87] | Breast X-ray (mammogram) | Breast cancer disease | Mammography Image Analysis Society (MIAS) dataset contained 320 mammogram images (51 malignant, 63 benign, and 206 normal). |
| Fateme Gholami et al. (2020) [88] | MRI | Brain tumors | MRI image dataset gathered by the authors; 30 gray-scale images were considered for evaluation in this article. |
| Jaspreet Kaur et al. (2020) [89] | CT, PET | Detecting cancer | PET Center, Postgraduate Institute of Medical Education and Research (PGIMER), Chandigarh, India. The dataset contained 200 medical images. |
| Begüm Erkal et al. (2020) [90] | Medical data (non-image) | Brain cancer | Broad Institute (http://portals.broadinstitute.org/cgi-bin/cancer/datasets) (accessed on 20 June 2022). The dataset contained 42 samples (7129 features and 5 classes). |
| G U Santosh Kumar et al. (2021) [91] | MRI | Cardiovascular diseases, cardiac attack | The dataset of patients from York University contained cardiac MRI DICOM images of patients suffering from various cardiovascular diseases. |
| Nasr Gharaibeh et al. (2021) [92] | Retinal fundus images | Diabetic retinopathy (DR) | The Image-Ret database included two sub-databases (DIARETDB0 and DIARETDB1). |
| Laiba Zubair et al. (2021) [93] | MRI | Alzheimer's disease | The ADNI dataset contained 145 MRI images (39 AD, 45 CN, and 68 MCI). |
| Mehedi Masud et al. (2021) [94] | Histopathological images | Lung cancer and colon cancer | The LC25000 dataset contained 25,000 color images of five types of lung and colon tissues (colon adenocarcinoma, benign colonic tissue, lung adenocarcinoma, benign lung tissue, and lung squamous cell carcinoma). |
| Muhammad Assam et al. (2021) [95] | MRI | MRI brain image classification | Harvard Medical School database contained 70 images (25 normal and 45 abnormal, comprising three different kinds of diseases: brain tumor, acute stroke, and Alzheimer's disease). |
| Antor Hashan et al. (2021) [96] | MRI | Brain tumor | MRI images of the brain were collected from various hospitals and compiled in a Kaggle dataset that contained 400 images (230 brain tumors and 170 normal). |
| Xiuzhen Cai et al. (2021) [97] | Breast X-ray (mammogram) | Breast cancer | The MIAS mammogram database (http://peipa.esex.ac.uk/info/mias.html) (accessed on 21 June 2022). The dataset contained 322 mammography images taken from the UK National Breast Screening Program. |
| Makineni Kumar et al. (2021) [98] | MRI | Lung cancer disease | MRI lung cancer image dataset, which contained normal and tumor images. |
| Md Riajuliislam et al. (2021) [99] | Medical data (non-image) | Thyroid disease (hypothyroid) | The dataset from a registered diagnostic center in Dhaka, Bangladesh, contained 519 records with 9 attributes. |
| Aasawari M. Patankar et al. (2021) [100] | Fundus images | Ophthalmic diseases | The dataset was gathered by the authors and contained various ophthalmic diseases such as macular degeneration, retinopathy, myopia, cataract, and other abnormalities. |
| Deepak R. Parashar et al. (2021) [101] | Retinal fundus images | Glaucoma classification | The RIM-ONE dataset contained 455 fundus photographs, the Drishti-GS1 dataset contained 101 retinal images, and the RIM-ONE release 1 dataset contained 40 images. |
| Majdah Alshammari et al. (2021) [102] | MRI | Alzheimer's disease | Alzheimer's dataset (available at https://www.kaggle.com/tourist55/alzheimers) (accessed on 22 June 2022) contained 4 classes (896 mild demented, 64 moderate demented, 2240 very mild demented, and 3200 non-demented). |
| Mohammed Al-Smadi et al. (2021) [103] | Fundus images | Diabetic retinopathy | Kaggle dataset (available at https://www.kaggle.com/c/aptos2019-blindness-detection) (accessed on 22 June 2022) contained 3562 images obtained from various clinics in India, representing real-world data. |
| Saeed Mohagheghi et al. (2021) [104] | X-ray, CT | COVID-19 disease | Kaggle dataset contained 1400 healthy and pneumonia images; J. Cohen's COVID-19 dataset contained 210 COVID-19, normal, and pneumonia images. |
| Sonit Singh et al. (2021) [105] | CT, MRI, X-ray, PET, US, microscopy images | Classifying medical image modalities | The authors downloaded 10,000 medical images for each modality from the Open-i Biomedical Image Search Engine, National Institutes of Health, and U.S. National Library of Medicine. |
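Most of the collections in Table 1 are distributed as folders of images grouped by class (for example, the Kaggle Alzheimer's and brain-tumor datasets). The following is a minimal, hypothetical loader that illustrates this folder-per-class layout; the directory path, grayscale reading, the 224 × 224 target size, and the [0, 1] scaling are illustrative assumptions rather than details taken from any cited study.

```python
# A small, hypothetical loader for folder-organized medical image datasets
# (one subfolder per class). Paths and the target size are assumptions.
from pathlib import Path
import cv2                      # OpenCV, used here for reading and resizing
import numpy as np

def load_image_dataset(root_dir, size=(224, 224)):
    """Read every image under root_dir/<class_name>/ and return arrays, labels, class names."""
    images, labels = [], []
    class_names = sorted(p.name for p in Path(root_dir).iterdir() if p.is_dir())
    for label, cls in enumerate(class_names):
        for img_path in Path(root_dir, cls).glob("*"):
            img = cv2.imread(str(img_path), cv2.IMREAD_GRAYSCALE)
            if img is None:                               # skip unreadable files
                continue
            images.append(cv2.resize(img, size).astype(np.float32) / 255.0)
            labels.append(label)
    return np.stack(images), np.array(labels), class_names

# Hypothetical usage:
# X, y, classes = load_image_dataset("alzheimers_dataset/train")
```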
Table 2. Distribution of studies for different feature extraction methods.

| Studies (Author (Year) [Ref]) | Type of Features | Method Used |
| --- | --- | --- |
| Syrine Neffati et al. (2017) [67] | Texture features | DWT transform to extract features; kernel PCA (KPCA) technique for feature reduction. |
| Varun Jain et al. (2017) [68] | Texture features | DWT transform for feature extraction; PCA technique for reducing the number of features. |
| Anjukrishna M et al. (2017) [69] | Texture features | SFTA and modified SFTA algorithms to extract features. |
| Shouvik Chakraborty et al. (2017) [70] | Key (interest) points, from each of which a descriptor is created | SIFT to detect key points and extract features; bag-of-features concept to decrease the number of key points. |
| Priyanka Lodha et al. (2018) [72] | Cognitive and biological features | The full volume of the brain is extracted from MRI images, together with other cognitive and biological features. |
| Keerthana T K et al. (2018) [73] | Texture features | GLCM for feature extraction. |
| Latika A. Thamke et al. (2018) [74] | Texture and shape features; pixel coefficient value | GLCM for feature extraction; Moment Invariants (MI); WHT transform to calculate the pixel values of the image. |
| Pallavi B. et al. (2019) [76] | Combination of texture and color features | Nine features: mean, standard deviation, entropy, RMS, variance, smoothness, kurtosis, skewness, and inverse difference momentum. |
| Bahaa Rabi et al. (2019) [78] | Texture and shape features | DWT and HOG transforms for feature extraction; PCA and SVD methods for feature reduction. |
| Sannasi Chakravarthy et al. (2019) [81] | Texture features | GLCM for feature extraction; CCSA for feature selection. |
| Aamir Bhat et al. (2019) [82] | Texture features | HOG and DWT transforms for feature extraction. |
| Maciej Szymkowski et al. (2020) [84] | Biometric features | Extract the characteristic points (so-called minutiae) on retina images and count the number of minutiae in the resulting image. |
| Shashank Awasthi et al. (2020) [85] | Combination of fractal and statistical features | Features such as mean of zero crossings, mean of IMF, standard deviation, etc.; PCA for feature reduction. |
| Halebeedu Suresha et al. (2020) [86] | Texture features | HOG transform for feature extraction. |
| Chiranji Lal Chowdhary et al. (2020) [87] | Vital features for segmentation; texture features for classification | The vital features are texture, shape, margin, and intensity; gray-level histogram computations provide the texture features. |
| Jaspreet Kaur et al. (2020) [89] | Texture features | GLCM technique for feature extraction. |
| Nasr Gharaibeh et al. (2021) [92] | Texture and shape features | Haralick and shape-based features; US-PSO-RR algorithm for feature reduction. |
| Mehedi Masud et al. (2021) [94] | Texture features | 2D Fourier features (2D-FFT) and 2D wavelet features (2D-DWT). |
| Muhammad Assam et al. (2021) [95] | Color features | DWT transform for feature extraction; Color Moments (CMs) to reduce the number of features. |
| Xiuzhen Cai et al. (2021) [97] | Texture features | Combination of GLCM and DWT. |
| Makineni Kumar et al. (2021) [98] | Shape features | Diameter, perimeter, entropy, intensity, and eccentricity. |
| Md Riajuliislam et al. (2021) [99] | Patient data | Age, sex, ID, etc.; PCA for feature selection. |
| Aasawari M. Patankar et al. (2021) [100] | Texture, color, and edge features | Wavelet transform, DCT approach, and color information. |
| Deepak R. Parashar et al. (2021) [101] | Texture features | Texture-based Zernike moments, chip histogram, and Haralick descriptors. |
| Sonit Singh et al. (2021) [105] | Statistical and texture features | Local binary pattern (LBP) and GLCM. |
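Table 2 shows that GLCM texture descriptors, wavelet transforms, and PCA-based reduction dominate the hand-crafted feature step across the reviewed studies. The sketch below illustrates that generic combination using scikit-image, PyWavelets, and scikit-learn; it is an illustrative pipeline under assumed inputs (uint8 grayscale images), not a reproduction of any individual study's feature set.

```python
# A minimal sketch of the recurring texture-feature pipeline in Table 2:
# GLCM (Haralick-style) descriptors + level-1 DWT statistics, optionally reduced with PCA.
import numpy as np
import pywt                                              # PyWavelets, for the 2D DWT
from skimage.feature import graycomatrix, graycoprops    # GLCM utilities (skimage >= 0.19)

def glcm_features(img_u8):
    """Texture descriptors from a gray-level co-occurrence matrix of a uint8 image."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def dwt_features(img):
    """Mean and standard deviation of the level-1 wavelet sub-bands."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    bands = (cA, cH, cV, cD)
    return np.array([b.mean() for b in bands] + [b.std() for b in bands])

def extract_features(images_u8):
    """Stack GLCM and DWT descriptors for every image into one feature matrix."""
    return np.vstack([np.hstack([glcm_features(im), dwt_features(im)])
                      for im in images_u8])

# Hypothetical usage, with `dataset` an (N, H, W) uint8 array of medical images:
# X = extract_features(dataset)
# from sklearn.decomposition import PCA
# X_reduced = PCA(n_components=10).fit_transform(X)      # feature-reduction step
```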
Table 3. Frequency count of performance metrics utilized in every chosen study.

| Studies [Ref] | Performance Metrics | No. of Studies |
| --- | --- | --- |
| [68,69,71,73,75,76,77,84,85,89,95,100,102,103,105] | Accuracy | 15 |
| [67,79,81,88,92,93,97,101] | Accuracy, Specificity, Sensitivity | 8 |
| [70,82,86,94] | Accuracy, Precision, Recall, F1 score | 4 |
| [72,74,98,104] | Accuracy, Specificity, Sensitivity, Precision, Recall, F1 score | 4 |
| [90,96] | Accuracy, F1 score | 2 |
| [66] | Accuracy, Specificity, Sensitivity, AUC | 1 |
| [80] | Accuracy, Specificity, Recall | 1 |
| [78] | Accuracy, Specificity, Precision, Recall, F1 score | 1 |
| [87] | Accuracy, Sensitivity | 1 |
| [83] | Accuracy, Specificity | 1 |
| [99] | Accuracy, Specificity, Sensitivity, F1 score | 1 |
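For reference, all of the metrics tallied in Table 3 (except AUC) are derived from the confusion matrix. The short sketch below shows one conventional way to compute them with scikit-learn; the binary label vectors are invented placeholders for illustration only.

```python
# A small sketch of the evaluation metrics counted in Table 3, computed from
# a binary confusion matrix; y_true and y_pred are hypothetical labels.
from sklearn.metrics import confusion_matrix, accuracy_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]            # hypothetical ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]            # hypothetical classifier output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                 # also reported as recall
specificity = tn / (tn + fp)
precision = tp / (tp + fp)

print("accuracy:   ", accuracy_score(y_true, y_pred))
print("sensitivity:", sensitivity)
print("specificity:", specificity)
print("precision:  ", precision)
print("F1 score:   ", f1_score(y_true, y_pred))
```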
Table 4. Distribution of studies for the most common ML classification methods.

| Studies (Author (Year) [Ref]) | Techniques | Task | Accuracy Results |
| --- | --- | --- | --- |
| Anjukrishna M et al. (2017) [69] | Naïve Bayes; SVM | Classify liver tumor: normal, cirrhosis, HCC, and hemangioma | 78%; 92.5% |
| Priyanka Lodha et al. (2018) [72] | SVM; Gradient boosting; NN; K-NN; RF | Alzheimer's disease | 97.56%; 97.25%; 98.36%; 95%; 97.86% |
| Keerthana T K et al. (2018) [73] | SVM | Diagnosis and classification of brain tumor disease | The system provided better accuracy with the genetic algorithm (GA-SVM). |
| Latika A. Thamke et al. (2018) [74] | K-NN, Multiclass SVM, and DT | Classify lung disease kinds: normal, bronchitis, pleural effusion, and emphysema | The K-NN classifier gave better outcomes (97.5%) than the other classifiers. |
| Abid Sarwar et al. (2018) [75] | ANN, SVM, K-NN, Naïve Bayes, and Ensemble | Diabetes type-II | The results showed that the ensemble technique provided a superior accuracy of 98.60%. |
| Pallavi B. et al. (2019) [76] | Multi-level SVM | Classify skin disease | A combination of texture and color features resulted in the highest classification accuracy using multi-SVM. |
| Neeraj Kumar et al. (2019) [77] | SVM and NN | Bone disease prediction of osteoporosis | Both classifiers gave efficient outcomes. |
| Bahaa Rabi et al. (2019) [78] | SVM, K-NN, LD, and DT | Classify eight GI classes | The highest classification accuracy was 99.8%, achieved with the decision tree. |
| Sannasi Chakravarthy et al. (2019) [81] | PNN | Lung cancer at the early stage | 90% |
| Aamir Bhat et al. (2019) [82] | SVM; ANN | Knee osteoarthritis at the early stage | 85.33%; 73.82% |
| Maciej Szymkowski et al. (2020) [84] | SVM (linear); SVM (3rd-degree polynomial); K-NN; K-Means | Healthy vs. unhealthy | 95.25%; 96.45%; 73.96%; 80.42% |
| Shashank Awasthi et al. (2020) [85] | Logistic regression; Naïve Bayes; SVM | Alzheimer's disease classification in MRI images | 81%; 79.88%; 92.34% |
| Chiranji Lal Chowdhary et al. (2020) [87] | DT; RSDA; SVM; FSVM | Detecting breast cancer | 82.5%; 96.1%; 88.13%; 98.85% |
| Jaspreet Kaur et al. (2020) [89] | SVM and K-NN | Detecting cancer (cancerous and non-cancerous) | The accuracy of SVM varied from 95.5% to 98%; the accuracy of the K-NN classifier varied from 69.5% to 95.5%. |
| Begüm Erkal et al. (2020) [90] | Random Forest, K-NN, Bayes, LMT, DT, and Multilayer Perceptron | Detecting brain cancer | The experimental outcomes suggest that the Multilayer Perceptron approach outperforms the other machine learning methods in accuracy. |
| Md Riajuliislam et al. (2021) [99] | SVM; DT; RF; Naïve Bayes; Logistic Regression | Hypothyroid at the early stage | 99.35%; 99.35%; 99.35%; 94.23%; 99.35% |
| Deepak R. Parashar et al. (2021) [101] | Multi-stage classifier (SVM) | Glaucoma classification | 91% |
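The classical pipelines in Table 4 almost always couple hand-crafted features with an SVM, K-NN, or tree-based classifier evaluated on a held-out split. The snippet below is a generic sketch of that setup on synthetic features using scikit-learn; it is not the configuration of any particular study, and the RBF kernel, k = 5, and 75/25 split are assumptions.

```python
# An illustrative sketch of the classical ML setup that dominates Table 4:
# feature standardization followed by SVM and K-NN classifiers on a held-out split.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for an extracted feature matrix (e.g., GLCM/DWT descriptors).
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("SVM (RBF kernel)", SVC(kernel="rbf", C=1.0)),
                  ("K-NN (k=5)", KNeighborsClassifier(n_neighbors=5))]:
    model = make_pipeline(StandardScaler(), clf)   # scale features, then classify
    model.fit(X_tr, y_tr)
    print(name, "test accuracy:", model.score(X_te, y_te))
```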
Table 5. Summary of articles that included deep learning in detecting diseases.

| Studies (Author (Year) [Ref]) | Architecture | Task | Results |
| --- | --- | --- | --- |
| Danni Cheng et al. (2017) [66] | CNN + RNN | Classify AD vs. NC; classify MCI vs. NC | AD vs. NC: 91.19% (Accuracy), 91.40% (Sensitivity), 91.00% (Specificity), 95.28% (AUC); MCI vs. NC: 78.86% (Accuracy), 78.08% (Sensitivity), 80.00% (Specificity), 83.90% (AUC) |
| Samir S. Yadav et al. (2019) [80] | VGG16; InceptionV3; Capsule Net | Classify pneumonia vs. normal | 0.923 (Accuracy), 0.926 (Specificity), 0.923 (Recall); 0.824 (Accuracy), 0.846 (Specificity), 0.824 (Recall) |
| Mohammed Aledhari et al. (2019) [83] | VGG16; ResNet-50; Inception v3; fine-tuned VGG16 | Pneumonia vs. normal | 68% (Accuracy); 58% (Accuracy); 53% (Accuracy); 75% (Accuracy) |
| Maciej Szymkowski et al. (2020) [84] | ResNet50 | Retina diagnosis | 86% (Accuracy) |
| Halebeedu Suresha et al. (2020) [86] | CNN + Adam optimizer | Alzheimer's disease detection | 90% (Accuracy) and 0.89 (F1 score) |
| Laiba Zubair et al. (2021) [93] | CNN + Bayesian optimization | AD vs. CN vs. MCI | 99.3% (Accuracy) |
| Mehedi Masud et al. (2021) [94] | CNN | Five types of lung and colon cancers | 96.33% (Accuracy), 96.39% (Precision), 96.37% (Recall), 96.38% (F-measure) |
| Xiuzhen Cai et al. (2021) [97] | CNN + Thermal Exchange Optimizer | Breast cancer diagnosis | 93.79% (Accuracy), 96.89% (Sensitivity), and 67.7% (Specificity) |
| Makineni Kumar et al. (2021) [98] | LTD-CNN | Lung tumor detection | 96% (Accuracy), 96% (Sensitivity), 93% (Specificity), and 94% (Precision) |
| Majdah Alshammari et al. (2021) [102] | CNN + ML + Adam optimizer | Alzheimer's disease staging: mild demented vs. moderate demented vs. very mild demented vs. non-demented | 98% accuracy in testing and 97% in training |
| Mohammed Al-Smadi et al. (2021) [103] | ResNet; Inception V3; Inception V4; DenseNet; Xception; EfficientNet | Diabetic retinopathy diagnosis | 77.6% (QWK); 82% (QWK); 79.6% (QWK); 81.8% (QWK); 80.9% (QWK); 80% (QWK) |
| Saeed Mohagheghi et al. (2021) [104] | CNN with CBMIR | COVID-19 disease diagnosis | 97% (Accuracy), 99% (Sensitivity), 99% (Specificity), 97% (Precision), 99% (Recall), and 98% (F-measure) |
| Sonit Singh et al. (2021) [105] | VGG-16; VGG-19; ResNet-50; Inception-v3; Xception; MobileNet; Inception-ResNet v2 | Classifying medical image modalities | 62% (Accuracy); 98.18% (Accuracy); 90% (Accuracy); 99% (Accuracy); 98.36% (Accuracy); 98.73% (Accuracy); 98.18% (Accuracy) |
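Many of the architectures in Table 5 (VGG16, ResNet-50, Inception, Xception, MobileNet) were applied through transfer learning from ImageNet weights rather than trained from scratch. The sketch below shows that general pattern in Keras with a frozen ResNet50 backbone and a small classification head; the two-class setup, 224 × 224 input size, dropout rate, and optimizer are illustrative assumptions, not the configuration of any cited study.

```python
# A minimal transfer-learning sketch in the spirit of the backbones listed in Table 5:
# freeze an ImageNet-pretrained ResNet50 and train only a small classification head.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

NUM_CLASSES = 2                                   # e.g., normal vs. pneumonia (assumed)

base = ResNet50(weights="imagenet", include_top=False,
                input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                            # keep the pretrained features frozen

model = models.Sequential([
    base,                                         # 2048-d pooled feature vector
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical usage with tf.data datasets of (image, label) pairs:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```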