Article

Glioma Tumors’ Classification Using Deep-Neural-Network-Based Features with SVM Classifier

1
Faculty of Computer Science and Information Technology, Université du Québec à Chicoutimi, 555 Boulevard de l’Université, Chicoutimi, QC G7H2B1, Canada
2
Department of Computer Science, Prince Mohammad bin Fahd University, Khobar 31952, Saudi Arabia
3
Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak, Kota Samarahan 94300, Malaysia
4
Department of Computer Engineering, Prince Mohammad bin Fahd University, Khobar 31952, Saudi Arabia
5
Department of Electrical and Computer Engineering, Virginia Military Institute, Lexington, VA 24450, USA
*
Author to whom correspondence should be addressed.
Diagnostics 2022, 12(4), 1018; https://doi.org/10.3390/diagnostics12041018
Submission received: 12 March 2022 / Accepted: 8 April 2022 / Published: 18 April 2022
(This article belongs to the Special Issue AI as a Tool to Improve Hybrid Imaging in Cancer)

Abstract

The complexity of brain tissue requires skillful technicians and expert medical doctors to manually analyze and diagnose Glioma brain tumors using Magnetic Resonance (MR) images across multiple modalities. Unfortunately, manual diagnosis is a lengthy and costly process. With this type of cancerous disease, early detection increases the chances of suitable medical procedures leading to either a full recovery or the prolongation of the patient's life. This has increased efforts to automate the detection and diagnosis process without human intervention, allowing the detection of multiple types of tumors from MR images. This research paper proposes a multi-class Glioma tumor classification technique using deep-learning-based features with a Support Vector Machine (SVM) classifier. A deep convolutional neural network is used to extract features from the MR images, which are then fed to an SVM classifier. With the proposed technique, an accuracy of 96.19% was achieved for the HGG Glioma type using the FLAIR modality, and 95.46% for the LGG Glioma type using the T2 modality, for the classification of four Glioma classes (Edema, Necrosis, Enhancing, and Non-enhancing). The accuracies achieved using the proposed method are higher than those reported by similar methods in the extant literature using the same BraTS dataset. In addition, the accuracy results obtained in this work are better than those achieved by the GoogleNet and LeNet pre-trained models on the same dataset.

1. Introduction

Billions of neurons exist in the human brain, whose main purpose is to process information and control the operations of the body's organs. The complexity of the human brain is beyond the current body of knowledge. The human brain is made up of three parts: the cerebrum, brain stem, and cerebellum [1]. The skull is the protective shield around these parts, protecting the brain from damage and injuries caused by external dangers. However, the skull can do little to protect the brain from internal neurological factors. One of the most dangerous internal factors is tumors, which damage the brain at the cellular level and can lead to the patient's death. Research has shown that detecting tumors at an early stage, along with early intervention, contributes tremendously to patients' survival. This makes images of the brain very important in the diagnosis of different injuries and tumors. Several imaging technologies exist, such as Computed Tomography (CT), X-ray imaging, and Magnetic Resonance (MR) imaging. The rapid advancement of computer technology has a direct relationship with the advancement of imaging technology. MR Imaging (MRI) is the most advanced imaging technology, as it can visualize both the internal structure and the functionality of the body [2]. With the aid of a powerful magnet, the MRI machine can generate meticulous anatomical information about soft tissues in different parts of the human body. MRI has already proven successful in detecting brain tumors and heart abnormalities.
Magnetic fields, radio waves, and other technologies are used to produce images of the brain tissues with MRI technology. Four different types of images (known as modalities) are generated based on the signal frequency and magnetic field strength. These include fluid-attenuated inversion recovery (Flair), longitudinal relaxation time-weighted (T1-weighted), T1-contrasted, and transverse relaxation time-weighted (T2-weighted) [3]. In these images, each of the four modalities can be distinguished through color, contrast, and various other variables. For example, T1 corresponds to the darker portions of the image, T2 to the brighter portions, and Flair highlights water and macromolecules. Due to the differences in the physical properties of the various states of human body tissues (e.g., bleeding, swelling, inflammation, and tumors), the MRI image can be used to minutely distinguish among these different states.
Even though medical doctors and clinical technicians have extensive skills and expertise to identify the presence or absence of a Glioma tumor in a brain MRI, it takes significant time and effort to arrive at a conclusive diagnosis. Apart from time and effort, there may be an increased risk of misdiagnosis if a doctor must examine many MRI images in a short span of time, causing fatigue. To address these concerns, researchers have turned to automated diagnostic systems based on image identification and classification. Deep Learning (DL) is one such approach, in which images can be classified with high accuracy in an automated manner [4]. Moreover, these approaches can also extract features from the images (without manual intervention), which are further utilized to classify MRI images into two categories, namely malignant or benign. Convolutional Neural Networks (CNNs) are one of the most popular DL techniques and have found immense application in the field of medical diagnosis [5]. Apart from binary classification (malignant or benign), MRI images can also be classified into multiple classes to identify the different types of Glioma tumors. Hence, this research paper proposes an innovative method for the detection of multiple classes (Edema, Necrosis, Enhancing, and Non-enhancing) of Glioma brain tumors from MR images using deep-learning-based features with an SVM classifier. The proposed method achieved higher classification accuracy than those reported in previous literature using the same dataset (BraTS).
The organization of the rest of the paper is as follows. Section 2 presents a detailed literature review about the machine-learning-based approaches used in tumor identification. Section 3 describes the proposed tumor classification methodology. Section 4 details the performance results of the proposed model along with a comparative study with existing works. Discussions related to the relevance of the results are presented in Section 5. Finally, in Section 6, conclusions are drawn and future research directions are suggested.

2. Literature Review

The task of tumor identification is very complicated and requires specialized knowledge, skills, and analysis techniques to correctly locate the tumor. This task requires capturing a high-resolution image of the internal structure of the brain. Three imaging modalities are used by doctors to diagnose brain tumors: Positron Emission Tomography (PET), Computed Tomography (CT), and Magnetic Resonance Imaging (MRI). MRI uses radio waves, powerful magnets, and computing equipment to capture the details of the brain's internal structures. Compared to the other modalities, MRI provides better contrast, brightness, and picture detail because of the tissue relaxation properties (T1 and T2), which makes it the preferred modality for diagnosis by doctors [6]. However, diagnosis by doctors and technicians is a slow and lengthy process [7]. Based on different variables such as brightness, contrast, repetition time, and time to echo, the MRI machine produces four different images (T1, T2, T1c, and Flair) [8]. The four scans are utilized in image processing methods for the automatic diagnosis and classification of tumors through feature extraction.
The accuracy of the classification techniques depends on the criteria used for feature selection. Various feature extraction techniques have been proposed in recent research, such as Gabor features [9], wavelet transform features, discrete cosine features, discrete Fourier transform features, and statistical features [10,11]. A feature-based automated system for brain MR image classification was proposed in which the tumorous slice classification is performed within the pre-processing stage [12]. Block-based feature extraction is proposed along with Random Forests (RFs) for the binary classification of MR images into tumorous and normal. Each image is divided into 8 × 8 overlapping blocks, and three Haralick features are extracted from each block: energy, Directional Moment (DM), and Inverse Difference Moment (IDM). The BraTS 2015 dataset was used, which consists of 274 multi-sequence MR images of Glioma patients. Metrics used for validating the results include specificity, sensitivity, Missed Alarm (MA), accuracy, and False Alarm (FA). The results obtained were 94% sensitivity and specificity and 95% accuracy, with an error rate of 1% and 3% of slices spuriously classified as tumorous. However, the study contained several limitations. Firstly, the method only performs binary classification of the images into tumorous and normal. Secondly, the method utilizes only three Haralick features, and the division of each image into 8 × 8 overlapping blocks increases the processing time. A detailed comparison of discrete cosine, wavelet transform, and discrete Fourier transform features for the classification of MR images into tumorous and non-tumorous, which showed promising results, was also performed [13]. However, these experiments used a small dataset consisting of 255 slices.
Another feature-based study compared various methods for multi-class Glioma brain tumor classification [14]. The authors used the BraTS dataset and divided the data into 30 volumes for training and 57 for testing. Features were retrieved for the four modalities: T1, T1 contrast-enhanced, T2, and Flair. The features included gradient magnitude, intensity, Laplacian, standard deviation, range, skewness, entropy, kurtosis, and the minimum and maximum in five-voxel neighborhoods, all of which are commonly used features in brain tumor segmentation and classification. The authors tested various classifiers available in WEKA. These included Random Forest (RF), Decision Stump, Extra Trees, J.48 Trees, Hoeffding Tree, Conjunctive Rule, Decision Table, Decision Table/Naive Bayes (DTNB), Fuzzy Unordered Rule Induction Algorithm (FURIA), One Rule (OneR), Lazy Family, KStar, Instance-Based Learning (IBL), Locally Weighted Learning (LWL), Functions Family, Linear Discrimination Analysis (LDA), Support Vector Machines (SVMs), Large Linear Classification (LINEAR), Multi-Layer Perceptron (MLP), Fuzzy Lattice Reasoning (FLR), Hyperpipes, and the Input Mapped Classifier. The results showed that RF outperformed all other classifiers for the selected features and achieved an average accuracy of 80.86%. The limitation of this study is that the authors used only general statistical features that had already been suggested in other brain tumor classification studies; furthermore, they tested many classifiers within the WEKA software without much novelty in the selection of the features or of the best-performing classifier (RF).
In a different study, the authors proposed a method using features from multi-modal MRI, including structural MRI and anisotropic and isotropic components derived from Diffusion Tensor Imaging (DTI) [15]. The proposed method is based on a 3D-supervoxel-based learning method for the segmentation of brain tumors in multi-modal brain images. A variety of features were extracted, including histograms calculated using a set of Gabor filters with various sizes and orientations, as well as first-order intensity statistics. In their study, the authors used the RF classifier. Two different datasets were considered: the BraTS 2013 dataset along with an in-house clinical dataset consisting of 11 multi-modal images of patients [16]. The in-house dataset produced an average detection sensitivity of 86% compared to the 96% achieved on BraTS, while the segmentation results against the ground truth were 0.84 for the in-house dataset and 0.89 for BraTS. This study showed that supervoxels, in general, are limited in segmenting small volumes, with additional limitations at tissue boundaries, where overlaps with other tissue types occur.
Handcrafted feature extraction techniques for brain tumor classification are more prone to over-fitting and may lead to high misclassification rates due to their dependency on the experience of the designer [7,17,18]. Performance improvements in brain tumor classification are still possible through the optimization of deep CNNs, both as feature extractors and as classifiers [4,19]. It is also important to note that most of the research on brain tumor classification has been conducted using relatively small datasets.

3. Methodology

The proposed methodology consists of multiple steps. Figure 1 details the proposed architecture for the classification of Glioma tumors into its four classes: (1) Edema, (2) Necrosis, (3) Enhancing, and (4) Non-enhancing. As shown, the MR images of size 241 × 241 (obtained by converting the original image size of 240 × 240) are converted into three channels and provided as the input to the proposed CNN model. The details of the proposed 17-layer CNN model for feature extraction are presented next. The proposed CNN architecture extracts a total of 4096 features. These features are then used as input to the following four classifiers: Random Forests (RFs), Multi-Layer Perceptron (MLP), Support Vector Machines (SVMs), and Naive Bayes (NB), with the goal of classifying the tumor into one of the aforementioned tumor classes.
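As a minimal sketch of this pipeline, the following example trains a multi-class SVM on feature vectors; the random 4096-dimensional vectors and labels are hypothetical stand-ins for the CNN-extracted features (scikit-learn is assumed):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-in data: random 4096-dimensional vectors in place of the
# CNN-extracted features, with labels for the four Glioma classes
# (0 = Edema, 1 = Necrosis, 2 = Enhancing, 3 = Non-enhancing).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 4096))
y_train = rng.integers(0, 4, size=40)

# Multi-class SVM over the feature vectors; SVC handles the 4-class case
# internally via one-vs-one voting.
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)
pred = clf.predict(X_train[:5])
```

In the actual system, the feature vectors would come from the 17-layer CNN described below rather than a random generator.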

3.1. Proposed Convolutional Network for Feature Extraction

Convolutional Neural Networks (CNNs) are used in deep learning to extract features automatically, typically outperforming handcrafted feature extraction techniques. This is done using filters that glide over the input image to produce what is referred to as a feature map. The produced features are affected by the type of filter being used, and the number of image features is affected by the number of filters. To control the size of the feature map, parameters such as the stride (the number of pixels the filter moves at each step), zero padding (the process of bordering the image with zeros), and depth (the number of filters) need to be controlled.
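The effect of stride and zero padding on the feature map size follows the standard formula floor((w − f + 2p)/s) + 1; a small sketch (the example values match the model's first layer described later):

```python
def conv_output_size(w, f, s, p):
    """Spatial size of a convolution output: floor((w - f + 2p) / s) + 1,
    for input width w, filter size f, stride s, and zero padding p."""
    return (w - f + 2 * p) // s + 1

# A larger stride shrinks the feature map faster, while zero padding
# preserves the border pixels.
print(conv_output_size(241, 9, 4, 0))  # 9x9 filter, stride 4, no padding -> 59
print(conv_output_size(241, 9, 1, 4))  # same filter, stride 1, padding 4 -> 241
```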
Since real data are mostly non-linear, the ReLU function, which is a non-linear operation, was used. The main task of ReLU is to replace the negative pixel values with zeros, which results in the so-called rectified feature map. To reduce the sensitivity of the network and speed up the training process, a normalization layer is required. This entails subtracting the mini-batch mean and then dividing by the mini-batch standard deviation. The mini-batch input $B = \{x_{1 \ldots m}\}$ with the learnable parameters $\gamma$ and $\beta$ produces the output layer $\{y_i = BN_{\gamma,\beta}(x_i)\}$, as shown in Equations (1)–(4).
$$\mu_B \leftarrow \frac{1}{m}\sum_{i=1}^{m} x_i \tag{1}$$
$$\sigma_B^2 \leftarrow \frac{1}{m}\sum_{i=1}^{m}\left(x_i - \mu_B\right)^2 \tag{2}$$
$$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}} \tag{3}$$
$$y_i \leftarrow \gamma \hat{x}_i + \beta \equiv BN_{\gamma,\beta}(x_i) \tag{4}$$
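Equations (1)–(4) can be sketched directly in NumPy (a minimal illustration, not the paper's implementation):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch normalization over a mini-batch x of shape (m, features)."""
    mu = x.mean(axis=0)                     # Eq. (1): mini-batch mean
    var = ((x - mu) ** 2).mean(axis=0)      # Eq. (2): mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # Eq. (3): normalize
    return gamma * x_hat + beta             # Eq. (4): scale and shift
```

After normalization, the activations have approximately zero mean and unit variance before the learnable scale γ and shift β are applied.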
A pooling layer is used to reduce the spatial dimensions and computational complexity, in addition to addressing the over-fitting problem. Though various pooling functions can be utilized, max pooling is the most widely used and was used in the current model with a filter size of 2 × 2. In a Fully Connected Layer (FCL), each neuron is connected to every neuron in the subsequent layer, which allows non-linear combinations of the features to be learned before the final SoftMax layer produces the class probabilities.
The proposed convolutional network of 17 layers is used for MR image feature extraction, as depicted in Figure 1. The proposed CNN model is a modified version of the LeNet architecture, which has a total of 25 layers [5,20]. LeNet was chosen because it performs well on grayscale images and is simpler than GoogleNet, which resulted in over-fitting when tested [21]. In MR images, a high correlation with neighboring pixels exists for both tumorous and non-tumorous pixels. The output of the convolutional layer is therefore normalized using Local Response Normalization (LRN), which takes the mean of the local pixels; this method is adopted in AlexNet and in the model proposed in this paper.
In the proposed model, the input MR image passes through 96 filters of size 9 × 9 with a stride of 4 × 4, reducing the image to a volume of size 96 × 59 × 59. The ReLU activation function, which outputs the input directly if positive and zero otherwise, is then applied. To stabilize the training, a 5-channel normalization is applied, followed by max pooling of size 3 × 3 with a stride of 2 × 2, which keeps the maximum value in each window. Next, 256 filters of size 7 × 7 with a stride of 1 × 1 and a padding of 2 × 2 are applied to the resulting volume of size 96 × 29 × 29. This produces a volume of size 256 × 27 × 27, which is again treated with the ReLU function, followed by 5-channel normalization and max pooling, to produce a volume of size 256 × 13 × 13. Further, 384 filters of size 3 × 3 with stride 1 × 1 and padding 1 × 1 are applied to produce a volume of size 384 × 13 × 13. The ReLU function is applied three times, followed by max pooling of size 3 × 3 and stride 2 × 2, to output a volume of size 256 × 6 × 6. Primitive features are captured in the first layers and combined in the later layers to form high-level features of the image in preparation for the recognition phase. This phase results in the extraction of 4096 features, which are input to the RF, SVM, MLP, and NB classifiers.
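The spatial sizes quoted above can be checked with the standard output-size formula; this short script traces them through the network (filter sizes, strides, and paddings are taken from the text):

```python
def out_size(w, f, s, p=0):
    """floor((w - f + 2p) / s) + 1 for filter size f, stride s, padding p."""
    return (w - f + 2 * p) // s + 1

size = 241
size = out_size(size, 9, 4)     # conv1, 96 filters, stride 4   -> 59
size = out_size(size, 3, 2)     # pool1, 3x3, stride 2          -> 29
size = out_size(size, 7, 1, 2)  # conv2, 256 filters, padding 2 -> 27
size = out_size(size, 3, 2)     # pool2, 3x3, stride 2          -> 13
size = out_size(size, 3, 1, 1)  # conv3, 384 filters, padding 1 -> 13
size = out_size(size, 3, 2)     # pool3, 3x3, stride 2          -> 6
print(size)  # final spatial size 6, i.e., a 256 x 6 x 6 volume
```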

3.2. Classification

Different types of classifiers were used in this research work, which included SVM, Random Forest, MLP, and Naive Bayes.
The Support Vector Machine (SVM) is a classification algorithm defined by a separating hyperplane. Its goal is to discover a hyperplane in N-dimensional space that clearly separates the data points into classes. The data points nearest to the hyperplane are known as support vectors; they affect the hyperplane's orientation and position and help maximize the classifier's margin. Data points falling on either side of the hyperplane are assigned to different classes, and the hyperplane's dimension depends on the number of features. For example, if the number of input features is 2, the hyperplane is a line, and if the number of input features is 3, the hyperplane is a 2D plane [22].
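A toy two-feature example (scikit-learn assumed; data made up for illustration), where the separating hyperplane is a line:

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable clusters in 2D; with two features the hyperplane
# is a line.
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])

clf = SVC(kernel="linear").fit(X, y)

# The support vectors are the points nearest the separating line; they alone
# determine its position and orientation.
print(clf.support_vectors_)
print(clf.predict([[0.1, 0.0], [1.0, 0.9]]))
```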
The Multi-Layer Perceptron (MLP) is a supervised machine learning approach based on the Artificial Neural Network (ANN) architecture [23]. The idea of the ANN is borrowed from the structure and functioning of the human brain, which consists of billions of neurons that communicate electrical signals to and from the rest of the body for its proper functioning and control. The basic unit of the human brain, the neuron, is modeled artificially as a perceptron in the ANN. A collection of multiple perceptrons in a row-wise and column-wise arrangement (layers) results in an MLP. At the basic level, an MLP has a single layer at the input, a similar layer at the output, and one or more hidden layers in between. The MLP takes the input data via the input layer and builds a non-linear model of the data to provide an estimate of the desired target variable at the output. Since the layers are interconnected, the more input data are provided to the MLP, the better it learns the model by adjusting the various weights through the Back-Propagation Algorithm. The strength and number of interconnections affect two things, namely the accuracy of the model and its ability to generalize. A balance between over-fitting and under-fitting is desired so that the MLP can perform well in both aspects.
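A minimal sketch of an MLP with one hidden layer trained by back-propagation (scikit-learn assumed; the synthetic data and layer size are made up for illustration):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic data (made up for illustration): the label depends on the sum of
# the first two of eight input features.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# One hidden layer of 16 perceptrons between the input and output layers;
# fit() adjusts the weights via the back-propagation algorithm.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
mlp.fit(X, y)
print(mlp.score(X, y))  # training accuracy
```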
Random Forest (RF) is the most common ensemble approach used in classification to construct predictive models. In a Random Forest, the model generates a complete forest of decision trees that are not correlated; these decision trees are then combined to obtain an accurate prediction. As the trees grow, Random Forest adds more randomness to the model: when splitting a node, it searches for the best feature among a random subset of features rather than the single most significant feature overall, so only one random subset of the features is considered for each node split. Random Forest is a widely used algorithm because of its simplicity and good results [24].
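A short sketch of this per-split feature subsampling with scikit-learn (synthetic data; the `max_features` setting is the knob that controls the random subset described above):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic data for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = (X[:, 0] > 0).astype(int)

# Each tree is grown on a bootstrap sample, and each node split considers
# only a random subset of the features (max_features="sqrt"), which
# decorrelates the trees before their votes are combined.
rf = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                            random_state=0)
rf.fit(X, y)
print(rf.score(X, y))
```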
Naive Bayes (NB) is used to achieve accurate classifications with a high training speed [10]. It does not need an iterative training process and needs only a small amount of training data to estimate its parameters. Furthermore, it achieves high classification accuracy in terms of performance results. Bayes' theorem of probability is the basis of the Naive Bayes classifier: the probability that an observation d belongs to a class Z_h can be computed from the likelihood of observing d in each class and the prior probability of each class. Assuming d is an observation drawn from a dataset D with Z classes, the posterior probability of class Z_h given d is calculated by:
$$P(Z_h \mid d) = \frac{P(Z_h)\,P(d \mid Z_h)}{P(d)} \tag{5}$$
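As a numerical illustration of Equation (5) (all probability values are made up):

```python
# Hypothetical values: prior P(Z_h), likelihood P(d | Z_h), evidence P(d).
prior = 0.3
likelihood = 0.8
evidence = 0.5

# Bayes' theorem, Equation (5): posterior = prior * likelihood / evidence.
posterior = prior * likelihood / evidence
print(posterior)  # -> 0.48
```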

3.3. Dataset Description

The dataset used in most brain tumor segmentation research is the MICCAI BraTS 2018 dataset [16,25]. It is the dataset of choice for most research on machine learning and Glioma tumors [2,7,11,14,15,26,27], among many others. The MICCAI BraTS 2018 dataset is especially used for testing different strategies for the segmentation of brain tumors in multi-modal images. BraTS 2018 focuses on MRI as used in surgical planning and on the segmentation of Glioma tumors, which vary in manifestation, shape, and histology; accurate segmentation also helps protect patients against mistakes during medical procedures. Glioma tumors are considered the most widespread brain malignancy in the world. It has been observed that, once a patient is diagnosed with an advanced-stage Glioma tumor, he/she is usually left with about two more years of life. Therefore, the earlier this tumor is diagnosed, the higher the chance of extending the patient's survival. In the dataset, the multi-channel data for each case consist of four different 3D MRI volumes. There are a total of 56 cases: 39 HGG (High-Grade Glioma) and 26 LGG (Low-Grade Glioma). The total number of images for these cases is 40,300, divided into 24,180 HGG images and 16,120 LGG images. The images are of size 240 × 240 (which we changed to 241 × 241 for the purposes of our proposed methodology). The dataset contains a total of 4430 tumorous MR images in each sequence type, of which 1551 are HGG and 2879 are LGG. In addition, the dataset contains a total of 4250 non-tumorous MR images in each sequence type, of which 2169 are HGG and 2081 are LGG. The four modalities are Flair, T1-weighted, T1-contrasted, and T2-weighted.
The dataset comes with some pre-processing steps already applied, such as resampling to a uniform 1 mm³ resolution, skull-stripping, and co-registration of all scan cases, which makes the unhealthy tissues easier to delineate. The segmentation of the dataset was performed by experts to obtain the ground-truth segmentation masks. The dataset also includes patient age and resection status, as well as overall survival data specified in days.

4. Experimental Results

The proposed method is a feature extraction method using the CNN followed by classification using the different classifiers, namely RF, MLP, NB, and SVM. The accuracy results for these four classifiers for multi-class classification are presented in Table 1. As stated in Section 3, the CNN model extracts a total of 4096 features. As shown in Table 1, for HGG, the average accuracy using the proposed CNN features and the RF classifier ranged from 91.39% to 92.51%. The Naive Bayes classifier with CNN features performed the worst among the four classifiers in this study. The SVM with CNN features performed the best among the four classifiers tested on the HGG tumor type, achieving an average accuracy between 95.29% and 96.19%, with the highest accuracy of 96.19% achieved for Flair MR images with a precision of 0.958, a recall of 0.851, and an F1-measure of 0.870.
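Metrics of this kind can be reproduced for any prediction vector with scikit-learn; the labels below are made up for illustration and are not the paper's data:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

# Made-up true and predicted labels for the four Glioma classes (0..3).
y_true = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
y_pred = [0, 1, 2, 3, 0, 1, 2, 1, 0, 1]

acc = accuracy_score(y_true, y_pred)
# Macro averaging weights each of the four classes equally.
prec = precision_score(y_true, y_pred, average="macro")
rec = recall_score(y_true, y_pred, average="macro")
f1 = f1_score(y_true, y_pred, average="macro")
print(acc, prec, rec, f1)
```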
When applied to LGG MR images, the average accuracy for all four classes ranged from 74.13% to a maximum of 95.46%. The Naive Bayes classifier with CNN features achieved its highest average accuracy of 77.44% for Flair images, while the SVM with CNN features achieved the highest average accuracy of 95.46% using the T2 modality, with a precision of 0.890, a recall of 0.861, and an F1-measure of 0.889, as shown in Table 1. In conclusion, the SVM classifier used in conjunction with the CNN-extracted features achieved the highest accuracies among all the classifiers tested in this work. The proposed method outperformed the results of methods reported in previous literature using the same dataset, as further discussed in this section and the Discussions.
Figure 2 shows a comparison of the misclassification of the four Glioma tumor classes using the four classifiers used in this paper. It is evident that for both LGG and HGG, Naive Bayes performed poorly by misclassifying the four classes between 25% and 30% of the time. SVM outperformed the other classifiers for HGG and LGG with a misclassification rate of less than 5% for all classes. For HGG, RF and the MLP had almost the same performance in terms of the misclassification rate; however, RF was slightly better in performance with a misclassification rate of less than 8% for all classes.

5. Discussions

The proposed CNN feature-based method was used to classify the four classes of Glioma tumor for the HGG and LGG types. Following the feature extraction step, typical classifiers such as RF, Naive Bayes, SVM, and the MLP were applied, and their accuracy-based performance was compared. For HGG, as presented in Table 1, the best accuracy was achieved with the SVM classifier, with a highest average accuracy of 96.19%; using RF, the highest average accuracy was 92.51%, while the lowest accuracies were observed with Naive Bayes, whose average accuracy was 74.17%. A similar trend was observed for LGG classification, where the best accuracy was again achieved with the SVM classifier, as shown in Table 1.
To illustrate the strength of the proposed model in the classification of the four Glioma tumor classes, well-known CNN models such as GoogleNet and LeNet were tested on the BraTS dataset. As can be seen in Table 2, for HGG, the best accuracy was achieved by GoogleNet for the T1c modality, with an average accuracy of 87.75%, a precision of 0.884, a recall of 0.839, and an F1-measure of 0.860. Compared to the 95.98% accuracy achieved for the same modality using the SVM in the proposed model (Table 1), the proposed model improved accuracy by 8.23 percentage points, with even larger gains for the other modalities. For LGG, GoogleNet again outperformed LeNet, this time for the Flair modality, with an average accuracy of 85.50%, a precision of 0.867, a recall of 0.722, and an F1-measure of 0.757. Compared with the 95.02% average accuracy obtained by the proposed model with the SVM classifier for the same modality, the proposed model showed an improvement of 9.52 percentage points, with even larger gains for the other modalities.
As shown in Table 3, the proposed technique using CNN features achieved an average accuracy of 95.83% for multi-class classification. When compared with other recent techniques from the literature, it is evident that the proposed technique outperformed the existing approaches as listed in Table 3.
For the purpose of validation, two independent datasets, AANLIB and PIMS, were used [11,28]. The AANLIB dataset, available on the Harvard Medical School website, consists of 90 Flair-modality MR images, with 62 non-tumorous and 28 tumorous images [28]. The PIMS MRI dataset consists of 258 T1-modality MR images, including 144 normal and 114 tumorous images [11]. The results obtained on these two datasets with the proposed CNN feature-based approach are shown in Table 4. The results were reaffirmed and validated: on both the AANLIB and PIMS datasets, the proposed CNN feature-based approach achieved a 100% classification accuracy.
Under the manual diagnosis procedure, patients only receive a brain MRI after lab results and medical symptoms indicate a possible tumor. As a result, early diagnosis is achieved only in rare cases, and in most cases, the diagnosis is made when the tumor is at a later, untreatable stage. If an automatic detection and diagnosis method were developed, an MRI could become a normal procedure in an annual checkup and allow early diagnosis of Glioma brain tumors, with the following benefits: (1) dramatically decreasing the cost of detection and diagnosis procedures; (2) decreasing the patient's healthcare costs, because early detection allows for medical procedures and medication at reasonable prices compared to procedures performed at a late cancer stage; (3) reducing the cost of treatment for both hospitals and insurance companies; (4) prolonging the patient's life and possibly curing the patient of the tumor; and (5) decreasing healthcare costs at the national level, because cancer treatment centers are costly and patients' waiting times are long. With an automatic detection and diagnosis procedure, computers and servers can work around the clock to go through all of the patients' MR images, flagging any image that shows signs of a Glioma brain tumor. The research community is therefore working on identifying innovative methods for automatic detection and diagnosis.

6. Conclusions

The difficulty of diagnosing Glioma brain tumors stems from the complexity of human brain tissue and its anatomy. The manual diagnosis process requires many years of training for both technicians and specialized medical doctors, and even then it remains time-consuming and costly. Hence, automatic detection and diagnosis methods are proposed and developed for multi-class Glioma tumor classification. In this paper, we proposed using a deep convolutional neural network to extract features from the MR images, which are then given as input to various classifiers (NB, RF, SVM, and MLP), with the SVM classifier achieving the highest accuracy. With the proposed technique, a 96.19% accuracy was achieved for the HGG type with the FLAIR modality and 95.46% for the LGG type with the T2 modality. Compared to similar methods using the BraTS dataset, the proposed technique produced far better results. To further demonstrate the strength of the proposed model, the well-known CNN models GoogleNet and LeNet were trained on the same dataset; GoogleNet outperformed LeNet for both LGG and HGG, producing an accuracy of 87.75% for T1c HGG and 85.50% for FLAIR LGG. The proposed model nevertheless outperformed these well-known CNN models and other models in the extant literature, with the SVM classifier producing an average accuracy of 95.83%. Future work will include extending the proposed technique beyond Glioma tumor classification to other medical conditions such as skin, breast, and lung cancers.
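The two-stage pipeline summarized above (deep features followed by an SVM) can be sketched as follows. This is a hypothetical, self-contained illustration: the trained CNN is stubbed out with a fixed random projection, and the images, labels, and dimensions are placeholders rather than the proposed network or the BraTS data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Stub feature extractor: a fixed random projection with a ReLU-like
# nonlinearity stands in for the trained CNN's penultimate layer.
W = rng.normal(size=(64 * 64, 128))

def cnn_features(images: np.ndarray) -> np.ndarray:
    """Map each slice to a 128-D feature vector (stub for the CNN)."""
    flat = images.reshape(len(images), -1)
    return np.maximum(flat @ W, 0.0)

# Placeholder data: 200 slices (real BraTS slices are 240x240; 64x64 here),
# four classes: Necrosis, Edema, Non-enhancing, Enhancing.
images = rng.normal(size=(200, 64, 64))
labels = rng.integers(0, 4, size=200)

X = cnn_features(images)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("validation accuracy:", accuracy_score(y_te, svm.predict(X_te)))
```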

Author Contributions

Conceptualization, G.L., D.N.F.A.I. and J.A.; methodology, G.L. and G.B.B.; software, A.B. and G.L.; validation, A.B. and D.N.F.A.I.; formal analysis, D.N.F.A.I. and J.A.; investigation, G.L. and G.B.B.; resources, A.B. and G.B.B.; data curation, G.L.; writing—original draft preparation, G.L., G.B.B., A.B. and J.A.; writing—review and editing, A.B. and D.N.F.A.I.; visualization, D.N.F.A.I. and J.A.; supervision, A.B. and J.A.; project administration, G.B.B.; funding acquisition, A.B. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by the Deanship of Research at Prince Mohammad bin Fahd University, Al-Khobar, Saudi Arabia.

Institutional Review Board Statement

Ethical review and approval were waived for this study because it used a publicly available dataset (University of Pennsylvania) governed by the applicable ethics and privacy laws, at: https://www.med.upenn.edu/sbia/brats2018/data.html [Last accessed: 12 March 2022].

Informed Consent Statement

Since the dataset was taken from the University of Pennsylvania (public domain), informed consent was not applicable in our case.

Data Availability Statement

The dataset used in this research work was taken from the public domain (University of Pennsylvania) at: https://www.med.upenn.edu/sbia/brats2018/data.html [Last accessed: 12 March 2022].

Acknowledgments

The authors would like to acknowledge the support of Prince Mohammad bin Fahd University, KSA, for providing the facilities in the College of Computer Engineering and Science to perform this research work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhong, W.; Huang, Z.; Tang, X. A study of brain MRI characteristics and clinical features in 76 cases of Wilson’s disease. J. Clin. Neurosci. 2019, 59, 167–174.
  2. Latif, G.; Iskandar, D.; Alghazo, J.; Jaffar, A. Improving Brain MR Image Classification for Tumor Segmentation using Phase Congruency. Curr. Med. Imaging 2018, 14, 914–922.
  3. Amin, J.; Sharif, M.; Gul, N.; Yasmin, M.; Shad, S.A. Brain tumor classification based on DWT fusion of MRI sequences using convolutional neural network. Pattern Recognit. Lett. 2020, 129, 115–122.
  4. Sharif, M.I.; Khan, M.A.; Alhussein, M.; Aurangzeb, K.; Raza, M. A decision support system for multimodal brain tumor classification using deep learning. Complex Intell. Syst. 2021, 2198–6053.
  5. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90.
  6. Lord, S.; Lei, W.; Craft, P.; Cawson, J.; Morris, I.; Walleser, S.; Griffiths, A.; Parker, S.; Houssami, N. A systematic review of the effectiveness of magnetic resonance imaging (MRI) as an addition to mammography and ultrasound in screening young women at high risk of breast cancer. Eur. J. Cancer 2007, 43, 1905–1917.
  7. Latif, G.; Iskandar, D.N.F.A.; Alghazo, J. Multiclass Brain Tumor Classification Using Region Growing Based Tumor Segmentation and Ensemble Wavelet Features. In Proceedings of the 2018 International Conference on Computing and Big Data, New York, NY, USA, 15–17 November 2018; pp. 67–72.
  8. Gordillo, N.; Montseny, E.; Sobrevilla, P. State of the art survey on MRI brain tumor segmentation. Magn. Reson. Imaging 2013, 31, 1426–1438.
  9. Zhu, X.; He, X.; Wang, P.; He, Q.; Gao, D.; Cheng, J.; Wu, B. A method of localization and segmentation of intervertebral discs in spine MRI based on Gabor filter bank. Biomed. Eng. Online 2016, 15, 1–15.
  10. Zhou, X.; Wang, S.; Xu, W.; Ji, G.; Phillips, P.; Sun, P.; Zhang, Y. Detection of Pathological Brain in MRI Scanning Based on Wavelet-Entropy and Naive Bayes Classifier. In Bioinformatics and Biomedical Engineering; Ortuño, F., Rojas, I., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2015; pp. 201–209.
  11. Latif, G.; Iskandar, D.N.F.A.; Alghazo, J.M.; Mohammad, N. Enhanced MR Image Classification Using Hybrid Statistical and Wavelets Features. IEEE Access 2019, 7, 9634–9644.
  12. Sriramakrishnan, P.; Kalaiselvi, T.; Nagaraja, P.; Mukila, K. Tumorous Slices Classification from MRI Brain Volumes using Block based Features Extraction and Random Forest Classifier. Int. J. Comput. Sci. Eng. 2018, 6, 191–196.
  13. Ayadi, W.; Elhamzi, W.; Charfi, I.; Atri, M. A hybrid feature extraction approach for brain MRI classification based on Bag-of-words. Biomed. Signal Process. Control 2019, 48, 144–152.
  14. El-Melegy, M.T.; El-Magd, K.M.A.; Ali, S.A.; Hussain, K.F.; Mahdy, Y.B. A comparative study of classification methods for automatic multimodal brain tumor segmentation. In Proceedings of the 2018 International Conference on Innovative Trends in Computer Engineering (ITCE), Aswan, Egypt, 19–21 February 2018; pp. 36–41.
  15. Soltaninejad, M.; Yang, G.; Lambrou, T.; Allinson, N.; Jones, T.L.; Barrick, T.R.; Howe, F.A.; Ye, X. Supervised learning based multimodal MRI brain tumour segmentation using texture features from supervoxels. Comput. Methods Programs Biomed. 2018, 157, 69–84.
  16. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans. Med. Imaging 2015, 34, 1993–2024.
  17. Mzoughi, H.; Njeh, I.; Wali, A.; Slima, M.B.; BenHamida, A.; Mhiri, C.; Mahfoudhe, K.B. Deep Multi-Scale 3D Convolutional Neural Network (CNN) for MRI Gliomas Brain Tumor Classification. J. Digit. Imaging 2020, 33, 903–915.
  18. Sengupta, A.; Ramaniharan, A.K.; Gupta, R.K.; Agarwal, S.; Singh, A. Glioma grading using a machine-learning framework based on optimized features obtained from T1 perfusion MRI and volumes of tumor components. J. Magn. Reson. Imaging 2019, 50, 1295–1306.
  19. Kumar, R.L.; Kakarla, J.; Isunuri, B.V.; Singh, M. Multi-class brain tumor classification using residual network and global average pooling. Multimed. Tools Appl. 2021, 80, 13429–13438.
  20. Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Classification of skin lesions using transfer learning and augmentation with Alex-net. PLoS ONE 2019, 14, e0217293.
  21. Tang, P.; Wang, H.; Kwong, S. G-MS2F: GoogLeNet based multi-stage feature fusion of deep CNN for scene recognition. Neurocomputing 2017, 225, 188–197.
  22. Baek, J.; Swanson, T.A.; Tuthill, T.; Parker, K.J. Support vector machine (SVM) based liver classification: Fibrosis, steatosis, and inflammation. In Proceedings of the 2020 IEEE International Ultrasonics Symposium (IUS), Las Vegas, NV, USA, 6–11 September 2020; pp. 1–4.
  23. Latif, G.; Mohsin Butt, M.; Khan, A.H.; Omair Butt, M.; Al-Asad, J.F. Automatic Multimodal Brain Image Classification Using MLP and 3D Glioma Tumor Reconstruction. In Proceedings of the 2017 9th IEEE-GCC Conference and Exhibition (GCCCE), Manama, Bahrain, 8–11 May 2017; pp. 1–9.
  24. Latif, G.; AlAnezi, F.Y.; Iskandar, D.; Bashar, A.; Alghazo, J. Recent Advances in Classification of Brain Tumor from MR Images—State of the Art Review from 2017 to 2021. Curr. Med. Imaging 2022, ahead of print.
  25. MICCAI BraTS 2018 Dataset. Available online: https://www.med.upenn.edu/sbia/brats2018/data.html (accessed on 16 November 2021).
  26. Xue, Y.; Yang, Y.; Farhat, F.G.; Shih, F.Y.; Boukrina, O.; Barrett, A.M.; Binder, J.R.; Graves, W.W.; Roshan, U.W. Brain Tumor Classification with Tumor Segmentations and a Dual Path Residual Convolutional Neural Network from MRI and Pathology Images; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 360–367.
  27. Sajjad, M.; Khan, S.; Muhammad, K.; Wu, W.; Ullah, A.; Baik, S.W. Multi-grade brain tumor classification using deep CNN with extensive data augmentation. J. Comput. Sci. 2019, 30, 174–182.
  28. Summers, D. Harvard Whole Brain Atlas. J. Neurol. Neurosurg. Psychiatry 2003, 74, 288. Available online: www.med.harvard.edu/AANLIB/home.html (accessed on 12 March 2022).
Figure 1. The architecture of the proposed CNN Features based Multiclass Tumor Classification.
Figure 2. Comparison of the average misclassification of four Glioma Tumor classes using different Classifiers.
Table 1. Comparison of Multi-class Glioma Tumor Classification using CNN Features with typical Classifiers.
Glioma Type | Classifier | Modality | Necrosis | Edema | Non-Enhancing | Enhancing | Accuracy | Precision | Recall | F1-Measure
HGG | RF | Flair | 90.41 | 99.29 | 90.31 | 90.02 | 92.51 | 0.930 | 0.793 | 0.805
HGG | RF | T1 | 88.53 | 99.32 | 88.66 | 89.05 | 91.39 | 0.914 | 0.772 | 0.792
HGG | RF | T1c | 90.90 | 99.22 | 88.89 | 89.86 | 92.22 | 0.795 | 0.781 | 0.787
HGG | RF | T2 | 90.86 | 99.32 | 89.34 | 89.89 | 92.35 | 0.927 | 0.788 | 0.808
HGG | MLP | Flair | 89.99 | 99.13 | 91.06 | 90.86 | 92.76 | 0.928 | 0.778 | 0.928
HGG | MLP | T1 | 86.78 | 99.42 | 86.65 | 86.88 | 89.93 | 0.899 | 0.750 | 0.899
HGG | MLP | T1c | 90.25 | 99.22 | 88.05 | 89.41 | 91.73 | 0.918 | 0.768 | 0.918
HGG | MLP | T2 | 89.99 | 99.29 | 89.28 | 88.86 | 91.85 | 0.919 | 0.769 | 0.919
HGG | NB | Flair | 65.08 | 89.89 | 69.97 | 70.49 | 73.86 | 0.720 | 0.782 | 0.715
HGG | NB | T1 | 57.79 | 89.73 | 66.28 | 67.57 | 70.34 | 0.696 | 0.724 | 0.671
HGG | NB | T1c | 70.36 | 88.53 | 68.22 | 69.58 | 74.17 | 0.715 | 0.767 | 0.708
HGG | NB | T2 | 60.97 | 89.21 | 67.64 | 68.97 | 71.70 | 0.711 | 0.751 | 0.693
HGG | SVM | Flair | 94.95 | 99.45 | 95.30 | 95.04 | 96.19 | 0.958 | 0.851 | 0.870
HGG | SVM | T1 | 93.39 | 99.25 | 94.10 | 94.43 | 95.29 | 0.915 | 0.830 | 0.848
HGG | SVM | T1c | 95.21 | 99.42 | 94.66 | 94.62 | 95.98 | 0.918 | 0.844 | 0.861
HGG | SVM | T2 | 94.53 | 99.38 | 94.98 | 94.72 | 95.90 | 0.956 | 0.833 | 0.849
LGG | RF | Flair | 93.83 | 100 | 90.74 | 89.56 | 93.53 | 0.812 | 0.801 | 0.804
LGG | RF | T1 | 91.78 | 100 | 91.76 | 91.32 | 93.72 | 0.824 | 0.808 | 0.810
LGG | RF | T1c | 93.83 | 100 | 90.00 | 90.88 | 93.68 | 0.816 | 0.791 | 0.800
LGG | RF | T2 | 93.69 | 100 | 92.65 | 92.65 | 94.75 | 0.825 | 0.811 | 0.815
LGG | MLP | Flair | 92.21 | 99.56 | 93.24 | 91.62 | 94.15 | 0.942 | 0.792 | 0.942
LGG | MLP | T1 | 92.50 | 99.85 | 94.85 | 92.21 | 94.85 | 0.949 | 0.799 | 0.949
LGG | MLP | T1c | 92.65 | 99.85 | 92.65 | 90.74 | 93.97 | 0.940 | 0.790 | 0.939
LGG | MLP | T2 | 93.97 | 99.85 | 92.94 | 92.50 | 94.82 | 0.845 | 0.798 | 0.948
LGG | NB | Flair | 72.10 | 99.71 | 73.24 | 64.71 | 77.44 | 0.735 | 0.727 | 0.726
LGG | NB | T1 | 65.49 | 100 | 75.88 | 58.82 | 75.05 | 0.748 | 0.679 | 0.705
LGG | NB | T1c | 66.81 | 100 | 75.44 | 61.18 | 75.86 | 0.740 | 0.687 | 0.707
LGG | NB | T2 | 67.11 | 100 | 73.68 | 55.74 | 74.13 | 0.737 | 0.682 | 0.702
LGG | SVM | Flair | 93.82 | 99.93 | 92.57 | 93.75 | 95.02 | 0.870 | 0.860 | 0.864
LGG | SVM | T1 | 93.82 | 99.93 | 92.57 | 94.04 | 95.09 | 0.877 | 0.861 | 0.868
LGG | SVM | T1c | 93.82 | 99.93 | 92.50 | 94.34 | 95.15 | 0.873 | 0.854 | 0.862
LGG | SVM | T2 | 94.24 | 99.93 | 92.72 | 94.93 | 95.46 | 0.890 | 0.861 | 0.889
Table 2. Multiclass Glioma Tumor using other well-known CNN models (GoogleNet and LeNet) for brain MR images.
Glioma Type | CNN Model | Modality | Necrosis | Edema | Non-Enhancing | Enhancing | Accuracy | Precision | Recall | F1-Measure
HGG | LeNet | Flair | 85.66 | 97.99 | 73.55 | 74.35 | 82.89 | 0.811 | 0.766 | 0.787
HGG | LeNet | T1 | 70.43 | 97.99 | 67.87 | 69.94 | 76.56 | 0.734 | 0.817 | 0.770
HGG | LeNet | T1c | 75.50 | 98.32 | 67.39 | 71.91 | 78.28 | 0.772 | 0.773 | 0.768
HGG | LeNet | T2 | 76.07 | 97.99 | 73.41 | 77.87 | 81.33 | 0.789 | 0.825 | 0.805
HGG | GoogleNet | Flair | 74.73 | 97.99 | 75.93 | 77.64 | 81.57 | 0.791 | 0.828 | 0.809
HGG | GoogleNet | T1 | 76.02 | 96.64 | 76.13 | 76.09 | 81.22 | 0.801 | 0.801 | 0.801
HGG | GoogleNet | T1c | 86.77 | 97.32 | 80.88 | 86.03 | 87.75 | 0.884 | 0.839 | 0.860
HGG | GoogleNet | T2 | 80.51 | 99.33 | 74.82 | 79.18 | 83.46 | 0.819 | 0.826 | 0.822
LGG | LeNet | Flair | 75.63 | 98.25 | 82.40 | 74.33 | 82.65 | 0.765 | 0.714 | 0.726
LGG | LeNet | T1 | 70.29 | 98.25 | 70.71 | 66.49 | 76.43 | 0.776 | 0.707 | 0.733
LGG | LeNet | T1c | 66.79 | 98.25 | 64.57 | 61.11 | 72.68 | 0.702 | 0.767 | 0.729
LGG | LeNet | T2 | 76.32 | 98.25 | 64.57 | 74.43 | 78.39 | 0.793 | 0.717 | 0.751
LGG | GoogleNet | Flair | 81.54 | 98.25 | 82.55 | 79.67 | 85.50 | 0.867 | 0.722 | 0.757
LGG | GoogleNet | T1 | 80.60 | 100.00 | 78.02 | 72.33 | 82.74 | 0.813 | 0.810 | 0.811
LGG | GoogleNet | T1c | 79.24 | 100.00 | 71.88 | 71.86 | 80.75 | 0.765 | 0.868 | 0.811
LGG | GoogleNet | T2 | 76.12 | 98.25 | 75.39 | 71.63 | 80.35 | 0.763 | 0.859 | 0.807
Table 3. Comparison of the proposed method for Glioma Tumor Classification with the latest literature techniques.
Method | Dataset Name | Accuracy
Proposed Method (CNN features from Model 1, SVM as the classifier) | BraTS | 95.83%
Texture Features from Supervoxels and Random Forest as the Classifier, 2018 [15] | BraTS | 80%
Ten Statistical Features and Random Forest as the Classifier, 2019 [14] | BraTS | 80.85%
Dual-Path Residual Convolutional Neural Network, 2020 [26] | BraTS | 84.90%
Deep CNN with Extensive Data Augmentation, 2019 [27] | BraTS | 94.58%
Table 4. Experimental Results for the Validation Datasets (PIMS-MRI and AANLIB) using the proposed CNN feature-based method.
Dataset | Classifier | Accuracy | Precision | Recall | F-Measure
AANLIB (two-class dataset) | RF | 100 | 1 | 1 | 1
AANLIB (two-class dataset) | MLP | 94.12 | 0.923 | 1 | 0.96
AANLIB (two-class dataset) | SVM | 100 | 1 | 1 | 1
AANLIB (two-class dataset) | NB | 88.24 | 0.857 | 1 | 0.923
PIMS-MRI (two-class dataset) | RF | 100 | 1 | 1 | 1
PIMS-MRI (two-class dataset) | MLP | 94.23 | 0.906 | 1 | 0.951
PIMS-MRI (two-class dataset) | SVM | 100 | 1 | 1 | 1
PIMS-MRI (two-class dataset) | NB | 76.47 | 0.766 | 0.765 | 0.761
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
