Article

Multi-Modal Brain Tumor Detection Using Deep Neural Network and Multiclass SVM

by Sarmad Maqsood, Robertas Damaševičius * and Rytis Maskeliūnas
Faculty of Informatics, Kaunas University of Technology, LT-51386 Kaunas, Lithuania
* Author to whom correspondence should be addressed.
Medicina 2022, 58(8), 1090; https://doi.org/10.3390/medicina58081090
Submission received: 6 July 2022 / Revised: 3 August 2022 / Accepted: 6 August 2022 / Published: 12 August 2022

Abstract
Background and Objectives: Clinical diagnosis has become very significant in today’s health system. Brain cancer is among the most serious diseases and leading causes of cancer mortality worldwide, and it is a key research topic in the field of medical imaging. The examination and prognosis of brain tumors can be improved by an early and precise diagnosis based on magnetic resonance imaging (MRI). For computer-aided diagnosis methods to assist radiologists in the proper detection of brain tumors, medical imagery must be detected, segmented, and classified. Manual brain tumor detection is a monotonous and error-prone procedure for radiologists; hence, it is very important to implement an automated method. Accordingly, a precise brain tumor detection and classification method is presented. Materials and Methods: The proposed method has five steps. In the first step, linear contrast stretching is used to enhance the edges in the source image. In the second step, a custom 17-layered deep neural network architecture is developed for the segmentation of brain tumors. In the third step, a modified MobileNetV2 architecture is used for feature extraction and is trained using transfer learning. In the fourth step, an entropy-based controlled method is used along with a multiclass support vector machine (M-SVM) for best-feature selection. In the final step, the M-SVM is used for brain tumor classification, identifying meningioma, glioma, and pituitary images. Results: The proposed method was demonstrated on the BraTS 2018 and Figshare datasets. The experimental study shows that the proposed brain tumor detection and classification method outperforms other methods both visually and quantitatively, obtaining accuracies of 97.47% and 98.92%, respectively. Finally, we adopt an eXplainable Artificial Intelligence (XAI) method to explain the results. Conclusions: Our proposed approach for brain tumor detection and classification outperforms prior methods, achieving improved accuracy and better visual and quantitative results.

1. Introduction

Among cancers, brain tumors now have the greatest initial cost per patient. Uncontrolled proliferation of cells in the brain can cause tumors in people of all ages. Brain tumors are produced by unconstrained enlargement of tissue in the brain or central spine that can interfere with normal brain function [1]. Based on their area, size, and position, these large tumor cells can be split into two types: cancerous (malignant) and non-cancerous (benign) cells [2]. The acute parts of cancer cells are known as primary and secondary tumor areas. The earliest stages of cancer cells are called benign and are designated as the primary tumor area. Primary brain tumors arise from brain cells and can be cured; their growth can be controlled by taking pertinent medications. Secondary (metastatic) brain tumors start in another part of the body and then spread to the brain. Such a tumor can only be cured if the affected patient receives appropriate surgery or radiotherapy [3]. Because brain tumors harm the surrounding brain tissue, their progression should be closely monitored to ensure patient survival [4].
Meningiomas are tumors of the brain and spinal cord that arise from the meninges, the three layers of membranes covering them [5]. Meningiomas often present as extra-axial lobar masses with well-defined edges [6]. Meningioma patients’ survival rates are determined by tumor size and location and by the patient’s age. Meningioma symptoms include headaches and limb weakness. Most malignant meningiomas can be cured with early detection and effective treatment. Benign meningioma tumors are less than 2 mm in diameter, while malignant meningioma tumors are up to 5 cm in diameter [7].
Magnetic Resonance Imaging (MRI) has become one of the most frequent procedures for detecting brain cancers, and many MRI methods can be utilized [8]. Accurate diagnosis and appropriate treatment of patients are essential, as brain tumors can be hazardous, and brain tumor disease at an early stage can be prevented only by scanning the complete brain area to detect the tumor. Each MRI method has a different acquisition time and can be utilized to detect different brain tissues [9]. On account of the uncertain structure and location of brain tumors, a single MRI modality is insufficient to detect irregularly shaped tumors in all brain regions. MRI protocols of different sequences provide important complementary information to identify tumor regions [10]. The application of different pulse sequences results in different types of MRI: T1-weighted MRI distinguishes tumor from healthy tissue; T2-weighted MRI outlines areas of edema, resulting in clear image areas; T1-Gd MRI shows a bright signal at the tumor edge when contrast enhancement is used; and FLAIR MRI suppresses the signal of water molecules to distinguish cerebrospinal fluid (CSF) from areas of edema.
Due to the structural complexity and variability of brain tumors and the inherent properties of MRI data, tasks such as handling variability in tumor size and shape, calculating the tumor area, determining uncertainty in the segmented region, and tumor segmentation itself are difficult [11]. Some tumors, such as meningiomas, are simple to separate, but others, such as gliomas and glioblastomas, are more difficult [12]; hence, manual tumor segmentation is a tedious task, and in some instances oncologists may observe varying segmentation results due to differences in tumor appearance and shape. Therefore, it is imperative to present an automatic segmentation method to assist with this strenuous task.
Manual recognition of brain tumors and tracking of their progression is a time-consuming and error-prone operation [13], so an automated method is required to substitute for manual systems. Traditional methods rely on labeling to detect diseased areas in the brain, and current methods cannot detect inner peripheral pixels, which is incompatible with brain tumor detection procedures. Owing to the clarity of the area highlighted by the contrast agent, we prefer MRI over Computed Tomography (CT). As a result, MRI modalities are used in numerous methods to detect brain cancers.
In recent years, many methods have been presented for the automatic classification of brain tumors; they can be divided into Machine Learning (ML) and Deep Learning (DL) methods based on feature fusion, feature selection, and the learning mechanism. In ML methods, feature extraction and feature selection are fundamental to classification [14,15], whereas DL methods learn by extracting features directly from images. New DL methods, especially CNNs, offer excellent accuracy and are widely employed in medical image analysis, including MRI analysis [16,17,18]. Compared to traditional ML methods, their disadvantages are the need for a large training dataset, high time complexity, low accuracy in applications where only small datasets are available, and expensive GPUs, which ultimately raise the user’s cost; these disadvantages can be alleviated by using transfer learning [19]. In addition, choosing an accurate deep learning model can be an intimidating task, requiring knowledge of numerous parameters, training methods, and topologies. Numerous machine-learning-based classifiers have been utilized for brain tumor classification and detection, e.g., Support Vector Machine (SVM), Random Forest (RF), fuzzy C-means (FCM), Convolutional Neural Network (CNN), Naïve Bayes (NB), K-Nearest Neighbor (KNN), Sequential Minimal Optimization (SMO), and Decision Tree (DT). The CNN implementation is very simple and has low computational and spatial complexity. In general, these classifiers have received significant research attention due to the small datasets required for training, low computational complexity, and ease of adoption by non-experts.
The proposed brain tumor segmentation and classification method makes the following contributions:
  • A linear contrast stretching method is used as a pre-processing step to improve the edge details of the original image;
  • A custom 17-layered CNN architecture is designed for brain tumor segmentation and trained from scratch to recognize the tumor area;
  • Transfer learning from a modified MobileNetV2 is used to extract deep features from the selected datasets;
  • An entropy-based controlled method is used to optimize feature selection, with the best features chosen according to their entropy value; the final features are classified using a multi-class SVM classifier;
  • A complete statistical analysis and a comparison with the most modern methods are conducted to confirm the stability of the proposed algorithm.
The rest of the paper is organized as follows. Relevant work on brain tumor detection is described in Section 2. The proposed methodology is outlined in Section 3. The simulation setup and evaluation metrics are specified in Section 4. Section 5 compares the performance of the proposed method with other current methods, and Section 6 concludes with future research aims.

2. Related Work

MR imaging is actively used in contemporary medical procedures to diagnose brain cancer [8,14]. This section thoroughly reviews the state of the art in the detection and classification of brain tumors.
In recent years, many researchers have worked on the detection, segmentation, and classification of brain tumors, and the importance of this topic in the medical community persists [20,21,22]. This section describes methods for the detection and segmentation of brain tumors. Methods to diagnose brain tumors include generative and discriminative methods to distinguish brain images [17,23]. Maqsood et al. [4] demonstrated a brain tumor detection method based on fuzzy logic and the U-NET CNN architecture, combining contrast enhancement, fuzzy-logic-based edge detection, and U-NET CNN classification. A contrast enhancement method is applied to the source images for pre-processing, followed by the fuzzy-logic-based edge detection method to discover the edges in the contrast-enhanced images; finally, a dual-tree complex wavelet transform is applied at various scale levels. Features are generated from the decomposed sub-band images and then classified using the U-NET CNN, which distinguishes between meningioma and non-meningioma in brain imaging. The presented method was compared against various recently developed algorithms and achieved an accuracy rate of 98.59%.
Sobhaninia et al. [24] used a LinkNet network with a CNN model for brain MRI segmentation, training the model from different angles and perspectives to obtain good results, and achieved a Dice score of 0.79; however, the network is comparatively complex. Johnpeter et al. [25] detected and localized tumors in brain MRI using an adaptive neuro-fuzzy inference classification method. This method used histogram equalization to enhance the tumor areas without applying edge detection to the brain images, and obtained an accuracy rate of 98.80%.
Togacar et al. [26] developed the BrainMRNet network using attention modules and the hypercolumn method. First, the source images are pre-processed and then passed to the attention module, which highlights the main areas of the image and directs the image to the convolutional layers. The hypercolumn is one of the primary strategies used in the convolutional layers of the BrainMRNet model: the attributes extracted from each layer are retained in the array structure of the last layer. The model attained an accuracy rate of 96.05%. Kibriya et al. [27] presented a feature-fusion-based brain tumor classification method. The source images are pre-processed with min-max normalization, and extensive data augmentation is then applied to the pre-processed images to overcome the limited-data problem. GoogLeNet and ResNet18 deep CNN models are used for transfer learning to create a single feature vector, and SVM and KNN classifiers produce the final output, obtaining 97.7% accuracy. Sajjad et al. [28] developed a CNN-based brain tumor detection and classification method: a cascade CNN algorithm is used for brain tumor segmentation and a fine-tuned VGG19 for tumor classification, attaining an accuracy of 94.58%. Shanthakumar [29] used watershed segmentation on brain MRI to identify tumor regions; the segmentation employs a series of predetermined labeling steps to maximize the accuracy of tumor segmentation, and obtained an accuracy of 94.52%. Prastawa et al. [30] demonstrated how to segment tumor areas in brain MRIs by detecting borderline pixels; this method detects only the aberrant outer borders of the tumor region, not its inner border, and hence achieved an accuracy of 88.17%.
Gumaei et al. [31] proposed a hybrid feature extraction method for brain tumor classification using a regularized extreme learning machine (RELM). Min-max normalization is used as a contrast-enhancing preprocessing step, the hybrid PCA-NGIST method is used for feature extraction, and the RELM is employed to classify the brain tumor; this work obtained an accuracy rate of 94.23%. Swati et al. [32] used a fine-tuned pre-trained VGG19 model on contrast-enhanced MRI (CE-MRI) to improve the results and obtained an average accuracy rate of 94.82%. Kumar et al. [33] proposed a brain tumor classification method using a ResNet50 CNN model and global average pooling to resolve the problem of overfitting, obtaining an average accuracy rate of 97.48%.
Although better results have been obtained with all of the above methods, some shortcomings remain: many conventional methods use labeling to detect abnormal pixels in brain areas, and current methods cannot detect the pixels inside the tumor edges, which is unsuitable for many brain tumor detection algorithms [34,35]. Table 1 summarizes some of the current works with dataset information and results.

3. The Proposed Framework

The proposed computer-aided diagnosis (CAD) method for the detection and classification of brain tumors includes contrast enhancement, image segmentation, feature extraction, feature selection, and classification. The source brain images are first preprocessed using the linear contrast stretching method for better visualization, and a 17-layer CNN model is proposed for tumor segmentation. A modified MobileNetV2 deep CNN model is used for feature extraction, the entropy-controlled method is employed for feature selection, and finally the M-SVM classifier is utilized for brain tumor classification. Figure 1 shows the detailed scheme for brain tumor segmentation and classification.
Figure 2 and Figure 3 show examples of benign non-meningioma and malignant meningioma brain images, respectively.

3.1. Contrast Enhancement

Contrast enhancement plays an important role and is the most effective method for refining images that have low contrast [36]. Contrast stretching is performed here to enhance the visual contrast of tumors in MR imaging. Source MR imaging poses several challenges, i.e., low contrast and similarity between healthy and diseased areas. Delineating the boundary between benign and malignant meningioma and non-meningioma complicates the detection process. Therefore, linear contrast stretching is performed to refine the contrast while preserving the average brightness of the source MR image.
Let $r(x,y)$ denote the source image of size 256 × 256. Let $i_n$, $n = 0, 1, 2, \dots, m-1$, be the input gray levels of $r(x,y)$ and $j_n$, $n = 0, 1, 2, \dots, m-1$, the corresponding levels of the linearly enhanced output image. The transformation function is mathematically defined as follows:

$$\zeta_{cs}(x,y) = \frac{j_n - j_{n-1}}{i_n - i_{n-1}} \times [x - i_{n-1}] + i_{n-1},$$

where $\zeta_{cs}(x,y)$ represents the linearly stretched image. This image is further enhanced by a contrast stretching function, which is mathematically defined as follows:

$$\zeta_{ce}(x,y) = \varphi \times \log[\zeta_{cs}(x,y) + r],$$

$$\varphi(x) = \sum_{x,y=0}^{1} \zeta_{cs}(x,y),$$

where $\varphi(x)$ represents the weighted value between 0 and 1.
Figure 4 shows the improvement of the brain MR image after employing linear contrast enhancement, the image gradients are well refined while preserving the information of the original image.
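To make the pre-processing concrete, the following minimal NumPy sketch applies a piecewise-linear stretch followed by the log-based enhancement above; treating the weight φ as the mean of the normalized stretched image is our assumption, and all function and variable names are illustrative:

```python
import numpy as np

def linear_contrast_stretch(img, out_lo=0.0, out_hi=1.0):
    """Map the input intensity range [min, max] linearly onto [out_lo, out_hi]."""
    img = img.astype(np.float64)
    in_lo, in_hi = img.min(), img.max()
    return (out_hi - out_lo) / (in_hi - in_lo) * (img - in_lo) + out_lo

def log_enhance(stretched, r=1.0):
    """Log-based enhancement of the stretched image, weighted by phi
    (here taken as the image mean, an assumption) to preserve brightness."""
    phi = stretched.mean()
    return phi * np.log(stretched + r)

# Synthetic low-contrast 256 x 256 "MR slice"
mri = np.random.uniform(0.2, 0.4, size=(256, 256))
enhanced = log_enhance(linear_contrast_stretch(mri))
```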

3.2. Tumor Segmentation

The 17-layered CNN architecture (Figure 5) is proposed for brain tumor segmentation. The architecture consists of six convolution layers, two max-pooling layers, one transposed convolution layer, five ReLU activation functions, a Softmax layer, and a pixel classification layer. The kernel size of the convolution layers is 3 × 3, and the numbers of channels of the convolution layers are 32, 64, 128, 128, 256, and 2, respectively, with a stride of [1 1]. The enhanced image, with a dimension of 256 × 256 × 3, is fed to the network for tumor segmentation. CNNs with more or fewer layers have also been implemented for tumor detection, but the proposed 17-layered CNN architecture is a unique model that accurately detects the tumor region in brain MRI.
The activation of the first convolution layer is 256 × 256 × 32, with a weight matrix of size 3 × 3 × 3 × 32 and a bias matrix of dimension 1 × 1 × 32. After the multi-pass modification, the weight matrix of convolution layer 2 is upgraded to a size of 3 × 3 × 32 × 64 and the bias matrix to 1 × 1 × 64. A transposed convolution layer is used at layer 14 with a 3 × 3 convolution function and 256 channels; its weight matrix size is 3 × 3 × 256 × 256 and its bias matrix dimension is 1 × 1 × 256. The output of the final convolution layer is 256 × 256 × 2, which is passed to the Softmax classifier. The layers are trained using the Adam optimizer with a mini-batch size of 128, a learning rate of 0.001, and 50 epochs.
The cross-entropy function is then used in a pixel-label classification layer for tumor segmentation. This function is mathematically stated as:

$$\delta(\upsilon, G) = -\frac{1}{W} \sum_{i=1}^{W} \ln(C_R),$$

where $\upsilon$ denotes a patch of size 256 × 256 × 3, $G$ denotes the true labels, $W$ is the number of patches in the $i$-th image, and $C_R$ specifies the posterior probability of the true class $R$. Table 2 provides a detailed description of each layer used to train the neural network (NN).
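A hedged PyTorch sketch of an encoder-decoder of this kind is shown below: stacked 3 × 3 convolutions with ReLU, two max-pooling stages, a transposed convolution for upsampling, and a two-class per-pixel output trained with cross-entropy and Adam (learning rate 0.001), as described above. It approximates, rather than reproduces, the exact 17-layer configuration of Table 2:

```python
import torch
import torch.nn as nn

class TumorSegNet(nn.Module):
    """Encoder-decoder sketch loosely following the paper's 17-layer design."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 256, kernel_size=4, stride=4),  # back to 256 x 256
            nn.Conv2d(256, num_classes, 3, padding=1),              # per-pixel logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TumorSegNet()
loss_fn = nn.CrossEntropyLoss()                          # pixel-wise cross entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(2, 3, 256, 256)                          # enhanced MR patches
y = torch.randint(0, 2, (2, 256, 256))                   # tumor / background labels
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```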
Finally, we use morphological techniques to eliminate extraneous segments generated by segmentation and to improve the resulting region of interest. The morphological operations are opening, closing, erosion, and dilation: morphological dilation enlarges the region of interest, whereas morphological erosion eliminates the undesirable clusters created during segmentation. This frequently helps remove undesirable image areas after segmentation, and is followed by an opening operation composed of erosion and dilation.
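A small SciPy illustration of this post-processing (the 3 × 3 structuring element is an assumption):

```python
import numpy as np
from scipy import ndimage

def clean_mask(mask, structure_size=3):
    """Remove spurious clusters via opening, then smooth the region of
    interest via closing, as described above."""
    structure = np.ones((structure_size, structure_size), dtype=bool)
    opened = ndimage.binary_opening(mask, structure=structure)    # erosion then dilation
    return ndimage.binary_closing(opened, structure=structure)   # dilation then erosion

seg = np.zeros((256, 256), dtype=bool)
seg[100:140, 90:150] = True       # tumor blob kept
seg[10, 10] = True                # spurious pixel removed by opening
print(clean_mask(seg).sum())
```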

3.3. Modified MobileNetV2 for Feature Extraction

MobileNetV2 is a deep CNN framework designed for mobile and resource-constrained situations. The model is based on an inverted residual structure, where the residual connections link the bottleneck layers [37]. The motivation for using MobileNetV2 is its reduced parameter count, faster performance, small size, and low latency. MobileNetV2 has a total of 153 layers, and its input layer size is 224 × 224 × 3.
As a unique solution to the inverse problem associated with representing brain tumors, we propose a hybrid Long Short-Term Memory (LSTM) recurrent neural network integrated with a reworked MobileNetV2 (as the base model), inspired by [38,39,40]. The hybrid model must estimate the system’s parameters when modeling different tumor grades, taking into account tumor mass simulations generated by titrating the rates of proliferation, concentration-driven motility, and angiogenesis, as well as other factors associated with pathological and radiological features. The model must be capable of detecting changes in tumor model parameters. The reasoning is based on the fact that certain brain tumors differentiate to a higher, more malignant grade, a process usually accompanied by an increase in the rates of proliferation, motility, or angiogenesis. Realizing this goal enables early detection of a grade change, so that the output can support timely treatment action.
First, we modified the MobileNetV2 architecture with a completely new convolutional layer covering the benign and malignant meningioma and non-meningioma classes, called the target labels. We then use transfer learning (TL) to transfer the knowledge from the original network to the target network and obtain a new, well-fitting CNN model. TL is used to train the fine-tuned network to extract features from the GAP layer for classification, and these features in turn feed the LSTM (Figure 6). This element of the model can provide a labeled matrix with values for distinct picture areas and ridge lines, which aids tumor detection. As a result, we take the complement of the image, apply the RNN to the complemented image, and then negate the distance to discover the bright catchment basins that represent distinct areas.
As mentioned, in contrast to other works in which tumor segmenters were trained using CNN variants alone, our technique uses LSTM memory cells to overcome the excessive vanishing-error issue. One of the primary advantages of RNN modeling is that the LSTM can recall dependencies within the sequence to establish the set of PDEs by which the tumor is classified, thereby improving system efficiency. The neuron values in the different layers and the initialized mean centers depend fully on categorizing the tumor with the RNN; the LSTM’s spatiotemporal parameters help the model recognize concealed outlines in difficult frame-to-frame sequences.
Our hybrid technique divides images into dynamic zones. The RNN network creates the layers and neuron centroids. As a consequence, the picture is converted from linear space back to the spatial domain, and the individual classification results are displayed per class. For the majority of the brain images, the tumor is extracted and shown as one of the class findings based on the selected cluster.
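A minimal PyTorch sketch of such a hybrid, assuming an ImageNet-pre-trained MobileNetV2 trunk whose globally pooled features feed a single-layer LSTM and a four-class head (benign/malignant meningioma and non-meningioma); the hidden size and sequence handling are illustrative, not the authors’ exact configuration:

```python
import torch
import torch.nn as nn
from torchvision import models

class MobileNetLSTM(nn.Module):
    """MobileNetV2 trunk supplies pooled deep features per slice of a
    sequence; an LSTM aggregates them before a 4-way classification head."""
    def __init__(self, num_classes=4, hidden=256):
        super().__init__()
        backbone = models.mobilenet_v2(weights="IMAGENET1K_V1")
        self.features = backbone.features             # convolutional trunk
        self.pool = nn.AdaptiveAvgPool2d(1)           # global average pooling (GAP)
        self.lstm = nn.LSTM(input_size=1280, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                             # x: (batch, seq, 3, 224, 224)
        b, t = x.shape[:2]
        feats = self.pool(self.features(x.flatten(0, 1))).flatten(1)  # (b*t, 1280)
        out, _ = self.lstm(feats.view(b, t, -1))
        return self.head(out[:, -1])                  # classify from the last step

logits = MobileNetLSTM()(torch.randn(2, 5, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```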

3.4. Deep Feature Extraction Using Transfer Learning

A well-known deep learning technique, transfer learning enables the use of a pre-trained model on a challenging research problem [22]. The significant benefit of TL is that it requires less input data while producing excellent results. It seeks to transfer knowledge from a source domain to a target domain, where the proposed problem with few labels is the target domain and the source domain is a model pre-trained on a large dataset. Typically, ImageNet, a sizable high-resolution image dataset, is used as the source domain [5]; it has 1000 image categories and more than 15 million labeled images. The modified MobileNetV2-based CNN model is retrained on our datasets using transfer-learning-based feature extraction. TL is defined mathematically as follows.
The source domain $\zeta_s$ is defined as:

$$\zeta_s = \{(m_1^s, n_1^s), \dots, (m_j^s, n_j^s), \dots, (m_z^s, n_z^s)\}.$$

The learning tasks are $L_s$, $L_\zeta$, with $m_x^s, n_x^s \in \phi$.
The target domain $\zeta_t$ is defined as:

$$\zeta_t = \{(m_1^t, n_1^t), \dots, (m_j^t, n_j^t), \dots, (m_y^t, n_y^t)\}.$$

The learning tasks are $L_t$, with $m_y^t, n_y^t \in \phi$; $(x, y)$ is the training data size, where $y \ll x$, and $n_j^s$ and $m_j^t$ are the labels of the training data. The pre-trained model is trained on the target dataset according to this specification.
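In code, this specification amounts to freezing the source-domain weights and retraining a new head on the target labels. A sketch using torchvision’s MobileNetV2 (the four target classes are an assumption carried over from Section 3.3):

```python
import torch.nn as nn
from torchvision import models

# Reuse the ImageNet source domain, retrain only a new classification head
# on the (much smaller) target brain-MRI domain.
model = models.mobilenet_v2(weights="IMAGENET1K_V1")

for p in model.features.parameters():   # freeze source-domain knowledge
    p.requires_grad = False

num_classes = 4                         # target labels (assumption)
model.classifier[1] = nn.Linear(model.last_channel, num_classes)

trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")
```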

3.5. Feature Selection and Classification

Feature selection is a significant step in deep learning applications [41]. It is used to enhance classification accuracy, remove redundancy among features, and pass only robust features forward for the best classification. Here, we use an entropy-based controlled method to choose the best features based on their entropy value. The method removes unnecessary and redundant attributes and selects only the highest-priority features. Let $F(x)$ be the texture feature vector along the $P \times Q$ dimensions; the entropy of the extracted vector $F(x)$ is formulated as:

$$F(x) = -\sum_{s_1 \in x} \sum_{s_2 \in x} J(s_1, s_2) \log J(s_1, s_2),$$

$$J(x) = -\sum_{g_x = 1}^{Q} v_x \ln v_x,$$

where $s_1$ and $s_2$ represent the minimum existing and previous distances according to the selected features, $v_x$ represents the probability value of each $x$-th feature, and $J(x)$ represents the computed entropy vector. A threshold function is applied to the newly formed entropy vector, returning only features greater than the maximum probability feature $H_k^D$. The threshold function is computed as follows:

$$\zeta_s(x) = \begin{cases} X(x), & \text{if } J(x) \ge H_k^D \\ 0, & \text{otherwise.} \end{cases}$$
Finally, the selected vector $X(x)$, where $X(x) \in \zeta_s(x)$, is forwarded to a multi-class SVM (M-SVM) classifier for the final classification. SVM is a supervised machine learning approach that can be applied to classification problems. The data are transformed using the kernel trick, and based on these transformations the best decision boundary is determined. SVM works well when there is a significant separation margin between categories, is efficient in high-dimensional spaces, and is memory-effective. The M-SVM classification identifies benign and malignant meningioma and non-meningioma with the corresponding category.
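The following sketch illustrates the idea with a simplified per-feature entropy score and scikit-learn’s inherently multi-class SVC (one-vs-one decision function); the scoring and rank-based threshold are stand-ins for the exact formulas above:

```python
import numpy as np
from sklearn.svm import SVC

def entropy_select(features, keep=0.5):
    """Rank feature columns by an entropy score over their normalized values
    and keep the highest-scoring fraction (a simplified stand-in for the
    entropy-controlled selection described above)."""
    p = np.abs(features) / (np.abs(features).sum(axis=0, keepdims=True) + 1e-12)
    scores = -(p * np.log(p + 1e-12)).sum(axis=0)      # per-feature entropy
    k = max(1, int(keep * features.shape[1]))
    idx = np.argsort(scores)[-k:]                      # threshold by rank
    return features[:, idx], idx

rng = np.random.default_rng(0)
X, y = rng.normal(size=(300, 64)), rng.integers(0, 3, 300)  # 3 tumor classes
X_sel, idx = entropy_select(X)
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X_sel, y)  # M-SVM
print(clf.score(X_sel, y))
```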

4. Experimental Setup

This section discusses the simulation setup, dataset description of MR imaging, and evaluation measures.

4.1. Simulation Setup

The proposed method was implemented in MATLAB R2021b on a laptop equipped with an Intel(R) Core(TM) i7-9750H processor, 16 GB RAM, and an NVIDIA GTX 1650 GPU, running Microsoft Windows 11.

4.2. Dataset

The experiments used brain MR images of meningiomas, gliomas, and pituitary tumors. We assessed the proposed method on two publicly accessible brain tumor detection datasets, i.e., figshare [42] and BraTS 2018 [43]. The figshare brain MRI dataset [42] contains a total of 3064 T1-weighted contrast-enhanced images of 233 patients, including benign and malignant brain images: 708 meningioma images, 1426 glioma images, and 930 pituitary tumor images. These two datasets were utilized to evaluate the effectiveness of the proposed method for the detection, segmentation, and classification of brain tumors.

4.3. Evaluation Metrics

To evaluate the performance of the proposed method, the Specificity (Spe), Sensitivity (Sen), Accuracy (Acc), and Dice coefficient index (Dci) metrics were used in this work. Higher values of these metrics indicate superior performance. The metrics are defined as follows:
$$Acc = \frac{A_{TP} + A_{TN}}{A_{TP} + A_{TN} + A_{FN} + A_{FP}} \times 100\%,$$

$$Sen = \frac{A_{TP}}{A_{TP} + A_{FN}} \times 100\%,$$

$$Spe = \frac{A_{TN}}{A_{TN} + A_{FP}} \times 100\%,$$

$$Dci = \frac{2 \times A_{TP}}{2 \times A_{TP} + A_{FN} + A_{FP}} \times 100\%,$$

where $A_{FP}$ signifies the false positives, $A_{TP}$ the true positives, $A_{FN}$ the false negatives, and $A_{TN}$ the true negatives.
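These formulas translate directly into code; a small NumPy helper for binary masks or labels:

```python
import numpy as np

def metrics(pred, true):
    """Accuracy, sensitivity, specificity, and Dice from binary labels,
    following the formulas above (values in percent)."""
    tp = np.sum((pred == 1) & (true == 1))
    tn = np.sum((pred == 0) & (true == 0))
    fp = np.sum((pred == 1) & (true == 0))
    fn = np.sum((pred == 0) & (true == 1))
    acc = (tp + tn) / (tp + tn + fp + fn) * 100
    sen = tp / (tp + fn) * 100
    spe = tn / (tn + fp) * 100
    dci = 2 * tp / (2 * tp + fn + fp) * 100
    return acc, sen, spe, dci

pred = np.array([1, 1, 0, 0, 1, 0])
true = np.array([1, 0, 0, 0, 1, 1])
print(metrics(pred, true))  # all four equal 66.67 on this toy example
```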

5. Results and Discussion

5.1. Brain Tumor Segmentation Results

Segmentation has a significant role in medical imaging for preoperative and postoperative planning and early detection. Image segmentation divides an image into parts or areas based on the properties of the pixels in the image. In this work, the 17-layered CNN architecture is proposed for the tumor segmentation. Figure 7 displays the segmentation and tumor detection of the brain image.

5.2. Classification Results

Using two datasets, figshare and BraTS 2018, we report the classification results for the proposed M-SVM classifier. A 70:30 train-test split was used to validate the proposed method, and 5-fold cross-validation was implemented, as sketched below.
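For reference, this evaluation protocol can be sketched with scikit-learn on placeholder features (the stratified split is our assumption):

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X, y = rng.normal(size=(500, 32)), rng.integers(0, 3, 500)  # placeholder features

# 70:30 hold-out split, then 5-fold cross-validation on the training portion
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)
cv_scores = cross_val_score(SVC(decision_function_shape="ovo"), X_tr, y_tr, cv=5)
print(cv_scores.mean(), SVC().fit(X_tr, y_tr).score(X_te, y_te))
```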

5.2.1. BraTS 2018 Dataset Results

As illustrated in Table 3, the simulation results of our approach attain the highest test detection accuracy of 95.41%. From Table 3, M-SVM achieves Acc, Sen, Spe, and Dci of 97.47%, 97.22%, 97.94%, and 96.71%, respectively, on the BraTS 2018 dataset. It can be discerned from Table 4 that the proposed brain tumor detection method achieves superior performance with M-SVM classification.
The proposed method is compared with other modern methods in terms of classification accuracy, as shown in Table 4. All methods were compared using the BraTS 2018 dataset and assessed quantitatively. All methods obtained decent accuracy rates but were still unable to achieve the highest accuracy: Irfan et al. [44], Amin et al. [45], Narmatha et al. [46], and Khan et al. [47] report classification accuracy rates of 92.50%, 93.85%, 92.50%, and 93.40%, respectively. Compared to all the existing methods, the proposed method exhibits superior performance: the M-SVM classification method, utilized for brain tumor detection and segmentation, attained a classification accuracy of 97.47%. From Table 4, we can conclude that the proposed method attains better classification accuracy than the other methods.

5.2.2. Figshare Dataset Results

As displayed in Table 5, the simulation results of our method achieve the highest test detection accuracy of 96.2%. From Table 5, M-SVM achieves Acc, Sen, Spe, and Dci of 98.92%, 98.82%, 99.02%, and 97.87%, respectively. It can be observed from Table 6 that the proposed brain tumor detection method achieves superior performance with M-SVM classification.
The proposed method is compared with other modern methods in terms of classification accuracy, as illustrated in Table 6. All methods were compared using the figshare dataset and assessed quantitatively. All methods obtained decent accuracy rates but were still unable to achieve the highest accuracy. Linear discriminant analysis (LDA) obtained a classification accuracy rate of 93.60%, while CNN and SVM achieved accuracy rates of 96.50% and 94.63%, respectively. U-NET CNN and DarkNet-53 achieved better accuracy rates than all the remaining methods, with 98.59% and 98.15%, respectively, which are also very close to the proposed method. Noreen et al. [52], Anaraki et al. [54], Gumaei et al. [31], Sajjad et al. [28], and Swati et al. [32] all have almost the same classification accuracy rates of 94.34%, 94.20%, 94.23%, 94.58%, and 94.82%, respectively. Compared to all the existing methods, the proposed method exhibits superior performance: the M-SVM classification method, utilized for brain tumor detection and segmentation, correctly classified 700 of the 708 meningioma MRIs, with a classification accuracy of 98.92%. From Table 6, we can conclude that the proposed method attains better classification accuracy than the other methods. The performance of the proposed method is also assessed using the Receiver Operating Characteristic (ROC) curve and the Confusion Matrix, illustrated in Figure 8 and Figure 9, respectively. The total execution time of the test is 15.64 s.

5.3. Explainability of the Results

The areas of an input image that contribute to the CNN’s final prediction can be visualized using Gradient-weighted Class Activation Mapping (Grad-CAM). Grad-CAM can provide a distinct visualization for each class present in the image because it is class-specific: it creates a coarse localization map highlighting the key areas of the image for concept prediction by using the gradients of the target concept flowing into the final convolutional layer [56]. Tumor localization was performed on MR images classified into meningioma and non-meningioma tumor categories based on test data, with the tumors located using Grad-CAM [56]. Red represents the predicted tumor on the MR image, and dark blue represents the background region of the brain MR image. Tumor localization imaging helps color-map-based superpixel methods enhance the localization of tumor pixels, which also increases the Dice index. Figure 10 shows localized tumor results for Grade I, Grade II, and Grade III.
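A minimal Grad-CAM sketch over a MobileNetV2 backbone, following [56]; the choice of target layer and the random input are illustrative:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Grad-CAM: weight the final convolutional activations by the spatial mean
# of their gradients w.r.t. the target class score, then upsample.
model = models.mobilenet_v2(weights="IMAGENET1K_V1").eval()
target_layer = model.features[-1]
acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)   # stand-in MR slice
score = model(x)[0].max()                             # predicted-class score
score.backward()

weights = grads["v"].mean(dim=(2, 3), keepdim=True)   # per-channel importance
cam = F.relu((weights * acts["v"]).sum(dim=1))        # coarse localization map
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
print(cam.shape)  # torch.Size([1, 1, 224, 224])
```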

5.4. Ablation Study

Ablation studies were performed to assess the influence of each component of the proposed methodology. The proposed method uses linear contrast stretching to refine the edges, and a pre-trained MobileNetV2 is used for feature extraction. For segmentation, the 17-layered CNN framework is developed to segment out the tumor region, and the layers are trained using the Adam optimizer. The study further explores the following research questions: (1) How does performance on the selected brain MRI dataset change when using different pre-trained CNN networks? (2) What effect does the optimizer have on the best pre-trained network’s performance? (3) How do changes to the cross-validation scheme affect MobileNetV2 classification performance? (4) How do different multi-class classifiers affect MobileNetV2 classification performance on brain MRI datasets?
First, the performance of various pre-trained CNN architectures on the brain MRI dataset is evaluated. Table 7 briefly summarizes the performance comparison parameters and shows that MobileNetV2 outperforms other pre-trained deep learning networks.
The performance of the MobileNetV2 network is assessed using three optimization functions: (a) RMSprop; (b) stochastic gradient descent with momentum (sgdm); and (c) Adam. Table 8 illustrates that the MobileNetV2 network obtained the best performance with the Adam optimizer on the test dataset.
Different state-of-the-art deep learning networks are compared for the same brain tumor dataset using 5-fold cross-validation. The proposed method outperforms other methods in terms of accuracy as shown in Table 9.
Different multi-class classifiers (Fine tree, E-Bst tree, Fine KNN, and M-SVM) were applied to the selected dataset; Table 10 shows that MobileNetV2 obtained the best performance with the M-SVM classifier, with an accuracy of 98.92% and an execution time of 15.64 s.

5.5. Limitations and Future Work

Our proposed model outperforms its competitors in terms of classification accuracy and also has the following advantages:
  • Because the suggested model employs a custom CNN, automatic feature extraction has been realized;
  • Computational time is reduced because of the use of MobileNetV2;
  • Because the Adam Optimizer is being used, the proposed method achieves quicker convergence;
  • The entropy-based controlled feature selection scheme is employed to select the best features: based on the entropy value, the method removes unnecessary and redundant attributes and selects only the highest-priority features.
The major limitations of this study are as follows: (a) the methodology was implemented for 2-D MRI images; and (b) the feature selection method is somewhat time-consuming. In the future, we will use 3D brain imaging to achieve even more effective segmentation of brain tumors.

6. Conclusions

In recent years, demand for image-processing-based diagnostic computer systems has grown, enabling radiologists to speed up diagnosis while simultaneously assisting patients. Brain tumors are among the deadliest and most life-threatening cancers, affecting many individuals globally. A variety of brain tumor segmentation and classification methods have been suggested to enhance medical image analysis. These algorithms, however, suffer from a number of drawbacks: low-contrast images, incorrect tumor region segmentation caused by artifacts, computationally complex methods that need more processing time to correctly identify the tumor region, and existing deep learning methods that need a large amount of training data to overcome overfitting.
The brain tumor detection and classification scheme proposed in this paper aims to address the aforementioned concerns. In our study, as a pre-processing step, we used linear contrast stretching to refine detail at the edges of an image. A 17-layered CNN architecture is proposed for brain tumor segmentation, and a modified MobileNetV2 architecture is used for feature extraction and trained using transfer learning. The features are then selected using the entropy-based controlled method, and the M-SVM framework is used to classify brain tumors.
An experimental study reveals that the proposed method obtained enhanced performance in visual and comprehensive information extraction compared to current methods. The proposed classification method for the detection of brain tumors achieves accuracies of 97.47% and 98.92% on the BraTS 2018 and figshare datasets, respectively. The proposed method outperforms existing methods in the detection and classification of brain tumors using MRI, yielding visually and quantitatively superior results.

Author Contributions

Conceptualization: S.M.; methodology: S.M. and R.D.; software: S.M.; validation: S.M.; formal analysis: S.M. and R.D.; investigation: S.M. and R.D.; data curation: S.M. and R.D.; writing—original draft preparation: S.M.; writing—review & editing, R.D.; supervision: R.D. and R.M.; funding acquisition: R.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data will be available upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bauer, S.; May, C.; Dionysiou, D.; Stamatakos, G.; Buchler, P.; Reyes, M. Multiscale modeling for image analysis of brain tumor studies. IEEE Trans. Biomed. Eng. 2011, 59, 25–29. [Google Scholar] [CrossRef] [PubMed]
  2. Havaei, M.; Davy, A.; Warde-Farley, D.; Biard, A.; Courville, A.; Bengio, Y.; Larochelle, H. Brain tumor segmentation with deep neural networks. Med. Image Anal. 2017, 35, 18–31. [Google Scholar] [CrossRef] [PubMed]
  3. Khan, M.A.; Arshad, H.; Nisar, W.; Javed, M.Y.; Sharif, M. An integrated design of fuzzy C-means and NCA-based multi-properties feature reduction for brain tumor recognition. In Signal and Image Processing Techniques for the Development of Intelligent Healthcare Systems; Springer: Berlin/Heidelberg, Germany, 2021; pp. 1–28. [Google Scholar]
  4. Maqsood, S.; Damasevicius, R.; Shah, F.M. An efficient approach for the detection of brain tumor using fuzzy logic and U-NET CNN classification. In International Conference on Computational Science and Its Applications; Springer: Cham, Switzerland, 2021; pp. 105–118. [Google Scholar]
  5. Amin, J.; Sharif, M.; Yasmin, M.; Fernandes, S.L. Big data analysis for brain tumor detection: Deep convolutional neural networks. Future Gener. Comput. Syst. 2018, 87, 290–297. [Google Scholar] [CrossRef]
  6. Ostrom, T.Q.; Gittleman, H.; Liao, P.; Vecchione-Koval, T.; Wolinsky, Y.; Kruchko, C.; Barnholtz-Sloan, J.S. CBTRUS statistical report: Primary brain and other central nervous system tumors diagnosed in the United States in 2010–2014. In Neuro-oncology; Oxford University Press: Oxford, UK, 2017; Volume 19, pp. v1–v88. [Google Scholar] [CrossRef]
  7. Nawaz, S.A.; Khan, D.M.; Qadri, S. Brain Tumor Classification Based on Hybrid Optimized Multi-features Analysis Using Magnetic Resonance Imaging Dataset. Appl. Artif. Intell. 2022, 36, 1–27. [Google Scholar] [CrossRef]
  8. Ke, Q.; Zhang, J.; Wei, W.; Damaševičius, R.; Woźniak, M. Adaptive independent subspace analysis of brain magnetic resonance imaging data. IEEE Access 2019, 7, 12252–12261. [Google Scholar] [CrossRef]
  9. Jansson, D.; Dieriks, V.B.; Rustenhoven, J.; Smyth, L.C.; Scotter, E.; Aalderink, M.; Dragunow, M. Cardiac glycosides target barrier inflammation of the vasculature, meninges and choroid plexus. Commun. Biol. 2021, 4, 1–17. [Google Scholar] [CrossRef]
  10. Pereira, S.; Pinto, A.; Alves, V.; Silva, C.A. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Trans. Med. Imaging 2016, 35, 1240–1251. [Google Scholar] [CrossRef]
  11. Wadhwa, A.; Bhardwaj, A.; Singh Verma, V. A review on brain tumor segmentation of MRI images. Magn. Reson. Imaging 2019, 61, 247–259. [Google Scholar] [CrossRef]
  12. Ohgaki, H.; Kleihues, P. Population-based studies on incidence, survival rates, and genetic alterations in astrocytic and oligodendroglial gliomas. J. Neuropathol. Exp. Neurol. 2005, 64, 479–489. [Google Scholar] [CrossRef]
  13. Dong, H.; Yang, G.; Liu, F.; Mo, Y.; Guo, Y. Automatic brain tumor detection and segmentation using U-Net based fully convolutional networks. In Annual Conference on Medical Image Understanding and Analysis; Springer: Cham, Switzerland, 2017; pp. 506–517. [Google Scholar]
  14. Abd-Ellah, M.K.; Awad, A.I.; Khalaf, A.A.M.; Hamed, H.F.A. A review on brain tumor diagnosis from MRI images: Practical implications, key achievements, and lessons learned. Magn. Reson. Imaging 2019, 61, 300–318. [Google Scholar] [CrossRef]
  15. Sharma, A.K.; Nandal, A.; Dhaka, A.; Dixit, R. A survey on machine learning based brain retrieval algorithms in medical image analysis. Health Technol. 2020, 10, 1359–1373. [Google Scholar] [CrossRef]
  16. Ali, S.; Li, J.; Pei, Y.; Khurram, R.; Rehman, K.; Mahmood, T. A comprehensive survey on brain tumor diagnosis using deep learning and emerging hybrid techniques with multi-modal MR image. Arch. Comput. Methods Eng. 2022, 1–26. [Google Scholar] [CrossRef]
  17. Arabahmadi, M.; Farahbakhsh, R.; Rezazadeh, J. Deep learning for smart Healthcare—A survey on brain tumor detection from medical imaging. Sensors 2022, 22, 1960. [Google Scholar] [CrossRef] [PubMed]
  18. Magadza, T.; Viriri, S. Deep learning for brain tumor segmentation: A survey of state-of-the-art. J. Imaging 2021, 7, 19. [Google Scholar] [CrossRef] [PubMed]
  19. Valverde, J.M.; Imani, V.; Abdollahzadeh, A.; De Feo, R.; Prakash, M.; Ciszek, R.; Tohka, J. Transfer learning in magnetic resonance brain imaging: A systematic review. J. Imaging 2021, 7, 66. [Google Scholar] [CrossRef] [PubMed]
  20. Muzammil, S.R.; Maqsood, S.; Haider, S.; Damaševičius, R. CSID: A novel multimodal image fusion algorithm for enhanced clinical diagnosis. Diagnostics 2020, 10, 904. [Google Scholar] [CrossRef]
  21. Maqsood, S.; Damaševičius, R.; Maskeliūnas, R. TTCNN: A Breast Cancer Detection and Classification towards Computer-Aided Diagnosis Using Digital Mammography in Early Stages. Appl. Sci. 2022, 12, 3273. [Google Scholar] [CrossRef]
  22. Maqsood, S.; Damaševičius, R.; Maskeliūnas, R. Hemorrhage detection based on 3D CNN deep learning framework and feature fusion for evaluating retinal abnormality in diabetic patients. Sensors 2021, 21, 3865. [Google Scholar] [CrossRef]
  23. Maqsood, S.; Damasevicius, R.; Siłka, J.; Woźniak, M. Multimodal Image Fusion Method Based on Multiscale Image Matting. In International Conference on Artificial Intelligence and Soft Computing; Springer: Cham, Switzerland, 2021; pp. 57–68. [Google Scholar]
  24. Sobhaninia, Z.; Rezaei, S.; Noroozi, A.; Ahmadi, M.; Zarrabi, H.; Karimi, N.; Samavi, S. Brain tumor segmentation using deep learning by type specific sorting of images. arXiv 2018, arXiv:1809.07786. [Google Scholar]
  25. Johnpeter, J.H.; Ponnuchamy, T. Computer aided automated detection and classification of brain tumors using CANFIS classification method. Int. J. Imaging Syst. Technol. 2019, 29, 431–438. [Google Scholar] [CrossRef]
  26. Toğaçar, M.; Ergen, B.; Cömert, Z. BrainMRNet: Brain tumor detection using magnetic resonance images with a novel convolutional neural network model. Med. Hypotheses 2020, 134, 109531. [Google Scholar] [CrossRef] [PubMed]
  27. Kibriya, H.; Amin, R.; Alshehri, A.H.; Masood, M.; Alshamrani, S.S.; Alshehri, A. A Novel and Effective Brain Tumor Classification Model Using Deep Feature Fusion and Famous Machine Learning Classifiers. Comput. Intell. Neurosci. 2022, 7897669. [Google Scholar] [CrossRef] [PubMed]
  28. Sajjad, M.; Khan, S.; Muhammad, K.; Wu, W.; Ullah, A.; Baik, S.W. Multi-grade brain tumor classification using deep CNN with extensive data augmentation. J. Comput. Sci. 2019, 30, 174–182. [Google Scholar] [CrossRef]
  29. Shanthakumar, P.; Ganesh Kumar, P. Computer aided brain tumor detection system using watershed segmentation techniques. Int. J. Imaging Syst. Technol. 2015, 25, 297–301. [Google Scholar] [CrossRef]
  30. Prastawa, M.; Bullitt, E.; Ho, S.; Gerig, G. A brain tumor segmentation framework based on outlier detection. Med. Image Anal. 2004, 8, 275–283. [Google Scholar] [CrossRef] [PubMed]
  31. Gumaei, A.; Hassan, M.M.; Hassan, M.R.; Alelaiwi, A.; Fortino, G. A hybrid feature extraction method with regularized extreme learning machine for brain tumor classification. IEEE Access 2019, 7, 36266–36273. [Google Scholar] [CrossRef]
  32. Swati, Z.N.K.; Zhao, Q.; Kabir, M.; Ali, F.; Ali, Z.; Ahmed, S.; Lu, J. Brain tumor classification for mr images using transfer learning and fine-tuning. Comput. Med. Imaging Graph. 2019, 75, 34–46. [Google Scholar] [CrossRef]
  33. Kumar, R.L.; Kakarla, J.; Isunuri, B.V.; Singh, M. Multi-class brain tumor classification using residual network and global average pooling. Multimed. Tools Appl. 2021, 80, 13429–13438. [Google Scholar] [CrossRef]
  34. Kadry, S.; Taniar, D.; Damasevicius, R.; Rajinikanth, V. Automated detection of schizophrenia from brain MRI slices using optimized deep-features. In Proceedings of the 2021 IEEE 7th International Conference on Bio Signals, Images and Instrumentation, ICBSII 2021, Chennai, India, 25–27 March 2021. [Google Scholar] [CrossRef]
  35. Odusami, M.; Maskeliūnas, R.; Damaševičius, R. An intelligent system for early recognition of Alzheimer’s disease using neuroimaging. Sensors 2022, 22, 740. [Google Scholar] [CrossRef]
  36. Maqsood, S.; Javed, U.; Riaz, M.M.; Muzammil, M.; Muhammad, F.; Kim, S. Multiscale image matting based multi-focus image fusion technique. Electronics 2020, 9, 472. [Google Scholar] [CrossRef]
  37. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  38. Jang, B.-S.; Park, A.J.; Jeon, S.H.; Kim, I.H.; Lim, D.H.; Park, S.-H.; Lee, J.H.; Chang, J.H.; Cho, K.H.; Kim, J.H.; et al. Machine Learning Model to Predict Pseudoprogression Versus Progression in Glioblastoma Using MRI: A Multi-Institutional Study (KROG 18-07). Cancers 2020, 12, 2706. [Google Scholar] [CrossRef] [PubMed]
  39. Vankdothu, R.; Hameed, M.A.; Fatima, H. A Brain Tumor Identification and Classification Using Deep Learning based on CNN-LSTM Method. Comput. Electr. Eng. 2022, 101, 107960. [Google Scholar] [CrossRef]
  40. Fasihi, M.S.; Mikhael, W.B. Brain tumor grade classification Using LSTM Neural Networks with Domain Pre-Transforms. In Proceedings of the 2021 IEEE International Midwest Symposium on Circuits and Systems (MWSCAS), Lansing, MI, USA, 9–11 August 2021. [Google Scholar]
  41. Kale, G.A.; Yüzgeç, U. Advanced strategies on update mechanism of Sine Cosine Optimization Algorithm for feature selection in classification problems. Eng. Appl. Artif. Intell. 2022, 107, 104506. [Google Scholar] [CrossRef]
  42. Nanfang Hospital and General Hospital, Tianjin Medical University: Tianjin, China. Available online: https://figshare.com/articles/dataset/brain_tumor_dataset/1512427/5 (accessed on 9 June 2022).
  43. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Van Leemput, K. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imaging 2014, 34, 1993–2024. [Google Scholar] [CrossRef] [PubMed]
  44. Sharif, M.I.; Li, J.P.; Khan, M.A.; Saleem, M.A. Active deep neural network features selection for segmentation and recognition of brain tumors using MRI images. Pattern Recognit. Lett. 2020, 129, 181–189. [Google Scholar] [CrossRef]
  45. Amin, J.; Sharif, M.; Raza, M.; Saba, T.; Sial, R.; Shad, S.A. Brain tumor detection: A long short-term memory (LSTM)-based learning model. Neural Comput. Appl. 2020, 32, 15965–15973. [Google Scholar] [CrossRef]
  46. Narmatha, C.; Eljack, S.M.; Tuka, A.A.R.M.; Manimurugan, S.; Mustafa, M. A hybrid fuzzy brain-storm optimization algorithm for the classification of brain tumor MRI images. J. Ambient. Intell. Humaniz. Comput. 2020, 1–9. [Google Scholar] [CrossRef]
  47. Khan, M.A.; Ashraf, I.; Alhaisoni, M.; Damaševičius, R.; Scherer, R.; Rehman, A.; Bukhari, S.A.C. Multimodal brain tumor classification using deep learning and robust feature selection: A machine learning application for radiologists. Diagnostics 2020, 10, 565. [Google Scholar] [CrossRef]
  48. Cheng, J.; Huang, W.; Cao, S.; Yang, R.; Yang, W.; Yun, Z.; Feng, Q. Enhanced performance of brain tumor classification via tumor region augmentation and partition. PLoS ONE 2015, 10, e0140381. [Google Scholar] [CrossRef]
  49. Badža, M.M.; Barjaktarović, M.Č. Classification of brain tumors from MRI images using a convolutional neural network. Appl. Sci. 2020, 10, 1999. [Google Scholar] [CrossRef]
  50. Tripathi, P.C.; Bag, S. Non-invasively grading of brain tumor through noise robust textural and intensity based features. In Computational Intelligence in Pattern Recognition; Springer: Singapore, 2020; pp. 531–539. [Google Scholar]
  51. Ahuja, S.; Panigrahi, B.K.; Gandhi, T.K. Enhanced performance of Dark-Nets for brain tumor classification and segmentation using colormap-based superpixel techniques. Mach. Learn. Appl. 2022, 7, 100212. [Google Scholar] [CrossRef]
  52. Noreen, N.; Palaniappan, S.; Qayyum, A.; Ahmad, I.O.; Alassafi, M. Brain Tumor Classification Based on Fine-Tuned Models and the Ensemble Method. Comput. Mater. Contin. 2021, 67, 3967–3982. [Google Scholar] [CrossRef]
  53. Bodapati, J.D.; Shaik, N.S.; Naralasetti, V.; Mundukur, N.B. Joint training of two-channel deep neural network for brain tumor classification. Signal Image Video Process. 2020, 15, 753–760. [Google Scholar] [CrossRef]
  54. Anaraki, A.K.; Ayati, M.; Kazemi, F. Magnetic resonance imaging-based brain tumor grades classification and grading via convolutional neural networks and genetic algorithms. Biocybern. Biomed. Eng. 2019, 39, 63–74. [Google Scholar] [CrossRef]
  55. Deepak, S.; Ameer, P.M. Brain tumor classification using deep cnn features via transfer learning. Comput. Biol. Med. 2019, 111, 103345. [Google Scholar] [CrossRef]
  56. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Int. J. Comput. Vis. 2019, 128, 336–359. [Google Scholar] [CrossRef]
Figure 1. Proposed brain tumor segmentation and classification framework.
Figure 2. Examples of non-meningioma benign brain images.
Figure 3. Examples of malignant meningioma brain images.
Figure 4. Linear contrast stretch outcomes. (a) Input brain MRI, (b) Final contrast-stretched image.
Figure 5. Proposed custom 17-layered CNN architecture for brain tumor segmentation.
Figure 6. Structure of the MobileNetV2 and LSTM hybrid network ($\sigma_o$: output gate; $\sigma_i$: input gate; $\sigma_f$: forget gate).
Figure 7. Meningioma brain image. (a) Source MRI, (b) Segmented tumor image, and (c) Extraction of tumor.
Figure 8. Receiver operating characteristic (ROC) curve of the proposed meningioma detection method.
Figure 9. Confusion matrix for the classification of brain tumors.
Figure 10. Localization of tumor using Grad-CAM on brain MRI.
Table 1. Detailed summaries of current research on the detection and classification of brain tumors.

| References | Methods Used | Modality | Results |
|---|---|---|---|
| Maqsood et al. [4] | Fuzzy logic and U-NET CNN classification | MRI | Accuracy = 98.59% |
| Sobhaninia et al. [24] | LinkNet networks | MRI | Dice Score = 0.79 |
| Johnpeter et al. [25] | Fusion-based CANFIS classifier | MRI | Accuracy = 98.80% |
| Togacar et al. [26] | BrainMRNet | MRI | Accuracy = 96.05% |
| Kibriya et al. [27] | CNN, SVM, and KNN | MRI | Accuracy = 97.70% |
| Sajjad et al. [28] | Cascade CNN and VGG19 | MRI | Accuracy = 94.58% |
| Shanthakumar [29] | Gray Level Co-occurrence and SVM | MRI | Accuracy = 94.52% |
| Prastawa et al. [30] | Geometric and Spatial Constraints | MRI | Accuracy = 88.17% |
| Gumaei et al. [31] | PCA-NGIST and RELM | MRI | Accuracy = 94.23% |
| Swati et al. [32] | Fine-tuned VGG19 | MRI | Accuracy = 94.82% |
| Kumar et al. [33] | ResNet50 and Global Average Pooling | MRI | Accuracy = 97.48% |
Table 2. Proposed CNN architecture layers.

| Layer | Name | Description | Type | Activations | Learnables |
|---|---|---|---|---|---|
| 1 | Input | 256 × 256 × 3 images with "zero center" normalization | Image Input | 256 × 256 × 3 | - |
| 2 | Conv_1 | 32 3 × 3 × 3 convolutions, stride [1 1], padding 'same' | Convolution | 256 × 256 × 32 | Weights 3 × 3 × 3 × 32; Bias 1 × 1 × 32 |
| 3 | ReLu_1 | relu | ReLU | 256 × 256 × 32 | - |
| 4 | Conv_2 | 64 3 × 3 × 32 convolutions, stride [1 1], padding 'same' | Convolution | 128 × 128 × 64 | Weights 3 × 3 × 32 × 64; Bias 1 × 1 × 64 |
| 5 | ReLu_2 | relu | ReLU | 128 × 128 × 64 | - |
| 6 | Conv_3 | 128 3 × 3 × 64 convolutions, stride [1 1], padding 'same' | Convolution | 128 × 128 × 128 | Weights 3 × 3 × 64 × 128; Bias 1 × 1 × 128 |
| 7 | ReLu_3 | relu | ReLU | 128 × 128 × 128 | - |
| 8 | Maxpool_1 | 5 × 5 max pooling, stride [1 1], padding 'same' | Max Pooling | 64 × 64 × 128 | - |
| 9 | Conv_4 | 128 3 × 3 × 128 convolutions, stride [1 1], padding 'same' | Convolution | 64 × 64 × 256 | Weights 3 × 3 × 128 × 256; Bias 1 × 1 × 256 |
| 10 | ReLu_4 | relu | ReLU | 64 × 64 × 256 | - |
| 11 | Maxpool_2 | 5 × 5 max pooling, stride [1 1], padding 'same' | Max Pooling | 32 × 32 × 256 | - |
| 12 | Conv_5 | 512 3 × 3 × 256 convolutions, stride [1 1], padding 'same' | Convolution | 32 × 32 × 512 | Weights 3 × 3 × 256 × 512; Bias 1 × 1 × 512 |
| 13 | ReLu_5 | relu | ReLU | 32 × 32 × 512 | - |
| 14 | Transposed conv | 256 3 × 3 × 512 transposed convolutions, stride [1 1], cropping 'same' | Transposed Convolution | 32 × 32 × 512 | Weights 3 × 3 × 256 × 512; Bias 1 × 1 × 512 |
| 15 | Conv_6 | 1024 3 × 3 × 512 convolutions, stride [1 1], padding 'same' | Convolution | 16 × 16 × 1024 | Weights 3 × 3 × 256 × 1024; Bias 1 × 1 × 1024 |
| 16 | Softmax | softmax | Softmax | 1 × 1 × 256 | - |
| 17 | Pixel class | Cross entropy loss | Pixel Classification | - | - |
Table 3. Quantitative assessment of the proposed method using M-SVM classification.

| Evaluation Metrics | Performance |
|---|---|
| Accuracy (Acc) | 97.47% |
| Sensitivity (Sen) | 97.22% |
| Specificity (Spe) | 97.94% |
| Dice coefficient index (Dci) | 96.71% |
Table 4. Performance comparison with existing methods.

| Authors | Methods | Classification Accuracy |
|---|---|---|
| Irfan et al. [44] | CNN, LBP, & PSO | 92.50% |
| Amin et al. [45] | LSTM | 93.85% |
| Narmatha et al. [46] | Brain-storm optimization | 92.50% |
| Khan et al. [47] | DCT, CNN, & ELM | 93.40% |
| Proposed Method | 17-layered CNN, MobileNetV2 & M-SVM | 97.47% |
Table 5. Quantitative assessment of the proposed method using M-SVM classification.

| Evaluation Metrics | Performance |
|---|---|
| Accuracy (Acc) | 98.92% |
| Sensitivity (Sen) | 98.82% |
| Specificity (Spe) | 99.02% |
| Dice coefficient index (Dci) | 97.87% |
Table 6. Performance comparison with existing methods.

| Authors | Methods | Classification Accuracy |
|---|---|---|
| Maqsood et al. [4] | U-NET CNN | 98.59% |
| Sajjad et al. [28] | VGG19 & image augmentation | 94.58% |
| Gumaei et al. [31] | Regularized Extreme Learning Machine | 94.23% |
| Swati et al. [32] | Fine-tuned VGG19 | 94.82% |
| Kumar et al. [33] | ResNet50 & Global Average Pooling | 97.48% |
| Cheng et al. [48] | Linear discriminant analysis (LDA) | 93.60% |
| Badza et al. [49] | CNN | 96.50% |
| Tripathi et al. [50] | SVM | 94.63% |
| Ahuja et al. [51] | DarkNet-53 | 98.15% |
| Noreen et al. [52] | InceptionV3 & ensemble of KNN, SVM & RF | 94.34% |
| Bodapati et al. [53] | Two-channel DNN | 97.23% |
| Anaraki et al. [54] | CNN & Genetic Algorithm | 94.20% |
| Deepak et al. [55] | GoogleNet | 97.10% |
| Proposed Method | 17-layered CNN, MobileNetV2 & M-SVM | 98.92% |
Table 7. Performance comparison of various pre-trained models on the brain MRI dataset.

| Network | Image Size | Parameters (Millions) | Depth | Updated Layers | Training Accuracy |
|---|---|---|---|---|---|
| ResNet18 | 224 × 224 × 3 | 12 | 18 | 71 | 90.3% |
| DenseNet201 | 224 × 224 × 3 | 20 | 201 | 708 | 91.5% |
| SqueezeNet | 227 × 227 × 3 | 2 | 18 | 68 | 92.7% |
| Inceptionv3 | 299 × 299 × 3 | 24 | 48 | 315 | 95.3% |
| DarkNet19 | 256 × 256 × 3 | 21 | 19 | 64 | 97.7% |
| MobileNetV2 | 224 × 224 × 3 | 4 | 53 | 154 | 98.8% |
Table 8. Optimizer functions’ performance with the fine-tuned MobileNetV2 model on the brain MRI dataset.

| Optimizer | Accuracy | Sensitivity | Specificity |
|---|---|---|---|
| Sgdm | 98.16% | 97.71% | 98.25% |
| RMSprop | 98.78% | 97.89% | 98.86% |
| Adam | 99.31% | 98.76% | 99.42% |
Table 9. Accuracy comparison using different cross-validation systems for the brain MRI dataset.

| Method | Cross-Validation | Accuracy |
|---|---|---|
| GoogleNet (Deepak et al. [55]) | 5-fold | 97.10% |
| DarkNet-53 (Ahuja et al. [51]) | 5-fold | 98.15% |
| U-Net CNN (Maqsood et al. [4]) | 5-fold | 98.59% |
| Proposed | 5-fold | 98.92% |
Table 10. MobileNetV2-based classification results for the brain MRI dataset.

| Method | Sensitivity | Accuracy | Time (s) |
|---|---|---|---|
| Fine tree | 89.00% | 89.20% | 28.60 |
| E-Bst tree | 96.25% | 96.40% | 577.68 |
| Fine KNN | 97.50% | 97.70% | 37.78 |
| M-SVM | 98.82% | 98.92% | 15.64 |