Article

Advanced Deep Learning Approaches for Accurate Brain Tumor Classification in Medical Imaging

1 Department of Computer Science, Faculty of Computers and Information, Kafrelsheikh University, Kafrelsheikh 33516, Egypt
2 Department of Computer and Information Systems, Sadat Academy for Management Sciences, Cairo 11728, Egypt
3 Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
4 Department of Computer Applications, Govt. Degree College Sumbal, Bandipora, Jammu and Kashmir 193501, India
5 Radiological Sciences Department, College of Applied Medical Sciences, King Khalid University, Abha 61421, Saudi Arabia
6 BioImaging Unit, Space Research Centre, Michael Atiyah Building, University of Leicester, Leicester LE1 7RH, UK
7 Electrical Engineering Department, College of Engineering, King Khalid University, Abha 61421, Saudi Arabia
8 Electronics and Communications Department, College of Engineering, Delta University for Science and Technology, Gamasa 35712, Egypt
9 Prince Laboratory Research, ISITcom, University of Sousse, Hammam Sousse 4023, Tunisia
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(3), 571; https://doi.org/10.3390/sym15030571
Submission received: 6 January 2023 / Revised: 8 February 2023 / Accepted: 14 February 2023 / Published: 22 February 2023

Abstract
A brain tumor can affect the symmetry of a person's face or head, depending on its location and size. A tumor located in an area that affects the muscles responsible for facial symmetry can cause asymmetry; however, not all brain tumors do. Some tumors lie in areas that do not affect facial symmetry or head shape, and the asymmetry caused by a tumor may be subtle and not easily noticeable, especially in the early stages of the condition. Brain tumor classification using deep learning applies artificial neural networks to medical images of the brain to classify them as either benign (not cancerous) or malignant (cancerous). In medical imaging, Convolutional Neural Networks (CNNs) have been used for tasks such as brain tumor classification, and trained models can then assist in the diagnosis of new cases. Brain tissues can be analyzed using magnetic resonance imaging (MRI). Misdiagnosing the form of a brain tumor significantly lowers a patient's chances of survival. Reviewing a patient's MRI scans is a common way to detect existing brain tumors, but this approach takes a long time and is prone to human error when dealing with large amounts of data and various kinds of brain tumors. In the proposed research, CNN models were trained to detect the three most prevalent forms of brain tumors, i.e., Glioma, Meningioma, and Pituitary; they were optimized using the Aquila Optimizer (AQO), which was used for initial population generation and modification on the selected dataset, divided into 80% for the training set and 20% for the testing set. We used the VGG-16, VGG-19, and Inception-V3 architectures with the AQO optimizer for training and validation on the brain tumor dataset and obtained the best accuracy of 98.95% for the VGG-19 model.

1. Introduction

Symmetry refers to the property of a shape or object that remains unchanged under certain transformations, such as reflection or rotation. Asymmetry, by contrast, is the absence of this property: the shape or object changes under the same transformations. These concepts apply to many fields, such as mathematics, physics, art, and biology. In physics, symmetry plays an important role in the laws of nature, such as the conservation of energy and momentum. In biology, symmetry is observed in living organisms and is used to classify them into different groups. In brain tumor classification, the symmetry/asymmetry phenomenon plays a crucial role in determining the type and grade of a tumor. Tumors that exhibit symmetry in their shape and structure are typically classified as low-grade, while asymmetrical tumors are classified as high-grade. This is because symmetrical tumors tend to grow slowly and have a lower risk of spreading to other parts of the brain, while asymmetrical tumors tend to grow rapidly and have a higher risk of spreading.
Additionally, the use of imaging techniques such as MRI and CT scans can help to identify the symmetry/asymmetry of a tumor, providing important information for the diagnostic process. For example, a symmetrical tumor will have a distinct, well-defined border and a smooth surface, while an asymmetrical tumor will have an irregular border and a rough surface. Overall, the symmetry/asymmetry phenomenon plays a key role in the diagnosis and classification of brain tumors, helping to determine the appropriate course of treatment for patients.
Tumors of the brain are collections of aberrant cells that develop inside the brain's tissues. Brain tumors fall into two main categories: benign and malignant. Benign brain tumors can be removed surgically; malignant brain tumors, however, are among the worst cancers and can result in death [1]. Primary brain tumors can also be distinguished from metastatic ones that have spread to the brain from other regions of the body. Among adults, primary central nervous system lymphoma and gliomas are the most prevalent primary brain tumors, with gliomas accounting for over 80% of all malignant brain tumors. Distinct symptoms, such as headache, vomiting, visual deterioration, seizures, and disorientation, emerge with different lesion regions. The more aggressive a brain tumor is, the higher the grade it is assigned in a more thorough categorization. There were an estimated 308,000 new cases of brain cancer in 2020 [2], accounting for approximately 1.6 percent of all new cancer cases, and approximately 251,000 deaths from brain tumors, accounting for approximately 2.5 percent of all cancer fatalities.
In the United States, brain tumors were the greatest cause of cancer-related mortality in children (ages 0–14) in 2016 and were rated higher than leukemia [3]. A tumor of the central nervous system (CNS) or brain is the third most prevalent malignancy among adolescents and young adults (15–39 years old) [4]. Tumors in the brain need a variety of treatment options. Tumors in traditional computer-aided diagnostic systems are discovered and segmented before classification into several classes. Feature extraction and classification are performed on the segmented area after tumor mass segmentation.
The effective treatment of brain cancers requires early identification [5]. The advancement of medical imaging methods has made it possible for clinicians to get a clear picture of the human brain structure, which is helpful in the diagnosis and treatment of brain tumors. Using these imaging tools, physicians may get a better idea of the size, shape, and location of a patient’s brain tumors, which helps them choose the best course of therapy. In neurology, magnetic resonance imaging (MR imaging) is a frequent scanning approach. When a very powerful magnetic field and radiofrequency signals are used to stimulate a target tissue, the result is a picture of the tissue inside. Soft tissue contrast is enhanced, yet ionizing radiation exposure is avoided. To identify brain lesions, MR imaging is more appropriate [6].
As a result of its success in the area of intelligent health, artificial intelligence has gained a lot of attention recently. In the field of medical image processing, classifying and segmenting MR images using artificial intelligence approaches has become a hot topic [7]. Brain tumor classification has two primary applications: classifying brain images into normal and abnormal, that is, whether or not the image includes a tumor, and classification within abnormal brain images, that is, discrimination between several kinds of brain tumors. To properly classify the many types of brain tumors, more than a simple binary system is required. Permeable brain tumors pose the greatest challenge: their appearance is highly variable, their position is unpredictable, and the voxels in each subregion vary greatly [8,9].
The purpose of this research is to develop a brain tumor MR image classification approach that is both automated and effective so that clinicians can make better decisions. In this paper, a new model is implemented for detecting and classifying brain tumors based on pre-trained CNNs such as VGG16, VGG19, and InceptionV3. The learned parameters from the pre-trained models are transferred to the brain tumor model to improve model performance. The AQO algorithm is applied for network optimization.
The rest of this paper is organized as follows. Section 2 discusses the related work, whereas a description of the presented model for brain tumor classification is presented in Section 3. The experimental results are presented in Section 4. Finally, the conclusion is presented in Section 5.

2. Related Work

For the purpose of classifying different types of brain tumors, Shaveta and Meghna conducted a comparative examination of several CNN-based transfer learning models as well as an exploratory investigation of brain MRI images for the categorization of brain tumors in MRI images. They used several methods including medical image processing, pattern analysis, and computer vision to enhance, segment, and classify brain diagnoses to accurately detect and categorize brain malignancies [10].
In order to predict brain stroke from CT/MRI scan images, Meenakshi and Revathy devised a deep learning model that makes use of a back propagation neural network classifier. The suggested model’s effectiveness and accuracy are assessed and contrasted with those of existing models, and it yields results with high sensitivity, specificity, precision, and accuracy [11].
Khan et al. introduced an automated multimodal classification method utilizing deep learning. There are five main steps in the suggested procedure. In the first step, linear contrast stretching is implemented using edge-based histogram equalization and the discrete cosine transform (DCT).
DL feature extraction is carried out in the second stage. Two pre-trained convolutional neural network (CNN) models, namely VGG16 and VGG19, were employed for feature extraction by using transfer learning. The extreme learning machine (ELM) and a correntropy-based joint learning strategy were both used in the third stage to choose the best features. The robust covariant features based on partial least squares (PLS) were combined into one matrix in the fourth phase. The suggested method was tested on the BraTS datasets, and for BraTs2015, BraTs2017, and BraTs2018 accuracies of 97.8%, 96.9%, and 92.5%, respectively, were attained [12].
The essential phases of the Deep Learning-based Brain Tumor Classification (BTC) technique, including pre-processing, function extraction, and identification, as well as their benefits and limitations, were covered by Padmapriya et al.
By conducting in-depth tests using transfer learning but without record extension, they discussed BTC’s convolutional neural network frames. They introduced a thorough study of surveys that had already been done and the most recent BTC deep learning techniques in this analysis [13].
For identifying different forms of brain tumors, Somaya et al. suggested a convolutional neural network architectural model. The proposed network structure was discovered to give a noteworthy performance, with a 96.05% overall best accuracy. The results showed that the suggested model can categorize brain tumors for a variety of applications, and they also support the idea that proper pre-processing and data augmentation will result in a precise classification. The effectiveness of various existing object detecting techniques is assessed [14].
Sohaib et al. identified specific brain cancers using MRI by setting out to build a reliable and effective system based on the transfer learning technique. To extract deep features from brain MRI, pre-trained models such as Xception, NasNet Large, DenseNet121, and InceptionResNetV2 were employed. Two benchmark datasets that are freely available online were used in the experiment. On a brain MRI dataset, deep transfer learning models were trained and assessed using three different optimization strategies (ADAM, SGD, and RMSprop). Accuracy, sensitivity, precision, specificity, and F1-score were used to assess the performance of the transfer learning models. Their proposed CNN model, which is built on the Xception architecture and uses the ADAM optimizer, is superior to the other three proposed models, according to the experimental data. On the MRI-large dataset, the Xception model obtained accuracy, sensitivity, precision, specificity, and F1-score values of 99.67%, 99.68%, 99.66%, and 99.68%, respectively; on the MRI-small dataset, it obtained values of 91.94%, 96.55%, 87.50%, 87.88%, and 91.80%, respectively. The suggested method outperforms the available literature, demonstrating that it is capable of rapidly and correctly identifying brain cancers [15].
Using Deep Learning (DL) and Machine Learning (ML) methodologies, Hareem et al. investigated multiclass brain tumor classification methods. First, end-to-end Convolutional Neural Network (CNN) models were used to classify the brain MRI images. The deep features extracted from the CNN models were also classified using a Support Vector Machine (SVM). The proposed method was tested and evaluated on 15,320 MRI images, and the CNN-SVM-based method achieved the greatest accuracy of 98 percent. Their suggested method fared better than existing systems for identifying brain tumors and can help doctors find brain tumors and make important treatment decisions for patients [16].
Dinesh et al. proposed a novel deep learning method for classifying brain tumors on MRI images. MR images were used to pre-train a deep neural network as a discriminator in a generative adversarial network (GAN) utilizing a multi-scale gradient GAN (MSGGAN) with auxiliary classification. In the discriminator, one fully connected block served as an auxiliary classifier while the other served as an adversary. The auxiliary block's fully connected layers were precisely tailored to categorize the type of tumor. The suggested method was evaluated using four different classes from two publicly accessible MRI datasets (glioma, meningioma, pituitary, and no tumor). Their suggested strategy outperformed state-of-the-art techniques with an accuracy rate of 98.57 percent. Additionally, their approach appears to be a practical solution in cases where few medical image resources are available [17].
Two techniques are presented by Soukaina and Hamdi for the recognition of brain tumors in medical imaging. The first is based on Deep Learning and uses the U-net architecture, which has demonstrated reliability when it comes to picture segmentation, particularly for medical imaging. A second method, which makes use of LBP and k-means methodologies, will compare the outcomes. By computing the class correlation, the Markov method’s identified classes are enhanced. The comparison was conducted using the identical BraTS2019 dataset, which will allow us to gauge how well each performed [18].
Arumaiththurai and Mayurathan [19] described two methods for categorizing brain tumors using ML and DL methods. The first suggested methodology, based on ML techniques, makes use of 16 statistical characteristics; decision trees and SVM are utilized for classification. The second suggested DL method classifies tumors using the pre-trained CNN models VGG19 and ResNet152. Their CNN-based solution obtains 94.67 percent accuracy in their trials and performs better in terms of classification performance, while the technique that uses statistical characteristics and an SVM classifier reaches only 80.54 percent. It may be concluded that a CNN-based strategy outperforms the other suggested approaches and cutting-edge methods in terms of classification performance [20].
Sasmitha et al. suggested an interpretable deep learning model that is more human-understandable than current black-box methods. To segment and define brain cancers using MRI, they verified the system using the MICCAI 2020 Brain Tumor dataset, and their suggested model generates a heat map illustrating the importance of each region of the input to the classification result [21].
In addition, the Br35H dataset [22] was used by Naseer et al. and Kang et al. to test their respective tumor and non-tumor classification approaches. Naseer et al. implemented a twelve-layer CNN model trained with the Leaky ReLU [23] activation function, achieving an overall accuracy of 98.80%. By combining the strengths of three different convolutional neural network (CNN) models (DenseNet121, ResNet101, and NasNet), Kang et al. achieved an overall accuracy of 98.67%.
The literature review covers various studies on the use of deep learning and machine learning for the classification of brain tumors in MRI images. These studies employ convolutional neural networks (CNNs), transfer learning, deep feature extraction, edge-based histogram equalization, and discriminators in generative adversarial networks (GANs) for the classification of brain tumors, and they compare the proposed methods with existing models in terms of accuracy, sensitivity, specificity, and F1-score. The present work, in contrast, focuses on optimizing CNN models (VGG-16, VGG-19, and Inception-V3) with the Aquila Optimizer (AQO) for training and validation, reaching a best accuracy of 98.95% with the VGG-19 model. Overall, the proposed work is a more specific and focused study on the optimization of CNN models for brain tumor classification using the AQO optimizer, while the related work provides a broader overview of deep learning and machine learning methods for the same task.

3. Materials and Methods

This section discusses the deep learning techniques used and the optimization algorithm, an approach often described as data-driven AI. Moreover, the mathematical formulation of the feature selection method used to reduce the number of features is introduced.

3.1. Data Collection

In this research, Table 1 describes the training data, which was collected from Kaggle (the dataset used for this problem is Br35H: Brain Tumor Detection 2020 [22]). It consists of MRI scans of two classes: NO (no tumor, encoded as 0) and YES (tumor, encoded as 1). The dataset contains three folders, yes, no, and pred, which together contain 3060 brain MRI images.
The next step is to create a copy of the used images for exploration and visualization of data as shown in Figure 1.

3.1.1. Data Preparation

The data was split into three subsets: 2400 MRI images for the training set, 550 MRI images for the validation set, and 50 MRI images for the testing set, as shown in Figure 2.
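A minimal sketch of this split, assuming the image file names are simply gathered into a shuffled list (the file names below are placeholders, not the actual dataset paths):

```python
import numpy as np

def split_dataset(paths, n_train=2400, n_val=550, n_test=50, seed=42):
    """Shuffle image paths and slice them into train/validation/test
    subsets (counts match the split described in the text)."""
    rng = np.random.default_rng(seed)
    paths = np.array(paths)
    rng.shuffle(paths)
    train = paths[:n_train]
    val = paths[n_train:n_train + n_val]
    test = paths[n_train + n_val:n_train + n_val + n_test]
    return train, val, test

# Example with placeholder file names:
all_paths = [f"img_{i}.jpg" for i in range(3000)]
train, val, test = split_dataset(all_paths)
```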

3.1.2. Data Preprocessing

The preprocessing of data in this research consisted of four main steps to improve the brain images, as shown in Figure 3.
  • CLAHE (Contrast Limited Adaptive Histogram Equalization)
CLAHE is a method of computer image processing used to increase contrast in images [23]. The adaptive approach is different from traditional histogram equalization in that it computes many histograms, each corresponding to a different area of the image, and then uses them to redistribute the image’s intensity values. Therefore, it is appropriate for improving the definition of edges in each area of an image as well as the local contrast.
  • Morphological analysis
Morphological analysis is used to remove any non-brain regions before segmentation. The morphological operations are illustrated in Figure 4.
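The tile-wise equalization idea behind the CLAHE step can be sketched in plain NumPy. This is a simplified illustration, not the full CLAHE algorithm: it omits the histogram clipping and the bilinear interpolation between tile mappings, and it assumes the image dimensions are divisible by the grid size.

```python
import numpy as np

def equalize(tile):
    # Standard histogram equalization on one uint8 tile.
    hist = np.bincount(tile.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / max(cdf.max() - cdf.min(), 1)
    return cdf[tile].astype(np.uint8)

def tiled_equalize(img, grid=(8, 8)):
    """Equalize each tile independently: the core idea behind adaptive
    histogram equalization, which improves local contrast region by
    region instead of over the whole image at once."""
    out = np.empty_like(img)
    h, w = img.shape
    th, tw = h // grid[0], w // grid[1]
    for i in range(grid[0]):
        for j in range(grid[1]):
            ys = slice(i * th, (i + 1) * th)
            xs = slice(j * tw, (j + 1) * tw)
            out[ys, xs] = equalize(img[ys, xs])
    return out
```

In practice a library implementation such as OpenCV's `cv2.createCLAHE` would be used, which also performs the contrast clipping that gives CLAHE its name.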

3.1.3. Data Segmentation

A threshold-based segmentation method for automatic patch extraction can save computation time and focus the analysis on the area most affected by cancer [7,8].
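A minimal sketch of threshold-based patch extraction: threshold the image and crop to the bounding box of the foreground. The paper does not specify its exact thresholding scheme, so the global mean threshold here is an illustrative assumption.

```python
import numpy as np

def extract_patch(img, thresh=None):
    """Threshold the image and crop to the bounding box of the
    above-threshold region, focusing later analysis on the area of
    interest."""
    if thresh is None:
        thresh = img.mean()          # simple global threshold (assumption)
    mask = img > thresh
    if not mask.any():
        return img                   # nothing above threshold: keep full image
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```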
  • Resizing Images
In this step, images are resized to (224 × 224 × 3), the input size of the pre-trained networks.
  • Data Augmentation
The images were augmented with rotations, brightness manipulation, horizontal flips, vertical flips, etc., which helps the model generalize and avoid overfitting on the given problem, as shown in Figure 5.
The Brain Data Augmentation (BDA) algorithm was used in the proposed research; it is based on rotation and flipping techniques to prevent the overfitting problem. The BDA steps are illustrated in Algorithm 1.
Algorithm 1 BDA
Input:
- Abnormal (A) segmented brain MR data
- Normal (N) segmented brain MR data
Processing:
- Step 1: ∀ A, rotate by 45°, 90°, 180°, and 270°
- Step 2: Flip the output of Step 1
- Step 3: ∀ N, rotate by 45°, 90°, 180°, and 270°
- Step 4: Flip the output of Step 3
- Repeat for all training data
Output:
- Save the outputs of Steps 1–4
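Assuming the segmented images are NumPy arrays, the rotation-and-flip scheme of Algorithm 1 can be sketched as follows. `np.rot90` covers the right-angle rotations; the 45° rotation would additionally need an interpolating rotate (e.g., `scipy.ndimage.rotate`) and is omitted from this sketch.

```python
import numpy as np

def bda_augment(image):
    """Sketch of the BDA rotation-and-flip scheme for one segmented MR
    image: rotate by 90, 180, and 270 degrees, then flip each rotated
    copy, yielding six augmented images per input."""
    rotations = [np.rot90(image, k) for k in (1, 2, 3)]   # 90, 180, 270
    flips = [np.fliplr(r) for r in rotations]             # flip each rotation
    return rotations + flips

augmented = bda_augment(np.zeros((8, 8)))   # 6 augmented copies per image
```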

3.2. Methods

3.2.1. Aquila Optimizer (AQO): A Meta-Heuristic Optimization Algorithm

Aquila Optimizer [24] is a novel population-based optimization approach inspired by the hunting behavior of the Aquila (eagle). The optimization process of the proposed AQO algorithm can therefore be expressed in four ways, all of which may be applied to the search space: high soar with a vertical stoop; contour flight with short glide attack; low flight with slow descent attack; and swooping by walking and grabbing prey.
To begin the process of AQO, the population of potential solutions (X) is created stochastically between the upper bound (UB) and lower bound (LB) of the given issue, and the optimization rule is derived from this population. During each iteration, the best-obtained solution is found to be an approximate optimum solution for the problem at hand.
The AQO algorithm can transfer from exploration steps to exploitation steps using different behaviors based on the following condition: if t ≤ (2/3) × T, the exploration steps will be executed; otherwise, the exploitation steps will be executed. As a mathematical optimization paradigm, Aquila's behavior was characterized as discovering the optimum solution given a set of specified restrictions. AQO's mathematical model is presented in the following manner.
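The exploration/exploitation switch condition can be written directly, with t the current iteration and T the total iteration budget:

```python
def aqo_phase(t, T):
    """AQO phase schedule: exploration during the first two thirds of the
    iteration budget (t <= (2/3) * T), exploitation afterwards."""
    return "exploration" if t <= (2 / 3) * T else "exploitation"
```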
  • Generation of Initial Population
To demonstrate the effectiveness of the provided AQO, the tested benchmark data is first divided into a training set consisting of 80% of the data and a testing set consisting of 20% of the data. Equation (1) creates the initial population X, which is made up of N solutions:
Xi = LB + rand(1, D) × (UB − LB)  (1)
where D is the number of features and rand(1, D) is a random D-dimensional vector; LB and UB denote the lower and upper bounds of the search space.
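In NumPy, Equation (1) amounts to a single vectorized expression (a sketch; the rand(1, D) vector is drawn uniformly in [0, 1)):

```python
import numpy as np

def init_population(n, d, lb, ub, seed=0):
    """Equation (1): N random solutions drawn uniformly between LB and UB."""
    rng = np.random.default_rng(seed)
    return lb + rng.random((n, d)) * (ub - lb)

X = init_population(n=20, d=10, lb=-5.0, ub=5.0)
```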
  • Updating Population
At the beginning of this step, Equation (2) transforms each Xij, i = 1, 2, ..., N, into its Boolean value BXij:
BXij = 1 if Xij > 0.5, and BXij = 0 otherwise  (2)
Based on the result of Equation (2), feature selection can be limited by discarding the useless features that have zero values in BXi. The fitness value is then calculated using Equation (3):
Fiti = λ × γi + (1 − λ) × (|BXi|/D)  (3)
where γi is the classification error rate of the ith solution, |BXi| is the number of selected features, and λ ∈ [0, 1] balances classification quality against the number of selected features.
This is followed by a determination of the best fit and its associated agent Xb (i.e., the best one). Then AQO operators are added to the present agents.
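Assuming γi denotes the classification error rate and λ the weighting factor, Equations (2) and (3) can be sketched as follows (the λ value is illustrative):

```python
import numpy as np

def binarize(x):
    """Equation (2): map each continuous component to a Boolean value."""
    return (x > 0.5).astype(int)

def fitness(x, error_rate, lam=0.9):
    """Equation (3): weighted sum of the classification error (gamma_i)
    and the fraction of selected features |BX_i| / D."""
    bx = binarize(x)
    return lam * error_rate + (1 - lam) * bx.sum() / x.size
```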
  • Terminal Criteria
At this step, the stopping criteria are evaluated and, if they are not fulfilled, the updating stage is repeated. Otherwise, the learning process ends, and Xb is used as the result to reduce the testing set in the following step.
  • Validation Stage
To assess how well AQO performs as a feature selection (FS) strategy, the testing set's features must be reduced accordingly. Finally, several performance indicators are used to evaluate the quality of the classification process based on the reduced feature set.

3.2.2. VGG-16 Model

This model was presented by Karen Simonyan and Andrew Zisserman of Oxford University [25]; it is a Convolutional Neural Network (CNN) model, as seen in Figure 6. The model itself was submitted in 2014 for the ILSVRC ImageNet Challenge; however, the initial concept was provided in 2013. In the yearly ImageNet Large Scale Visual Recognition Challenge (ILSVRC), researchers tested and compared several methods of large-scale image classification (including object identification). The model performed well in the competition but fell just short of first place in the classification task.
Examples of the usages of the VGG-16 model are as follows:
  • VGG-16 can recognize and categorize images for the purpose of medical imaging diagnostics, such as X-ray and MR images. Furthermore, it may be used to read street signs from a moving vehicle.
  • Although its detection capabilities were not covered in the introduction, VGG-16 can achieve excellent results in image detection use cases: notably, it won 2014's ImageNet localization task (and was the first runner-up in the classification challenge).
  • The model may be trained to generate image embedding vectors, which can then be utilized for tasks such as face verification inside a VGG-16-based Siamese network. This is made possible by removing the top output layer.
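A sketch of how a pre-trained VGG-16 base is typically reused for transfer learning in Keras. The head layers (global pooling, a 256-unit dense layer, softmax) are illustrative choices, not the paper's exact architecture, and `weights=None` is used so the sketch runs without downloading the ImageNet weights (in practice one would pass `weights="imagenet"`).

```python
import tensorflow as tf

def build_classifier(n_classes=2):
    """Reuse the VGG-16 convolutional base and attach a fresh
    classification head for the tumor/no-tumor task."""
    base = tf.keras.applications.VGG16(
        include_top=False, weights=None, input_shape=(224, 224, 3))
    base.trainable = False                       # freeze transferred layers
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```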

3.2.3. VGG-19 Model

The convolutional neural network VGG-19, introduced by Simonyan and Zisserman [26], consists of 19 layers, including 16 convolution layers and three fully connected ones, and can be used to classify images into 1000 object categories. VGG-19 is trained on the ImageNet database of more than one million images divided into 1000 categories. The 3 × 3 filters used by each convolutional layer contribute to the method's widespread adoption for image classification. A diagrammatic representation of VGG-19's internal structure is given in Figure 6, demonstrating the use of 16 convolutional layers for feature extraction and the effectiveness of the subsequent 3 layers for classification.
Five distinct sets of feature extraction layers are then followed by max-pooling layers. This model accepts a 224-by-224-pixel picture as input and returns the image’s label. For feature extraction, the article uses a pre-trained VGG-19 model, while other machine learning techniques are used for classification. The massive parameters computed by the CNN model after feature extraction need dimensionality reduction to reduce the size of the feature vector. Locality Preserving Projection is used to accomplish dimensionality reduction, and then a classification technique is used [27].

3.2.4. Inception-V3 Model

Inception-v3 was presented by the Google group [28] as a network architecture, as shown in Figure 6, built on adaptations of AlexNet. The use of Inception-v3 in the field of image recognition has become widespread. With this design, the network's computational complexity was lowered and the model's feature expressiveness was stabilized, allowing for the efficient extraction of visual features across many scales. All these analyses demonstrate Inception-v3's efficacy in recognizing images and spotting targets. Inception-v3, with its deeper network design and 299 × 299 input size, can perform calculations more quickly than Inception-v1 and Inception-v2. This model requires fewer inputs and less training time [29].
On the basis of these strengths of Inception-v3, a transfer-learning-fused Inception-v3 model has been presented to identify the dynasties of ancient murals; the mural images used to evaluate that model's categorization abilities mirrored the many artistic styles and eras that produced them [30].

4. Experimental Setup and Results

4.1. Experimental Design

The goal of the present study is to provide an efficient classification model using deep learning techniques based on the AQO framework. To determine the best-performing technique that can be used in the prediction model, the framework shown in Figure 7 was developed.

4.2. VGG-16 Model Validation

Using the VGG-16 model with the AQO optimizer, trained on the Kaggle dataset, we obtained a validation accuracy of 97.2% with a loss of 0.052, as presented in Figure 8; the confusion matrix is shown in Figure 9.
Then, by adding some modifications to AQO, we obtained a better result, with a validation accuracy of 98.66%, as presented in Figure 10; the confusion matrix is shown in Figure 11.

4.3. VGG-19 Model Validation

Using the VGG-19 model with the AQO optimizer, trained on the Kaggle dataset, we obtained a validation accuracy of 98.95% with a loss of 0.46, as presented in Figure 12; the confusion matrix is shown in Figure 13.

4.4. Inception-V3 Model Validation

Using the Inception-V3 model with the AQO optimizer, trained on the Kaggle dataset, we obtained a validation accuracy of 97.38% with a loss of 0.66, as presented in Figure 14.

4.5. Results

The main purpose of this research was to build an optimized CNN model for brain tumor classification based on MR image scanning. AQO was used for initial population generation and modification on the tested benchmark data, which was first divided into a training set consisting of 80% of the data and a testing set consisting of 20% of the data. We used the VGG-16, VGG-19, and Inception-V3 architectures with the AQO optimizer for the training and validation of the brain tumor dataset. We used accuracy as the metric to judge the models' performance, as presented in Table 2. The results show that VGG-19 achieved the best results, with 98.95%, 99.1%, and 99.6% for accuracy, sensitivity, and specificity, respectively. VGG-16 with the modified AQO achieved the second-best results, with an overall accuracy of 98.66%. The detailed results per class are presented in Table 3.

5. Conclusions

Recently, machine learning, and deep learning in particular, has come to be viewed as the most important field for the classification of large datasets, particularly in the medical domain. Its techniques extend humans' capacity to handle large datasets by finding the important attributes in the data. This study explores the performance of CNN models (VGG-16, VGG-19, and Inception-V3) under different measurements on a brain tumor dataset. The accuracy, sensitivity, and specificity of the models were recorded and compared. The accuracy of the classifiers ranged from 97.2% to 98.95%. The VGG-19 model with the AQO optimizer produced the highest accuracy and showed better results when compared with the other models.
Deep learning remains at the forefront of future studies in healthcare applications. It can be used to identify and diagnose diseases based on its ability to classify data. This not only shortens the diagnosis process but also reduces mistakes made by doctors, as medical training takes a long time. The methodology applied here can be used for medical imaging diagnosis, which is promising, as combining data from multiple sources can lead to further progress. Moreover, it will be interesting to apply the algorithm to crowdsourced data collection and analysis. Finally, there are various other domains for DL application in healthcare.

Author Contributions

All authors contributed equally to the conceptualization, formal analysis, investigation, methodology, and writing and editing of the original draft. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R321), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University (KKU) for funding this work through the Research Group Program under Grant Number R.G.P.1/224/43.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets used during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, J.; Lv, X.; Zhang, H.; Liu, B. AResU-Net: Attention Residual U-Net for Brain Tumor Segmentation. Symmetry 2020, 12, 721. [Google Scholar] [CrossRef]
  2. Nirmalapriya, G.; Agalya, V.; Regunathan, R.; Ananth, M.B.J. Fractional Aquila spider monkey optimization based deep learning network for classification of brain tumor. Biomed. Signal Process. Control 2023, 79, 104017. [Google Scholar] [CrossRef]
  3. Paul, J.S.; Plassard, A.J.; Landman, B.A.; Fabbri, D. Deep learning for brain tumor classification. In Proceedings of the SPIE 10137, Medical Imaging 2017: Biomedical Applications in Molecular, Structural, and Functional Imaging, Orlando, FL, USA, 12–14 February 2017. [Google Scholar]
  4. Masood, M.; Nazir, T.; Nawaz, M.; Mehmood, A.; Rashid, J.; Kwon, H.-Y.; Mahmood, T.; Hussain, A. A Novel Deep Learning Method for Recognition and Classification of Brain Tumors from MRI Images. Diagnostics 2021, 11, 744. [Google Scholar] [CrossRef] [PubMed]
  5. Ali, M.; Shah, J.H.; Khan, M.A.; Alhaisoni, M.; Tariq, U.; Akram, T.; Kim, Y.J.; Chang, B. Brain tumor detection and classification using pso and convolutional neural network. Comput. Mater. Contin. 2022, 73, 4501–4518. [Google Scholar] [CrossRef]
  6. Zhang, Y.; Liu, X.; Wa, S.; Liu, Y.; Kang, J.; Lv, C. GenU-Net++: An Automatic Intracranial Brain Tumors Segmentation Algorithm on 3D Image Series with High Performance. Symmetry 2021, 13, 2395. [Google Scholar] [CrossRef]
  7. Saber, A.; Keshk, A.; Abo-Seida, O.; Sakr, M. Tumor detection and classification in breast mammography based on fine-tuned convolutional neural networks. IJCI Int. J. Comput. Inf. 2022, 9, 74–84. [Google Scholar]
  8. Saber, A.; Sakr, M.; Abo-Seida, O.; Keshk, A.; Chen, H. A novel deep-learning model for automatic detection and classification of breast cancer using the transfer-learning technique. IEEE Access 2021, 9, 71194–71209. [Google Scholar] [CrossRef]
  9. Elmezain, M.; Mahmoud, A.; Mosa, D.; Said, W. Brain Tumor Segmentation Using Deep Capsule Network and Latent-Dynamic Conditional Random Fields. J. Imaging 2022, 8, 190. [Google Scholar] [CrossRef] [PubMed]
  10. Arora, S.; Sharma, M. Deep Learning for Brain Tumor Classification from MRI Images. In Proceedings of the Sixth International Conference on Image Information Processing (ICIIP), Shimla, India, 26–28 November 2021; pp. 409–412. [Google Scholar]
  11. Meenakshi, A.; Revathy, S. An Efficient Model for Predicting Brain Tumor using Deep Learning Techniques. In Proceedings of the 5th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 10–12 June 2020; pp. 1000–1007. [Google Scholar]
  12. Khan, M.A.; Ashraf, I.; Alhaisoni, M.; Damaševičius, R.; Scherer, R.; Rehman, A.; Bukhari, S.A.C. Multimodal Brain Tumor Classification Using Deep Learning and Robust Feature Selection: A Machine Learning Application for Radiologists. Diagnostics 2020, 10, 565. [Google Scholar] [CrossRef] [PubMed]
  13. Hossain, A.; Islam, M.T.; Abdul Rahim, S.K.; Rahman, M.A.; Rahman, T.; Arshad, H.; Khandakar, A.; Ayari, M.A.; Chowdhury, M.E.H. A Lightweight Deep Learning Based Microwave Brain Image Network Model for Brain Tumor Classification Using Reconstructed Microwave Brain (RMB) Images. Biosensors 2023, 13, 238. [Google Scholar] [CrossRef]
  14. Xenya, M.C.; Wang, Z. Brain Tumour Detection and Classification using Multi-level Ensemble Transfer Learning in MRI Dataset. In Proceedings of the International Conference on Artificial Intelligence, Big Data, Computing and Data Communication Systems (icABCD), Durban, South Africa, 5–6 August 2021; pp. 1–7. [Google Scholar]
  15. El-Feshawy, S.A.; Saad, W.; Shokair, M.; Dessouky, M. Brain Tumour Classification Based on Deep Convolutional Neural Networks. In Proceedings of the International Conference on Electronic Engineering (ICEEM), Menouf, Egypt, 3–4 July 2021; pp. 1–5. [Google Scholar]
  16. Asif, S.; Yi, W.; Ain, Q.U.; Hou, J.; Yi, T.; Si, J. Improving Effectiveness of Different Deep Transfer Learning-Based Models for Detecting Brain Tumors From MR Images. IEEE Access 2022, 10, 34716–34730. [Google Scholar] [CrossRef]
  17. Kibriya, H.; Masood, M.; Nawaz, M.; Rafique, R.; Rehman, S. Multiclass Brain Tumor Classification Using Convolutional Neural Network and Support Vector Machine. In Proceedings of the Mohammad Ali Jinnah University International Conference on Computing (MAJICC), Karachi, Pakistan, 15–17 July 2021; pp. 1–7. [Google Scholar]
  18. Yerukalareddy, D.R.; Pavlovskiy, E. Brain Tumor Classification based on MR Images using GAN as a Pre-Trained Model. In Proceedings of the IEEE Ural-Siberian Conference on Computational Technologies in Cognitive Science, Genomics and Biomedicine (CSGB), Novosibirsk, Russia, 26–28 May 2021; pp. 380–388. [Google Scholar]
  19. El kaitouni, S.E.I.; Tairi, H. Segmentation of medical images for the extraction of brain tumors: A comparative study between the Hidden Markov and Deep Learning approaches. In Proceedings of the International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 9–11 June 2020; pp. 1–5. [Google Scholar]
  20. Arumaiththurai, T.; Mayurathan, B. The Effect of Deep Learning and Machine Learning Approaches for Brain Tumor Recognition. In Proceedings of the 10th International Conference on Information and Automation for Sustainability (ICIAfS), Negambo, Sri Lanka, 11–13 August 2021; pp. 185–193. [Google Scholar]
  21. Dasanayaka, S.; Silva, S.; Shantha, V.; Meedeniya, D.; Ambegoda, T. Interpretable Machine Learning for Brain Tumor Analysis Using MRI. In Proceedings of the 2nd International Conference on Advanced Research in Computing (ICARC), Belihuloya, Sri Lanka, 23–24 February 2022; pp. 212–219. [Google Scholar]
  22. Available online: https://www.kaggle.com/datasets/ahmedhamada0/brain-tumor-detection (accessed on 20 December 2022).
  23. Mishra, A. Contrast Limited Adaptive Histogram Equalization (CLAHE) Approach for Enhancement of the Microstructures of Friction Stir Welded Joints. 2021. Available online: https://arxiv.org/ftp/arxiv/papers/2109/2109.00886.pdf (accessed on 30 January 2023).
  24. Abualigah, L.; Yousri, D.; Elaziz, M.A.; Ewees, A.A.; Al-qaness, M.A.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  25. Li, H.; Wang, X. Robustness Analysis for VGG-16 Model in Image Classification of Post-Hurricane Buildings. In Proceedings of the 2nd International Conference on Big Data & Artificial Intelligence & Software Engineering (ICBASE), Zhuhai, China, 24–26 September 2021; pp. 401–409. [Google Scholar]
  26. Shaha, M.; Pawar, M. Transfer Learning for Image Classification. In Proceedings of the Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 29–31 March 2018; pp. 656–660. [Google Scholar]
  27. Zheng, Y.; Yang, C.; Merkulov, A. Breast Cancer Screening Using Convolutional Neural Network and Follow-up Digital Mammography. Comput. Imaging III 2018, 10669, 1066905. [Google Scholar]
  28. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  29. Jakhar, S.P.; Nandal, A.; Dixit, R. Classification and Measuring Accuracy of Lenses Using Inception Model V3. In Advances in Intelligent Systems and Computing; Book Series (AISC); Springer: Berlin/Heidelberg, Germany, 2020; p. 1189. [Google Scholar]
  30. Maas, A.L.; Hannun, A.Y.; Ng, A.Y. Rectifier nonlinearities improve neural network acoustic models. In Proceedings of the 30th International Conference on International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; Volume 30, p. 3. [Google Scholar]
Figure 1. Creating copies of data images.
Figure 2. Count of classes in each set.
Figure 3. Data preprocessing steps.
Figure 4. The morphological operations [8].
Figure 5. Sample of the augmented images.
Figure 6. Inception-V3, VGG-16, and VGG-19 architectures.
Figure 7. The proposed flowchart.
Figure 8. VGG-16 using AQO optimizer.
Figure 9. The confusion matrix of the model.
Figure 10. VGG-16 using modified AQO optimization.
Figure 11. The confusion matrix of the model with a modified optimizer.
Figure 12. VGG-19 using AQO optimization.
Figure 13. The confusion matrix of the model.
Figure 14. Inception-V3 using AQO optimization.
Table 1. Brain Tumor Detection dataset.

Folder | Description
Yes    | 1500 brain MRI images containing tumors
No     | 1500 brain MRI images that are non-tumorous
Pred   | 60 brain MRI images, both tumorous and non-tumorous, used to validate the model at the end
Table 2. Comparison of models' performance.

CNN Model                | Accuracy (%) | Sensitivity (%) | Specificity (%)
VGG-16 with AQO          | 97.2         | 98.23           | 98.55
VGG-16 with modified AQO | 98.66        | 99.05           | 99.4
VGG-19 with AQO          | 98.95        | 99.1            | 99.6
Inception-V3 with AQO    | 97.38        | 97.18           | 97.61
Table 3. Comparison of models' performance per class.

CNN Model                | Class    | Accuracy (%) | Sensitivity (%) | Specificity (%)
VGG-16 with AQO          | Normal   | 96.89        | 97.46           | 98.9
VGG-16 with AQO          | Abnormal | 97.52        | 99              | 98.2
VGG-16 with modified AQO | Normal   | 99.12        | 99.2            | 99.2
VGG-16 with modified AQO | Abnormal | 98.5         | 98.9            | 99.6
VGG-19 with AQO          | Normal   | 99.52        | 98.8            | 100
VGG-19 with AQO          | Abnormal | 98.39        | 99.41           | 99.2
Inception-V3 with AQO    | Normal   | 96.9         | 97.31           | 97.21
Inception-V3 with AQO    | Abnormal | 97.87        | 97.0            | 98.02
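Per-class figures like those in Table 3 follow from treating each class one-vs-rest against the confusion matrix. The sketch below computes accuracy, sensitivity, and specificity for each class; the 2x2 matrix used as input is illustrative, not taken from the paper's experiments.

```python
def per_class_metrics(cm):
    """One-vs-rest accuracy, sensitivity, and specificity for each class,
    given a confusion matrix cm[true_class][predicted_class]."""
    total = sum(sum(row) for row in cm)
    metrics = []
    for k in range(len(cm)):
        tp = cm[k][k]
        fn = sum(cm[k]) - tp                    # class-k samples predicted as other classes
        fp = sum(row[k] for row in cm) - tp     # other-class samples predicted as class k
        tn = total - tp - fn - fp
        metrics.append({
            "accuracy": (tp + tn) / total,
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
        })
    return metrics

# Illustrative Normal/Abnormal confusion matrix (rows = true class)
m = per_class_metrics([[50, 2], [3, 45]])
```

Note that for a binary problem the sensitivity of one class equals the specificity of the other, which is a quick consistency check on tables of per-class results.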
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

MDPI and ACS Style

Mahmoud, A.; Awad, N.A.; Alsubaie, N.; Ansarullah, S.I.; Alqahtani, M.S.; Abbas, M.; Usman, M.; Soufiene, B.O.; Saber, A. Advanced Deep Learning Approaches for Accurate Brain Tumor Classification in Medical Imaging. Symmetry 2023, 15, 571. https://doi.org/10.3390/sym15030571
