1. Introduction
Brain tumors are among the most dangerous brain diseases and develop as a result of abnormal cell growth within the skull. There are two types of brain tumors: primary and secondary. Primary brain tumors account for 70% of all brain tumors and spread only within the brain, whereas secondary brain tumors form in other organs, such as the breast, kidney, and lung, before migrating to the brain. According to a National Brain Tumor Foundation (NBTF) study, approximately 29,000 cases of primary brain tumors are diagnosed in the United States each year, resulting in the deaths of 13,000 people [
1]. According to R.J. Young [
2], roughly 42,000 people in the United Kingdom die each year from primary brain tumors. Gliomas, meningiomas, and pituitary tumors are the three most common forms of brain tumors. Gliomas arise from the unregulated growth of glial cells, which constitute approximately 80 percent of the brain’s tissue; this primary cancer has the highest death rate of any cancer. Meningiomas form in the meninges, the membranes that protect the brain and spinal cord. Pituitary tumors, on the other hand, develop within the pituitary gland, which produces many important hormones. Although pituitary tumors are benign, they can induce hormonal imbalances and irreversible vision loss. It is critical to obtain an accurate and timely diagnosis of brain tumors [
3] to prevent negative consequences for patients. Multiple imaging techniques can be used to diagnose brain tumors, each with a slightly different focus. Three of the most widely used tools are computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound [
4]. MRI is the most widely used non-invasive imaging method. Unlike X-rays, MRI scans do not produce hazardous ionizing radiation. In addition, MRI offers a variety of modalities by employing different acquisition parameters, including FLAIR, T1, and T2, to create clear images of soft tissue [
5].
Correctly identifying a tumor’s type is challenging because tumors typically differ in shape, severity, size, and location. Typically, medical staff visually inspect the images and then carefully mark out the tumor locations. Tumor borders are often obscured by the surrounding healthy tissue, so manual, optical-inspection-based tumor identification is labor-intensive and error-prone. Furthermore, the radiologist’s experience is crucial for manual tumor detection [
6]. It is important to note that subtle grayscale differences in MRI scans are not always discernible to the human eye, so distinguishing malignant from benign lesions on MR images depends on the radiologist’s experience. To help non-experts distinguish malignant from benign lesions, we developed a simplified systematic MRI approach using depth, size, and heterogeneity on T2-weighted MR images (T2WI), and we assessed its diagnostic efficacy. In [
7,
8], we evaluated 13 pre-trained deep convolutional neural networks and 9 ML classifiers on three brain tumor datasets (BT-small-2c, BT-large-2c, and BT-large-4c). The experimental results indicate that, within our architecture, (1) the DenseNet-169 deep feature alone is a good choice if the MRI dataset is small and there are two classes (normal, tumor); (2) the ensemble of DenseNet-169, Inception V3, and ResNeXt-50 deep features is a good choice if the MRI dataset is large and there are two classes (normal, tumor); and (3) the ensemble of DenseNet-169, ShuffleNet V2, and MnasNet deep features is a good choice for the four classes normal, glioma tumor, meningioma tumor, and pituitary tumor. Traditional machine-learning-based approaches have the drawback of requiring manual feature extraction: before images can be classified, the features must first be extracted from the training set [
9].
Brain tumor classification techniques fall into machine learning (ML) and deep learning (DL) methods. Before classification, ML-based systems rely on time-consuming and error-prone handcrafted feature extraction and manual segmentation, and they typically require a trained expert to choose the most appropriate feature extraction and segmentation algorithms for accurate tumor detection. As a result, the performance of these systems is inconsistent on larger datasets [
10]. DL-based algorithms, in contrast, handle these tasks automatically and have proven effective in many areas, including the interpretation of medical images. Convolutional neural networks (CNNs) are popular deep learning models because of their reliable performance and weight-sharing architecture: both high-level and low-level features are extracted automatically from the training data. Academics and scientists are therefore becoming increasingly interested in these models [
11,
12].
This research proposes an automated strategy for detecting and classifying brain tumors that integrates the deep features extracted by three popular CNNs (AlexNet, ResNet18, and GoogLeNet). During feature fusion, multiple feature vectors, both low-level and high-level, are combined into a single feature vector.
Thus, the model’s discriminative performance improves because it no longer relies on a single feature vector. Accurate tumor classification requires informative and discriminative features from MRIs, which in turn motivates the adoption of a feature fusion technique. To demonstrate the efficacy of the suggested method, we assessed our model on a well-known brain tumor dataset [
13] using multiple quantitative measures. The key contributions of the proposed system are as follows.
In this paper, the authors present a fully automated hybrid technique for classifying different types of brain tumors using (a) ML classifiers and (b) convolutional neural network (CNN) models trained with transfer learning for deep feature extraction. This approach was designed to assist with the challenging task of precisely detecting brain tumors.
The proposed method for brain cancer diagnosis from MR images consists of the following core steps: (a) preprocessing with adaptive fuzzy filtering (AFF); (b) large-scale dataset augmentation; (c) feature extraction using multiple convolutional neural networks (CNNs), such as ResNet18; (d) fusion of the deep feature vectors, which provides state-of-the-art performance; and (e) classification of the different tumor types.
A classifier based on DLBTDC-MRI and CSO is used to detect brain tumors. Numerous simulations were conducted on the BRATS 2015 dataset to examine the enhanced brain tumor classification performance.
The rest of the paper is organized as follows. The related literature is reviewed in
Section 2.
Section 3 describes the proposed DLBTDC-MRI approach. The experiments and their findings are discussed in detail in
Section 4. The proposed work is concluded, and future studies are suggested, in
Section 5.
2. Literature Survey
A CNN model was presented by Ayadi et al. [
14] that compares performance both before and after data augmentation. They demonstrated that augmenting the training data improved the accuracy of their model. Evaluated on three different datasets, the model detected pituitary tumors with 98.43% accuracy, a significant advance in the field.
Jude Hemanth et al. [
15] presented a model for diagnosing brain illnesses from MRI that addresses the slow convergence of ANNs. They accomplished this by using modified versions of the Counter Propagation Neural Network (CPN) and the Kohonen Neural Network (KNN), referred to as MCPN and MKNN, respectively. Their primary design goal was to minimize the number of iterations the ANN model needs to converge; after modification, MKNN and MCPN achieved accuracy rates of 95% and 98%, respectively.
Nayak et al. [
16] proposed a model that requires no segmentation or preprocessing. The data were classified using multiple logistic regression, and the method employed a pre-trained CNN model and cropped images. The model was tested on three datasets, and several data augmentation techniques were applied to improve accuracy. The method was evaluated on both the original and the augmented datasets, and the results compare favorably with earlier studies.
To improve the degree to which the CAD system interacts with the user, Sachdeva et al. [
17] suggested a method for locating tumor foci. They used a wide variety of datasets to assess the accuracy of their model: the first dataset contained three tumor categories, while the second contained five. Two novel hybrid models, GA-SVM and GA-ANN, were produced by applying a genetic algorithm (GA) to the SVM and ANN models, respectively. With the proposed technique, the SVM model’s accuracy increased from 79.3% to 91%, while the ANN model’s accuracy increased from 75.6% to 94%.
Mzoughi et al. [
18] used the complete volumetric T1-Gado MRI sequences of the BRATS 2018 dataset to construct a 3D CNN architecture for classifying glioma brain tumors into low-grade gliomas (LGG) and high-grade gliomas (HGG). Built on a deep network of three-dimensional convolutional layers, the design uses small kernels and fewer weights to capture both local and global context. The system classified 96.49% of inputs correctly.
Maqsood et al. [
19] developed an edge-detection- and U-NET-based approach for detecting brain tumors. The tumor segmentation framework enhances image contrast and uses fuzzy logic for edge detection. By first extracting features from the sub-band images and then classifying those features, the U-NET architecture can determine whether or not a meningioma is present in a patient’s brain scan.
According to Afshar et al. [
20], brain tumors can be classified using a capsule network (CapsNet) architecture. CapsNet exploits the spatial relationship between the tumor and the surrounding tissues. With the segmented tumor as input, the method was 86.56% accurate, whereas with the unaltered brain image it was 72.13% accurate.
Abiwinanda et al. [
21] either studied the canonical CNN model without modification or developed CNNs by increasing or decreasing the number of layers. This led to seven separate CNN architectures, each with a unique set of layers. They found that their second design (two convolutional layers, each followed by activation and max pooling) achieved the highest training accuracy (98.51%).
Khan et al. [
22] proposed a deep-learning-based hierarchical categorization system for brain tumors. Using MRI images from the Kaggle dataset, they reached a classification accuracy of 92.13%. However, given this relatively low accuracy, the approach must be evaluated further before being used for brain tumor categorization in clinical settings.
Ghassemi et al. [
23] proposed a model centered on pretraining and then used it with a CNN. The primary emphasis is placed on pretraining the model on various publicly available datasets, after which the model is applied to the target task. In the final model, a SoftMax layer replaced the CNN’s fully connected layers, and the resulting model was validated on the primary T1 dataset, which contained three distinct tumor types, obtaining a 95.6% accuracy rate.