Article

Hybrid Techniques of Analyzing MRI Images for Early Diagnosis of Brain Tumours Based on Hybrid Features

by
Badiea Abdulkarem Mohammed
1,*,
Ebrahim Mohammed Senan
2,3,*,
Talal Sarheed Alshammari
4,
Abdulrahman Alreshidi
4,
Abdulaziz M. Alayba
4,
Meshari Alazmi
4 and
Afrah N. Alsagri
4
1
Department of Computer Engineering, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
2
Department of Computer Science & Information Technology, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad 431004, India
3
Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana’a, Yemen
4
Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
*
Authors to whom correspondence should be addressed.
Processes 2023, 11(1), 212; https://doi.org/10.3390/pr11010212
Submission received: 30 November 2022 / Revised: 23 December 2022 / Accepted: 28 December 2022 / Published: 9 January 2023
(This article belongs to the Special Issue Machine Learning in Biomaterials, Biostructures and Bioinformatics)

Abstract:
Brain tumours are considered one of the deadliest tumours in humans and have a low survival rate due to their heterogeneous nature. Several types of benign and malignant brain tumours need to be diagnosed early to administer appropriate treatment. Magnetic resonance (MR) images provide details of the brain’s internal structure, which allow radiologists and doctors to diagnose brain tumours. However, MR images contain complex details that require highly qualified experts and a long time to analyse. Artificial intelligence techniques solve these challenges. This paper presents four proposed systems, each with more than one technique. These techniques vary between machine, deep and hybrid learning. The first system comprises artificial neural network (ANN) and feedforward neural network (FFNN) algorithms based on the hybrid features between the local binary pattern (LBP), grey-level co-occurrence matrix (GLCM) and discrete wavelet transform (DWT) algorithms. The second system comprises pre-trained GoogLeNet and ResNet-50 models for dataset classification. The two models achieved superior results in distinguishing between the types of brain tumours. The third system is a hybrid technique between a convolutional neural network (CNN) and a support vector machine (SVM). This system also achieved superior results in distinguishing brain tumours. The fourth proposed system is a hybrid of the features of GoogLeNet and ResNet-50 with the LBP, GLCM and DWT algorithms (handcrafted features) to obtain representative features and classify them using the ANN and FFNN. This method achieved superior results in distinguishing between brain tumours and performed better than the other methods. With the hybrid of GoogLeNet and handcrafted features, the FFNN achieved an accuracy of 99.9%, a precision of 99.84%, a sensitivity of 99.95%, a specificity of 99.85% and an AUC of 99.9%.

1. Introduction

Over the past years, biomedical technology has progressed considerably, yet the early diagnosis of many diseases still relies on human intelligence and remains problematic. However, cancer remains a curse to humanity because of its abnormal nature [1]. Brain cancer, in particular, is one of the deadliest cancers owing to its aggressive and heterogeneous characteristics. The brain controls all body activities and is considered the main organ, consisting of millions of neurons and complex tissues. Each neuron has a specific activity, controlling a certain organ. Each cell has the ability to grow normally, but some lose their ability to function, grow abnormally and then form an abnormal mass called a tumour [2]. Brain tumours are of different types according to their shape, texture and location (acoustic neuroma, pituitary gland, central nervous system lymphoma, meningioma, glioma, etc.) [3]. In clinical diagnosis, the incidence rates of glioma, meningioma and pituitary tumour are approximately 45%, 15% and 15%, respectively, amongst all brain tumours [4]. On the basis of tumour type, doctors can formulate a diagnosis, predict survival and make the appropriate decision for treatment, whether that be surgery, chemotherapy or radiation. Therefore, early tumour grading is important to monitor the tumour and guarantee that the patient receives appropriate treatment [5]. Brain tumours are split into malignant and benign tumours. Malignant tumours are lethal and contain precancerous cells, whereas benign tumours are homogeneous structures and do not contain precancerous cells. The World Health Organization classifies brain tumours into grades, where the first and second grades refer to a benign tumour, and the third and fourth grades refer to a malignant tumour [6]. Benign tumours grow slowly compared with malignant tumours. Low-grade tumours turn into high-grade tumours if they are not diagnosed early and treated in time [7].
Thus, diagnosing a brain tumour and determining its type and grade are the main goals of doctors and radiologists [8]. The causes of brain tumours are unknown and pose a challenge in the medical field. Nevertheless, some uncommon risks of malignant tumours, such as exposure to vinyl chloride [9] and neurofibromatosis [10], can be identified. Meningiomas begin with slow growth on the membrane covering the brain and are considered benign tumours [11]. By contrast, malignant tumours arise in the brain, spinal cord and nerve cells (glial cells). Several types of gliomas grow in star-shaped glioma cells and are classified as anaplastic astrocytoma and glioblastoma. Tumours that start from the top of the neck or the upper lower back are called ependymomas, which are classified into neuromyeloma, anaplastic ependymoma and subendothelial tumour. Oligodendrocytes are another type of glial cell [12]. Magnetic resonance imaging (MRI) provides a detailed analysis of the internal structure of the human brain without any radiation or surgical intervention [13]. It highlights brain abnormalities and requires the patient not to move whilst the magnetic resonance (MR) images are taken. This method is also suitable for detecting brain tumours, but the use of a single MR image to classify a tumour as normal or abnormal is challenging. Hence, MRI devices have the ability to capture a series of multimodal MR images, which provides details of the brain structure for effective and efficient tumour diagnosis [14]. MRI helps show the internal structure of the brain, but MR images contain many complex details. Interpreting MR images requires highly qualified doctors and experts; in addition, tracking all the MR images of a patient is time-consuming, and doctors and radiologists have differing opinions in classifying tumour type. Machine and deep learning techniques solve these challenges. These techniques have the ability to diagnose tumour type and grade with high efficiency and accuracy.
Many researchers have devoted their efforts to developing automated techniques for classifying MR images and analysing each image to determine tumour type. In this study, several multimethod hybrid systems with feature hybrids are developed to classify MR images for the early diagnosis of brain tumours.
  • The use of two overlapping filters to eliminate noise and acquire improved images;
  • Feature extraction using the LBP, GLCM and DWT algorithms and their combination;
  • Adjustment of CNNs to extract deep feature maps with high accuracy;
  • Application of a hybrid method of CNN and SVM algorithms;
  • Application of ANN and FFNN networks with the mixed features of CNN models and handcrafted features (LBP, GLCM and DWT) to produce highly efficient features for distinguishing between types of brain tumours; and
  • Development of high-performance systems to diagnose brain tumours and distinguish between their types at an early stage, assisting physicians and radiologists in making sound diagnostic decisions and providing appropriate treatment for the patient.
The rest of this study is organised as follows. Section 2 presents relevant previous studies. Section 3 analyses the methods and techniques for processing a dataset. Section 4 reviews the performance results of the proposed systems. Section 5 shows a discussion and comparison of the performances of the proposed methods. Section 6 concludes the paper.

2. Related Work

Many papers have presented several studies on the diagnosis of brain tumours, some of which are reviewed in this study. What distinguishes our study is the use of diverse methods—hybrid methods of machine and deep learning models and a hybrid of handcrafted features (colour, shape and texture features) with CNN features.
Arshia et al. proposed three CNN models for classifying three types of brain tumours. Data augmentation was applied to increase the reliability of the results and reduce overfitting [15]. Cheng et al. proposed a system on a brain tumour dataset in which the tumour area was determined and divided into sub-areas by an adaptive spatial algorithm. Features were extracted using the GLCM, bag-of-words (BoW) and density graph methods. The system achieved good results of 89.72% with GLCM features, 91.28% with BoW features and 87.54% with density graph features [16]. Ismael et al. presented methods for extracting statistical features by using the DWT method and the Gabor filter algorithm. Features were classified using a multilayer perceptron neural network with 91.9% accuracy [17]. Abir et al. introduced methods for image enhancement, sharpening, resizing and contrast optimisation and then applied a probabilistic neural network to classify a brain tumour dataset. The method reached an accuracy of 83.33% [18]. Widhiarso et al. extracted features via the GLCM and classified them by using a CNN. The GLCM features were combined with contrast features, and the system achieved an accuracy of 82% [19]. Sarah et al. proposed residual networks for classifying a brain tumour dataset into three types [20]. Hao et al. proposed a DCNN-based approach that combines symmetry with the segmentation of the tumour region. They expanded the network by adding symmetrical masks in some layers. The network achieved good results with an average Dice similarity coefficient of 85.2% [21]. Menze et al. presented a method for segmenting a tumour region from multidimensional images by tracking context information with MRF. Other segmentation methods assume that each pixel is independent and identically distributed; therefore, they introduced the conditional random field method to utilise the information in neighbouring pixels [22]. Muhammad et al. presented the brain surface extraction method for removing the skull and then applied particle swarm optimisation to segment the tumour area. Features were extracted from the tumour area by using the LBP method. Lastly, the selected features were classified using an artificial neural network (ANN), which performed well [23]. Sharif et al. optimised images by using an unsupervised fuzzy set algorithm through a triangular fuzzy median filter and tumour region segmentation. Gabor lesion features were extracted, and texture features were calculated. The texture features were classified using extreme learning machines (ELMs), which performed well [24]. Baoshi et al. introduced the extended Kalman filter with SVM (EKF-SVM), a method for analysing images and detecting brain tumours. All images underwent optimisation through a nonlocal means filter, image standardisation and contrast enhancement. The GLCM was applied for extracting features, which were sent to an SVM classifier for MRI classification, and the EKF was applied to classify the MRI images [25]. Fatih et al. presented a fuzzy C-means approach for tumour region segmentation. SqueezeNet was applied to extract features from a CNN model, which were categorised using an ELM [26]. Parnian et al. introduced a boosted capsule network that enhances weak learners by gradually improving the networks [27]. Kaplan et al. presented a method for extracting features from the tumour area by using two methods, nLBP and αLBP. The nLBP method works on the basis of the relationship between a pixel and its neighbours, whilst the αLBP method works on the basis of the angle value of the neighbouring pixels [28].

3. Materials and Methods

Here, the algorithms for analysing an MRI dataset of brain tumours are reviewed, as shown in Figure 1. The dataset is enhanced for noise removal, and the improved dataset is passed to the four proposed methods, each of which comprises more than one system. The first technique classifies the dataset via the ANN and FFNN on the basis of hybrid features extracted using the LBP, GLCM and DWT algorithms. The second technique uses the GoogLeNet and ResNet-50 models to extract and categorise features. The third technique is a hybrid of CNN and SVM. The fourth technique extracts deep features, combines them with the LBP, GLCM and DWT features, and classifies them using the ANN and FFNN classifiers.

3.1. Description of the MRI Dataset

The performances of the systems are evaluated on a dataset of brain tumours collected from Tianjin Medical University General Hospital, China, and Nanfang Hospital, Guangzhou, China, between 2005 and 2010. The dataset consists of 3446 MR images, which are split into four types (classes): three classes of brain tumours and one normal class. The dataset is split into classes as follows: 926 MR images of glioma (26.87%), 1052 MR images of meningioma (30.53%), 493 MR images of no tumour (14.31%) and 975 MR images of pituitary tumour (28.29%). Thus, the dataset is unbalanced; we balanced it using a data augmentation method. The resolution of the images in the dataset ranges between 205 × 249 and 1358 × 1322 pixels in a 24-bit colour system. Figure 2 shows some MRI samples from the dataset for all the classes. The system randomly selects images from the dataset so that the figure is representative of all classes.

3.2. MRI Images Enhancement

MR images contain various types of noise arising from the magnetic field. This condition causes problems in analysing and diagnosing images, reduces the images’ spatial resolution and distorts the edges of brain tumours. Artificial intelligence techniques are affected by such noise; the diagnostic efficiency is reduced due to complications when extracting features that include noise. Thus, the images were enhanced in this study by calculating the mean RGB chromaticity for each image and then scaling the colour constancy in each image. The MR images were subjected to optimisation using average and Laplacian filters. Firstly, the average filter works with a size of 5 × 5 pixels to improve contrast and remove noise and irrelevant pixels by replacing each pixel with the average of its 24 adjacent pixels; the filter continues until all image pixels are replaced with the average of their neighbouring pixels, as shown in Equation (1):
$z(l) = \frac{1}{L}\sum_{i=0}^{L-1} y(l-i),$
where $z(l)$ is the filter output, $y(l-i)$ denotes the prior input pixels, and $L$ is the number of pixels.
Secondly, the MR images are passed to the Laplacian filter to accurately and efficiently detect the edges of the lesion as shown in Equation (2):
$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2},$
where $\nabla^2 f$ is the second-order derivative of the image and $x$, $y$ refer to the spatial coordinates of the image matrix.
Lastly, the final improved image is obtained by subtracting the improved image obtained using the Laplacian filter from the improved image obtained using the average filter and passing it to the next image processing steps as shown in Equation (3):
$\text{Image}_{\text{enhanced}} = z(l) - \nabla^2 f.$
The reason for employing these two filters is that the average filter enhances the image contrast, particularly in the tumour area, and the Laplacian filter makes the tumour edges visible. As a result of combining the two images, a more contrasted image highlighting the tumour’s edges emerges. Figure 3 shows a set of images from the brain tumour dataset for all the classes it contains.
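The two-filter enhancement described above can be sketched with plain NumPy. The border handling and the 4-neighbour Laplacian kernel are illustrative assumptions, since the text specifies only the filter sizes and Equations (1)–(3):

```python
import numpy as np

def average_filter(img, k=5):
    """5x5 mean filter of Eq. (1): each pixel becomes the average of its
    neighbourhood (edge pixels reuse the border values)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def laplacian(img):
    """Discrete Laplacian of Eq. (2) with the standard 4-neighbour kernel."""
    p = np.pad(img.astype(float), 1, mode="edge")
    c = p[1:-1, 1:-1]
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * c

def enhance(img):
    """Eq. (3): subtract the Laplacian edge response from the smoothed image."""
    return average_filter(img) - laplacian(img)

img = np.zeros((16, 16))
img[4:12, 4:12] = 255.0       # a bright synthetic "tumour" patch
out = enhance(img)
print(out.shape)              # (16, 16)
```

The subtraction boosts values just inside the patch boundary, which is the edge-sharpening effect the text attributes to combining the two filters.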

3.3. Neural Network

3.3.1. Adopted Region Growing Method

The dataset images obtained using MR machines consist of whole-brain cells that contain healthy brain cells and brain tumours. Therefore, extracting the features from the tumour and the healthy area is not correct. Segmentation techniques solve this challenge and separate the tumour area from the healthy area. In this paper, the adopted region-growing method was used to separate the tumour area from the healthy area. The algorithm separates similar pixels in the same region [29]. The following conditions are necessary for effective partitioning:
  • $\bigcup_{i=1}^{m} x_i = x$, where $m$ is the number of regions;
  • $x_i$, $i = 1, 2, \ldots, m$, is a connected region;
  • $P(x_i) = \text{TRUE}$ for $i = 1, 2, \ldots, m$; and
  • $P(x_i \cup x_j) = \text{FALSE}$ for $i \neq j$, where $x_i$ and $x_j$ are neighbouring regions.
Firstly, the segmentation must be complete; that is, every pixel must belong to a region. Secondly, the pixels in each region must be connected and similar. Thirdly, the homogeneity predicate must hold (be TRUE) for every region. Fourthly, the predicate must fail (be FALSE) for the union of two neighbouring regions; that is, two distinct regions must not satisfy the same similarity criterion. Across all pixels, the regions must together represent the entire lesion region. The algorithm works in accordance with the bottom-up method, starting with a seed represented by a single pixel and ending with a whole region represented by many similar pixels. The basic idea of this method is to start with several different pixels as the basic seeds for creating regions. The regions then grow gradually, and each region contains similar pixels. Figure 4 shows images from the dataset after the tumour region was segmented and separated from the healthy regions.
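The bottom-up seed growth above can be sketched as a breadth-first search. The intensity tolerance used as the homogeneity predicate $P$ is an illustrative choice, not a value from the paper:

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol=10):
    """Minimal bottom-up region growing: start from a seed pixel and
    absorb 4-connected neighbours whose intensity is within `tol` of
    the seed value (the homogeneity predicate P)."""
    h, w = image.shape
    region = np.zeros((h, w), dtype=bool)
    seed_val = float(image[seed])
    q = deque([seed])
    region[seed] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx] \
               and abs(float(image[ny, nx]) - seed_val) <= tol:
                region[ny, nx] = True
                q.append((ny, nx))
    return region

img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200              # a homogeneous "tumour" patch
mask = region_grow(img, (3, 3))
print(mask.sum())                # 16: exactly the 4x4 bright patch
```

Because the dark background fails the predicate, growth stops at the patch boundary, which is how the tumour area is separated from healthy tissue.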

3.3.2. Morphological Process

Images contain small holes after the segmentation method, and these holes do not belong to the area of interest; therefore, they are considered noise that must be removed, and the holes must be filled [30]. After the segmentation process, morphological operations must be applied to improve the binary images. Many morphological operations, such as erosion, opening, dilation and closing, serve to fill small holes. The operations create a 4 × 4 structuring element, which is swept over each location of the image and compared with the neighbouring pixels. The processes test whether the structuring element ‘fits’ or ‘hits’ the adjacent pixels. Figure 4 describes some dataset samples before and after the morphological process (before the morphological process means the output of the segmentation process). The binary images are improved, and the gaps that do not belong to the tumour area are filled.
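Hole filling via closing (dilation followed by erosion) with a 4 × 4 structuring element can be sketched as follows; the origin convention for the even-sized element and the border handling are assumptions, since the text names only the element size:

```python
import numpy as np

def dilate(mask, se):
    """Binary dilation: out[y, x] = OR over se offsets (i, j) of mask[y-i, x-j]."""
    H, W = mask.shape
    h, w = se.shape
    p = np.pad(mask, ((h - 1, 0), (w - 1, 0)))
    out = np.zeros_like(mask)
    for i in range(h):
        for j in range(w):
            if se[i, j]:
                out |= p[h - 1 - i:h - 1 - i + H, w - 1 - j:w - 1 - j + W]
    return out

def erode(mask, se):
    """Binary erosion: out[y, x] = AND over se offsets (i, j) of mask[y+i, x+j]."""
    H, W = mask.shape
    h, w = se.shape
    p = np.pad(mask, ((0, h - 1), (0, w - 1)))
    out = np.ones_like(mask)
    for i in range(h):
        for j in range(w):
            if se[i, j]:
                out &= p[i:i + H, j:j + W]
    return out

def close_holes(mask, se):
    """Morphological closing (dilation then erosion) on an expanded
    canvas so the dilation is not truncated at the image border."""
    h, w = se.shape
    big = np.pad(mask, ((0, h - 1), (0, w - 1)))
    closed = erode(dilate(big, se), se)
    return closed[:mask.shape[0], :mask.shape[1]]

se = np.ones((4, 4), dtype=bool)     # 4x4 structuring element, as in the text
seg = np.zeros((10, 10), dtype=bool)
seg[2:8, 2:8] = True
seg[4, 4] = False                    # a small hole inside the segmented region
print(close_holes(seg, se).sum())    # 36: the one-pixel hole is filled
```

Holes smaller than the structuring element are absorbed, while the outer shape of the segmented tumour region is preserved.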

3.3.3. Hybrid Feature Extraction

One of the image processing stages is extracting the features that represent each tumour. The accuracy of feature extraction depends on the previous stages (optimisation, segmentation and morphology). Therefore, feature extraction algorithms work to reduce an image’s dimensions and represent it with the most important features. In this study, three algorithms, namely LBP, GLCM and DWT, were applied and combined to produce more efficient feature vectors. Feature fusion is a powerful and efficient method for obtaining representative features that help in the early diagnosis of brain tumours.
Firstly, features are extracted using the LBP, which describes the texture of 2D surfaces. The algorithm works with a 5 × 5 window, selects a central pixel in each iteration and analyses it on the basis of Equation (4) by comparing the target (central) pixel with its 24 adjacent pixels within the window. The process is repeated for each pixel of the image, and 203 features are extracted and stored in a feature vector [31].
$\mathrm{LBP}_{R,P} = \sum_{p=0}^{P-1} s(g_p - g_c)\,2^p, \qquad s(x) = \begin{cases} 1, & x \geq 0 \\ 0, & x < 0, \end{cases}$
where $R$ represents the radius of the neighbourhood, $g_p$ is the grey value of the neighbouring pixel, $g_c$ represents the grey value of the target (central) pixel, and $P$ represents the number of neighbours.
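Equation (4) can be traced on a single pixel. This sketch uses the classic 3 × 3 neighbourhood ($P = 8$, $R = 1$) for brevity; the paper itself uses a larger 5 × 5 window with 24 neighbours:

```python
import numpy as np

def lbp_code(patch):
    """LBP code of the centre pixel of a 3x3 patch, per Eq. (4)."""
    gc = patch[1, 1]
    # clockwise neighbours starting at the top-left corner
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for p, (y, x) in enumerate(coords):
        s = 1 if patch[y, x] >= gc else 0   # s(g_p - g_c)
        code += s << p                       # s * 2^p
    return code

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 3, 8]])
print(lbp_code(patch))  # 26: neighbours 9, 7 and 8 exceed the centre value 6
```

Collecting these codes over the whole image and histogramming them yields the texture feature vector described in the text.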
Secondly, features are extracted using the GLCM, which is considered a good method for extracting texture features from brain tumour areas. The GLCM algorithm shows many compositional levels from the grey levels of the brain tumour region. The algorithm works on the basis of spatial information distinguishing coarse and smooth areas. The coarse area contains pixels with divergent values, whereas the smooth area contains pixels with close values. This spatial information is vital in finding the relation between pixels on the basis of distance d and direction θ. The pixel relations are analysed in accordance with four directions: 0°, 45°, 90° and 135° [32]. This method extracted 13 features from each image.
Thirdly, the DWT extracts features from the ROI. Quadrature mirror filters split the input signal into two signals, corresponding to the low- and high-pass filters. The algorithm produces 12 features for each image: approximation coefficients and three sets of detail coefficients. The low-pass filter (LL) produces the approximation coefficients, whilst the high-pass filter combinations (LH, HL and HH) produce the three detail coefficients (horizontal, vertical and diagonal, respectively).
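A single decomposition level can be illustrated with the Haar wavelet; the paper does not name its wavelet, so Haar is used here purely for brevity, and the sub-band naming convention is one of the common ones:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2D Haar DWT, splitting the image into the LL
    approximation and the LH/HL/HH detail sub-bands described above."""
    a = img.astype(float)
    # 1D transform along rows: low = pairwise mean, high = pairwise difference
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    # repeat along the columns of each half
    LL = (lo[0::2, :] + lo[1::2, :]) / 2
    LH = (lo[0::2, :] - lo[1::2, :]) / 2
    HL = (hi[0::2, :] + hi[1::2, :]) / 2
    HH = (hi[0::2, :] - hi[1::2, :]) / 2
    return LL, LH, HL, HH

img = np.arange(16).reshape(4, 4)
LL, LH, HL, HH = haar_dwt2(img)
print(LL.shape)   # (2, 2): each sub-band is a quarter of the image
```

Statistics computed over the four sub-bands (e.g. their means and energies) would give the kind of compact 12-value descriptor the text describes.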
Finally, the features of the three methods are hybridised into the same feature vector to form highly efficient features capable of diagnosing the tumour with high accuracy.
Figure 5 describes the fusion process of the features extracted from the three algorithms. The LBP method extracts 203 features, the GLCM extracts 13 features, and the DWT method extracts 12 features. All the features are combined into one feature vector; therefore, each image is represented by 228 features.
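The fusion step itself is a simple concatenation. The extractors below are random stand-ins; only the dimensions (203 + 13 + 12 = 228) follow the text:

```python
import numpy as np

# Illustrative fusion of the three handcrafted descriptors into one vector.
rng = np.random.default_rng(0)
lbp_features = rng.random(203)    # texture histogram from LBP
glcm_features = rng.random(13)    # Haralick-style statistics from GLCM
dwt_features = rng.random(12)     # sub-band statistics from DWT

fused = np.concatenate([lbp_features, glcm_features, dwt_features])
print(fused.shape)  # (228,): one feature vector per image
```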

3.3.4. ANN and FFNN

In this section, the brain tumour dataset is diagnosed using the ANN and FFNN. The ANN is highly efficient and consists of three layers. Each layer contains numerous linked neurons. The first layer is the input layer, which receives the outputs from the feature extraction stage. In this work, as shown in Figure 6, the input layer consists of 228 neurons. The ANN also comprises many hidden layers; each layer has numerous neurons linked with neurons of the same layer and with neurons from other layers. In this work, the ANN contains 10 hidden layers to diagnose the features extracted for the early detection of brain tumours. The output layer contains at least two neurons to classify each input image into its appropriate class [33]. The output layer in this work has four neurons based on the types of classes (glioma, meningioma, no tumour and pituitary tumour). The ANN interprets and analyses large and complex data to produce clear diagnostic patterns. Each neuron is connected with a neuron in the same layer or from another layer with a specific weight w, which effectively reduces the error between the actual and predicted values [34]. The ANN assigns specific weights and updates them in each iteration until it reaches the minimum mean square error (MSE) between the actual X and predicted Y, as described in Equation (5).
$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} (X_i - Y_i)^2,$
where $n$ refers to the number of data points, $X_i$ refers to the actual output, and $Y_i$ refers to the predicted output.
The FFNN is highly efficient in solving complex tasks. It also consists of three layers, and each layer contains many interconnected neurons. The first layer is the input layer, which includes neurons as in the ANN algorithm. The FFNN consists of many hidden layers, and each hidden layer has numerous linked neurons [35]. In this study, the FFNN contains 10 hidden layers to diagnose the features extracted for the early detection of brain tumours. The output layer contains neurons, as in the ANN algorithm. The working mechanism of the FFNN is the feedforward flow between neurons. Each neuron processes the output of the previous neuron together with its own bias. In other words, the FFNN determines the weights and updates them on each iteration in the forward direction from the first hidden layer to the output layer. The weights are calculated and updated in each iteration until the algorithm reaches the minimum error, as described in the above equation.
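A toy forward pass with the stated input/output dimensions can make the flow concrete. The single 10-neuron hidden layer, the tanh activation and the random weights are illustrative assumptions (the text describes the hidden topology only as "10 hidden layers"):

```python
import numpy as np

rng = np.random.default_rng(1)

def mse(actual, predicted):
    """Mean square error of Eq. (5)."""
    return np.mean((actual - predicted) ** 2)

# Toy feedforward pass: 228 input features, an illustrative hidden layer
# of 10 neurons, and 4 output neurons (one per class).
W1 = rng.normal(scale=0.1, size=(228, 10)); b1 = np.zeros(10)
W2 = rng.normal(scale=0.1, size=(10, 4));   b2 = np.zeros(4)

x = rng.random(228)                       # fused feature vector of one image
h = np.tanh(x @ W1 + b1)                  # hidden activations
y = h @ W2 + b2                           # raw class scores
target = np.array([1.0, 0.0, 0.0, 0.0])   # one-hot label (e.g. glioma)
print(y.shape)                            # (4,)
```

Training would repeatedly adjust W1, b1, W2 and b2 to drive `mse(target, y)` towards its minimum, which is the update loop the text describes.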

3.4. CNN Models

CNN is a relatively new technology that solves problems in many areas, including pattern recognition and the classification of biomedical images. It is a multilayer perceptron neural network that produces millions of parameters and connections. CNN models are designed to process 2D data, such as biomedical images, and can be adapted to handle 1D or multidimensional tasks [36]. The essence of CNN models is that they obtain many representational levels, starting from simple levels and transforming from one level to a more representative and higher abstract level. For 2D image diagnostics, CNN model layers amplify the highest representation, select the most critical inputs, differentiate between classes and suppress irrelevant features. CNN models are distinguished from traditional neural networks by their depth of architecture and many layers of different levels. Consequently, many researchers have devoted themselves to developing CNNs.
The convolutional layer is one of the deepest layers, and the name CNN is derived from it. Three parameters control how convolutional layers work: filter size, stride p and zero padding. Filter size controls the size of the convolution around an image; a larger filter size results in a greater convolution around the image. Each filter performs a specific task; examples are a filter that detects geometric features, a filter that detects edges and a filter that extracts colour and texture features. The stride p limits the number of pixels the filter moves over the image at each step [37]. Zero padding preserves the size of the input images. These layers convolve the filter f(t) with the image y(t) as written in Equation (6):
$z(t) = (y * f)(t) = \int y(a)\, f(t-a)\, \mathrm{d}a,$
where y(t) refers to the image inputted, z(t) refers to the output, and f(t) refers to the filter.
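The discrete counterpart of Equation (6) is a sliding sum, which `np.convolve` computes directly. The step signal and difference filter below are illustrative:

```python
import numpy as np

# Discrete form of Eq. (6): z[t] = sum_a y[a] * f[t - a].
y = np.array([0., 0., 1., 1., 1., 0., 0.])   # a simple step signal
f = np.array([1., -1.])                       # a difference (edge-detecting) filter
z = np.convolve(y, f)
print(z)  # the edges of the step appear as +1 / -1 responses
```

In a CNN the same operation runs in 2D with learned filter weights, so each filter becomes the edge, texture or shape detector described in the text.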
Some convolutional layers are followed by a rectified linear unit (ReLU) layer that passes positive values whilst converting negative values into zero. Equation (7) explains how a ReLU layer works [38].
$\mathrm{ReLU}(x) = \max(0, x) = \begin{cases} x, & x \geq 0 \\ 0, & x < 0, \end{cases}$
where x represents the value entered for the ReLU layer.
However, CNNs present some challenges, particularly overfitting, because a CNN produces millions of parameters. A dropout layer is used to pass only a specific percentage of neurons in each iteration to solve this particular challenge. In this work, the dropout layer is tuned to 50%; hence, 50% of the neurons are passed in each iteration. Nonetheless, one of the disadvantages of this layer is that it approximately doubles the training time.
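The 50% dropout described above can be sketched as a random keep-mask. The rescaling of surviving activations ("inverted dropout") is a common convention assumed here; the paper does not specify it:

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, rate=0.5):
    """Inverted dropout at the 50% rate used in the text: each neuron is
    kept with probability 1 - rate, and surviving activations are rescaled
    so the expected output is unchanged."""
    keep = rng.random(activations.shape) >= rate
    return activations * keep / (1.0 - rate)

a = np.ones(10)
d = dropout(a)
print(d)  # roughly half the entries are 0, the rest are 2.0
```

At inference time the layer is disabled, so all neurons contribute.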
Convolutional layers output millions of parameters with high dimensions; therefore, the dimensions must be reduced. CNN models provide pooling layers that interact with the outputs of the convolutional layers and represent groups of pixels by a single pixel. Pooling layers work in two ways: max and average pooling. Each method has a particular working mechanism. In the max pooling process, a group of pixels is selected in accordance with the filter size, and the group is represented by the maximum pixel within the window specified by the filter, as shown in Equation (8). Average pooling selects a group of pixels in accordance with the filter size and replaces it with the average pixel value, as shown in Equation (9).
$P(i, j) = \max_{m,n = 1,\ldots,k} A\big[(i-1)p + m,\ (j-1)p + n\big],$
$P(i, j) = \frac{1}{k^2}\sum_{m,n=1}^{k} A\big[(i-1)p + m,\ (j-1)p + n\big],$
where $A$ is the input matrix of pixels; $m$, $n$ are the indices within the pooling window; $k$ is the window size; and $p$ is the filter step (stride).
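Equations (8) and (9) can be sketched directly; the 2 × 2 window with stride 2 below is an illustrative choice:

```python
import numpy as np

def pool(A, k=2, p=2, mode="max"):
    """k x k pooling with stride p, following Eqs. (8) and (9): each
    output cell summarises one window of the input matrix A."""
    H, W = A.shape
    out = np.zeros((H // p, W // p))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = A[i * p:i * p + k, j * p:j * p + k]
            out[i, j] = win.max() if mode == "max" else win.mean()
    return out

A = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [9, 8, 3, 2],
              [7, 6, 1, 0]])
print(pool(A, mode="max"))      # [[4. 8.] [9. 3.]]
print(pool(A, mode="average"))  # [[2.5 6.5] [7.5 1.5]]
```

Either way, a 4 × 4 map shrinks to 2 × 2, which is the dimensionality reduction the pooling layers provide.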
The last layer in CNN models is the fully connected layer (FCL), where all the neurons are linked to one another. Each CNN model has FCLs that are different from other types. This layer converts 2D feature maps into 1D ones. Each image is classified into an appropriate class. Lastly, the SoftMax activation layer is a function that has four neurons, with each neuron representing one class in the dataset. Equation (10) shows how the SoftMax function works.
$y(x_i) = \frac{\exp(x_i)}{\sum_{j=1}^{n} \exp(x_j)},$
where $y(x_i)$ is the output of the SoftMax function, with $0 \leq y(x_i) \leq 1$.
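Equation (10) in code, for four class scores. Subtracting the maximum before exponentiating is a standard numerical safeguard and does not change the result:

```python
import numpy as np

def softmax(x):
    """SoftMax of Eq. (10): exponentiate and normalise the class scores."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.5, 0.1])  # raw scores for the four classes
probs = softmax(scores)
print(round(float(probs.sum()), 6))      # 1.0: the outputs form a distribution
```

The predicted class is simply the neuron with the largest probability.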
In this section, we will highlight the GoogLeNet and ResNet-50 models.

3.4.1. GoogLeNet Model

GoogLeNet is used in many fields, including biomedical image classification. GoogLeNet has 27 layers, including the pooling layers; each layer has a specific task. This model has a superior ability to perform complex calculations and reduce dimensions whilst maintaining the most important features. The first convolutional layer of the model, with a filter size of 7 × 7, reduces the dimensions considerably, and the other convolutional layers extract deep features. The network outputs millions of parameters. Therefore, the network contains three pooling layers to reduce the dimensions and the number of parameters; the first pooling layer has a size of 7 × 7, which considerably reduces the dimensions, as every 49 pixels are replaced with one pixel [39]. The network produces 7 million parameters. Figure 7 shows the GoogLeNet architecture and the most important layers the network contains for diagnosing the brain dataset into four classes.

3.4.2. ResNet50 Model

ResNet-50 is a multilayer CNN model used in many fields, including biomedical image classification. ResNet-50 has 16 blocks with 177 layers, with each layer performing a specific task. These include 49 convolutional layers with different filter sizes for extracting feature maps [40]. The convolutional layers are followed by many ReLU layers, batch normalisation layers, two pooling layers (max and average), an FCL and a SoftMax function. The model contains 23.9 million parameters. Figure 8 describes the infrastructure of ResNet-50 and the most important layers included in the network for diagnosing the brain tumour dataset with four classes.

3.5. Hybrid of CNN and SVM

CNN is considered one of the best artificial intelligence models for the classification of biomedical images. However, CNN models demand high-specification computer hardware and are time-consuming to train. Hence, this section presents a hybrid method of CNN and SVM to solve these challenges. These techniques operate with medium-specification computer hardware [41]. The first block is the CNN used to extract features, and the second block is the SVM. Figure 9 describes the architecture of the hybrid techniques, GoogLeNet + SVM and ResNet-50 + SVM, for the early detection of brain tumours.

3.6. Hybrid Features between CNN and Handcrafted Features

In this section, the new techniques are similar to the method presented in the previous section, but they are a combination of handcrafted features and the features of CNN models. However, CNN models require high-specification hardware and are time-consuming in training data. The deep feature map classification technique using the ANN and FFNN solves this problem [42]. The main idea of the approach is represented in several steps. Firstly, features are extracted using CNN, which produces 4096 features. Secondly, features are extracted using the LBP, GLCM and DWT methods combined, producing 228 features (Section 3.3.3). Thirdly, the feature maps extracted from GoogLeNet and ResNet-50 (4096 features) are combined with the hybrid features extracted from LBP, GLCM and DWT (228 features). Thus, each vector contains 4324 features per image. Fourthly, these hybrid features are categorised using the ANN and FFNN. Figure 10 describes the basic architecture of this hybrid CNN model and neural network approach.

4. Experimental Results

4.1. Dataset Division

In this work, many models of machine learning, deep learning and their hybrids, in addition to the approach of merging deep feature maps with traditional feature extraction algorithms, were implemented on the same brain tumour dataset. The dataset contains 3446 images divided into four classes: three types of brain tumours (glioma [926 images, 26.87%], meningioma [1052 images, 30.53%] and pituitary tumour [975 images, 28.29%]) and a no-tumour class (493 images, 14.31%). Table 1 shows the splitting of each class: 80% of the images were used for the training and validation phases (split 80%:20%) and 20% for the test phase.
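The per-class 80:20 split (with the 80% portion itself split 80:20 into training and validation) can be sketched as follows. The class sizes come from the dataset description above; the rounding behaviour is an assumption of this sketch, not necessarily the paper's exact procedure.

```python
import random

# Class sizes from the dataset: glioma, meningioma, pituitary, no tumour.
classes = {"glioma": 926, "meningioma": 1052, "pituitary": 975, "no_tumour": 493}

rng = random.Random(0)
split = {}
for name, n in classes.items():
    idx = list(range(n))          # image indices for this class
    rng.shuffle(idx)
    n_test = round(n * 0.20)      # 20% held out for testing
    n_val = round((n - n_test) * 0.20)   # 20% of the remainder for validation
    split[name] = {"test": idx[:n_test],
                   "val": idx[n_test:n_test + n_val],
                   "train": idx[n_test + n_val:]}
```

Splitting each class separately, as here, keeps the class proportions identical across the training, validation and test phases.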

4.2. Evaluation Metrics

All the different systems in this work were evaluated on a brain tumour dataset using several statistical measures. The systems include neural network algorithms, CNN models, hybrid methods and hybrid feature techniques (i.e., GoogLeNet + [LBP, GLCM and DWT] and ResNet-50 + [LBP, GLCM and DWT]).
The most critical measures used to evaluate the systems’ performance are shown in Equations (11)–(15). The proposed systems produce confusion matrices (CMs) that provide information for calculating the measures. The CM contains all correctly labelled test samples called true positive (TP) and true negative (TN), and incorrectly labelled samples called false positive (FP) and false negative (FN).
\[ \text{Accuracy} = \frac{TN + TP}{TN + TP + FN + FP} \times 100\% \tag{11} \]
\[ \text{Precision} = \frac{TP}{TP + FP} \times 100\% \tag{12} \]
\[ \text{Sensitivity} = \frac{TP}{TP + FN} \times 100\% \tag{13} \]
\[ \text{Specificity} = \frac{TN}{TN + FP} \times 100\% \tag{14} \]
\[ \text{AUC} = \frac{\text{True Positive Rate}}{\text{False Positive Rate}} = \frac{\text{Sensitivity}}{\text{Specificity}} \tag{15} \]
where TP is the MR images that are correctly categorized as brain tumours, TN is the MR images correctly categorized as the normal brain, FP is the normal brain MR images categorized as a tumour, and FN is the MR images of brain tumours categorized as normal.
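Equations (11)-(14) follow directly from the four confusion-matrix counts. A small sketch with illustrative counts (not the paper's actual results):

```python
def metrics(tp, tn, fp, fn):
    """Return accuracy, precision, sensitivity and specificity as
    percentages, per Equations (11)-(14)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn) * 100
    precision = tp / (tp + fp) * 100
    sensitivity = tp / (tp + fn) * 100      # recall / true-positive rate
    specificity = tn / (tn + fp) * 100      # true-negative rate
    return accuracy, precision, sensitivity, specificity

# Illustrative counts for a binary tumour-vs-normal evaluation.
acc, prec, sens, spec = metrics(tp=90, tn=95, fp=5, fn=10)
```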

4.3. Results of ANN and FFNN Networks

Neural networks are amongst the most popular artificial intelligence algorithms for recognising patterns and diagnosing biomedical images. Neural network algorithms rely on the preceding image processing stages, such as optimisation, segmentation, morphology and feature extraction: the more accurate and effective these stages, the more efficient the classification. This study uses ANN and FFNN classifiers to diagnose the dataset on the basis of tumour region determination and hybrid feature extraction. Figure 11 shows the process of training the dataset using the ANN and FFNN classifiers. As the figure shows, the input layer has 228 units, the hidden layer has 10 neurons, and the output layer has 4 neurons, corresponding to the number of classes in the dataset.
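A network with this layout (228 inputs, a small hidden layer, 4 outputs) can be sketched with scikit-learn's `MLPClassifier`. The feature matrix here is synthetic, standing in for the 228 hybrid LBP + GLCM + DWT features per image; the hidden-layer size of 10 mirrors the layout described for Figure 11 but is an assumption of this sketch.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in for the 228 hybrid handcrafted features per image.
X = rng.normal(size=(200, 228))
y = rng.integers(0, 4, size=200)        # four classes

# Feedforward network: 228 inputs -> 10 hidden neurons -> 4 outputs.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=300, random_state=0)
clf.fit(X, y)
pred = clf.predict(X)
```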

4.3.1. Performance Analysis

The performance of the ANN and FFNN on the brain tumour dataset can be measured using several tools; one such measure is cross entropy. Cross entropy measures the performance of the algorithms across epochs by computing the error between the actual and predicted values [43]. Figure 12 shows the output of the ANN and FFNN algorithms, with the training stage in blue, the validation stage in green and the test stage in red. The algorithms achieve their best performance when the error between the actual and predicted values is at its minimum; the dashed lines mark this best performance. The ANN achieved its best validation performance with a value of 0.062691 at epoch 122, whereas the FFNN reached 0.030491 at epoch 213.
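For a multi-class problem, cross entropy compares the network's predicted class probabilities with the one-hot targets; confident, correct predictions give a low loss. A minimal sketch with two toy 4-class predictions:

```python
import numpy as np

def cross_entropy(y_true_onehot, y_prob):
    """Mean cross entropy between one-hot targets and predicted
    class probabilities."""
    eps = 1e-12                          # avoid log(0)
    return -np.mean(np.sum(y_true_onehot * np.log(y_prob + eps), axis=1))

# Two toy predictions: one confident and correct, one uncertain.
y_true = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
confident = np.array([[0.97, 0.01, 0.01, 0.01], [0.01, 0.97, 0.01, 0.01]])
uncertain = np.array([[0.40, 0.30, 0.20, 0.10], [0.25, 0.40, 0.20, 0.15]])
```

The confident prediction yields the lower loss, which is why the training curves in Figure 12 fall as the networks improve.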

4.3.2. Gradient

Gradient values and validation checks are further measures for evaluating the performance of the ANN and FFNN classifiers on the brain tumour dataset [44]. Figure 13 shows the output of the two classifiers during the training phase. The ANN reached its minimum error with a gradient value of 0.028304 and a validation check of 6, both at epoch 128. The FFNN reached its minimum error with a gradient value of 0.00021934 and a validation check of 6, both at epoch 16.

4.3.3. Receiver Operating Characteristic (ROC)

ROC is one of the critical measures of the performance of the ANN classifier. The ROC curve plots the TP rate (sensitivity) on the y-axis against the FP rate (1 − specificity) on the x-axis obtained while evaluating the brain tumour dataset; its area, the AUC, is described by Equation (15). Algorithms approach their best performance as the curve approaches the top-left corner [45]. Figure 14 describes the performance of the ANN algorithm during the training, validation and test stages and over the whole dataset. Each stage has four colours, where each colour represents one class in the dataset. The ANN reached an overall AUC of 98.18% across all stages.
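The standard ROC construction sweeps a decision threshold over the predicted scores, recording sensitivity against 1 − specificity at each threshold; the AUC is the area under that curve. A sketch for a toy binary case (synthetic scores, not the paper's outputs):

```python
import numpy as np

# Toy binary labels and predicted scores.
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores = np.array([0.1, 0.3, 0.35, 0.8, 0.4, 0.6, 0.7, 0.9])

# Sweep thresholds from high to low; at each, compute TPR and FPR.
thresholds = np.concatenate(([np.inf], np.sort(scores)[::-1]))
tpr = [(scores[y_true == 1] >= t).mean() for t in thresholds]
fpr = [(scores[y_true == 0] >= t).mean() for t in thresholds]

# Trapezoidal area under the (FPR, TPR) curve.
auc = sum((fpr[i + 1] - fpr[i]) * (tpr[i + 1] + tpr[i]) / 2
          for i in range(len(fpr) - 1))
```

A perfectly separating classifier gives an AUC of 1.0; the curve then hugs the top-left corner, as described above.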

4.3.4. Error Histogram

Several metrics evaluate the output of the ANN and FFNN classifiers. One of these metrics is the error histogram, which evaluates the ANN and FFNN algorithms by finding the minimum error between the actual and predicted values [46]. Figure 15 describes the error histogram for the ANN and FFNN algorithms for evaluating the brain tumour dataset. Errors between the actual and predicted outputs through the training, validation and test stages are shown in blue, green and red, respectively. The orange line indicates that the error between the actual and predicted values is zero. The ANN reached the minimum error with 20 bins between −0.8591 and 0.9518, whereas the FFNN algorithm reached the minimum error with 20 bins between −0.9149 and 0.9509.
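An error histogram of this kind is simply a binned count of the per-sample errors between targets and network outputs. A sketch with synthetic errors standing in for the networks' actual residuals, using the same 20-bin layout as Figure 15:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the errors between actual and predicted values.
errors = rng.normal(scale=0.3, size=1000).clip(-0.9, 0.95)

# A 20-bin histogram over the error range, as in Figure 15.
counts, edges = np.histogram(errors, bins=20)

# The bin whose centre is closest to zero error (the orange line in the figure).
zero_bin = np.argmin(np.abs((edges[:-1] + edges[1:]) / 2))
```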

4.3.5. Regression

The performance of the FFNN classifier was also evaluated using regression measures. The regression scale predicts continuous variables on the basis of other variables: the target values on the x-axis are predicted from the output values on the y-axis, and the FFNN reaches its best performance when R is one or close to one. Figure 16 describes the output of the FFNN algorithm on the brain tumour dataset. The FFNN reached regression values of 99.93% during the training phase, 98.63% during the validation phase, 98.56% during the test phase and 99.94% over the entire dataset.
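The R value in such a regression plot is the correlation between the network outputs and the targets. A minimal sketch with synthetic, near-perfectly correlated values standing in for the FFNN's outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic targets and outputs: the outputs track the targets closely,
# so R should be near 1, as in Figure 16.
targets = rng.normal(size=500)
outputs = targets + rng.normal(scale=0.05, size=500)

# R is the Pearson correlation between outputs and targets.
R = np.corrcoef(targets, outputs)[0, 1]
```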

4.3.6. Confusion Matrix

The confusion matrix is the core of the rating scales through which performance on a dataset is evaluated. It is a grid containing the numbers of correctly and incorrectly classified MR images. Correctly classified images are counted as TP, meaning that tumours have been correctly classified, and TN, meaning that normal cases have been correctly classified. Incorrectly classified images are counted as FP, meaning that normal cases are classified as neoplastic, and FN, meaning that neoplasms are classified as normal. The confusion matrices in Figure 17, generated using the FFNN and ANN classifiers, show the analysis of the dataset and its classification into the appropriate class (TP and TN) or an error class (FP and FN). Classes are represented as follows: class 1 is glioma, class 2 is meningioma, class 3 is no tumour and class 4 is pituitary tumour. FFNN achieved slightly higher results than ANN: an overall accuracy of 97.6% versus 97.4%.
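From a multi-class confusion matrix, the overall accuracy is the sum of the diagonal over the grand total, and the per-class diagnostic accuracy is each diagonal entry over its row total. A sketch with an illustrative 4-class matrix (made-up counts, not the paper's figures):

```python
import numpy as np

# Illustrative confusion matrix (rows = true class, cols = predicted):
# class 1 glioma, class 2 meningioma, class 3 no tumour, class 4 pituitary.
cm = np.array([[180,   3,  1,   1],
               [  4, 205,  1,   0],
               [  1,   1, 95,   2],
               [  2,   0,  1, 192]])

per_class_acc = np.diag(cm) / cm.sum(axis=1) * 100  # diagnostic accuracy per class
overall_acc = np.trace(cm) / cm.sum() * 100         # overall accuracy
```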
Table 2 describes the results of the performance evaluation of the FFNN and ANN classifiers on the brain tumour dataset for the early detection of tumour type. FFNN achieved a precision of 97.18%, a sensitivity of 99.16%, a specificity of 97.23% and an AUC of 98.18%. In comparison, ANN achieved a precision of 97.01%, a sensitivity of 99.18%, a specificity of 97.07% and an AUC of 98.10%.

4.4. Results of the CNN Models

This section evaluates the brain tumour dataset using the transfer learning technique with two CNN models: GoogLeNet and ResNet-50. In transfer learning, models pretrained on more than a million images spanning more than a thousand classes are retrained on a new dataset to perform new tasks on the basis of the experience gained. In this study, the CNN models’ expertise is transferred to the training of the brain tumour dataset. CNN models face the problem of overfitting, which requires a large dataset; thus, a data augmentation technique is applied to artificially increase the number of images during the training phase. Table 3 describes the dataset before and after data augmentation. This technique increases the images using rotations in several directions, flipping and other methods, as shown in Figure 18. It also balances the dataset by increasing each class by a specific factor. During the training phase, the glioma images are increased 6 times, the meningioma and pituitary images 5 times and the no-tumour images 11 times.
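Rotation and flipping augmentations of the kind described can be sketched as below. This is a simplified illustration on a toy array standing in for an MR slice; the paper also mentions other transforms, and the exact set used is not specified here.

```python
import numpy as np

def augment(image):
    """Generate rotated and flipped copies of one image (a simple sketch
    of the augmentation step; other transforms may also be applied)."""
    variants = [np.rot90(image, k) for k in range(1, 4)]   # 90/180/270 degrees
    variants += [np.fliplr(image), np.flipud(image)]       # horizontal/vertical flip
    return variants

image = np.arange(16, dtype=float).reshape(4, 4)  # toy stand-in for an MR slice
augmented = augment(image)
# One original image thus becomes 6 training images (itself + 5 variants),
# the factor used for the glioma class.
```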
Table 4 describes the set of the GoogLeNet and ResNet-50 parameters with Adam as the chosen optimiser for both models. The mini batch size, initial learning rate, validation frequency and dataset training time for each model were set.
GoogLeNet and ResNet-50 achieved superior results in classifying the dataset; Table 5 summarises their performance, with ResNet-50 slightly outperforming GoogLeNet. These results can help physicians and radiologists make an early diagnostic decision regarding the appropriate treatment for patients. In particular, ResNet-50 reached an accuracy of 95.2%, a precision of 95.25%, a sensitivity of 95%, a specificity of 98.5% and an AUC of 97.3%. GoogLeNet reached an accuracy of 94.3%, a precision of 95.2%, a sensitivity of 94%, a specificity of 98% and an AUC of 99.19%.
The performances of ResNet-50 and GoogLeNet on the brain tumour dataset were evaluated. The confusion matrix during the test phase is shown in Figure 19. The figure displays the overall and diagnostic accuracies in each dataset class. Glioma was diagnosed at the rates of 95.1% and 90.8% by GoogLeNet and ResNet-50, respectively. Meningioma was diagnosed at the rates of 91% and 97.6% by GoogLeNet and ResNet-50, respectively. Pituitary tumour was diagnosed at the rates of 99% and 97.4% by GoogLeNet and ResNet-50, respectively. Lastly, normal MR images were diagnosed at the rates of 90.9% and 93.9% by GoogLeNet and ResNet-50, respectively.

4.5. Results of the Hybrid CNN with SVM Classifier

This section presents the evaluation results of the hybrid models combining the CNN models (ResNet-50 and GoogLeNet) with an SVM classifier. The technique consists of two parts: the CNN models, which extract feature maps with high accuracy, and the SVM classifier, which classifies those feature maps with high accuracy. Table 6 shows the results of the hybrid systems GoogLeNet + SVM and ResNet-50 + SVM in classifying the brain tumour dataset for the early detection of brain tumour types. The ResNet-50 + SVM system outperformed the GoogLeNet + SVM system. In particular, ResNet-50 + SVM reached an accuracy of 95.5%, a precision of 96%, a sensitivity of 95.25%, a specificity of 98.5% and an AUC of 99.12%. GoogLeNet + SVM achieved an accuracy of 94.8%, a precision of 94.5%, a sensitivity of 93.75%, a specificity of 98% and an AUC of 98.52%.
The results of the GoogLeNet + SVM and ResNet-50 + SVM models on the brain tumour dataset were evaluated. The confusion matrix during the test phase is shown in Figure 20, which displays the overall and diagnostic accuracies of each class in the dataset. Glioma was diagnosed at the rates of 89.2% and 91.4% by GoogLeNet + SVM and ResNet-50 + SVM, respectively. Meningioma was diagnosed at the rates of 96.7% and 96.2% by GoogLeNet + SVM and ResNet-50 + SVM, respectively. Pituitary tumour was diagnosed at the rates of 97.9% and 99% by GoogLeNet + SVM and ResNet-50 + SVM, respectively. Lastly, normal MR images were diagnosed at the rate of 94.9% by both GoogLeNet + SVM and ResNet-50 + SVM.

4.6. Results of the Fusion of CNN and Handcrafted Features

This section presents the results of the hybrid technique combining the features extracted using the CNN models with those extracted using LBP, GLCM and DWT. All features were combined into feature vectors and then categorised using the ANN and FFNN classifiers. The main motivation for this technique is its highly efficient discrimination of the tumour type in each MR image.
Table 7 describes the performance of the ANN based on the hybrid features of CNN and handcrafted features (LBP, GLCM and DWT), which indicate that the algorithm reached superior results. Firstly, the ANN classifier based on the hybrid features of GoogLeNet and handcrafted features achieved an accuracy of 99.6%, a precision of 99.52%, a sensitivity of 99.87%, a specificity of 99.52% and an AUC of 99.7%. The ANN classifier based on the hybrid features of ResNet-50 and handcrafted features reached an accuracy of 99.8%, a precision 99.79%, a sensitivity of 99.93%, a specificity of 99.79% and an AUC of 99.89%. Secondly, the FFNN classifier based on the hybrid features of GoogLeNet and handcrafted features reached an accuracy of 99.9%, a precision of 99.84%, a sensitivity of 99.95%, a specificity of 99.85% and an AUC of 99.9%. The FFNN classifier based on the hybrid features of ResNet-50 and handcrafted features reached an accuracy of 99.7%, a precision of 99.61%, a sensitivity of 99.89%, a specificity of 99.61% and an AUC of 99.75%.
Figure 21 describes the performance of the ANN classifier based on features extracted from the GoogLeNet and ResNet-50 models and handcrafted features (LBP, GLCM and DWT) to classify the dataset for the early detection of brain tumour type. The confusion matrices in Figure 21a,b display the overall and diagnostic accuracies for each class in the dataset, respectively. Glioma was diagnosed at the rates of 99.6% and 99.9% with features from GoogLeNet + handcrafted features and ResNet-50 + handcrafted features, respectively. Meningioma was diagnosed at the rates of 99.8% and 99.9% with features from GoogLeNet + handcrafted features and ResNet-50 + handcrafted features, respectively. Pituitary tumour was diagnosed at the rates of 99.8% and 99.8% with features from GoogLeNet + handcrafted features and ResNet-50 + handcrafted features, respectively. Lastly, normal MR images were diagnosed at the rates of 99% and 99.4% with features from GoogLeNet + handcrafted features and ResNet-50 + handcrafted features, respectively.
Figure 22 describes the performance of the FFNN algorithm based on features extracted from the GoogLeNet and ResNet-50 models and the handcrafted features (LBP, GLCM and DWT) to classify the dataset for the early detection of brain tumour type. The confusion matrices in Figure 22a,b display the overall and diagnostic accuracies for each class in the dataset. Glioma was diagnosed at the rates of 99.8% and 99.9% with features from GoogLeNet + handcrafted features and ResNet-50 + handcrafted features, respectively. Meningioma was diagnosed at the rates of 99.9% and 99.7% with features from GoogLeNet + handcrafted features and ResNet-50 + handcrafted features, respectively. Pituitary tumour was diagnosed at the rates of 100% and 99.8% with features from GoogLeNet + handcrafted features and ResNet-50 + handcrafted features, respectively. Lastly, normal MR images were diagnosed at the rates of 99.6% and 98.8% with features from GoogLeNet + handcrafted features and ResNet-50 + handcrafted features, respectively.

5. Discussion and Comparison of the Proposed Methods

This study includes diverse and hybrid artificial intelligence methods. Four types of methods were developed, and each method contained more than one technology. The first proposed system consists of two neural network algorithms (ANN and FFNN). The second one comprises GoogLeNet and ResNet-50. The third one includes the hybrid technology of CNN and SVM. The fourth one constitutes the hybrid features of CNN and handcrafted features (LBP, GLCM and DWT).
In the first system, the dataset was classified on the basis of features extracted using a hybrid method with the LBP, GLCM and DWT algorithms, which produced 228 features. The features were diagnosed using the ANN and FFNN. The ANN achieved an overall accuracy of 97.4%, whereas the FFNN achieved an accuracy of 97.6%. In the second system, the dataset was classified using two pretrained models, GoogLeNet and ResNet-50, through effective deep feature map extraction and classification with high accuracy. GoogLeNet achieved an accuracy of 94.3%, whereas ResNet-50 achieved an accuracy of 95.2%. The third proposed system is a hybrid technology with CNN for feature map extraction and SVM for classification. GoogLeNet + SVM achieved an accuracy of 94.8%, whereas ResNet-50 + SVM achieved an accuracy of 95.5%. The fourth proposed system is a hybrid method for extracting features with the CNN models (GoogLeNet and ResNet-50) and the LBP, GLCM and DWT algorithms and classifying them using the ANN and FFNN algorithms. The ANN based on GoogLeNet features with LBP, GLCM and DWT features achieved an accuracy of 99.6%, whereas the ANN based on ResNet-50 features with LBP, GLCM and DWT features achieved an accuracy of 99.8%. The FFNN based on GoogLeNet features with LBP, GLCM and DWT features achieved an accuracy of 99.9%, whereas the FFNN based on ResNet-50 features with LBP, GLCM and DWT features achieved an accuracy of 99.7%.
Table 8 summarises the evaluation results of all the proposed systems, which achieved superior results in diagnosing the dataset for the early detection of brain tumour type to administer appropriate treatment.
In the first proposed neural network system, ANN achieved accuracies of 98.2%, 99%, 91.3% and 98.3% for diagnosing glioma, meningioma, no tumour and pituitary tumour, respectively. FFNN achieved accuracies of 99.7%, 99.4%, 98.6% and 93.2% for diagnosing glioma, meningioma, no tumour and pituitary tumour, respectively. In the second proposed system, GoogLeNet reached accuracies of 95.1%, 91%, 90.9% and 99% for diagnosing glioma, meningioma, no tumour and pituitary tumour, respectively. ResNet-50 reached accuracies of 90.8%, 97.6%, 93.9% and 97.4% for diagnosing glioma, meningioma, no tumour and pituitary tumour, respectively. The third proposed system (hybrid technique) also achieved superior results. The GoogLeNet + SVM network achieved accuracies of 89.2%, 96.7%, 94.9% and 97.9% for diagnosing glioma, meningioma, no tumour and pituitary tumour, respectively. ResNet-50 + SVM reached accuracies of 91.4%, 96.2%, 94.9% and 99% for diagnosing glioma, meningioma, no tumour and pituitary tumour, respectively. The fourth proposed system achieves the best performance amongst the proposed systems. It reached accuracies close to 100% for diagnosing glioma, meningioma, no tumour and pituitary tumour.
Overall, the performance of all systems for diagnosing brain tumours ranges between 89.2% and 100%. In particular, the ANN and FFNN based on the hybrid of CNN and handcrafted features were superior to the other proposed systems and reached accuracies close to 100% for diagnosing glioma, meningioma, pituitary tumour and the normal class.
Bansal et al. presented a method to segment MR images using a marker-controlled watershed algorithm; features were extracted and categorised using hybrid classifiers with promising results [47]. Hannan et al. presented a method for early detection by distinguishing features through hierarchical deep learning; the hierarchical deep learning-based method divided brain tumours into four types and categorised them with 92.3% accuracy [48]. Muhannad et al. proposed an improved CNN approach to classifying brain tumours, building CNN layers from scratch to assess their performance and resetting the weights of the pretrained CNNs; the improved models yielded 95.75% accuracy [49]. Momina et al. proposed a mask region-CNN model with the structure of a pretrained DenseNet-41 to diagnose brain tumours; experiments demonstrated its ability to segment tumours with 96.3% accuracy and classify them with 98.34% accuracy [50]. Khairandish et al. proposed two models, a rough ELM (RELM) and a hybrid model; the RELM reached 94.23% accuracy, whereas the hybrid method yielded 98.49% accuracy [51].
Thus, it is noted that the performance of our systems is superior to the previous relevant systems.

6. Conclusions

In this work, several multi-technique systems were developed to diagnose a dataset for the early detection of brain tumour type so that appropriate treatment can be administered. The work comprises four proposed systems, each with more than one technology. Firstly, neural network algorithms classified the dataset by extracting the tumour area, isolating it from the rest of the image and exploiting the advantages of three algorithms (LBP, GLCM and DWT). Secondly, the dataset was evaluated on two pretrained models, ResNet-50 and GoogLeNet, to extract and classify deep feature maps with high efficiency. Thirdly, a hybrid technique used CNN models to extract deep features, which were then classified using the SVM algorithm. Fourthly, a feature extraction technique combined the ResNet-50 and GoogLeNet features with the features from the LBP, GLCM and DWT algorithms. With the hybrid features of GoogLeNet and handcrafted features, FFNN achieved an accuracy of 99.9%, a precision of 99.84%, a sensitivity of 99.95%, a specificity of 99.85% and an AUC of 99.9%.
The limitation of this study is the lack of a sufficient dataset to train the models, which is overcome by artificially increasing the data. Future work will generalise the features extracted using the hybrid method to classify them via machine learning algorithms, in addition to evaluating the dataset on CNN models built from scratch and comparing their performance with the pretrained CNN models.

Author Contributions

Conceptualization, B.A.M., E.M.S. and T.S.A.; methodology, B.A.M., A.A. and E.M.S.; software, B.A.M., A.A., E.M.S. and A.M.A.; validation, M.A., A.N.A. and T.S.A.; formal analysis, B.A.M., A.M.A. and E.M.S.; investigation, T.S.A., M.A. and A.N.A.; resources, A.M.A. and M.A.; data curation, E.M.S.; writing—original draft preparation, A.N.A. and E.M.S.; writing—review and editing, B.A.M., T.S.A. and A.A.; visualization, M.A., A.M.A., E.M.S. and A.N.A.; supervision, B.A.M.; project administration, B.A.M., A.A. and M.A.; funding acquisition, B.A.M. and A.N.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been funded by the Scientific Research Deanship at the University of Ha’il, Saudi Arabia, through project number GR-22 011.

Data Availability Statement

The used dataset is available on https://www.kaggle.com/sartajbhuvaji/brain-tumor-classification-mri (accessed on 15 May 2022).

Acknowledgments

We would like to acknowledge the Scientific Research Deanship at the University of Ha’il, Saudi Arabia, for funding this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jemal, A.; Ward, E.M.; Johnson, C.J.; Cronin, K.A.; Ma, J.; Ryerson, A.B.; Mariotto, A.; Lake, A.J.; Wilson, R.; Sherman, R.L.; et al. Annual report to the nation on the status of cancer, 1975–2014, featuring survival. JNCI J. Natl. Cancer Inst. 2017, 109, djx030.
  2. Sciancalepore, F.; Tariciotti, L.; Remoli, G.; Menegatti, D.; Carai, A.; Petruzzellis, G.; Miller, K.P.; Delli Priscoli, F.; Giuseppi, A.; Premuselli, R.; et al. Computer-Based Cognitive Training in Children with Primary Brain Tumours: A Systematic Review. Cancers 2022, 14, 3879.
  3. Louis, D.N.; Perry, A.; Reifenberger, G.; Von Deimling, A.; Figarella-Branger, D.; Cavenee, W.K.; Ohgaki, H.; Wiestler, O.D.; Kleihues, P.; Ellison, D.W. The 2016 World Health Organization classification of tumors of the central nervous system: A summary. Acta Neuropathol. 2016, 131, 803–820.
  4. Swati, Z.N.K.; Zhao, Q.; Kabir, M.; Ali, F.; Ali, Z.; Ahmed, S.; Lu, J. Content-based brain tumor retrieval for MR images using transfer learning. IEEE Access 2019, 7, 17809–17822.
  5. Pereira, S.; Meier, R.; Alves, V.; Reyes, M.; Silva, C.A. Automatic brain tumor grading from MRI data using convolutional neural networks and quality assessment. In Understanding and Interpreting Machine Learning in Medical Image Computing Applications; Springer: Berlin/Heidelberg, Germany, 2018; pp. 106–114.
  6. DeAngelis, L.M. Brain Tumors. N. Engl. J. Med. 2001, 344, 114–123.
  7. Kleihues, P.; Burger, P.C.; Scheithauer, B.W. The new WHO classification of brain tumours. Brain Pathol. 1993, 3, 255–268.
  8. Naik, J.; Patel, S. Tumor detection and classification using decision tree in brain MRI. Int. J. Comput. Sci. Netw. Secur. (IJCSNS) 2014, 14, 87.
  9. Lewis, R.; Rempala, G.; Dell, L.D.; Mundt, K.A. Vinyl chloride and liver and brain cancer at a polymer production plant in Louisville, Kentucky. J. Occup. Environ. Med. 2003, 45, 533–537.
  10. Korf, B.R. Malignancy in neurofibromatosis type 1. Oncologist 2000, 5, 477–485.
  11. Ruggeri, A.G.; Fazzolari, B.; Colistra, D.; Cappelletti, M.; Marotta, N.; Delfini, R. Calcified spinal meningiomas. World Neurosurg. 2017, 102, 406–412.
  12. Kieran, M.W.; Walker, D.; Frappaz, D.; Prados, M. Brain tumors: From childhood through adolescence into adulthood. J. Clin. Oncol. 2010, 28, 4783–4789.
  13. Rajinikanth, V.; Satapathy, S.C.; Fernandes, S.L.; Nachiappan, S. Entropy based segmentation of tumor from brain MR images–a study with teaching learning based optimization. Pattern Recognit. Lett. 2017, 94, 87–95.
  14. Abbasi, S.; Tajeripour, F. Detection of brain tumor in 3D MRI images using local binary patterns and histogram orientation gradient. Neurocomputing 2017, 219, 526–535.
  15. Rehman, A.; Naz, S.; Razzak, M.I.; Akram, F.; Imran, M. A deep learning-based framework for automatic brain tumors classification using transfer learning. Circuits Syst. Signal Process. 2020, 39, 757–775.
  16. Cheng, J.; Huang, W.; Cao, S.; Yang, R.; Yang, W.; Yun, Z.; Wang, Z.; Feng, Q. Enhanced performance of brain tumor classification via tumor region augmentation and partition. PLoS ONE 2015, 10, e0140381.
  17. Ismael, M.R.; Abdel-Qader, I. Brain tumor classification via statistical features and back-propagation neural network. In Proceedings of the 2018 IEEE International Conference on Electro/Information Technology (EIT), Rochester, MI, USA, 3–5 May 2018; pp. 0252–0257.
  18. Abir, T.A.; Siraji, J.A.; Ahmed, E. Analysis of a novel MRI based brain tumour classification using probabilistic neural network (PNN). Int. J. Sci. Res. Sci. Eng. Technol. 2018, 4, 65–79.
  19. Widhiarso, W.; Yohannes, Y.; Prakarsah, C. Brain tumor classification using gray level co-occurrence matrix and convolutional neural network. IJEIS (Indones. J. Electron. Instrum. Syst.) 2018, 8, 179–190.
  20. Ismael, S.A.A.; Mohammed, A.; Hefny, H. An enhanced deep learning approach for brain cancer MRI images classification using residual networks. Artif. Intell. Med. 2020, 102, 101779.
  21. Chen, H.; Qin, Z.; Ding, Y.; Tian, L.; Qin, Z. Brain tumor segmentation with deep convolutional symmetric neural network. Neurocomputing 2020, 392, 305–313.
  22. Menze, B.H.; Van Leemput, K.; Lashkari, D.; Weber, M.A.; Ayache, N.; Golland, P. A generative model for brain tumor segmentation in multi-modal images. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Beijing, China, 20–24 September 2010; Springer: Berlin/Heidelberg, Germany, 2012; pp. 151–159.
  23. Sharif, M.; Amin, J.; Raza, M.; Yasmin, M.; Satapathy, S.C. An integrated design of particle swarm optimization (PSO) with fusion of features for detection of brain tumor. Pattern Recognit. Lett. 2020, 129, 150–157.
  24. Sharif, M.; Amin, J.; Raza, M.; Anjum, M.A.; Afzal, H.; Shad, S.A. Brain tumor detection based on extreme learning. Neural Comput. Appl. 2020, 32, 15975–15987.
  25. Chen, B.; Zhang, L.; Chen, H.; Liang, K.; Chen, X. A novel extended Kalman filter with support vector machine based method for the automatic diagnosis and segmentation of brain tumors. Comput. Methods Programs Biomed. 2021, 200, 105797.
  26. Özyurt, F.; Sert, E.; Avcı, D. An expert system for brain tumor detection: Fuzzy C-means with super resolution and convolutional neural network with extreme learning machine. Med. Hypotheses 2020, 134, 109433.
  27. Afshar, P.; Plataniotis, K.N.; Mohammadi, A. BoostCaps: A Boosted Capsule Network for Brain Tumor Classification. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 1075–1079.
  28. Kaplan, K.; Kaya, Y.; Kuncan, M.; Ertunç, H.M. Brain tumor classification using modified local binary patterns (LBP) feature extraction methods. Med. Hypotheses 2020, 139, 109696.
  29. Kanniappan, S.; Samiayya, D.; Vincent, P.M.D.R.; Srinivasan, K.; Jayakody, D.N.K.; Reina, D.G.; Inoue, A. An Efficient Hybrid Fuzzy-Clustering Driven 3D-Modeling of Magnetic Resonance Imagery for Enhanced Brain Tumor Diagnosis. Electronics 2020, 9, 475.
  30. Civita, P.; Valerio, O.; Naccarato, A.G.; Gumbleton, M.; Pilkington, G.J. Satellitosis, a Crosstalk between Neurons, Vascular Structures and Neoplastic Cells in Brain Tumours; Early Manifestation of Invasive Behaviour. Cancers 2020, 12, 3720.
  31. Hasan, A.M.; Jalab, H.A.; Ibrahim, R.W.; Meziane, F.; AL-Shamasneh, A.R.; Obaiys, S.J. MRI Brain Classification Using the Quantum Entropy LBP and Deep-Learning-Based Features. Entropy 2020, 22, 1033.
  32. Senan, E.M.; Jadhav, M.E.; Kadam, A. Classification of PH2 images for early detection of skin diseases. In Proceedings of the 2021 6th International Conference for Convergence in Technology (I2CT), Maharashtra, India, 2–4 April 2021; pp. 1–7.
  33. Al-Mekhlafi, Z.G.; Senan, E.M.; Mohammed, B.A.; Alazmi, M.; Alayba, A.M.; Alreshidi, A.; Alshahrani, M. Diagnosis of Histopathological Images to Distinguish Types of Malignant Lymphomas Using Hybrid Techniques Based on Fusion Features. Electronics 2022, 11, 2865.
  34. Senan, E.M.; Jadhav, M.E. Techniques for the Detection of Skin Lesions in PH2 Dermoscopy Images Using Local Binary Pattern (LBP). In Proceedings of the International Conference on Recent Trends in Image Processing and Pattern Recognition, Aurangabad, India, 3–4 January 2020; pp. 14–25.
  35. Senan, E.M.; Jadhav, M.E. Diagnosis of Dermoscopy Images for the Detection of Skin Lesions Using SVM and KNN. In Proceedings of the Third International Conference on Sustainable Computing, Jaipur, India, March 2021; pp. 125–134.
  36. Dildar, M.; Akram, S.; Irfan, M.; Khan, H.U.; Ramzan, M.; Mahmood, A.R.; Alsaiari, S.A.; Saeed, A.H.M.; Alraddadi, M.O.; Mahnashi, M.H. Skin Cancer Detection: A Review Using Deep Learning Techniques. Int. J. Environ. Res. Public Health 2021, 18, 5479.
  37. Alomari, E.; Katib, I.; Albeshri, A.; Mehmood, R. COVID-19: Detecting Government Pandemic Measures and Public Concerns from Twitter Arabic Data Using Distributed Machine Learning. Int. J. Environ. Res. Public Health 2021, 18, 282.
  38. Jossa-Bastidas, O.; Zahia, S.; Fuente-Vidal, A.; Sánchez Férez, N.; Roda Noguera, O.; Montane, J.; Garcia-Zapirain, B. Predicting Physical Exercise Adherence in Fitness Apps Using a Deep Learning Approach. Int. J. Environ. Res. Public Health 2021, 18, 10769.
  39. Al-Mekhlafi, Z.G.; Senan, E.M.; Rassem, T.H.; Mohammed, B.A.; Makbol, N.M.; Alanazi, A.A.; Ghaleb, F.A. Deep Learning and Machine Learning for Early Detection of Stroke and Haemorrhage. Comput. Mater. Contin. 2022, 72, 775–796.
  40. Abunadi, I.; Senan, E.M. Multi-Method Diagnosis of Blood Microscopic Sample for Early Detection of Acute Lymphoblastic Leukemia Based on Deep Learning and Hybrid Techniques. Sensors 2022, 22, 1629.
  41. Senan, E.M.; Jadhav, M.E.; Rassem, T.H.; Aljaloud, A.S.; Mohammed, B.A.; Al-Mekhlafi, Z.G. Early Diagnosis of Brain Tumour MRI Images Using Hybrid Techniques between Deep and Machine Learning. Comput. Math. Methods Med. 2022, 2022, 8330833.
  42. Senan, E.M.; Abunadi, I.; Jadhav, M.E.; Fati, S.M. Score and Correlation Coefficient-Based Feature Selection for Predicting Heart Failure Diagnosis by Using Machine Learning Algorithms. Comput. Math. Methods Med. 2021, 2021, 8500314.
  43. Koklu, M.; Unlersen, M.F.; Ozkan, I.A.; Aslan, M.F.; Sabanci, K. A CNN-SVM study based on selected deep features for grapevine leaves classification. Measurement 2022, 188, 110425. [Google Scholar] [CrossRef]
  44. Mohammed, B.A.; Senan, E.M.; Al-Mekhlafi, Z.G.; Alazmi, M.; Alayba, A.M.; Alanazi, A.A.; Alreshidi, A.; Alshahrani, M. Hybrid Techniques for Diagnosis with WSIs for Early Detection of Cervical Cancer Based on Fusion Features. Appl. Sci. 2022, 12, 8836. [Google Scholar] [CrossRef]
  45. Mohammed, B.A.; Senan, E.M.; Rassem, T.H.; Makbol, N.M.; Alanazi, A.A.; Al-Mekhlafi, Z.G.; Almurayziq, T.S.; Ghaleb, F.A. Multi-Method Analysis of Medical Records and MRI Images for Early Diagnosis of Dementia and Alzheimer’s Disease Based on Deep Learning and Hybrid Methods. Electronics 2021, 10, 2860. [Google Scholar] [CrossRef]
  46. Mohammed, B.A.; Senan, E.M.; Al-Mekhlafi, Z.G.; Rassem, T.H.; Makbol, N.M.; Alanazi, A.A.; Almurayziq, T.S.; Ghaleb, F.A.; Sallam, A.A. Multi-Method Diagnosis of CT Images for Rapid Detection of Intracranial Hemorrhages Based on Deep and Hybrid Learning. Electronics 2022, 11, 2460. [Google Scholar] [CrossRef]
  47. Alanazi, M.F.; Ali, M.U.; Hussain, S.J.; Zafar, A.; Mohatram, M.; Irfan, M.; AlRuwaili, R.; Alruwaili, M.; Ali, N.H.; Albarrak, A.M. Brain Tumor/Mass Classification Framework Using Magnetic-Resonance-Imaging-Based Isolated and Developed Transfer Deep-Learning Model. Sensors 2022, 22, 372. [Google Scholar] [CrossRef]
  48. Bansal, T.; Jindal, N. An improved hybrid classification of brain tumor MRI images based on conglomeration feature extraction techniques. Neural Comput. Appl. 2022, 34, 9069–9086. [Google Scholar] [CrossRef]
  49. Khairandish, M.; Sharma, M.; Jain, V.; Chatterjee, J.; Jhanjhi, N. A Hybrid CNN-SVM Threshold Segmentation Approach for Tumor Detection and Classification of MRI Brain Images. IRBM 2021, 43, 290–299. [Google Scholar] [CrossRef]
  50. Khan, A.H.; Abbas, S.; Khan, M.A.; Farooq, U.; Khan, W.A.; Siddiqui, S.Y.; Ahmad, A. Intelligent Model for Brain Tumor Identification Using Deep Learning. Appl. Comput. Intell. Soft Comput. 2022, 2022, 8104054. [Google Scholar] [CrossRef]
  51. Masood, M.; Nazir, T.; Nawaz, M.; Mehmood, A.; Rashid, J.; Kwon, H.-Y.; Mahmood, T.; Hussain, A. A Novel Deep Learning Method for Recognition and Classification of Brain Tumors from MRI Images. Diagnostics 2021, 11, 744. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Methodology for classifying a brain tumour dataset.
Figure 2. Some images from the dataset for brain tumours.
Figure 3. Some images of the dataset of brain tumours after the enhancement process.
Figure 4. Samples from the dataset representing the three brain tumours: (a) original image; (b) segmentation of the tumour area; (c) morphological processing.
Figure 5. Hybridisation of the LBP, GLCM and DWT algorithms.
Figure 6. Architecture of the ANN algorithm for diagnosing the brain tumour dataset.
Figure 7. Basic architecture of the GoogLeNet model for diagnosing the brain tumour dataset.
Figure 8. Basic architecture of the ResNet-50 model for diagnosing the brain tumour dataset.
Figure 9. Hybrid techniques: (a) GoogLeNet + SVM; (b) ResNet-50 + SVM.
Figure 10. Basic structure of the hybrid feature approach using the neural network.
Figure 11. Training the dataset using the ANN and FFNN algorithms.
Figure 12. Performance evaluation of (a) ANN and (b) FFNN.
Figure 13. Gradient value and validation checks on the brain tumour dataset using (a) ANN and (b) FFNN.
Figure 14. ROC curve for the brain tumour dataset using ANN.
Figure 15. Error histogram bins on the brain tumour dataset using (a) ANN and (b) FFNN.
Figure 16. Evaluation of FFNN performance on the brain tumour dataset through the regression value.
Figure 17. Confusion matrices for the ANN and FFNN classifiers on the brain tumour dataset: (a) ANN; (b) FFNN.
Figure 18. Samples of MR images after applying the data augmentation technique.
Figure 19. Confusion matrices for evaluating the brain tumour dataset: (a) GoogLeNet; (b) ResNet-50.
Figure 20. Confusion matrices for evaluating the brain tumour dataset: (a) GoogLeNet + SVM; (b) ResNet-50 + SVM.
Figure 21. ANN performance on the brain tumour dataset based on hybrid features: (a) GoogLeNet with traditional methods; (b) ResNet-50 with traditional methods.
Figure 22. FFNN performance on the brain tumour dataset based on hybrid features: (a) GoogLeNet with traditional methods; (b) ResNet-50 with traditional methods.
Table 1. Split of the brain tumour dataset.

| Class | Training (80% of training phase) | Validation (20% of training phase) | Testing (20%) |
|---|---|---|---|
| Glioma | 593 | 148 | 185 |
| Meningioma | 674 | 168 | 210 |
| No tumour | 315 | 79 | 99 |
| Pituitary | 624 | 156 | 195 |
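The counts in Table 1 follow a two-stage split: 20% of each class is held out for testing, and the remaining 80% is divided again into 80% training and 20% validation. A minimal sketch reproducing the table's per-class counts; the split logic is our reconstruction, not the authors' code:

```python
# Hypothetical reconstruction of the Table 1 split: first hold out 20% for
# testing, then carve 20% of the remainder out as a validation set.

def split_counts(total, test_frac=0.2, val_frac=0.2):
    """Split a class total into (train, validation, test) counts."""
    n_test = round(total * test_frac)
    remainder = total - n_test
    n_val = round(remainder * val_frac)
    return remainder - n_val, n_val, n_test

# Class totals are the row sums of Table 1 (train + validation + test).
class_totals = {
    "glioma": 593 + 148 + 185,
    "meningioma": 674 + 168 + 210,
    "no tumour": 315 + 79 + 99,
    "pituitary": 624 + 156 + 195,
}

for name, total in class_totals.items():
    print(name, split_counts(total))
```

With these totals the sketch reproduces every row of Table 1 exactly.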
Table 2. Results of ANN and FFNN on the brain tumour dataset.

| Measure | ANN | FFNN |
|---|---|---|
| Accuracy (%) | 97.4 | 97.6 |
| Precision (%) | 97.18 | 97.01 |
| Sensitivity (%) | 99.16 | 99.18 |
| Specificity (%) | 97.23 | 97.07 |
| AUC (%) | 98.18 | 98.1 |
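The measures in Table 2 (accuracy, precision, sensitivity, specificity) are all derivable from a multi-class confusion matrix such as those shown in Figure 17. A hedged sketch with a made-up three-class matrix; macro-averaging over classes is an assumption, since the averaging scheme is not stated here:

```python
import numpy as np

# Illustrative only (not the authors' code): derive accuracy, precision,
# sensitivity and specificity from a multi-class confusion matrix, with
# per-class values macro-averaged.

def macro_metrics(cm):
    """Return (accuracy, precision, sensitivity, specificity) in percent."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp          # predicted as class c, actually another
    fn = cm.sum(axis=1) - tp          # actually class c, predicted as another
    tn = cm.sum() - (tp + fp + fn)
    accuracy = 100 * tp.sum() / cm.sum()
    precision = 100 * np.mean(tp / (tp + fp))
    sensitivity = 100 * np.mean(tp / (tp + fn))
    specificity = 100 * np.mean(tn / (tn + fp))
    return accuracy, precision, sensitivity, specificity

# Toy 3-class confusion matrix (rows = true class, columns = predicted class).
cm = [[50, 2, 1],
      [3, 45, 2],
      [0, 1, 49]]
print(macro_metrics(cm))
```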
Table 3. Data augmentation to balance the MRI dataset during the training phase (80%).

| Class | Glioma | Meningioma | Pituitary | No tumour |
|---|---|---|---|---|
| Images before augmentation | 593 | 674 | 624 | 315 |
| Images after augmentation | 3558 | 3370 | 3744 | 3456 |
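Table 3 balances the classes by generating more augmented copies per image for the minority classes. The per-image multipliers implied by the table can be recovered as follows (the multipliers are inferred from the counts, not stated explicitly):

```python
# Sketch of the class-balancing arithmetic behind Table 3: dividing the
# post-augmentation count by the pre-augmentation count gives the number of
# images produced per original, which is largest for the smallest class.

before = {"glioma": 593, "meningioma": 674, "pituitary": 624, "no tumour": 315}
after = {"glioma": 3558, "meningioma": 3370, "pituitary": 3744, "no tumour": 3456}

for name in before:
    factor = after[name] / before[name]
    print(f"{name}: about x{factor:.1f} images per original")
```

The minority "no tumour" class (315 images) receives roughly eleven images per original, versus five to six for the larger classes.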
Table 4. Tuning parameter options for the GoogLeNet and ResNet-50 models.

| Option | GoogLeNet | ResNet-50 |
|---|---|---|
| Training optimiser | Adam | Adam |
| Mini-batch size | 18 | 12 |
| Max epochs | 4 | 6 |
| Initial learning rate | 0.0003 | 0.0001 |
| Validation frequency | 3 | 5 |
| Training time | 114 min 15 s | 84 min 56 s |
| Execution environment | GPU | GPU |
Table 5. Results of GoogLeNet and ResNet-50 on the brain tumour dataset.

| Measure | GoogLeNet | ResNet-50 |
|---|---|---|
| Accuracy (%) | 94.3 | 95.2 |
| Precision (%) | 95.2 | 95.25 |
| Sensitivity (%) | 94 | 95 |
| Specificity (%) | 98 | 98.5 |
| AUC (%) | 99.19 | 97.3 |
Table 6. Results of the GoogLeNet + SVM and ResNet-50 + SVM hybrids on the brain tumour dataset.

| Measure | GoogLeNet + SVM | ResNet-50 + SVM |
|---|---|---|
| Accuracy (%) | 94.8 | 95.5 |
| Precision (%) | 94.5 | 96 |
| Sensitivity (%) | 93.75 | 95.25 |
| Specificity (%) | 98 | 98.5 |
| AUC (%) | 98.52 | 99.12 |
Table 7. Results of the ANN and FFNN classifiers based on hybrid features.

| Measure | ANN: GoogLeNet + (LBP, GLCM, DWT) | ANN: ResNet-50 + (LBP, GLCM, DWT) | FFNN: GoogLeNet + (LBP, GLCM, DWT) | FFNN: ResNet-50 + (LBP, GLCM, DWT) |
|---|---|---|---|---|
| Accuracy (%) | 99.6 | 99.8 | 99.9 | 99.7 |
| Precision (%) | 99.52 | 99.79 | 99.84 | 99.61 |
| Sensitivity (%) | 99.87 | 99.93 | 99.95 | 99.89 |
| Specificity (%) | 99.52 | 99.79 | 99.85 | 99.61 |
| AUC (%) | 99.7 | 99.89 | 99.9 | 99.75 |
Table 8. Accuracy (%) achieved by each proposed system in diagnosing each class.

| System | Technique | Glioma | Meningioma | No tumour | Pituitary tumour |
|---|---|---|---|---|---|
| Neural networks | ANN | 98.2 | 99 | 91.3 | 98.3 |
| | FFNN | 99.7 | 99.4 | 98.6 | 93.2 |
| Deep learning | GoogLeNet | 95.1 | 91 | 90.9 | 99 |
| | ResNet-50 | 90.8 | 97.6 | 93.9 | 97.4 |
| Hybrid | GoogLeNet + SVM | 89.2 | 96.7 | 94.9 | 97.9 |
| | ResNet-50 + SVM | 91.4 | 96.2 | 94.9 | 99 |
| Hybrid features (ANN) | GoogLeNet + handcrafted features | 99.6 | 99.8 | 99 | 99.8 |
| | ResNet-50 + handcrafted features | 99.9 | 99.9 | 99.4 | 99.8 |
| Hybrid features (FFNN) | GoogLeNet + handcrafted features | 99.8 | 99.9 | 99.6 | 100 |
| | ResNet-50 + handcrafted features | 99.9 | 99.7 | 98.8 | 99.8 |
