Article

Brain Tumor Segmentation from MRI Images Using Handcrafted Convolutional Neural Network

1 Department of Computer Science, International Islamic University, Islamabad 44000, Pakistan
2 Department of Computer Science, Bacha Khan University, Charsadda 24420, Pakistan
3 Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11345, Saudi Arabia
4 Department of Information Systems, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
5 Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
6 Department of Computer Science, Abdul Wali Khan University, Mardan 23200, Pakistan
* Authors to whom correspondence should be addressed.
Diagnostics 2023, 13(16), 2650; https://doi.org/10.3390/diagnostics13162650
Submission received: 26 June 2023 / Revised: 4 August 2023 / Accepted: 5 August 2023 / Published: 11 August 2023
(This article belongs to the Special Issue Deep Disease Detection and Diagnosis Models)

Abstract

Brain tumor segmentation from magnetic resonance imaging (MRI) scans is critical for the diagnosis, treatment planning, and monitoring of therapeutic outcomes. Thus, this research introduces a novel hybrid approach that combines handcrafted features with convolutional neural networks (CNNs) to enhance the performance of brain tumor segmentation. In this study, handcrafted features were extracted from MRI scans, including intensity-based, texture-based, and shape-based features. In parallel, a unique CNN architecture was developed and trained to detect features from the data automatically. The proposed hybrid method fused the handcrafted features and the CNN-learned features through separate pathways feeding into a new CNN. In this study, the Brain Tumor Segmentation (BraTS) challenge dataset was used to measure the performance using a variety of assessment measures, for instance, segmentation accuracy, Dice score, sensitivity, and specificity. The achieved results showed that our proposed approach outperformed the traditional handcrafted feature-based and individual CNN-based methods used for brain tumor segmentation. In addition, the incorporation of handcrafted features enhanced the performance of the CNN, yielding a more robust and generalizable solution. This research has significant potential for real-world clinical applications where precise and efficient brain tumor segmentation is essential. Future research directions include investigating alternative feature fusion techniques and incorporating additional imaging modalities to further improve the proposed method's performance.

1. Introduction

Brain tumor segmentation is an essential process in medical image analysis that aims to pinpoint the areas of the brain affected by a tumor [1]. Precise and effective segmentation of brain tumors is crucial for diagnosis, therapy planning, and disease progression monitoring [2]. The complex nature of brain tumors and the differences between patients make manually identifying these tumors a difficult and time-consuming job [3]. Brain tumors represent a heterogeneous group of intracranial neoplasms that affect both adults and children, posing significant challenges for diagnosis and treatment [4]. Magnetic resonance imaging (MRI) stands as the top choice for non-invasive brain tumor detection and assessment because of its exceptional resolution and outstanding contrast for soft tissues [5]. Accurate segmentation of brain tumors from MRI data is therefore crucial in many clinical tasks, such as diagnosis, treatment planning, and patient monitoring [6].
Traditional segmentation methods are mainly based on handcrafted features designed from domain knowledge [7]. The problem is that they are generally sensitive to variations in image intensity and hence require extensive manual tuning, so their robustness and precision are low [8]. Brain tumor segmentation techniques are crucial for accurate diagnosis, monitoring of tumor progression, and treatment planning [9]. These techniques can be generally divided into three categories: manual, semi-automatic, and fully automatic methods [10]. Manual segmentation is performed by radiologists or other experts and involves the delineation of tumor regions on medical images using graphical tools [11]. This method can be accurate, but it is slow and labor-intensive [12]. Furthermore, the increasing demand for medical imaging and the limited availability of expert radiologists make manual segmentation challenging to scale. Semi-automatic methods require minimal user intervention, often providing an initial contour or seed point to guide the segmentation process. These methods rely on algorithms such as region growing, which iteratively groups neighboring pixels with similar intensity values [13]; level-set methods, which evolve a contour based on geometric and image-based properties [14]; and active contours or snakes, which deform a curve or surface to minimize an energy function derived from image features [15]. Semi-automatic methods offer improved efficiency compared to manual methods; however, they still require user interaction, which can be time-consuming and may introduce variability.
Fully automatic methods segment tumors without user interaction, using machine learning (ML) and deep learning (DL) approaches. These techniques seek to increase the effectiveness, consistency, and scalability of the segmentation process [16]. Handcrafted feature-based methods involve extracting engineered features from images and training ML classifiers for tumor segmentation, while DL techniques such as CNNs automatically learn hierarchical representations of the data to perform segmentation [17]. Fully automatic methods have demonstrated the potential for high precision and accuracy; however, they may require large, annotated datasets for training and can be computationally expensive. Recently, convolutional neural networks (CNNs) have emerged as a powerful resource for computer vision tasks such as segmentation [9].
Compared with traditional methods, CNNs have shown superior performance, especially in medical image segmentation, by learning features directly from the data [10]. However, the integration of handcrafted features and CNNs for brain tumor segmentation has not been thoroughly investigated in the literature. From the literature, we identify that combining handcrafted features and CNNs could improve overall performance by leveraging the strengths of both methods. Thus, based on this discussion, this study proposes a novel hybrid approach that combines handcrafted features and CNNs for brain tumor segmentation in MRI scans.
Briefly, our study contributes to the research on brain tumor segmentation by introducing a hybrid approach that combines handcrafted features with CNN to enhance the performance of brain tumor segmentation from MRI scans. The proposed hybrid model outperforms traditional handcrafted feature-based methods and individual CNN-based methods for brain tumor segmentation. In addition, it provides a more robust and generalizable solution with significant potential. The key contributions of this study are given below:
  • Our proposed approach integrates various handcrafted features, for example, intensity-, texture-, and shape-based features, with CNNs to achieve high accuracy and robustness. In addition, the proposed approach has better generalization capability for unseen data.
  • The efficiency of our proposed model is measured by comparing it with state-of-the-art segmentation models using standard benchmark datasets. The efficient results were measured based on various standard metrics, for instance, segmentation accuracy, Dice score, specificity, and sensitivity. The achieved results prove that our proposed model is highly efficient.
  • This research has significant potential for real-world clinical applications where precise and efficient brain tumor segmentation is essential.
The rest of the paper is organized as follows: Section 2 presents a recent literature review on brain tumor segmentation techniques, examining the latest developments in the field, focusing on handcrafted feature-based methods, CNN-based methods, and hybrid approaches in medical imaging. Section 3 describes the proposed methodology, including data acquisition and pre-processing, handcrafted feature extraction, CNN architecture, and the hybrid approach for integrating handcrafted features and CNNs. Section 4 presents the experiments and results, discussing the dataset description, evaluation metrics, comparative analysis, and performance discussion. Section 5 concludes the paper and discusses its limitations and future work.

2. Related Work

In recent years, extensive research has been carried out on brain tumor segmentation using handcrafted feature extraction techniques and deep learning (DL) approaches. In this section, we provide a detailed overview of related work in both areas.

2.1. Handcrafted Features-Based Methods

Medical image analysis has made extensive use of handcrafted feature-based techniques, including brain tumor segmentation. These techniques involve the segmentation of images using ML algorithms and the extraction of engineered features that define image qualities [18]. Handcrafted features are divided into three categories: intensity-based, texture-based, and shape-based. Intensity-based features capture the local intensity distribution within the image. These features include statistical measurements such as mean, median, standard deviation, and histogram-based metrics [19]. Intensity-based features are useful in differentiating between normal and abnormal tissue regions due to their distinct intensity profiles. Texture-based features describe the spatial arrangement of intensities and reflect the local patterns in the image. Common texture-based features include the gray-level co-occurrence matrix (GLCM), which captures the frequency of specific pixel value combinations at certain spatial relationships [20]; local binary patterns (LBP), which encode the relationship between a pixel and its neighbors [21]; and Gabor filters, which analyze the frequency and orientation information in images [22]. Texture features can be valuable for characterizing the heterogeneity and complexity of tumor regions. Shape-based features capture the geometric properties of the tumor region, providing information about the tumor’s size, shape, and boundary irregularities. Examples of shape-based features include area, perimeter, compactness, and various moments [23]. These features can help differentiate tumors from surrounding tissues based on their distinct morphological characteristics.
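To make the texture descriptors above concrete, the following sketch computes GLCM statistics and a uniform-LBP histogram for a single 2D slice using scikit-image. It is an illustration only, not the authors' implementation; the 8-bit quantization and the chosen distances, angles, and neighborhood radius are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def texture_features(slice_2d):
    """GLCM and LBP texture descriptors for one grayscale slice (illustrative)."""
    # Quantize to 8-bit gray levels for the co-occurrence matrix
    img = np.uint8(255 * (slice_2d - slice_2d.min()) / (slice_2d.ptp() + 1e-8))
    # GLCM: co-occurrence of gray levels at distance 1 over four orientations
    glcm = graycomatrix(img, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    feats = {prop: graycoprops(glcm, prop).mean()
             for prop in ("contrast", "homogeneity", "energy", "correlation")}
    # LBP: histogram of uniform patterns over an 8-neighbor circle of radius 1
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    feats.update({f"lbp_{i}": v for i, v in enumerate(hist)})
    return feats
```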
ML algorithms, such as random forests (RF), support vector machines (SVM), and k-nearest neighbors (k-NN), are trained for segmentation tasks after handcrafted features are extracted [24]. Despite the success of handcrafted feature-based methods in various medical image segmentation tasks, these methods often require extensive manual tuning and are sensitive to variations in image intensity, limiting their robustness and precision [25]. Additionally, the reliance on manually engineered features can lead to a lack of adaptability to diverse imaging conditions and tumor appearances. Therefore, there is a need for more robust and versatile approaches to tumor segmentation.

2.2. Convolutional Neural Network-Based Methods

CNN has revolutionized the field of image recognition and segmentation by automatically learning features from the data, making them more robust to variations and alleviating the need for manual feature engineering [17]. CNNs are composed of multiple layers, including convolutional, pooling, and fully connected layers, that use nonlinear transformations to learn hierarchical representations of the input data [26]. This allows CNNs to effectively capture complex patterns and structures within images, leading to improved performance in various image analysis tasks.
Several CNN architectures have been proposed in the context of brain tumor segmentation to address the challenges posed by the heterogeneity and complexity of brain tumors. Some of the most prominent architectures include U-Net, V-Net, and DeepMedic [27,28,29]. U-Net is a symmetric encoder-decoder architecture that uses skip connections to merge low-level and high-level features, enabling the accurate localization of tumor boundaries. V-Net extends the U-Net architecture to 3D medical images and incorporates a volumetric loss function for improved segmentation performance. DeepMedic employs a multi-scale approach with parallel processing of image patches at different resolutions to capture both global and local contextual information. The study in [30] aimed to accurately segment brain tumors from MRI scans using a 3D nnU-Net model enhanced with domain knowledge from a senior radiologist. The approach improved the model's performance and achieved high Dice scores for the validation and test sets. It was validated on hold-out testing data, including pediatric and sub-Saharan African patient populations, demonstrating high generalization capabilities.
These CNN architectures have demonstrated superior performance in brain tumor segmentation compared to traditional methods by learning context-aware features that capture both local and global information [31]. Additionally, CNN-based methods are more robust to intensity variations and can adapt to diverse imaging conditions and tumor appearances, making them a promising approach for this task.
Despite the success, CNNs usually need large, annotated datasets for training, which can be challenging to obtain in the medical domain due to the limited availability of expert annotations and the time-consuming nature of manual segmentation [32]. Furthermore, CNNs can be computationally expensive, particularly for large 3D medical images, and may lack interpretability due to their black-box nature.
To overcome these limitations, researchers have explored various strategies, such as transfer learning, data augmentation, and incorporating domain knowledge through the integration of handcrafted features. These approaches aim to leverage the strengths of both handcrafted feature-based methods and CNNs to improve the robustness, precision, and interpretability of brain tumor segmentation techniques.

2.3. Hybrid Approaches in Medical Imaging

Hybrid approaches aim to combine the strengths of handcrafted features and DL techniques to increase the performance of medical image segmentation tasks, taking advantage of domain knowledge and automated feature learning [33]. Several hybrid approaches have been proposed for various medical imaging applications, including lung nodule detection, breast cancer segmentation, and retinal vessel segmentation [34,35,36].
These hybrid approaches often involve integrating handcrafted features at different levels of the CNN architecture, such as input channels, feature maps, or decision levels [33]. Several strategies have been proposed for incorporating handcrafted features into DL models. One approach is to concatenate handcrafted features with deep features before the classification layer, which allows the model to leverage both feature types during the decision-making process [36]. Another approach involves injecting handcrafted features into intermediate layers of the CNN, enabling the network to learn more complex, higher-level representations that integrate domain knowledge [37]. Multi-stream architectures, which process handcrafted and deep features in parallel, have also been proposed to encourage complementary learning and robust feature representations [38].
These hybrid approaches have demonstrated improved performance compared to handcrafted feature-based or CNN-based methods alone in various medical imaging tasks. By combining the advantages of both techniques, hybrid models can capitalize on the domain knowledge provided by handcrafted features while benefiting from the automatic feature learning capabilities of CNNs.
Table 1 summarizes a comparison of brain tumor segmentation techniques, including handcrafted feature methods, CNN-based, and hybrid approaches. The evaluation of the relevant literature emphasizes the limitations of handcrafted feature-based methods and the potential of CNN-based methods for tumor segmentation. However, the integration of handcrafted features and CNNs has not been thoroughly investigated for brain tumor segmentation. A hybrid method that combines the strengths of each approach could lead to improved performance in this task, offering a promising avenue for future research and development in the field of medical image analysis.

2.4. Data Augmentation Techniques

The literature includes an extensive collection of data augmentation techniques for brain MRI. Different techniques, such as translation, noise addition, rotation, and shearing, have been applied to MRI scans to increase the size of the dataset as well as the performance of tumor segmentation. Khan et al. [39] applied noise addition and shearing to increase the size of the dataset and improved the accuracy of classification and tumor segmentation. Similarly, Dufumier et al. [40] applied rotation, random cropping, noise addition, translation, and blurring to increase the dataset size and the performance of age prediction and sex classification. Other studies used elastic deformation, rotation, and scaling to improve tumor segmentation and accuracy at the same time [41]. These techniques are common due to their simplicity and performance. In addition to these techniques, researchers have also generated synthetic images for specific tasks. The most common image generation method is mix-up, where patches from two random images are combined to generate a new image. Across these applications, researchers used different datasets, different numbers of images, and different network architectures, so each study reported performance based on its selected techniques. In this article, after careful evaluation of the literature, the common techniques are used; they are presented in Table 2. Furthermore, Nalepa et al. [42] provide a comprehensive survey of data augmentation that can be consulted for further details.

3. Methodology

In this section, we describe the proposed methodology for brain tumor segmentation using a fusion of handcrafted features and CNN. The proposed method consists of two feature pathways for handcrafted and CNN. Pre-processing and data augmentation are also applied. An overview of the proposed methodology is presented in Figure 1.

3.1. Data Acquisition and Preprocessing

The Brain Tumor Segmentation (BraTS) 2018 dataset, which is freely available, provided the MRI scans used in this study [43]. BraTS provides multi-institutional, multi-scanner, and multi-protocol pre-operative scans of patients with brain tumors. The dataset contains four different MRI modalities for each patient: T1-weighted, T1-weighted post-contrast (T1C), T2-weighted, and T2-FLAIR. These sequences provide complementary information about the tumor and its surroundings, allowing for a more comprehensive analysis of the tumor's characteristics. Table 3 presents the dataset's distribution in terms of the number of patients with gliomas and their respective tumor classifications.
Before feeding the MRI scans into the proposed model, several pre-processing techniques were applied to standardize the input MRI slices. To achieve spatial alignment between the various sequences, the MRI scans were co-registered to a common reference space using a rigid registration technique [44]. The skull and other non-brain tissues were removed from the MRI scans using a skull stripping algorithm, such as the Brain Extraction Tool (BET) in FSL [45], to reduce noise and computational complexity. The intensity values of the MRI scans were normalized to a standard range of [0, 1] to minimize the effects of intensity variations across different scanners and protocols [46]. MRI scans often suffer from intensity inhomogeneity due to the presence of a bias field. The N4ITK algorithm was used to correct the bias field and achieve uniform intensity distributions across the images [47].
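As an illustration of part of this pipeline, the sketch below applies N4 bias field correction and min-max intensity normalization with SimpleITK. The Otsu-based foreground mask and the file path are assumptions, and skull stripping and co-registration are omitted (they are typically performed with dedicated tools such as BET in FSL).

```python
import numpy as np
import SimpleITK as sitk

def preprocess_volume(path):
    """N4 bias field correction followed by min-max normalization to [0, 1]."""
    img = sitk.ReadImage(path, sitk.sitkFloat32)       # 'path' is a hypothetical file
    mask = sitk.OtsuThreshold(img, 0, 1, 200)          # rough foreground mask (assumed)
    corrected = sitk.N4BiasFieldCorrection(img, mask)  # the N4ITK algorithm [47]
    arr = sitk.GetArrayFromImage(corrected)
    return (arr - arr.min()) / (arr.max() - arr.min() + 1e-8)
```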

3.2. Handcrafted Feature Extraction

The proposed hybrid approach for brain tumor segmentation combines handcrafted features and CNNs. In this section, the handcrafted feature extraction process is described, which includes the Dense SURF (DSURF) descriptor and Histogram of Oriented Gradients (HOG) features shown in Figure 1.

3.2.1. DSURF Descriptor

Dense SURF (DSURF) is a variation of the Speeded Up Robust Features (SURF) descriptor that is utilized for both feature point detection and description [48]. DSURF selects dense feature points situated closely together along a grid with a specific step size, resulting in a significant feature gain compared to SURF when prior knowledge is limited. Each key point is assigned a feature descriptor, and the SURF descriptor can have 64 or 128 dimensions. After identifying a key point, an orientation is defined in a circular region around it, which is then aligned to derive the SURF descriptor. The DSURF descriptor extraction is given as follows:
Grid creation:
$$ G(x, y) = (x \times s,\ y \times s) \tag{1} $$
where x and y are integers and s is the step size.
Feature detection (Hessian matrix H):
$$ H = \begin{bmatrix} L_{xx}(x, \sigma) & L_{xy}(x, \sigma) \\ L_{xy}(x, \sigma) & L_{yy}(x, \sigma) \end{bmatrix} \tag{2} $$
where $\sigma$ represents the standard deviation value.
Orientation assignment:
$$ \theta = \arctan\!\left(\frac{W(x, y) \times L_x(x, y)}{W(x, y) \times L_y(x, y)}\right) \tag{3} $$
SURF descriptor:
$$ D = \left[\textstyle\sum L_x,\ \sum L_y,\ \sum |L_x|,\ \sum |L_y|\right] \text{ for each sub-region} \tag{4} $$
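A minimal sketch of dense SURF extraction is given below, assuming opencv-contrib-python built with the non-free xfeatures2d module; the grid step of 8 pixels is illustrative, not a value reported in the paper. Key points are placed on the regular grid of Equation (1) and then described with SURF.

```python
import cv2

def dense_surf(img_u8, step=8):
    """Describe key points placed on a regular grid (Equation (1)) with SURF."""
    surf = cv2.xfeatures2d.SURF_create(extended=False)  # 64-D descriptors
    grid = [cv2.KeyPoint(float(x), float(y), float(step))
            for y in range(step, img_u8.shape[0] - step, step)
            for x in range(step, img_u8.shape[1] - step, step)]
    _, descriptors = surf.compute(img_u8, grid)  # one 64-D vector per grid point
    return descriptors
```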

3.2.2. HOG Features

HOG features have been widely employed in a variety of applications, including pedestrian recognition, object identification, image registration, and medical image categorization [49]. HOG calculates the number of times an oriented gradient appears in a certain area of an image, capturing edge information that may be used for categorization. The image is divided into small, contiguous cells, and the edge orientations or HOG directions for each cell are determined. The resulting histograms are combined to form the descriptor. Using the following equations, gradients are computed:
$$ G_x = \frac{\partial f(x, y)}{\partial x} = \frac{f(x+1, y) - f(x-1, y)}{(x+1) - (x-1)} \tag{5} $$
$$ G_y = \frac{\partial f(x, y)}{\partial y} = \frac{f(x, y+1) - f(x, y-1)}{(y+1) - (y-1)} \tag{6} $$
Each block in the HOG process encodes the distribution of its intensity gradients, and the resulting per-block histograms are concatenated into a feature vector representing information from distinct parts of the image.
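For reference, HOG descriptors of the kind described above can be computed with scikit-image as sketched below; the cell and block sizes are the common defaults from Dalal and Triggs [49], not values specified in this paper.

```python
from skimage.feature import hog

def hog_features(slice_2d):
    """Gradient-orientation histograms per cell, normalized per block."""
    return hog(slice_2d, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys", feature_vector=True)
```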

3.3. CNN Architecture

The proposed CNN architecture is based on the U-Net architecture and is intended to segment brain tumors. The architecture is made up of an encoding path that collects context information and a decoding path that allows for exact localization [27]. Table 4 shows the architecture of the proposed CNN.
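A compact TensorFlow/Keras sketch of such a U-Net-style encoder-decoder is shown below. It follows the spirit of Table 4 (a 256 × 256 × 4 input, ReLU convolutions, pooling and up-sampling, and a softmax output), but the exact layer sequence and filter counts are simplified assumptions, and the skip connections follow the standard U-Net pattern.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_unet(input_shape=(256, 256, 4), n_classes=4):
    """U-Net-style encoder-decoder; filter counts loosely follow Table 4."""
    inputs = layers.Input(input_shape)
    skips, x = [], inputs
    for filters in (64, 128, 256):  # encoding path
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(512, 3, padding="same", activation="relu")(x)  # bottleneck
    for filters, skip in zip((256, 128, 64), reversed(skips)):  # decoding path
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skip])  # skip connection
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)
```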

Training Procedure and Hyperparameters

The proposed CNN was trained with a mix of cross-entropy and Dice coefficient losses. The cross-entropy loss is calculated by Equation (7), and the combined loss by Equation (9):
$$ \mathrm{Entropy} = -\sum_{i} y_i \log\big(f(x)_i\big) \tag{7} $$
The cross-entropy loss function measures the dissimilarity between the predicted probability distribution f(x) and the true distribution y, while the Dice coefficient is calculated as:
$$ \mathrm{Dice\ Coefficient} = \frac{2 \sum_i y_i \times f(x)_i + \varepsilon}{\sum_i y_i + \sum_i f(x)_i + \varepsilon} \tag{8} $$
$$ \mathrm{Loss} = \mathrm{Entropy} + (1 - \mathrm{Dice\ Coefficient}) \tag{9} $$
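A sketch of this combined objective in TensorFlow is given below, assuming one-hot ground truth masks and softmax network outputs; the smoothing constant ε is illustrative.

```python
import tensorflow as tf

def combined_loss(y_true, y_pred, eps=1e-6):
    """Loss = Entropy + (1 - Dice Coefficient), per Equations (7)-(9)."""
    entropy = tf.reduce_mean(
        tf.keras.losses.categorical_crossentropy(y_true, y_pred))
    dice = (2.0 * tf.reduce_sum(y_true * y_pred) + eps) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + eps)
    return entropy + (1.0 - dice)
```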
The training dataset is divided into mini-batches, and the weights are updated with momentum using the stochastic gradient descent (SGD) optimization algorithm. Key hyperparameters for the training process are provided in Table 5.
To prevent overfitting, the model is trained for 100 epochs while its performance is measured on a validation set. Early stopping is used to end training when the validation loss does not improve after a certain number of epochs. In summary, the proposed methodology leverages a CNN architecture inspired by U-Net, trained using a combination of cross-entropy and Dice coefficient losses with the SGD optimizer, momentum, and early stopping. By gathering both local and global information from the input MRI slices, this architecture seeks to achieve accurate and precise brain tumor segmentation.

3.4. Integrating Handcrafted Features and CNN

The hybrid approach aims to leverage the strengths of both handcrafted features and CNNs for improved brain tumor segmentation. In this approach, handcrafted features are integrated into the CNN architecture to create a more robust and accurate model. The proposed model consists of two input pathways, one for the handcrafted features and one for the CNN features. First, feature maps are computed, and the handcrafted features are fused with feature maps extracted from intermediate layers of the CNN. Then, the handcrafted features are concatenated with the output of the last CNN layer before the final classifier, as shown in Figure 2.
These strategies comprise input channel fusion, feature map fusion, and decision-level fusion. By combining handcrafted features with CNN features, the model can capture both low-level and high-level information to improve segmentation performance; one plausible reading of the decision-level variant is sketched below.
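In this Keras sketch, the handcrafted vector enters through a second input, is broadcast over the spatial grid of the final CNN feature map, and is concatenated with it before a 1 × 1 softmax classifier. The layer indexing and variable names are hypothetical, not the authors' exact wiring.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_hybrid(cnn_backbone, n_handcrafted, n_classes=4):
    """Decision-level fusion: concatenate handcrafted features with CNN features."""
    image_in = cnn_backbone.input
    hand_in = layers.Input((n_handcrafted,), name="handcrafted")  # DSURF + HOG vector
    feat_map = cnn_backbone.layers[-2].output  # features before the final classifier
    h, w = feat_map.shape[1], feat_map.shape[2]
    # Broadcast the handcrafted vector over the spatial grid of the feature map
    hand_map = layers.RepeatVector(h * w)(hand_in)
    hand_map = layers.Reshape((h, w, n_handcrafted))(hand_map)
    fused = layers.Concatenate()([feat_map, hand_map])
    out = layers.Conv2D(n_classes, 1, activation="softmax")(fused)
    return tf.keras.Model([image_in, hand_in], out)
```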

Fine-Tuning the CNN

After integrating the handcrafted features, the CNN is fine-tuned to adapt to the new input representation. The fine-tuning process involves updating the weights of the model by minimizing the same loss function used in the initial training (Equation (9)).
The hyperparameters for fine-tuning are similar to those used in the initial training. The learning rate is reduced by a factor of 10 to ensure that the fine-tuning process does not drastically change the learned features. The fine-tuning process is performed for a smaller number of epochs (e.g., 50) to avoid overfitting, as the model has already been trained on the dataset. The proposed hybrid approach integrates handcrafted features with the CNN architecture to create a more robust and accurate model for brain tumor segmentation. Different feature fusion strategies are proposed for combining handcrafted and CNN features at various levels of the architecture. The CNN is then fine-tuned to adapt to the new input representation, with a reduced learning rate and fewer epochs to prevent overfitting. This hybrid approach aims to leverage the strengths of both handcrafted features and CNNs for improved performance in brain tumor segmentation tasks.
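A sketch of this fine-tuning stage is shown below, reusing the combined loss with the learning rate reduced tenfold (0.001 to 0.0001) and 50 epochs with early stopping. The model and dataset variable names are hypothetical placeholders.

```python
import tensorflow as tf

# Hypothetical names: 'model' is the fused hybrid network; x_* / hand_* / y_* are
# the image inputs, handcrafted feature vectors, and one-hot masks, respectively.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4, momentum=0.9),
              loss=combined_loss)
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                              restore_best_weights=True)
model.fit([x_train, hand_train], y_train,
          validation_data=([x_val, hand_val], y_val),
          batch_size=16, epochs=50, callbacks=[early_stop])
```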

3.5. Evaluation Metrics

To assess the performance of this study, various evaluation techniques are used, including segmentation accuracy, Dice score, specificity, and sensitivity. These metrics provide a comprehensive assessment of the segmentation performance, considering various aspects such as overlap, false positives, and false negatives.

3.5.1. Segmentation Accuracy

Segmentation accuracy is a widely used metric in image segmentation tasks. It calculates the percentage of correctly identified pixels relative to the total number of pixels in the image, mathematically defined as:
$$ \mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{10} $$
In Equation (10), True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) stand for the respective counts of true positives, true negatives, false positives, and false negatives. While segmentation accuracy provides an overall assessment of the segmentation performance, it may be misleading in cases of class imbalance, where one class is significantly more prevalent than the others.

3.5.2. Dice Score

The Dice score, commonly referred to as the Dice similarity coefficient, is a popular statistic for determining how much the predicted and ground truth segmentation masks coincide. It is defined as:
$$ \mathrm{Dice\ Score} = \frac{2 \times TP}{2 \times TP + FP + FN} \tag{11} $$
The Dice score ranges from 0 to 1, with 1 denoting complete overlap and 0 denoting no overlap at all. This metric is particularly useful in medical image segmentation, as it accounts for both false positives and false negatives and is less sensitive to class imbalance compared to segmentation accuracy.

3.5.3. Sensitivity and Specificity

In medical image analysis, sensitivity and specificity are frequently used metrics to assess the effectiveness of binary classification tasks. Sensitivity, also known as the true positive rate or recall, measures the proportion of truly positive cases that are correctly identified. Specificity, also known as the true negative rate, measures the proportion of truly negative cases that are correctly identified. These metrics are defined as:
$$ \mathrm{Sensitivity} = \frac{TP}{TP + FN} \tag{12} $$
$$ \mathrm{Specificity} = \frac{TN}{TN + FP} \tag{13} $$
Sensitivity and specificity provide complementary information about the segmentation performance, as sensitivity focuses on the ability of the method to correctly identify positive cases (like tumor regions), while specificity focuses on the ability to correctly identify negative cases in non-tumor regions. By considering both sensitivity and specificity, a more comprehensive assessment of the segmentation performance can be obtained.
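All four metrics follow directly from the confusion counts; a NumPy sketch for binary masks is given below (illustrative, not the evaluation code used in the paper).

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Accuracy, Dice score, sensitivity, and specificity (Equations (10)-(13))."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # tumor pixels correctly labeled tumor
    tn = np.sum(~pred & ~truth)  # background correctly labeled background
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {"accuracy": (tp + tn) / (tp + tn + fp + fn),
            "dice": 2 * tp / (2 * tp + fp + fn),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}
```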

3.6. Experimental Setup

In this article, we harnessed the power of Google Colab to set up and conduct experiments using Python 3, taking full advantage of its default GPU setting. Google Colab provides an excellent platform for machine learning and deep learning tasks, and its integration with Python 3 makes it an attractive choice for researchers, developers, and students alike.
To build and train convolutional neural networks (CNNs), we leveraged the capabilities of TensorFlow, one of the most widely used and well-documented deep learning libraries. TensorFlow’s intuitive interface and extensive community support enabled us to design complex neural network architectures for various computer vision tasks.

4. Results and Discussion

This study proposed a hybrid approach that combines handcrafted features and CNN. The integration of handcrafted features with CNN features in our proposed hybrid approach led to improved segmentation performance. This approach allowed us to leverage the strengths of both handcrafted features and CNN for more accurate and robust tumor segmentation. The fine-tuning of the CNN on the integrated features further improved the performance of our approach.

4.1. Brain Tumor Segmentation Challenge Dataset

In this section, we evaluate our approach on the Brain Tumor Segmentation Challenge (BraTS) 2018 dataset. The BraTS dataset includes multi-modal MRI images from patients with brain tumors, containing four MRI modalities: T1-weighted (T1), T1-weighted post-contrast (T1-Gd), T2-weighted (T2), and fluid-attenuated inversion recovery (FLAIR). There are 285 patients in the dataset, comprising 200 scans for training and 85 scans for testing, which approximates a traditional 70:30 split.
Each MRI scan has 155 axial slices with a resolution of 240 × 240 pixels. The BraTS dataset includes ground truth labels for three tumor sub-regions: the enhancing tumor (ET), the tumor core (TC), and the whole tumor (WT). Predicting the voxel-wise labels for these sub-regions in the MRI images is part of the segmentation task. The given ground truth labels enable a quantitative evaluation of the proposed technique, with conventional metrics like Dice score, sensitivity, and specificity used to measure performance.

4.2. Data Augmentation Techniques

To enhance our proposed model's ability to generalize and to prevent overfitting caused by the dataset's limited size, we apply data augmentation methods to the training data: random rotation, scaling, and horizontal flipping of the MRI images. Additionally, random intensity shifts and contrast normalization are performed to account for intensity variations between patients.
Data augmentation approaches enhance the heterogeneity in the training dataset, allowing the model to learn more robust features and perform better on unobserved data. The combination of these techniques ensures that the model can handle potential variations in the input data, such as differences in imaging protocols, scanner types, and patient populations.
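As an illustration, the augmentations of Table 2 that have direct Keras preprocessing-layer counterparts can be applied on the fly as sketched below. TensorFlow 2.9 or newer is assumed for RandomBrightness, used here as a stand-in for the intensity shift; elastic deformation and histogram equalization would need custom functions, and train_ds is a hypothetical tf.data pipeline.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Mirrors Table 2: rotation of +/-15 degrees, scaling 0.8-1.2,
# horizontal flip, and an intensity shift of +/-0.1
augment = tf.keras.Sequential([
    layers.RandomRotation(15 / 360),  # +/-15 degrees as a fraction of a full turn
    layers.RandomZoom(0.2),           # scale by a factor in roughly [0.8, 1.2]
    layers.RandomFlip("horizontal"),
    layers.RandomBrightness(0.1, value_range=(0.0, 1.0)),  # intensity shift
])

# Apply only to training batches; labels pass through unchanged
train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
```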

4.3. Performance of Handcrafted Feature-Based Methods

The performance of various handcrafted feature-based methods is evaluated, including MI features, HOG features, and SURF features. The results are summarized in Table 6.

4.4. Performance of CNN-Based Methods

The performance of various CNN-based methods is evaluated, including U-Net, V-Net, and DeepMedic. The performance of U-Net is comparatively better than that of a traditional CNN due to its distinctive skip connections. The results are summarized in Table 7.

4.5. Performance of the Proposed Hybrid Approach

The performance of the proposed hybrid approach, which combines handcrafted features and the proposed CNN, is evaluated. The results are summarized in Table 8.
The comparative analysis shows that the proposed hybrid approach outperforms both handcrafted feature-based methods and CNN-based methods in terms of segmentation accuracy, Dice score, sensitivity, and specificity. This demonstrates the effectiveness of the hybrid approach in leveraging the strengths of handcrafted features and DL techniques for brain tumor segmentation.

4.6. Impact of Handcrafted Features on CNN Performance

The integration of handcrafted features into the CNN model proved to have a positive impact on the segmentation performance, as demonstrated in Table 9. The proposed hybrid technique outperformed CNN-based methods in terms of segmentation accuracy, Dice score, sensitivity, and specificity. This enhancement is due to the complementary character of the handcrafted and CNN-learned features.
The combination of handcrafted features and CNN-based features allows the model to capture a wide range of information, increasing its ability to handle these variations.
In addition, the hybrid approach demonstrated better generalization capabilities compared to individual handcrafted feature-based and CNN-based methods. By leveraging the strengths of both types of features, the model can generalize well to unseen data, making it a promising solution for real-world applications in clinical settings.
This study’s findings demonstrate the feasibility of the proposed hybrid method for accurate and robust brain tumor segmentation. Future research could investigate the effect of various feature fusion strategies and fine-tuning techniques on the hybrid model’s performance. Furthermore, the integration of other handcrafted features or advanced DL techniques, such as attention mechanisms, could be explored to enhance the segmentation performance even further.

5. Conclusions

In this research, a hybrid approach for brain tumor segmentation that combines handcrafted features and CNNs is presented. The methodology involved data acquisition and pre-processing, feature extraction, CNN architecture, and the integration of handcrafted features and CNNs. The proposed hybrid approach demonstrated a superior performance compared to individual handcrafted feature-based and CNN-based methods. The integration of handcrafted features and CNNs led to improved segmentation accuracy and robustness, as well as better generalization capabilities for unseen data. Despite the promising results, the proposed hybrid approach has some limitations. One limitation is the complexity of integrating handcrafted features and CNNs, which can require extensive tuning to achieve optimal performance. Moreover, the approach still relies on the availability of large, annotated datasets for training, which can be challenging to obtain in the medical domain. Future work could address these limitations by investigating the impact of different feature fusion strategies, fine-tuning techniques, and the integration of advanced DL techniques, such as attention mechanisms or domain adaptation. Furthermore, exploring the use of transfer learning and unsupervised or semi-supervised learning methods could help overcome the challenge of limited annotated datasets and improve the generalization capabilities of the model across different medical imaging datasets and modalities.

Author Contributions

Investigation, M.N.; Resources, M.A. and T.A.; Writing—original draft, F.U.; Visualization, A.S.; Funding acquisition, M.A.-R. and F.A. All authors have read and agreed to the published version of the manuscript.

Funding

Researchers Supporting Project number (RSP2023R206), King Saud University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

The study did not require ethical approval.

Informed Consent Statement

Not applicable.

Data Availability Statement

Datasets analyzed during the current study are available on the BraTS website [43].

Acknowledgments

Researchers Supporting Project number (RSP2023R206), King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hossain, T.; Shishir, F.S.; Ashraf, M.; Al Nasim, M.A.; Shah, F.M. Brain tumor detection using convolutional neural network. In Proceedings of the 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), Dhaka, Bangladesh, 3–5 May 2019; pp. 1–6.
  2. Fernandes, S.L.; Tanik, U.J.; Rajinikanth, V.; Karthik, K.A. A reliable framework for accurate brain image examination and treatment planning based on early diagnosis support for clinicians. Neural Comput. Appl. 2020, 32, 15897–15908.
  3. Magadza, T.; Viriri, S. Deep learning for brain tumor segmentation: A survey of state-of-the-art. J. Imaging 2021, 7, 19.
  4. Ostrom, Q.T.; Cioffi, G.; Waite, K.; Kruchko, C.; Barnholtz-Sloan, J.S. CBTRUS statistical report: Primary brain and other central nervous system tumors diagnosed in the United States in 2014–2018. Neuro-Oncol. 2021, 23, 1–105.
  5. Augustine, R.; Al Mamun, A.; Hasan, A.; Salam, S.A.; Chandrasekaran, R.; Ahmed, R.; Thakor, A.S. Imaging cancer cells with nanostructures: Prospects of nanotechnology driven non-invasive cancer diagnosis. Adv. Colloid Interface Sci. 2021, 294, 102457.
  6. Ullah, F.; Salam, A.; Abrar, M.; Amin, F. Brain Tumor Segmentation Using a Patch-Based Convolutional Neural Network: A Big Data Analysis Approach. Mathematics 2023, 11, 1635.
  7. Xie, X.; Niu, J.; Liu, X.; Chen, Z.; Tang, S.; Yu, S. A survey on incorporating domain knowledge into deep learning for medical image analysis. Med. Image Anal. 2021, 69, 101985.
  8. Ayadi, W.; Charfi, I.; Elhamzi, W.; Atri, M. Brain tumor classification based on hybrid approach. Vis. Comput. 2022, 38, 107–117.
  9. O'Mahony, N.; Campbell, S.; Carvalho, A.; Harapanahalli, S.; Hernandez, G.V.; Krpalkova, K.; Riordan, D.; Walsh, J. Deep learning vs. traditional computer vision. In Advances in Computer Vision: Proceedings of the 2019 Computer Vision Conference (CVC); Springer: Cham, Switzerland, 2020; Volume 1, pp. 128–144.
  10. Du, G.; Cao, X.; Liang, J.; Chen, X.; Zhan, Y. Medical image segmentation based on u-net: A review. J. Imaging Sci. Technol. 2020, 64, 020508–020512.
  11. Li, W.; Li, Y.; Qin, W.; Liang, X.; Xu, J.; Xiong, J.; Xie, Y. Magnetic resonance image (MRI) synthesis from brain computed tomography (CT) images based on deep learning methods for magnetic resonance (MR)-guided radiotherapy. Quant. Imaging Med. Surgery 2020, 10, 12–23.
  12. Gordillo, N.; Montseny, E.; Sobrevilla, P. State of the art survey on MRI brain tumor segmentation. Magn. Reson. Imaging 2013, 31, 1426–1438.
  13. Mesanovic, N.; Grgic, M.; Huseinagic, H.; Males, M.; Skejic, E.; Smajlovic, M. Automatic CT image segmentation of the lungs with region growing algorithm. In Proceedings of the 18th International Conference on Systems, Signals and Image Processing—IWSSIP, Sarajevo, Bosnia and Herzegovina, 16–18 June 2011; pp. 395–400.
  14. Osher, S.; Sethian, J.A. Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys. 1988, 79, 12–49.
  15. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331.
  16. Bakas, S.; Reyes, M.; Jakab, A.; Bauer, S.; Rempfler, M.; Crimi, A.; Shinohara, R.T.; Berger, C.; Ha, S.M.; Rozycki, M.; et al. Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge. arXiv 2018, arXiv:1811.02629.
  17. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
  18. Zacharaki, E.I.; Wang, S.; Chawla, S.; Soo Yoo, D.; Wolf, R.; Melhem, E.R.; Davatzikos, C. Classification of brain tumor type and grade using MRI texture and shape in a machine learning scheme. Magn. Reson. Med. Off. J. Int. Soc. Magn. Reson. Med. 2009, 62, 1609–1618.
  19. Chaddad, A. Automated feature extraction in brain tumor by magnetic resonance imaging using gaussian mixture models. J. Biomed. Imaging 2015, 5, 868031.
  20. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, 6, 610–621.
  21. Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59.
  22. Daugman, J.G. Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. J. Opt. Soc. Am. A 1985, 2, 1160–1169.
  23. Asodekar, B.H.; Gore, S.A.; Thakare, A. Brain tumor analysis based on shape features of MRI using machine learning. In Proceedings of the 2019 5th International Conference on Computing, Communication, Control and Automation (ICCUBEA), Pune, India, 19–21 September 2019; pp. 1–5.
  24. Tandel, G.S.; Biswas, M.; Kakde, O.G.; Tiwari, A.; Suri, H.S.; Turk, M.; Laird, J.R.; Asare, C.K.; Ankrah, A.A.; Khanna, N.N.; et al. A Review on a Deep Learning Perspective in Brain Cancer Classification. Cancers 2019, 11, 111.
  25. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imaging 2014, 34, 1993–2024.
  26. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
  27. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015, Proceedings, Part III; Springer: Cham, Switzerland, 2015; pp. 234–241.
  28. Milletari, F.; Navab, N.; Ahmadi, S.-A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571.
  29. Kamnitsas, K.; Ledig, C.; Newcombe, V.F.; Simpson, J.P.; Kane, A.D.; Menon, D.K.; Rueckert, D.; Glocker, B. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med. Image Anal. 2017, 36, 61–78.
  30. Kotowski, K.; Adamski, S.; Machura, B.; Zarudzki, L.; Nalepa, J. Infusing Domain Knowledge into nnU-Nets for Segmenting Brain Tumors in MRI. In Proceedings of the International MICCAI Brainlesion Workshop, Singapore, 18–22 September 2022; pp. 186–194.
  31. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sanchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88.
  32. Shen, H.; Wang, R.; Zhang, J.; McKenna, S.J. Boundary-aware fully convolutional network for brain tumor segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2017: 20th International Conference, Quebec City, QC, Canada, 11–13 September 2017, Proceedings, Part II; Springer: Cham, Switzerland, 2017; pp. 433–441.
  33. Hu, H.; Mu, Q.; Bao, Z.; Chen, Y.; Liu, Y.; Chen, J.; Wang, K.; Wang, Z.; Nam, Y.; Jiang, B.; et al. Mutational landscape of secondary glioblastoma guides MET-targeted trial in brain tumor. Cell 2018, 175, 1665–1678.
  34. Raza, A.; Ayub, H.; Khan, J.A.; Ahmad, I.; Salama, S.A.; Daradkeh, Y.I.; Javeed, D.; Ur Rehman, A.; Hamam, H. A Hybrid Deep Learning-Based Approach for Brain Tumor Classification. Electronics 2022, 11, 1146.
  35. Shah, P.M.; Ullah, F.; Shah, D.; Gani, A.; Maple, C.; Wang, Y.; Abrar, M.; Islam, S.U. Deep GRU-CNN model for COVID-19 detection from chest X-rays data. IEEE Access 2021, 10, 35094–35105.
  36. Fu, H.; Cheng, J.; Xu, Y.; Wong, D.W.K.; Liu, J.; Cao, X. Joint optic disc and cup segmentation based on multi-label deep network and polar transformation. IEEE Trans. Med. Imaging 2018, 37, 1597–1605.
  37. Song, B.; Wen, P.; Ahfock, T.; Li, Y. Numeric investigation of brain tumor influence on the current distributions during transcranial direct current stimulation. IEEE Trans. Biomed. Eng. 2015, 63, 176–187.
  38. Saba, T.; Mohamed, A.S.; El-Affendi, M.; Amin, J.; Sharif, M. Brain tumor detection using fusion of hand crafted and deep learning features. Cogn. Syst. Res. 2020, 59, 221–230.
  39. Khan, A.R.; Khan, S.; Harouni, M.; Abbasi, R.; Iqbal, S.; Mehmood, Z. Brain tumor segmentation using K-means clustering and deep learning with synthetic data augmentation for classification. Microsc. Res. Tech. 2021, 7, 1389–1399.
  40. Dufumier, B.; Gori, P.; Battaglia, L.; Victor, J.; Grigis, A.; Duchesnay, E. Benchmarking CNN on 3D anatomical brain MRI: Architectures, data augmentation and deep ensemble learning. arXiv 2021, arXiv:2106.01132.
  41. Isensee, F.; Jäger, P.F.; Full, P.M.; Vollmuth, V.; Maier, K.H. nnU-Net for brain tumor segmentation. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 6th International Workshop, BrainLes 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, 4 October 2020; Springer: Cham, Switzerland, 2021; pp. 118–132.
  42. Nalepa, J.; Marcinkiewicz, M.; Kawulok, M. Data augmentation for brain-tumor segmentation: A review. Front. Comput. Neurosci. 2019, 13, 83–102.
  43. Multimodal Brain Tumor Segmentation Challenge 2018. Available online: https://www.med.upenn.edu/sbia/brats2018/data.html (accessed on 1 February 2023).
  44. Sloan, J.M.; Goatman, K.A.; Siebert, J.P. Learning rigid image registration-utilizing convolutional neural networks for medical image registration. In Proceedings of the 11th International Joint Conference on Biomedical Engineering Systems and Technologies, Funchal, Portugal, 19–21 January 2018; pp. 89–99.
  45. Smith, S.M. Fast robust automated brain extraction. Hum. Brain Mapp. 2002, 17, 143–155.
  46. Nyúl, L.G.; Udupa, J.K.; Zhang, X. New variants of a method of MRI scale standardization. IEEE Trans. Med. Imaging 2000, 19, 143–150.
  47. Tustison, N.J.; Avants, B.B.; Cook, P.A.; Zheng, Y.; Egan, A.; Yushkevich, P.A.; Gee, J.C. N4ITK: Improved N3 bias correction. IEEE Trans. Med. Imaging 2010, 29, 1310–1320.
  48. Bay, H.; Tuytelaars, T.; Van Gool, L. Surf: Speeded up robust features. Lect. Notes Comput. Sci. 2006, 3951, 404–417.
  49. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 20–25 June 2005; pp. 886–893.
Figure 1. Handcrafted feature extraction.
Figure 2. Proposed hybrid approach with multiple pathways.
Table 1. Comparison of brain tumor segmentation techniques.

Technique | Advantages | Disadvantages
Handcrafted Features | Domain knowledge; compatibility with traditional ML | Sensitive to intensity variations; require manual tuning
CNN-based Methods | Automatic feature learning; robustness to intensity variations; high precision and accuracy | Require large, annotated datasets; computationally expensive; may lack interpretability
Hybrid Approaches | Strengths of both handcrafted and CNN features; improved performance; potential for increased interpretability | Complexity of the integration strategy; large, annotated datasets and tuning
Table 2. Data augmentation techniques.

Technique | Description
Rotation | Randomly rotate the MRI scans by ±15 degrees
Scaling | Randomly scale the MRI scans by a factor between 0.8 and 1.2
Horizontal Flip | Randomly flip the MRI scans horizontally with a probability of 0.5
Elastic Deformation | Apply random elastic deformation to the MRI scans with a Gaussian filter of σ = 4.0
Intensity Shift | Randomly shift the intensity of the MRI scans by a factor between −0.1 and 0.1
Contrast Normalization | Normalize the contrast of the MRI scans by histogram equalization
Table 3. Distribution of the BraTS 2018 dataset.

Tumor Grade | Number of Patients
High-Grade | 210
Low-Grade | 75
Total | 285
Table 4. Proposed CNN architecture.

Layer Type | Output Size | Activation Function
Input | 256 × 256 × 4 | -
Convolution | 128 × 128 × 64 | ReLU
Convolution | 32 × 32 × 128 | ReLU
Max Pooling | 16 × 16 × 128 | -
Convolution | 8 × 8 × 256 | ReLU
Max Pooling | 4 × 4 × 256 | -
Convolution | 2 × 2 × 512 | ReLU
Up-sampling | 4 × 4 × 512 | -
Convolution | 8 × 8 × 256 | ReLU
Up-sampling | 16 × 16 × 256 | -
Convolution | 32 × 32 × 128 | ReLU
Up-sampling | 64 × 64 × 128 | -
Convolution | 128 × 128 × 64 | ReLU
Up-sampling | 256 × 256 × 64 | -
Convolution | 256 × 256 × 4 | Softmax
Table 5. Training hyperparameters.

Hyperparameter | Value
Momentum | 0.9
Learning Rate | 0.0010
Weight Decay | 0.0005
Batch Size | 16
Number of Epochs | 100
Loss Function | Equation (9)
Optimizer | SGD
Table 6. Performance of handcrafted feature-based methods.

Method | Accuracy | Dice Score | Specificity | Sensitivity
MI | 0.75 | 0.65 | 0.72 | 0.77
HOG | 0.80 | 0.70 | 0.76 | 0.82
SURF | 0.82 | 0.74 | 0.79 | 0.84
Table 7. Performance of CNN-based methods.

Method | Accuracy | Dice Score | Specificity | Sensitivity
U-Net | 0.90 | 0.85 | 0.88 | 0.91
V-Net | 0.92 | 0.88 | 0.90 | 0.93
DeepMedic | 0.93 | 0.89 | 0.91 | 0.94
Table 8. Performance of the proposed hybrid approach.

Method | Accuracy | Dice Score | Sensitivity | Specificity
Proposed hybrid approach | 0.95 | 0.91 | 0.93 | 0.96
Asodekar et al. [23] | 0.82 | 0.74 | 0.79 | 0.84
Ronneberger et al. [27] | 0.90 | 0.85 | 0.88 | 0.91
Milletari et al. [28] | 0.92 | 0.88 | 0.90 | 0.93
Kamnitsas et al. [29] | 0.93 | 0.89 | 0.91 | 0.94
Raza et al. [34] | 0.94 | 0.90 | 0.92 | 0.95
Table 9. Performance comparison between CNN and hybrid approach.

Method | Accuracy | Dice | Specificity | Sensitivity
Handcrafted | 0.84 | 0.72 | 0.78 | 0.89
CNN | 0.88 | 0.79 | 0.84 | 0.92
Proposed hybrid approach | 0.95 | 0.91 | 0.96 | 0.93
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
