Article
Peer-Review Record

An Effective Approach to Detect and Identify Brain Tumors Using Transfer Learning

Appl. Sci. 2022, 12(11), 5645; https://doi.org/10.3390/app12115645
by Naeem Ullah 1, Javed Ali Khan 2,*, Mohammad Sohail Khan 3, Wahab Khan 4, Izaz Hassan 2, Marwa Obayya 5, Noha Negm 6,* and Ahmed S. Salama 7
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3:
Reviewer 4: Anonymous
Submission received: 6 April 2022 / Revised: 14 May 2022 / Accepted: 16 May 2022 / Published: 2 June 2022
(This article belongs to the Special Issue Advanced Decision Making in Clinical Medicine)

Round 1

Reviewer 1 Report

1- "Analyze the performance of each TL algorithm in classifying correctly" is the TL an algorithm or a model??

2- the related work section narrates the previous work without mentioning the accuracy and other parameter values in order to determine the advantage of the proposed work

3- the problem definition that motivated the current work from the point of view of a programmer is not presented in a proper way (they mention: the present tumor detection techniques, based on the findings, do not provide satisfactory results)

4- the term (Flow Diagram) is very bizarre!!! is it a block diagram, an architecture, or a framework?
5- the authors should indicate other works that used TL and succeeded in obtaining better results.

6- table 4 appears to miss the last model results

7- the conclusion does not contain any mention of the experiments performed using SVM

8- the augmentation step is not discussed??

9- it is not shown how the average time is computed, or over how many images it is averaged; the time is very large, and maybe knowing how it is computed will show us the reason

Author Response

Response to Review 1 Comments
Reviewer#1, Concern # 1:  "Analyze the performance of each TL algorithm in classifying correctly" is the TL an algorithm or a model??

Author response: Thank you for your comment. We agree with this comment that this statement is unclear and creates misconceptions. We have updated the sentence in the revised manuscript to clear up the misconceptions, as:

  • Algorithms are procedures that are implemented in code and are run on data.
  • Models are output by algorithms and are comprised of model data and a prediction algorithm.

So, we have used the term models, not algorithms, because we are producing predictions.

Author action: In the revised version of the manuscript, we have updated the aforementioned sentence on page #3 and added it in a reply here:

“Analyze the performance of each TL model in correctly and efficiently classifying brain MRI images.”

 

Reviewer#1, Concern # 2: the related work section narrates the previous work without mentioning the accuracy and other parameter values in order to determine the advantage of the proposed work

Author response: Thank you for your comment. We agree with this comment that we need to mention the accuracy or other parameter values in order to determine the advantage of the proposed work.  Based on this comment we have updated the entire related work section to include the accuracies achieved by each method in the past.  

Author action: In the revised version of the manuscript, we have included details about the classification accuracies achieved by each method in the related work section on pages # 4-5 (section # 2), and in reply here:

 

Ismael et al. introduced an approach that integrates statistical features and neural network techniques [22] to detect and classify brain tumor MRI images. This method operates on a region of interest (ROI), defined as the tumor segment detected using any ROI segmentation technique. 2D Discrete Wavelet Transform (DWT) and 2D Gabor filter techniques were used to determine features for the classifier, and many transform-domain statistical features were combined to create the feature set. For classification, a backpropagation neural network was used. A Figshare dataset of 3,064 slices of T1-weighted MRI images of three forms of brain tumor (meningioma, glioma, and pituitary tumor) was used to evaluate the model, and the authors achieved a maximum accuracy of 91.9%. A Deep Neural Network (DNN) classifier, one of the DL frameworks, was used by Mohsen et al. to classify a dataset of 66 brain MRIs into four categories: normal, glioblastoma, sarcoma, and metastatic bronchogenic carcinoma tumors [23]. The classifier was combined with the discrete wavelet transform (DWT), a powerful feature extraction approach, and principal components analysis (PCA), with promising results across all performance metrics; the authors achieved a maximum accuracy of 98.4% by combining the DNN with DWT. Deepak et al. classified medical images using a combination of CNN features and SVM [24]. To analyze and validate their proposed approach, they used publicly available MRI images of brain tumors from Figshare comprising three types of brain tumor. They extracted features from MRI scans of the brain using the CNN, and a multiclass SVM was paired with the CNN features for increased performance. They also tested and evaluated the integrated system using a five-fold cross-validation technique. The proposed model surpassed existing techniques in terms of overall classification accuracy, achieving 95.82%. When there is limited training data, the SVM classifier outperforms the softmax classifier for CNN feature extraction; their CNN-SVM approach also requires fewer computations and less memory than TL-based classification.

For the objective of brain tumor identification, the authors of [25] presented a multi-level attention mechanism network (MANet). The suggested MANet incorporates both spatial and cross-channel attention, focusing on tumor region prioritization while also preserving cross-channel temporal relationships found in the Xception backbone's semantic feature sequence. The proposed method was tested on the Figshare and BraTS benchmark datasets. Experiments show that combining cross-channel and spatial attention blocks improves generalization and yields better performance with fewer model parameters. The suggested MANet outperforms various current models for tumor recognition, with a maximum accuracy of 96.51% on Figshare and 94.91% on the BraTS 2018 dataset.

In image detection and recognition challenges, CNNs play a significant role. To extract features automatically from brain images, CNN filters are convolved with the input image, and most research methodologies used CNN-based approaches for brain tumor detection and classification. Afshar et al. used capsule networks for brain tumor classification and investigated the overfitting problem of CapsNets using a real collection of MRI data [26]. CapsNets are robust to rotation and affine change, and they require far less training data, making them well suited for medical imaging datasets such as brain MRI scans. They built a visualization paradigm for CapsNet's output to better illustrate the learned features. The achieved accuracy of 86.56% demonstrated that the presented method for brain tumor classification could successfully outperform CNNs. In [27], MR images are used to diagnose brain tumors: the authors propose CNN classification for automatic brain tumor detection, using small kernels to create a deeper architecture with small neuron weights. When compared with other state-of-the-art methodologies, experimental results demonstrate that the CNN achieves 97.5 percent accuracy with little complexity.

Rai et al. adopted a Less Layered and Less Complex U-Net (LeU-Net) framework for brain tumor detection [28]. The LeU-Net idea was influenced by both the Le-Net and U-Net frameworks; however, it differs significantly from both architectures. The performance of LeU-Net was compared with the existing basic CNN frameworks Le-Net, U-Net, and VGG-16, using accuracy, precision, F-score, recall, and specificity. The experiment was conducted on an MR dataset with cropped (unwanted area removed) and uncropped images, and the results were compared across all three models. The LeU-Net model has a much faster processing (simulation) time; training the network for 100 epochs achieved 98% accuracy on cropped and 94% accuracy on uncropped images, taking 252.36 and 244.42 seconds, respectively. Kader et al. proposed a new hybrid model for brain tumor identification and classification based on MR brain images [29], intending to assist doctors in the early diagnosis and classification of brain tumors with maximum accuracy and performance. The approach combines a deep CNN with a deep watershed auto-encoder (CNN-DWA) model. The technique comprises six steps: input MR images, preprocessing with a filter and morphological operation, generating a matrix that represents the MR brain images, applying the hybrid CNN-DWA framework, brain tumor detection and classification, and model performance evaluation. The model was validated on five databases: BRATS2012, BRATS2013, BRATS2014, ISLES-SISS 2015, and BRATS2015. Based on the RCNN technique [30], Kesav et al. developed a new framework for brain tumor classification and tumor-type object recognition, tested on two publicly available datasets from Figshare and Kaggle. The goal was to design a basic framework that would allow the classic RCNN framework to run faster. Glioma and healthy MRI were initially classified using a two-channel CNN. Later, a feature extractor in an RCNN was used to locate tumor regions in a Glioma MRI sample categorized in the previous stage using the same framework, with bounding boxes used to delimit the tumor region. Meningioma and pituitary tumors are two more malignancies that have been treated with this method. The proposed method achieved an average confidence level of 98.83% for two-class tumor classification, i.e., meningioma and pituitary tumor.

 

Reviewer#1, Concern # 3: the problem definition that motivated the current work from the point of view of a programmer is not presented in a proper way (they mention: the present tumor detection techniques, based on the findings, do not provide satisfactory results).

Author response: Thank you for your comment. We agree with this comment that we have not presented the motivation for the current work in a proper way. Based on your comment we have revised the last paragraph of related work (which includes the aforementioned sentence).

Author action: In the revised version of the manuscript, we have included details about the motivations of this approach in section # 2 (related work) at pages # 5 and 6, also added in reply here:

Existing works on brain tumor detection and classification have some limitations. Most of the approaches are validated on the Figshare dataset, which is imbalanced and affects the performance of classification approaches; hence there is a need to validate brain tumor classification approaches on another, balanced dataset. ML in its traditional form necessitates domain knowledge and experience, and manual feature extraction requires time and effort, reducing the system's efficiency. On the other hand, employing DL, particularly CNNs, in medical imaging is challenging as it requires a significant amount of training data. In contrast, deep TL-based approaches can avoid these drawbacks through automatic feature extraction and robust classification based on convolutional layers. This study proposed an automatic classification system for multiclass brain tumor MR images, which is a more complex and difficult assignment than simple binary classification. However, our dataset is very small, and it is difficult to train a CNN from scratch on small datasets without suffering from overfitting and while achieving appropriate convergence. Inspired by the success of TL techniques [5, 17, 33, 34], we adopted the concept of TL in this work. For this purpose, we employed various TL models, including Inceptionresnetv2, Inceptionv3, Xception, Resnet18, Resnet50, Resnet101, Shufflenet, Densenet201, and Mobilenetv2, to achieve brain tumor detection and classification on the target dataset. Furthermore, we compared the best model with other methods to show its efficacy in identifying brain tumors.

We also added the details in section # 1 (Introduction) at page # 3, and added in reply here:

Additionally, there are a lot of irregularities in the sizes and positions of brain tumors, which makes the natural understanding of brain tumors problematic. Generally, for the classification of brain tumors T1-weighted Circuits and contrast-enhanced images are used. The different features extracted from MRI images are the key source for tumor classification. DL makes predictions and decisions on data by learning data representations, and DL practices are widely used for medical imaging classification. DL-based methods have shown satisfactory results in various applications across a wide range of domains [17,18,19,20]. But DL approaches are data-hungry, i.e., they require a lot of training data. Recently, DL approaches have been attracting more and more attention, particularly the CNN model. CNNs outperform other classifiers on larger datasets, like ImageNet, consisting of millions of images, but it is challenging to employ CNNs in the field of medical imaging. First, medical image datasets contain limited data because expert radiologists are required to label the images, which is a tedious and time-consuming task. Second, training CNNs on a small dataset is difficult because of overfitting. Third, the hyperparameters of a CNN classifier need to be tuned to achieve good performance, which requires domain expertise. Therefore, using pre-trained models with TL and fine-tuning is a viable solution to address these challenges. In TL approaches, DL models are trained on a large dataset (the base dataset) and the learned knowledge is transferred to the target dataset (a small dataset) [21]. This paper proposed an automatic brain tumor classification approach intended for three-class classification. Several approaches utilize manually defined tumor regions to detect and classify brain tumors, preventing them from being fully automated [1, 2, 19].
However, the proposed new approach does not involve any segmentation or feature extraction and selection in the pre-processing step, in contrast to some previous methods [1, 2, 19], which require prior segmentation and feature extraction of tumors from the MRI images. We used a standard Kaggle brain tumor classification (MRI) dataset, including three types of brain tumor, i.e., meningioma, pituitary, and glioma. We performed extensive experiments on this dataset to compare the performance of nine DL models for the classification of brain tumor MRI images using TL. We used Inceptionresnetv2, Inceptionv3, Xception, Resnet18, Resnet50, Resnet101, Shufflenet, Densenet201, and Mobilenetv2 for the automatic detection and classification of brain tumors using a fine-grained classification approach. The aim is to identify the most effective and efficient deep TL model for brain tumor classification. We report the overall accuracy, precision, recall, f-measure, and elapsed time of the nine pre-trained frameworks in the paper.
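The transfer-learning recipe described above (reuse a backbone trained on a large base dataset, retrain only a small classification head on the target data) can be sketched as follows. This is an illustrative toy in numpy with synthetic data, not the authors' actual pipeline; in their experiments the frozen backbone is a pre-trained network such as Inceptionresnetv2 rather than the random projection used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "backbone": a frozen feature extractor whose weights were
# learned on a large base dataset (here just a fixed random projection,
# scaled for stable gradients).
W_backbone = rng.normal(size=(64, 16)) / 8.0

def extract_features(x):
    # Frozen during transfer learning: W_backbone is never updated.
    return np.maximum(x @ W_backbone, 0.0)  # ReLU features

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Tiny synthetic "target dataset" with 3 classes standing in for
# meningioma / pituitary / glioma.
X = rng.normal(size=(90, 64))
F = extract_features(X)
V_true = rng.normal(size=(16, 3))
y = (F @ V_true).argmax(axis=1)          # learnable synthetic labels

# New classification head: only this part is trained (fine-tuned).
W_head = np.zeros((16, 3))
onehot = np.eye(3)[y]
for _ in range(200):
    p = softmax(F @ W_head)
    W_head -= 0.1 * (F.T @ (p - onehot)) / len(X)

acc = (softmax(F @ W_head).argmax(axis=1) == y).mean()
print(f"training accuracy of the trained head: {acc:.2f}")
```

Because the backbone stays frozen, only the small head is optimized, which is why TL converges on small datasets where training a full CNN from scratch would overfit.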

 

 

Reviewer#1, Concern # 4: the term (Flow Diagram) is very bizarre!!! is it a block diagram, an architecture, or a framework?

Author response: Thank you for your comment. We agree with this comment that the term Flow Diagram is very bizarre so we have revised the caption of the diagram in the revised version of the manuscript.

Author action: In the revised version of the manuscript, we have updated the caption in section 3.1 (Proposed approach) at page # 7, and added in reply here:

Figure 1. Overview of the proposed method for brain tumor classification.

Reviewer#1, Concern # 5: the authors should indicate other works that used TL and succeeded in obtaining better results.

Author response: Thank you for your comment. We agree with this comment that we need to indicate other works that used TL and succeeded in obtaining better results.

Author action: In the revised version of the manuscript, we have included a paragraph to indicate the other works that used transfer learning and succeeded in obtaining better results. We have included the details in section # 4.6 (Comparison with state-of-the-art related works) on page # 15, and added in reply here:

Researchers have used TL techniques and succeeded in achieving the best results. Using a pre-trained VGG19 deep CNN model, Swati et al. developed a block-wise fine-tuning technique based on TL [43]. A benchmark dataset of T1-weighted contrast-enhanced magnetic resonance imaging (CE-MRI) is used to test the proposed method. When validated in a five-fold cross-validation setting, the method achieves an average accuracy of 94.82% for brain tumor classification into meningioma, pituitary, and glioma, and it outperformed state-of-the-art classification on the CE-MRI dataset according to experimental findings. Rehman et al. [44] proposed a framework and performed three studies to classify brain malignancies such as meningioma, glioma, and pituitary tumors utilizing three convolutional neural network architectures (AlexNet, GoogLeNet, and VGGNet). Each study then investigates TL approaches, such as fine-tuning and freezing, utilizing MRI slices from a brain tumor dataset, Figshare. Data augmentation techniques are applied to the MRI slices to help generalize results, increase dataset samples, and reduce the risk of overfitting. In the presented studies, the fine-tuned VGG16 architecture achieved the greatest classification and detection accuracy of 98.69%.

 

Reviewer#1, Concern # 6: table 4 appears to miss the last model results

Author response: Thank you for your comment. We agree with this comment that table 4 missed the last model results in the previous version. Based on your comment we have inserted the accuracy of the last model in table 4.

Author action: In the revised version of the manuscript, we have included the accuracy achieved by last model in table 4 in section # 4.5 on page # 14 and added in reply here:

Table 4. Accuracy comparison among the Inceptionresnetv2 and deep features-SVM approach.

Model               Accuracy (%)
Squeezenet          97.28
Alexnet             97.83
Inceptionresnetv2   98.01
Inceptionv3         97.86
Resnet101           98.01
Resnet18            96.38
Vgg19               97.46
Shufflenet          96.56
Googlenet           96.56
Densenet201         98.37
Resnet50            98.36
Mobilenetv2         98.50

 

Reviewer#1, Concern # 7: the conclusion does not contain any mention of the experiments performed using SVM.

Author response: Thank you for your comment. We agree with this comment that we have not discussed the hybrid experiments performed using DL models and SVM. Unfortunately, we missed mentioning the required details. Based on your comment we have added the details about hybrid experiments in the conclusion.

Author action: In the revised version of the manuscript, we have included the details about the hybrid experiments performed in this work to compare the performance of our best model. The details are included in section # 5 (Conclusion) on page # 16 and added in reply here:

The accuracy of 98.91% for brain tumor classification has confirmed the superiority of the best model (Inceptionresnetv2) over other hybrid approaches in which we used DL models for deep features extraction and SVM for classification of brain tumors.
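The hybrid baseline mentioned above (deep features extracted by a DL model, then an SVM doing the classification) works roughly as sketched below. This is a minimal stand-in, not the authors' implementation: a fixed random projection plays the role of the pre-trained CNN feature extractor, the labels are synthetic, and the SVM is a hand-rolled linear hinge-loss version rather than a library classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "deep features": in the paper these come from a pre-trained
# CNN; here a fixed random projection of synthetic data plays that role.
W_feat = rng.normal(size=(32, 8))
X_raw = rng.normal(size=(200, 32))
features = np.maximum(X_raw @ W_feat, 0.0) / np.sqrt(32)

# Synthetic binary labels (+1 / -1) that depend on the features,
# so the task is learnable by a linear separator.
true_w = rng.normal(size=8)
y = np.where(features @ true_w >= 0, 1.0, -1.0)

# Minimal linear SVM: subgradient descent on the regularized hinge loss.
w, b = np.zeros(8), 0.0
lam, lr = 1e-3, 0.05
for _ in range(300):
    margin = y * (features @ w + b)
    viol = margin < 1                       # samples violating the margin
    grad_w = lam * w - (y[viol, None] * features[viol]).sum(axis=0) / len(y)
    grad_b = -y[viol].sum() / len(y)
    w -= lr * grad_w
    b -= lr * grad_b

acc = (np.sign(features @ w + b) == y).mean()
print(f"training accuracy of the linear SVM on deep features: {acc:.2f}")
```

The split of responsibilities is the point: the (frozen) network supplies a feature vector per image, and only the SVM is trained on those vectors.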

 

Reviewer#1, Concern # 8: the augmentation step is not discussed??

Author response: Thank you for your comment. We agree with this comment that we have not discussed the data augmentation step. Unfortunately, we missed mentioning the required details. Based on your comment we have added the details about the data augmentations techniques used in our work.

Author action: In the revised version of the manuscript, we have included the details about data augmentation techniques in section # 3.1 (Proposed approach) on page # 6 and added in reply here:

In the third step, we applied data augmentation to test the generalizability of the TL models. Data augmentation, i.e., increasing the amount of available data without acquiring new data by applying multiple operations to the current data, has proven advantageous in image classification. Due to the limited number of images in the dataset, we applied data augmentation in this study. The images in the training set were rotated at a random angle between -20 and 20 degrees, arbitrarily translated up to thirty pixels vertically and horizontally, and randomly scaled by a factor in the range [0.9, 1.1] to create additional images. It is also worth noting that the imageDataAugmenter function was utilized to dynamically create sets of augmented images during each training phase. The number of images in the training set was significantly expanded using this data augmentation method, enabling more effective use of our DL models by training with a much higher number of training images. Furthermore, the augmented images were only used to train the proposed framework, not to test it; hence, only real images from the dataset were utilized to test the learned framework.
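The authors realized these settings (rotation in [-20, 20] degrees, translation up to 30 pixels, scaling in [0.9, 1.1]) with MATLAB's imageDataAugmenter; a rough nearest-neighbour equivalent in numpy, for illustration only, might look like this:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_augment(img, rng):
    """Nearest-neighbour affine augmentation mirroring the settings in
    the paper: rotation in [-20, 20] degrees, translation up to 30 px,
    scaling in [0.9, 1.1]."""
    h, w = img.shape
    theta = np.deg2rad(rng.uniform(-20, 20))
    scale = rng.uniform(0.9, 1.1)
    tx, ty = rng.uniform(-30, 30, size=2)

    # Inverse mapping: for each output pixel, find its source pixel.
    cos, sin = np.cos(theta), np.sin(theta)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys, xs = np.mgrid[0:h, 0:w]
    x_rel, y_rel = xs - cx - tx, ys - cy - ty
    src_x = ( cos * x_rel + sin * y_rel) / scale + cx
    src_y = (-sin * x_rel + cos * y_rel) / scale + cy
    src_x = np.clip(np.rint(src_x), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(src_y), 0, h - 1).astype(int)
    return img[src_y, src_x]

image = rng.random((224, 224))          # stand-in for one MRI slice
augmented = random_augment(image, rng)
print(augmented.shape)
```

Sampling fresh parameters on every call matches the "dynamically create sets of augmented images during each training phase" behaviour described above.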

Reviewer#1, Concern # 9: it is not shown how the average time is computed, or over how many images it is averaged; the time is very large, and maybe knowing how it is computed will show us the reason.

Author response: Thank you for your comment. We agree with this comment that we need to discuss the reason for large training time and how the training time is calculated. Based on your comment we have added the details about the elapsed time in our work.

Author action: In the revised version of the manuscript, we have included the details about the elapsed time by each model in section # 4.4 (Results) on pages # 12 and 13, also added in reply here:

The training and validation process of our best performing deep neural network, i.e., Inceptionresnetv2, is shown in Figure 4. The elapsed time returns the total CPU time used by a DL model since it was started, i.e., the time taken by the model to process (classify) all images of the dataset. Since we have used data augmentation, which significantly increases the dataset's size, all models took considerable time for classification; this time also depends on the depth and architectural design of the models. The elapsed time is expressed in seconds. The Shufflenet TL model is the most efficient classifier in terms of elapsed time, achieving satisfactory classification results while taking the shortest time of 159 minutes for brain tumor classification. In contrast, the Xception TL model took a maximum time of 1730 min 25 sec to identify and classify brain tumor MRI images into different types. The Shufflenet model is fast because it uses two new operations, channel shuffle and pointwise group convolution, which significantly reduce computation cost while retaining accuracy. It is to be noted that the classification time for the different variants of the Resnet TL classifiers increases with the number of framework layers. For example, Resnet18 took a minimum time of 187 min 47 sec, and Resnet50 took 525 min 14 sec, while Resnet101 took the maximum time of 801 min 36 sec to classify brain tumors into meningioma, pituitary, and glioma. Resnet18 achieved the lowest classification accuracy because of the ReLU activation function. The ReLU function outputs positive inputs directly, whereas it outputs zero for negative inputs (x<0). So, the ReLU activation function fails to activate a neuron when it receives negative inputs, leaving no guarantee that all of the neurons will be active at all times, resulting in the dying ReLU problem. In this case, the network cannot learn using the optimization approach.
The dying ReLU problem is undesirable because it causes a large percentage of the network to become idle over time. We observed in Table 3 that, in the case of the different variants of Resnet, the accuracy improves with increasing depth of the network, because a deeper DL-based model captures more complicated and essential deep features and increases the network's classification performance. However, as the depth of the network grows, the computational complexity increases, which ultimately affects the efficiency of the networks. Furthermore, we can conclude from Table 3 that the Inceptionresnetv2 TL model is the best classification method for detecting and classifying brain tumors.
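The dying-ReLU effect described above is easy to see numerically: the ReLU gradient is exactly zero wherever the pre-activation is negative, so a unit whose inputs stay negative receives no weight updates. A tiny illustration (leaky ReLU is shown as the usual general remedy; the paper's models do not necessarily use it):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_grad(z):
    # Gradient is exactly zero for every negative pre-activation, so a
    # unit whose inputs stay negative never receives weight updates.
    return (z > 0).astype(float)

z = np.array([-3.0, -0.5, 0.2, 1.7])
print("activations:", relu(z))       # zero for both negative inputs
print("gradients:  ", relu_grad(z))  # zero gradient -> no learning signal

# A common remedy: leaky ReLU keeps a small gradient alpha for z < 0,
# so a "dead" unit can still recover during training.
def leaky_relu_grad(z, alpha=0.01):
    return np.where(z > 0, 1.0, alpha)

print("leaky grads:", leaky_relu_grad(z))
```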

 

 

 

Author Response File: Author Response.pdf

Reviewer 2 Report

The research presents an interesting piece of work, with positive results that are comparable to other results in the literature.

The paper requires some English proofreading.

Author Response

Reviewer#2, Concern # 1: The paper requires some English proofreading.

Author response: Thank you for your comment. We agree with you that our paper needs English proofreading. Based on your comment we have revised the entire paper to remove all mistakes.

Author action: In the revised version of the manuscript, we have revised the entire paper to remove grammatical mistakes, spelling errors, typos, etc., using paid online proofreading services.

Reviewer 3 Report

Dear authors 

this paper discusses techniques to "Detect and Identify Brain Tumors" and is more closely related to a review paper employing a public test dataset.

There are unsubstantiated statements made throughout the paper. Example below 

"Recently, there has been a lot of work on brain tumor detection and classification." This is unsubstantiated.

Author Response

Reviewer#3, Concern # 1: There are unsubstantiated statements made throughout the paper. Example below 

"Recently, there has been a lot of work on brain tumor detection and classification." This is unsubstantiated.

Author response: Thank you for your comment. We agree with you that there are some unsubstantiated statements throughout the paper. We have revised the paper and added the required references.

Author action: In the revised version of the manuscript, we have revised the statement mentioned in the comment and added the references in section # 2 (Related work) on page # 4 and added in reply here:

Recently, there has been a lot of work on brain tumor detection and classification [22-30].

Reviewer 4 Report

The work presented shows thorough reflection, and the findings with respect to comparison and end results are quite interesting.

  1. It seems the dataset used is limited; if the dataset is increased, will there be any effect on the accuracy of the model?
  2. Is it possible to produce the segmented tissues (result images) with respect to the findings of the tumor as a resultant image to support the work produced?

Author Response

Reviewer#4, Concern # 1: It seems the dataset used is limited; if the dataset is increased, will there be any effect on the accuracy of the model?

Author response: Thank you for your comment. We agree with you that we have used a dataset with limited data samples, but we have used data augmentation techniques to increase the size of the dataset. However, we can further improve the accuracy by utilizing larger datasets. Massive data can lead to lower estimation variance and hence better predictive performance, and more data increases the probability that it contains useful information, which is advantageous.

With small training data, our model is able to memorize every training sample, and the model settings become very specific to the training data. Because the model knows every training sample and its output label, the training loss will be very small, but the same settings fail on test data and produce bad results; this is what we call model overfitting.

To avoid overfitting, we need to increase the size of the training data. When we increase the training data, our model will not remember all of it; instead, it will try to find some general setting that applies to the majority of the training data to reduce the loss during training. The same general settings can then also be applied to predict test data. So, by increasing the training data, the training loss of our model will increase somewhat, but the test loss will decrease, which is what is expected for prediction.
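The argument above can be demonstrated with a toy experiment: fit a high-capacity model to few versus many noisy samples and compare train/test error. This sketch uses a degree-9 polynomial on synthetic data, purely to illustrate the point (it is not from the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_and_test(n_train, degree=9):
    """Fit a high-capacity polynomial to n_train noisy samples of a
    simple function and report train/test mean-squared error."""
    x_tr = rng.uniform(-1, 1, n_train)
    y_tr = np.sin(np.pi * x_tr) + 0.1 * rng.normal(size=n_train)
    x_te = np.linspace(-1, 1, 200)
    y_te = np.sin(np.pi * x_te)
    coef = np.polyfit(x_tr, y_tr, degree)
    mse = lambda x, y: np.mean((np.polyval(coef, x) - y) ** 2)
    return mse(x_tr, y_tr), mse(x_te, y_te)

small_tr, small_te = fit_and_test(n_train=12)    # memorizes, generalizes badly
large_tr, large_te = fit_and_test(n_train=300)   # cannot memorize, generalizes

print(f"12 samples:  train MSE {small_tr:.4f}, test MSE {small_te:.4f}")
print(f"300 samples: train MSE {large_tr:.4f}, test MSE {large_te:.4f}")
```

With few samples the training error is near zero while the test error is large (memorization); with many samples the train/test gap collapses, which is exactly the behaviour described in the response.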

Author action: In the revised version of the manuscript, we have revised the conclusion and added details about the effect of a larger dataset on the accuracy of the models. We have added the details in section # 5 (Conclusion) on pages # 16-17 and added in reply here:

Despite the limited data in our dataset, we have achieved satisfactory results. We applied data augmentation techniques to increase the size of the training dataset. However, the results can be further improved in the future by training the model with a larger dataset. 

Despite the accomplishments of this study, some improvements are still possible: first, the comparatively weak performance of the pre-trained DL models as stand-alone classifiers; second, the significant training time required by the transferred deep neural networks; third, the overfitting observed because of limited training data. Future exploration in this domain will address these issues, possibly utilizing larger datasets for training and further tuning the transferred deep neural networks. In the future, we will also explore the TL of the remaining powerful deep neural networks for brain tumor detection and classification with less time complexity.

 

 

Reviewer#4, Concern # 2: Is it possible to produce the segmented tissues (result images) with respect to findings of tumor as resultant image to support the work produced? 

Author response: Thank you for your comment. Basically, the aim of this work was to provide a solution for brain tumor classification which does not require separate feature extraction and segmentation of brain tumor regions. To make the aim of this work clear we have added the details in the introduction and conclusion.

Author action: In the revised version of the manuscript, we have added the details about segmentation in section # 1 (Introduction) on pages # 3-4 and added them in a reply here:

 

Furthermore, several approaches utilize manually defined tumor regions to detect and classify brain tumors, which prevents them from being fully automated. Our new approach does not involve any segmentation or feature extraction and selection in the pre-processing step, in contrast to some previous methods [24], [25], [26], which require prior segmentation and feature extraction of tumors from the MRI images.

We also have added details in section # 5 (conclusion last sentence) on page # 16 and added in a reply here:

We will also apply image segmentation techniques to improve the performance of our best-performing model.  

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Dear Authors

Thanks for fulfilling each comment I have requested

Please add a reference to the equations of computing performance measures

 

Author Response

Dear Reviewer,

Thank you for providing your valuable suggestions on our research manuscript and for the opportunity to address the comments and improve the overall quality of the paper. We have also highlighted the changes in the revised manuscript to make them visible.

Please find the attached file for the responses against your valuable suggestions
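For reference, the performance measures reported in the manuscript (accuracy, precision, recall, F-measure) follow the standard confusion-matrix definitions; the sketch below is a generic illustration of those equations with made-up labels, not the manuscript's exact notation or data:

```python
import numpy as np

# Hypothetical predictions for a 3-class task (0 = meningioma,
# 1 = pituitary, 2 = glioma), purely to illustrate the formulas.
y_true = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])
y_pred = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 0])

k = 3
cm = np.zeros((k, k), dtype=int)      # rows: true class, cols: predicted
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1

accuracy = np.trace(cm) / cm.sum()            # correct / total
precision = np.diag(cm) / cm.sum(axis=0)      # TP / (TP + FP), per class
recall = np.diag(cm) / cm.sum(axis=1)         # TP / (TP + FN), per class
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy: {accuracy:.2f}")
print("per-class precision:", np.round(precision, 2))
print("per-class recall:   ", np.round(recall, 2))
print("per-class F1:       ", np.round(f1, 2))
```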

Author Response File: Author Response.docx

Reviewer 3 Report

 

Dear authors 

thank you for addressing the comments Please correct the typo below before resubmission 

line 103 

"T1-weighted Circuits and contrast-enhanced images"

I believe "Circuits" is a typo; please provide the contrast-enhanced methods (i.e., DWI and gadolinium-enhanced) with a reference here.

Author Response

Dear Reviewer,

Thank you for providing your valuable suggestions on our research manuscript and for the opportunity to address the comments and improve the overall quality of the paper. We have also highlighted the changes in the revised manuscript to make them visible.

Please find the attached file for the responses to your valuable suggestions. For quick reference it is also copied here:

 

Reviewer#3, Concern # 1: “T1-weighted circuits and contrast enhanced images” I believe “Circuits” is a typo and please provide the contrast-enhanced methods (i.e., DWI and gadolinium enhanced) with a reference here. 

Author response: Thank you for your comment. We agree with this comment that there is a typing mistake. We have updated the sentence and removed the word "circuits" because the correct phrase is "T1-weighted contrast-enhanced images", not "T1-weighted circuits and contrast-enhanced images".

Author action: In the revised version of the manuscript, we have updated the sentence and added references for the contrast-enhanced methods in section # 1, on page # 3, and added it in a reply here:

Generally, for the classification of brain tumors, T1-weighted contrast-enhanced (gadolinium-enhanced) MRI images (T1c) are used because tumors are considerably better visualized on T1c due to the administration of 0.15-0.20 mmol/kg of contrast material (gadolinium) to the patients [44]. Diffusion-weighted imaging (DWI) is also considered vital for detecting brain tumors because it can visualize restrictions to the free diffusion of water caused by tissue microstructure [46].

Author Response File: Author Response.docx
