Article

A Customized VGG19 Network with Concatenation of Deep and Handcrafted Features for Brain Tumor Detection

by
Venkatesan Rajinikanth
1,
Alex Noel Joseph Raj
2,
Krishnan Palani Thanaraj
1 and
Ganesh R. Naik
3,*
1
Department of Electronics and Instrumentation Engineering, St. Joseph’s College of Engineering, Chennai 600119, India
2
Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou 515063, China
3
MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Milperra, NSW 2560, Australia
*
Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(10), 3429; https://doi.org/10.3390/app10103429
Submission received: 22 March 2020 / Revised: 6 May 2020 / Accepted: 12 May 2020 / Published: 15 May 2020

Abstract

A brain tumor (BT) is a brain abnormality that arises due to various reasons. An unrecognized and untreated BT will increase the morbidity and mortality rates. The clinical-level assessment of BT is normally performed using bio-imaging techniques, and MRI-assisted brain screening is one of the most widely used. The proposed work aims to develop a deep learning architecture (DLA) to support the automated detection of BT using two-dimensional MRI slices. This work proposes the following DLAs to detect the BT: (i) implementing pre-trained DLAs, such as AlexNet, VGG16, VGG19, ResNet50 and ResNet101, with a deep-features-based SoftMax classifier; (ii) pre-trained DLAs with deep-features-based classification using decision tree (DT), k-nearest neighbor (KNN), SVM-linear and SVM-RBF classifiers; and (iii) a customized VGG19 network with serially fused deep and handcrafted features to improve the BT detection accuracy. The experimental investigation was executed separately using Flair, T2 and T1C modality MRI slices, and a ten-fold cross validation was implemented to substantiate the performance of the proposed DLA. The results of this work confirm that the VGG19 with SVM-RBF attained better classification accuracy with Flair (>99%), T2 (>98%), T1C (>97%) and clinical images (>98%).

1. Introduction

The brain is one of the primary organs in humans; it processes the physiological signals coming from the sensory organs and takes the necessary control measures. The normal operation of the brain is badly affected if any infection or disease arises, and an unnoticed and untreated abnormality may lead to various complications, including death [1,2].
The regular state of the brain may be affected for different reasons, such as birth defects, a head injury due to an accident or uncontrolled cell growth (UCG) in a central brain section [3,4]. Such an irregularity causes various problems in the physiological system, and an untreated brain abnormality can lead to several major illnesses. A brain abnormality due to UCG is a major threat, and untreated growth will lead to brain cancer, one of the most rapidly increasing cancer burdens globally. The work of Louis et al. [5] discusses the ranking/classification of brain tumors (BTs) as per the 2016 report of the World Health Organization (WHO).
Recently, a substantial number of awareness programs have been initiated to protect people from such abnormalities. However, because of different unavoidable causes, such as contemporary lifestyles, food behavior, heredity factors and age, many individuals suffer from a developed BT [6,7]. If the BT is detected at a premature stage, a promising treatment can be employed to heal or manage the cell growth. The clinical-level detection of BT is performed with (i) single/multi-channel EEG signals and (ii) brain imaging techniques. The image-assisted technique provides more meaningful information than the signal-assisted technique. Hence, in most clinical-level detection, imaging procedures are widely preferred, and recording procedures such as computed tomography (CT) and magnetic resonance imaging (MRI) are widely used to record and check brain abnormalities using three-dimensional (3D) and 2D images. Compared to CT, MRI is widely preferred due to its varied modalities, and the visibility of the BT in a brain MRI is much clearer than in CT. Hence, MRI is largely preferred to evaluate various brain abnormalities, including the BT. BT evaluation with the Flair, T2 and T1C modalities offers enhanced tumor visibility compared to the T1 and diffusion-weighted (DW) modalities [8,9,10,11,12].
In the literature, a significant number of conventional and modern BT detection procedures have been proposed and implemented by researchers using a chosen machine learning (ML) or deep learning (DL) technique [13,14,15,16,17,18]. The chief aim of the existing automated and semi-automated disease evaluation procedures is to develop an accurate disease detection system to assist the doctor during the diagnosis and treatment-planning process. Most of the new disease diagnosis systems implement a DL technique due to its superiority and detection accuracy. The work of Talo et al. [19] implemented a transfer-learning-based deep learning architecture (DLA) to detect tumors using 2D MRI slices and achieved a classification accuracy of >98%. Further, Talo et al. [20] presented a detailed analysis of the existing DLAs in the literature and confirmed that ResNet50 offers better classification accuracy (>95%) during the brain tumor detection process. Amin et al. [21] implemented a brain tumor evaluation procedure using the BRATS2013, BRATS2015 and clinical databases and achieved an accuracy of >98%. The work of Sharif et al. [22] implemented an enhanced binomial thresholding and multi-feature selection technique to classify brain tumors and attained enhanced results. The work of Fabelo et al. [23] implemented a DLA to detect glioblastoma using hyperspectral 3D and 2D brain images. Sajid et al. [24] implemented a DL-based brain tumor detection procedure and attained better values of sensitivity and specificity. The higher-order-spectra-feature-based detection and classification of the abnormal section in a brain MRI is discussed by Acharya et al. [25]. Further, many other approaches have been proposed to improve the detection accuracy on a class of brain MRI images ranging from benchmark datasets to clinical images [26,27,28,29,30,31].
The work in this paper aimed to evaluate the performance of the existing DLAs, such as AlexNet, VGG16, VGG19, ResNet50 and ResNet101 [20], for detecting the BT using 2D brain MRI slices. Initial detection was performed with a transfer learning procedure using the attained deep features and the SoftMax classifier. After finding the most suitable DLA for the considered task, the concatenation of the deep and handcrafted features was performed to enhance the classification accuracy with the SoftMax classifier. Further, a detailed comparative study with other classifiers, such as random forest (RF), decision tree (DT), k-nearest neighbor (KNN), SVM-linear and SVM-RBF, was also performed to attain better classification accuracy.
For the experimental investigation, 2D brain MRIs of the Flair, T2 and T1C modalities with dimensions 227 × 227 × 1 were considered, and the essential images for this task were collected from benchmark datasets, such as BRATS (without skull section) [32,33] and TCIA (with skull section) [34,35,36,37], to train, test and validate the DLAs. Further, a clinical-level dataset [38] with the skull was considered to validate the DLA; this dataset was already used to test machine-learning systems [2,3,4]. In this work, the performance of the proposed system was confirmed by computing the accuracy, precision, sensitivity, specificity, F1-score and negative predictive value (NPV). This work also implemented a ten-fold cross validation, and the performance of the proposed DLA was confirmed based on the average values.
The remaining parts of this work are set out as follows: Section 2 presents the materials and methods. Section 3 presents the details of the experimental investigation, and the conclusion of this work is presented in Section 4.

2. Materials and Methods

In the literature, a number of DLAs have been proposed to detect abnormalities in medical images using conventional and customized architectures [17,39,40,41]. Developing a new DLA from scratch is a complex task that requires considerable work to build, train, test and validate the architecture for a chosen problem. Hence, most of the earlier works adapt the proven DLAs existing in the literature to solve a disease detection problem. Furthermore, selecting and implementing a particular architecture requires prior knowledge about its structure, implementation complexity, initial tuning and validation procedures [17,39,40,41,42].
This work initially considers the existing DLAs discussed in [20] to detect brain tumors from the considered MRI database. The transfer learning concept is employed to train, test and validate the adopted DLAs using the SoftMax classifier. The DLA that offers the best classification accuracy is then selected, and its performance is further enhanced using the proposed method. The initial experimental outcome of this research confirmed that VGG19 offers better classification accuracy than the alternatives; hence, the conventional and customized VGG19 are considered in the remainder of this research to attain better tumor detection accuracy.
After selecting the VGG19 to solve the considered image examination problem, its performance is enhanced using the following approaches: (i) replacing the SoftMax classifier with DT, KNN, SVM-linear and SVM-RBF classifiers, and (ii) enhancing the outcome of the VGG19 using a new feature vector obtained by fusing the handcrafted and deep features. Earlier research confirms that, if the feature vector is enhanced, a pre-trained DLA will offer better detection accuracy than the conventional DLA [1,29,30]. Figure 1 depicts the customized VGG19 proposed and implemented in this research. The pre-trained DLA provides a one-dimensional deep-feature vector, and the handcrafted texture features are attained using the contourlet transform (COT) (38 features), curvelet transform (CUT) (121 features) and discrete wavelet transform (DWT) (40 features) [43]. The deep and handcrafted features are sorted and serially combined based on principal component analysis (PCA), and these features are then used to train, test and validate the classifier unit, which separates the given image into the normal/tumor classes [1].

2.1. Image Collection and Processing

In every medical evaluation procedure, the performance of the developed diagnosis system depends mainly on the database chosen for the problem to be solved. To solve the brain tumor detection problem, the most commonly considered images are obtained from the well-known benchmark images of the Multimodal Brain Tumor Segmentation Challenge (BRATS) [32,33]. Figure 2 depicts the image dataset considered in this work along with the available MRI modalities. The BRATS and TCIA datasets do not include the DW and T1C modalities, respectively; hence, these are denoted as not available (NA) in Figure 2. In this work, the 2D MRI slices of Flair, T2 and T1C are considered for the assessment [2,3,4,6].
The Cancer Imaging Archive (TCIA) also provides clinical-grade medical images for research purposes [34,35,36,37]. In this work, the glioma images that include the skull section are chosen for the assessment. Furthermore, the clinical-grade MRIs collected from Proscans Ltd. are considered to validate the proposed DLA [2,3,4,38]. In this work, data augmentation is implemented to increase the number of BRATS and TCIA images with the help of image-flip and image-rotate (90° left/right) operations. This procedure helped to achieve a considerable number of test images for both the normal and tumor classes.
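The flip and rotate augmentation described above can be sketched with numpy. This is an illustrative sketch: the paper specifies only image-flip and 90° left/right rotation operations, so the exact set of views per slice is an assumption.

```python
import numpy as np

def augment(slice_2d):
    """Return the original 2D MRI slice together with its flipped and
    90-degree left/right rotated copies, as described for the
    BRATS/TCIA augmentation step."""
    return [
        slice_2d,
        np.fliplr(slice_2d),       # image-flip
        np.rot90(slice_2d, k=1),   # rotate 90 degrees left
        np.rot90(slice_2d, k=-1),  # rotate 90 degrees right
    ]

img = np.arange(4).reshape(2, 2)
views = augment(img)  # one slice becomes four training images
```

Each 227 × 227 slice therefore contributes several distinct training images, which is how the normal and tumor classes reach the counts listed in Table 1.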
Table 1 presents the details of the test image datasets and its modalities considered in this work. This table also depicts the images considered for training and testing the classifier unit.

2.2. Handcrafted Feature Extraction

In ML and DL techniques, feature extraction is the principal procedure, which helps to extract meaningful information from the image based on its shape and texture values. Based on these features, the implemented classifier units are trained, tested and validated. In the literature, a substantial number of feature extraction techniques have been implemented for a class of RGB/grayscale pictures [44,45,46,47,48,49]. The features extracted with the conventional schemes are called the handcrafted features, and each approach provides a 1D feature vector. The implemented COT, CUT and DWT helped to obtain a sum of 199 features, which were then fused with the deep-feature vector to increase its dimension. The feature vectors of the chosen procedures are represented as FV1 = 1 × 1 × 38 (COT), FV2 = 1 × 1 × 121 (CUT) and FV3 = 1 × 1 × 40 (DWT), and the handcrafted feature vector is FVh = FV1 + FV2 + FV3 = 1 × 1 × 199. Other details on the feature extraction and selection can be found in [1,30].
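As a concrete illustration of the DWT stage, the sketch below computes a one-level 2D Haar DWT directly in numpy and takes the energy of each sub-band as a texture descriptor. This is a stand-in under stated assumptions: the paper's actual DWT feature set (40 values) and its wavelet family are not detailed here, so both the sub-band statistics and the feature count are illustrative.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT producing the LL, LH, HL and HH sub-bands,
    implemented directly in numpy as an illustrative stand-in for the
    DWT stage of the handcrafted-feature extraction."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    img = img[: h - h % 2, : w - w % 2]       # crop to even dimensions
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0  # row-wise average
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0  # row-wise detail
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def dwt_energy_features(img):
    """Sub-band energies as simple texture descriptors."""
    return np.array([np.sum(b ** 2) for b in haar_dwt2(img)])

feats = dwt_energy_features(np.random.rand(227, 227))
```

Richer statistics (mean, entropy, contrast) per sub-band, and additional decomposition levels, would bring the vector up to the 40 DWT features used in the paper.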

2.3. Feature Selection and Concatenation

The major objective of this research was to achieve a solitary feature vector by fusing FVh with the deep features (DF) of the considered DLA. In the literature, features are fused by serial and parallel processes; in this work, serial concatenation is adopted due to its simplicity, and the FVh to be combined with the DF is selected and sorted based on PCA. Normally, PCA transforms n vectors (p1, p2, …, pn) in a D-dimensional space into vectors (p′1, p′2, …, p′n) in a D′-dimensional space, where D and D′ are positive integers with D′ ≤ D.
The new feature obtained with PCA can be represented as
f_n = Σ_{k=1}^{U} R_{k,i} S_k
where S_k denotes the eigenvectors and R_{k,i} the principal components [1,26,29,30]. After implementing the feature concatenation, the final feature vector is (1 × 1 × 199) + (1 × 1 × 1024) = 1 × 1 × 1223.
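The serial fusion and PCA-based ranking can be sketched as follows. The array shapes (1024 deep features, 199 handcrafted features) match the text; the number of retained principal components (50) and the use of sklearn's PCA are illustrative assumptions, since the paper does not fix the exact selection rule.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
deep = rng.standard_normal((100, 1024))  # deep features, one row per MRI slice
hand = rng.standard_normal((100, 199))   # handcrafted COT + CUT + DWT features

# serial concatenation: 1024 + 199 = 1223 features, as in the text
fused = np.concatenate([deep, hand], axis=1)

# PCA ranks/compresses the fused vector before classification; the
# number of retained components (50) is an illustrative choice
scores = PCA(n_components=50).fit_transform(fused)
```

The fused 1 × 1 × 1223 vector (or its PCA projection) is what the classifier unit is trained, tested and validated on.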

2.4. Classification

The general performance of the DLA relies mainly on the classifier employed to separate the considered images into the normal/tumor classes. The traditional DLA uses the SoftMax classifier, which provides reasonable accuracy when the transfer learning approach is used. Further, the performance of the DLA can be enhanced by employing appropriate classifiers [50,51,52,53].
In this work, the SoftMax classifier is replaced with other classifiers, discussed below:
  • Decision tree: DT is one of the well-known methodologies used to categorize linear and non-linear information with a sequence of tests that expands like a tree. The DT uses attribute test conditions at the root and internal nodes, and the class labels form the terminal nodes. Once a DT has been built, categorization is accomplished by the decisions taken along each branch of the tree. Other particulars of the DT can be found in [43,44,45,46].
  • K-nearest neighbor: KNN is a well-known technique often used to classify medical images based on an existing feature set. In this work, KNN is used to classify the brain MRIs of varied modalities. During the classification task, the KNN evaluates the distance between the new feature and each training feature and finds the nearest neighbors. Earlier works on the KNN can be found in [43,44,45,46].
  • Support vector machine: The SVM classifier uses a hyperplane to separate the dataset based on the features gathered during the training stage, and it is one of the most frequently used classifiers for MRI images. The radial-basis-function-based SVM (SVM-RBF) is used to sort the 2D MRIs with the selected features. In SVM-RBF, the kernel value is controlled by a scaling parameter "σ", which is varied from 0.2 to 1.9 with a step size of 0.1. Furthermore, the SVM with a linear kernel (SVM-linear) is also adopted to grade the MRI database [43].
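The σ sweep for the SVM-RBF can be sketched with scikit-learn. Note that sklearn's `SVC` parameterizes the RBF kernel exp(-||x-z||²/(2σ²)) via gamma = 1/(2σ²), so the σ grid from the text is mapped accordingly; the toy dataset stands in for the MRI feature vectors.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# toy two-class data standing in for the MRI feature vectors
X, y = make_classification(n_samples=200, n_features=20, random_state=1)

best_sigma, best_score = None, -1.0
for sigma in np.arange(0.2, 2.0, 0.1):  # sigma = 0.2 ... 1.9, step 0.1
    # RBF kernel exp(-||x-z||^2 / (2*sigma^2)) => gamma = 1 / (2*sigma^2)
    clf = SVC(kernel="rbf", gamma=1.0 / (2.0 * sigma ** 2))
    score = cross_val_score(clf, X, y, cv=5).mean()
    if score > best_score:
        best_sigma, best_score = float(sigma), score
```

The same loop with `SVC(kernel="linear")` (no σ sweep needed) gives the SVM-linear baseline.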

2.5. Performance Measures and Validation

The evaluation of the performance of classifiers is normally carried out by computing the essential performance values. In this work, the classifier performance is assessed based on a chosen set of performance measures. The initial assessment computes the essential counts, namely the true-positive (TP), true-negative (TN), false-positive (FP) and false-negative (FN) values [43,44,45,54].
Accuracy = ACC = (TP + TN) / (TP + TN + FP + FN)
Precision = PRE = TP / (TP + FP)
Sensitivity = SEN = TP / (TP + FN)
Specificity = SPE = TN / (TN + FP)
F1-score = F1S = 2TP / (2TP + FN + FP)
Negative predictive value = NPV = TN / (TN + FN)
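The six measures above translate directly into code; the confusion-matrix counts in the usage line are hypothetical, chosen only to exercise the formulas.

```python
def metrics(tp, tn, fp, fn):
    """Compute the six performance measures defined above from the
    confusion-matrix counts."""
    return {
        "ACC": (tp + tn) / (tp + tn + fp + fn),
        "PRE": tp / (tp + fp),
        "SEN": tp / (tp + fn),
        "SPE": tn / (tn + fp),
        "F1S": 2 * tp / (2 * tp + fn + fp),
        "NPV": tn / (tn + fn),
    }

# hypothetical counts for a 190-image test set
m = metrics(tp=90, tn=85, fp=5, fn=10)
```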

3. Experimental Outcome and Discussions

This part of the work presents the experimental results and discussions. The work is executed on a workstation with an Intel i5 processor, 8 GB RAM and 2 GB VRAM in a Matlab environment. The following initial values are assigned for every DLA: epoch size = 55, iteration size = 1200, iterations per epoch = 110, updating frequency = five iterations, learning error rate = 1e-5, stopping criterion = best validation or maximum iteration.
Initially, the DLAs, such as AlexNet, VGG16, VGG19, ResNet50 and ResNet101, are considered to examine and classify the dataset into the normal/tumor classes using a SoftMax classifier trained and tested with the deep features. AlexNet is implemented first, and the attained results are depicted in Figure 3. Figure 3a,b presents the accuracy and the loss values attained during training and testing, respectively. A similar procedure is repeated with the other DLAs, and the corresponding results are depicted in Table 2. The results in this table confirm that the performance values attained with the VGG19 DLA are better for all the MRI modalities (Flair, T2 and T1C). This result confirms that, for the chosen dataset, the VGG19 offers better results than the alternatives; hence, the VGG19 is considered for further enhancement to attain better accuracy using the proposed methodology.
The performance of the chosen VGG19 is then verified by replacing the SoftMax classifier with other approaches, such as DT, KNN, SVM-Linear and SVM-RBF, and the obtained results are depicted in Table 3. This table confirms that the pre-trained VGG19 offered a classification accuracy of 96.70% for Flair modality (SVM-RBF), 96.10% for T2 modality (DT) and 94.60% for the T1C modality MRI slices. These results confirm that, if the SoftMax is replaced with a chosen classifier unit, the detection accuracy can be improved.
The performance of the traditional VGG19 is further improved by combining the deep features with the handcrafted features (FVh). During this process, a serial concatenation procedure is implemented to combine the deep features of dimension 1 × 1 × 1024 with the FVh of dimension 1 × 1 × 199 to attain a new feature vector of size 1 × 1 × 1223. This feature set is then considered to train, test and validate the classifier implemented in the VGG19 network. The sample results attained with various layers of the VGG19 are presented in Figure 4, and the area under the curve (AUC) of 98.5% attained using the VGG19 with the SVM-RBF classifier is presented in Figure 5.
Table 4 shows the overall performance measures achieved after a ten-fold cross validation for the BRATS, TCIA and clinical brain MRI datasets; the averages of the performance measures are considered for the validation. The results in this table confirm that the proposed network helped to achieve better classification accuracy for all the considered datasets, irrespective of the modalities of the test images. Figure 6 depicts the sample classification result attained with the proposed VGG19 with the SVM-RBF classifier for the TCIA database. The TCIA database consists of brain MRI slices with the skull section; hence, the average detection accuracy attained for the TCIA database is lower than that attained with the BRATS database. Figure 7 depicts the overall results attained with the customized VGG19 for the considered test images, and these results confirm that the VGG19 works well on the considered image dataset. Further, this architecture helped to attain a classification accuracy of 98.17%. This outcome confirms that the proposed DLA is clinically significant, and in the future, it can be considered to detect tumors in clinical-grade brain MRI slices. During real-time implementation, this DLA can be used as an assisting tool for the doctor to make decisions during the brain tumor detection and treatment-planning processes.
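The ten-fold validation protocol described above can be sketched with scikit-learn's stratified splitter; the toy data and RBF classifier stand in for the fused feature vectors and the paper's SVM-RBF configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# toy data standing in for the fused 1 x 1 x 1223 feature vectors
X, y = make_classification(n_samples=300, n_features=50, random_state=0)

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
fold_acc = []
for train_idx, test_idx in skf.split(X, y):
    clf = SVC(kernel="rbf").fit(X[train_idx], y[train_idx])
    fold_acc.append(clf.score(X[test_idx], y[test_idx]))

mean_acc = float(np.mean(fold_acc))  # average accuracy over the ten folds
```

Averaging over the ten folds, as done here for accuracy, is applied to all six performance measures in Table 4.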
The performance of the proposed DLA is then validated with other methods existing in the literature, and the results are presented in Table 5. These results confirm that customized VGG19 offers enhanced outcomes for the BRATS, TCIA and clinical level images compared to the results of existing methods.
This study focused on implementing a DLA to segregate MRI slices into the normal/tumor classes; after the segregation, the MRI slices with a tumor are further examined by the doctor. In the future, the outcome of the proposed system, along with clinically collected data, can be used to develop a computerized model to track ependymal tumor dissemination.
The future scope of the proposed research includes:
(i)
Enhancing the handcrafted feature vector by considering the additional texture and shape features.
(ii)
Adjusting the fully-connected and drop-out layers to improve the categorization accuracy.
(iii)
Improving the feature-concatenation technique to attain better results.
(iv)
Implementing the proposed VGG19 DLA to classify the tumors into low/high grade gliomas.
(v)
Developing a neural-network model for ependymal tumor dissemination.

4. Conclusions

The main objective of the proposed research was to identify and improve a suitable deep learning architecture to achieve better detection of brain tumors from 2D MRIs. Through an experimental investigation, this work identified that the VGG19 attains better results than AlexNet, VGG16, ResNet50 and ResNet101. The proposed work implemented the following techniques to improve the detection accuracy of the VGG19: (i) replacing the SoftMax classifier with well-known classifiers, such as decision tree, k-nearest neighbor, SVM-linear and SVM-RBF, and (ii) improving the performance of the pre-trained VGG19 by implementing a feature fusion technique. In this work, the customized VGG19 was developed using the handcrafted features of dimension 1 × 1 × 199 and the deep features of dimension 1 × 1 × 1024; these features were sorted based on PCA and fused using the serial concatenation technique. The final feature vector of size 1 × 1 × 1223 was then used to enhance the classification accuracy. The brain images of the BRATS, TCIA and clinical datasets were considered for the examination, and the overall results attained with the proposed VGG19 with the SVM-RBF classifier were better on the Flair, T2 and T1C modality images. The performance was confirmed with ten-fold cross validation, with classification accuracies of >99%, >98% and >97% for the Flair, T2 and T1C modalities, respectively.

Author Contributions

Conceptualization, V.R. and K.P.T.; methodology, V.R. and K.P.T.; software, V.R. and K.P.T.; validation, A.N.J.R. and G.R.N.; formal analysis, K.P.T., A.N.J.R. and G.R.N.; investigation, V.R., K.P.T., A.N.J.R. and G.R.N.; resources, K.P.T., A.N.J.R. and G.R.N.; data curation, V.R. and K.P.T.; writing—original draft preparation, V.R. and K.P.T.; writing—review and editing, V.R., K.P.T., A.N.J.R. and G.R.N.; visualization, V.R., K.P.T., A.N.J.R. and G.R.N.; supervision, A.N.J.R.; project administration, G.R.N.; funding acquisition, A.N.J.R. and G.R.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially supported by The Research Start-Up Fund Subsidized Project of Shantou University, China, grant number NTF17016.

Acknowledgments

The authors of this article would like to acknowledge M/S. Proscans Diagnostics Pvt. Ltd., a leading scan center in Chennai, for providing the clinical brain MRI for experimental investigation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bhandary, A. Deep-learning framework to detect lung abnormality—A study with chest X-ray and lung CT scan images. Pattern Recogn. Lett. 2020, 129, 271–278.
  2. Pugalenthi, R.; Rajakumar, M.P.; Ramya, J.; Rajinikanth, V. Evaluation and classification of the brain tumor MRI using machine learning technique. Control Eng. Appl. Inf. 2019, 21, 12–21.
  3. Fernandes, S.L.; Tanik, U.J.; Rajinikanth, V.; Karthik, K.A. A reliable framework for accurate brain image examination and treatment planning based on early diagnosis support for clinicians. Neural Comput. Appl. 2019, 1–12.
  4. Dey, N. Social-group-optimization based tumor evaluation tool for clinical brain MRI of Flair/diffusion-weighted modality. Biocybern. Biomed. Eng. 2019, 39, 843–856.
  5. Louis, D.N. The 2016 World Health Organization classification of tumors of the central nervous system: A summary. Acta Neuropathol. 2016, 131, 803–820.
  6. Rajinikanth, V.; Satapathy, S.C.; Fernandes, S.L.; Nachiappan, S. Entropy based segmentation of tumor from brain MR images—A study with teaching learning based optimization. Pattern Recogn. Lett. 2017, 94, 87–95.
  7. Bauer, S.; Wiest, R.; Nolte, L.P. A survey of MRI-based medical image analysis for brain tumor studies. Phys. Med. Biol. 2013, 58, R97.
  8. El-Dahshan, E.S.A.; Mohsen, H.M.; Revett, K. Computer-aided diagnosis of human brain tumor through MRI: A survey and a new algorithm. Expert Syst. Appl. 2014, 41, 5526–5545.
  9. Palani, T.K.; Parvathavarthini, B.; Chitra, K. Segmentation of brain regions by integrating meta heuristic multilevel threshold with Markov random field. Curr. Med. Imaging Rev. 2016, 12, 4–12.
  10. Rajinikanth, V.; Raja, N.S.M.; Kamalanand, K. Firefly algorithm assisted segmentation of tumor from brain MRI using Tsallis function and Markov random field. Control Eng. Appl. Inform. 2017, 19, 97–106.
  11. Amin, J.; Sharif, M.; Yasmin, M. Big data analysis for brain tumor detection: Deep convolutional neural networks. Future Gener. Comput. Syst. 2018, 87, 290–297.
  12. Thanaraj, P.; Parvathavarthini, B. Multichannel interictal spike activity detection using time–frequency entropy measure. Australas. Phys. Eng. Sci. Med. 2017, 40, 413–425.
  13. Raja, N.S.M.; Fernandes, S.L.; Dey, N.; Satapathy, S.C. Contrast enhanced medical MRI evaluation using Tsallis entropy and region growing segmentation. J. Ambient. Intell. Hum. Comput. 2018, 1–12.
  14. Liu, M.; Zhang, J.; Nie, D. Anatomical landmark based deep feature representation for MR images in brain disease diagnosis. IEEE J. Biomed. Health Inform. 2018, 22, 1476–1485.
  15. Kanmani, P.; Marikkannu, P. MRI brain images classification: A multi-level threshold based region optimization technique. J. Med. Syst. 2018, 42, 62.
  16. Wang, G.; Li, W.; Zuluaga, M.A. Interactive medical image segmentation using deep learning with image-specific fine-tuning. IEEE Trans. Med. Imaging 2018, 37, 1562–1573.
  17. Gudigar, A.; Raghavendra, U.; San, T.R.; Ciaccio, E.J.; Acharya, U.R. Application of multiresolution analysis for automated detection of brain abnormality using MR images: A comparative study. Future Gener. Comput. Syst. 2019, 90, 359–367.
  18. Buda, M. Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm. Comput. Biol. Med. 2019, 109, 218–225.
  19. Talo, M.; Baloglu, U.B.; Yıldırım, O.; Acharya, U.R. Application of deep transfer learning for automated brain abnormality classification using MR images. Cognit. Syst. Res. 2019, 24, 176–188.
  20. Talo, M.; Yildirim, O.; Baloglu, U.B.; Aydin, G.; Acharya, U.R. Convolutional neural networks for multi-class brain disease detection using MRI images. Comput. Med. Imag. Grap. 2019, 78, 101673.
  21. Amin, J. Brain tumor detection using statistical and machine learning method. Comput. Methods Progr. Biomed. 2019, 177, 69–79.
  22. Sharif, M.; Tanvir, U.; Munir, E.U.; Khan, M.A.; Yasmin, M. Brain tumor segmentation and classification by improved binomial thresholding and multi-features selection. J. Ambient. Intell. Hum. Comput. 2018, 1–20.
  23. Fabelo, H. Deep learning-based framework for in vivo identification of glioblastoma tumor using hyperspectral images of human brain. Sensors 2019, 19, 920.
  24. Sajid, S.; Hussain, S.; Sarwar, A. Brain tumor detection and segmentation in MR images using deep learning. Arab. J. Sci. Eng. 2019, 44, 9249–9261.
  25. Acharya, U.R. Automatic detection of ischemic stroke using higher order spectra features in brain MRI images. Cognit. Syst. Res. 2019, 58, 134–142.
  26. Bakator, M.; Radosav, D. Deep learning and medical diagnosis: A review of literature. Multimodal Technol. Interact. 2018, 2, 47.
  27. Havaei, M. Brain tumor segmentation with deep neural networks. Med. Image Anal. 2017, 35, 18–31.
  28. Mallick, P.K. Brain MRI image classification for cancer detection using deep wavelet autoencoder-based deep neural network. IEEE Access 2019, 7, 46278–46287.
  29. Tiwari, A.; Srivastava, S.; Pant, M. Brain tumor segmentation and classification from magnetic resonance images: Review of selected methods from 2014 to 2019. Pattern Recogn. Lett. 2019, 131, 244–260.
  30. Sharif, M.I.; Li, J.P.; Khan, M.A.; Saleem, M.A. Active deep neural network features selection for segmentation and recognition of brain tumors using MRI images. Pattern Recogn. Lett. 2020, 129, 181–189.
  31. Mohsen, H. Classification using deep learning neural networks for brain tumors. Future Comput. Inform. J. 2018, 3, 68–71.
  32. Menze, B.H.; Jakab, A.; Bauer, S. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imaging 2015, 34, 1993–2024.
  33. Brain Tumour Database (BraTS-MICCAI). Available online: http://hal.inria.fr/hal-00935640 (accessed on 15 February 2020).
  34. Clark, K.; Vendt, B.; Smith, K.; Freymann, J.; Kirby, J.; Koppel, P.; Moore, S.; Phillips, S.; Maffitt, D.; Pringle, M.; et al. The Cancer Imaging Archive (TCIA): Maintaining and operating a public information repository. J. Digit. Imaging 2013, 26, 1045–1057.
  35. Pedano, N.; Flanders, A.E.; Scarpace, L.; Mikkelsen, T.; Eschbacher, J.M.; Hermes, B.; Ostrom, Q. Radiology data from The Cancer Genome Atlas low grade glioma [TCGA-LGG] collection. Cancer Imaging Arch. 2016.
  36. Chang, K. Automatic assessment of glioma burden: A deep learning algorithm for fully automated volumetric and bi-dimensional measurement. Neuro Oncol. 2019, 21, 1412–1422.
  37. Chang, P. Deep-learning convolutional neural networks accurately classify genetic mutations in gliomas. Am. J. Neuroradiol. 2018, 39, 1201–1207.
  38. Proscans Diagnostics PVT. LTD. Homepage. Available online: https://proscans.in (accessed on 1 November 2019).
  39. Munir, K. Cancer diagnosis using deep learning: A bibliographic review. Cancers 2019, 11, 1235.
  40. Tandel, G.S. A review on a deep learning perspective in brain cancer classification. Cancers 2019, 11, 111.
  41. Nadeem, M.W. Brain tumor analysis empowered with deep learning: A review, taxonomy, and future challenges. Brain Sci. 2020, 10, E118.
  42. Khawaldeh, S. Noninvasive grading of glioma tumor using magnetic resonance imaging with convolutional neural networks. Appl. Sci. 2018, 8, 27.
  43. Acharya, U.R.; Fernandes, S.L.; WeiKoh, J.E.; Ciaccio, E.J.; Fabell, M.K.M.; Tanik, U.J.; Rajinikanth, V.; Yeong, C.H. Automated detection of Alzheimer’s disease using brain MRI images—A study with various feature extraction techniques. J. Med. Syst. 2019, 43, 302.
  44. Acharya, U.R.; Sree, S.V.; Ang, P.C.A.; Yanti, R.; Suri, J.S. Application of non-linear and wavelet based features for the automated identification of epileptic EEG signals. Int. J. Neural Syst. 2012, 22, 1250002.
  45. Acharya, U.R.; Sudarshan, V.K.; Adeli, H.; Santhosh, J.; Koh, J.E.W. A novel depression diagnosis index using nonlinear features in EEG signals. Eur. Neurol 2015, 74, 79–83. [Google Scholar] [CrossRef]
  46. Acharya, U.R.; Faust, O.; Sree, S.V.; Molinari, F.; Garberoglio, R.; Suri, J.S. Cost-effective and non-invasive automated benign & malignant thyroid lesion classification in 3D contrast-enhanced ultrasound using combination of wavelets and textures: A class of thyroscan™ algorithms. Technol. Cancer Res. Treat 2011, 10, 371–380. [Google Scholar] [PubMed] [Green Version]
  47. Acharya, U.R.; Fujita, H.; Lih, O.S.; Adam, M.; Tan, J.H.; Chua, C.K. Automated detection of coronary artery disease using different durations of ECG segments with convolutional neural network. Knowl. Based Syst. 2017, 132, 62–71. [Google Scholar] [CrossRef]
  48. Raghavendra, U.; Bhat, N.S.; Gudigar, A.; Acharya, U.R. Automated system for the detection of thoracolumbar fractures using a CNN architecture. Future Gener. Comput. Syst. 2018, 85, 184–189. [Google Scholar] [CrossRef]
  49. Usharani, T.; Snekhalatha, U.; Palani, T.K.; Kumar, J. Human tongue thermography could be a prognostic tool for prescreening the type II diabetes mellitus. Evid. Based Complement. Altern. 2020, 2020, 3186208. [Google Scholar]
  50. Adapa, D.; Raj, A.N.J.; Alisetti, S.N.; Zhuang, Z.; Naik, G. A supervised blood vessel segmentation technique for digital fundus images using Zernike Moment based features. PLoS ONE 2020, 15, e0229831. [Google Scholar] [CrossRef]
  51. Zhuang, Z.; Fan, G.; Yuan, Y.; Raj, A.N.J.; Qiu, S. A fuzzy clustering based color-coded diagram for effective illustration of blood perfusion parameters in contrast-enhanced ultrasound videos. Comput. Methods. Progr. Biomed. 2019, 105233. [Google Scholar] [CrossRef]
  52. Noe, J.R.A. A multi-sensor system for silkworm cocoon gender classification via image processing and support vector machine. Sensors 2019, 19, 2056. [Google Scholar]
  53. Zhuang, Z. Nipple segmentation and localization using modified u-net on breast ultrasound images. J. Med. Imaging. Health Inform 2019, 9, 1827–1837. [Google Scholar] [CrossRef]
  54. Satapathy, S.C.; Rajinikanth, V. Jaya algorithm guided procedure to segment tumor from brain MRI. J. Optim. 2018, 2018, 3738049. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Structure of the customized VGG19 network.
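In the customized VGG19 of Figure 1, the deep feature vector is serially fused (concatenated) with handcrafted features before classification. A minimal sketch of that fusion step, with illustrative feature dimensions and an assumed per-feature z-score normalization (the paper's actual handcrafted descriptors and vector sizes are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature matrices for 8 MRI slices:
deep = rng.normal(size=(8, 4096))   # e.g. output of a VGG19 fully-connected layer
hand = rng.normal(size=(8, 64))     # e.g. texture/shape descriptors

def zscore(x):
    # Normalize each feature column so neither block dominates the fused vector
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-12)

# Serial fusion = concatenation along the feature axis
fused = np.concatenate([zscore(deep), zscore(hand)], axis=1)
print(fused.shape)  # -> (8, 4160)
```

The fused matrix is then what a classifier such as the SVM-RBF would be trained on.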
Figure 2. Sample test images with varied modalities. (a) Dataset employed, (b) Flair, (c) DW, (d) T1, (e) T1C and (f) T2.
Figure 3. Convergence of the training and validation process of AlexNet. (a) Accuracy. (b) Loss value.
Figure 4. Sample outcome attained with customized VGG19.
Figure 5. The area under the curve (AUC) attained for VGG19 with SVM-RBF classifier.
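The AUC reported in Figure 5 can be read as the probability that a randomly chosen abnormal slice receives a higher classifier score than a randomly chosen normal one. A minimal rank-based (Mann-Whitney) sketch of that interpretation, using toy scores rather than the paper's data:

```python
def auc_rank(pos_scores, neg_scores):
    """Mann-Whitney estimate of AUC: fraction of (pos, neg) pairs ranked correctly."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5   # ties count as half a correct ranking
    return wins / (len(pos_scores) * len(neg_scores))

# Toy example: 8 of the 9 pairs are ranked correctly, so AUC = 8/9
print(auc_rank([0.9, 0.8, 0.4], [0.3, 0.2, 0.5]))
```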
Figure 6. Classification results attained with customized VGG19 with SVM-RBF.
Figure 7. Graphical representation of the performance measures attained with customized VGG19.
Table 1. Test images considered in this work.

| Image Class | Modality | Number of Images for Training | Number of Images for Testing |
|---|---|---|---|
| Normal (BRATS + TCIA) | Mixed (Flair + T2) | 1000 | 400 |
| Abnormal (BRATS) | Flair | 1500 | 600 |
| Abnormal (BRATS) | T2 | 1500 | 600 |
| Abnormal (BRATS) | T1C | 1500 | 600 |
| Abnormal (TCIA) | T2 | 1000 | 400 |
| Abnormal (Clinical) | T2 | 200 | 200 |
Table 2. Performance measures attained with conventional DLA with SoftMax classifier.

| Network | Modality | TP | FN | TN | FP | ACC | PRE | SEN | SPE | F1S | NPV |
|---|---|---|---|---|---|---|---|---|---|---|---|
| AlexNet | Flair | 377 | 23 | 581 | 19 | 95.80 | 95.20 | 94.25 | 96.83 | 94.72 | 96.19 |
| AlexNet | T2 | 368 | 32 | 573 | 27 | 94.10 | 93.16 | 92.00 | 95.50 | 92.58 | 94.71 |
| AlexNet | T1C | 361 | 39 | 568 | 32 | 92.90 | 91.86 | 90.25 | 94.67 | 91.05 | 93.57 |
| VGG16 | Flair | 379 | 21 | 585 | 15 | 96.40 | 96.19 | 94.75 | 97.50 | 95.47 | 96.53 |
| VGG16 | T2 | 364 | 36 | 570 | 30 | 93.40 | 92.39 | 91.00 | 95.00 | 91.69 | 94.06 |
| VGG16 | T1C | 368 | 32 | 564 | 36 | 93.20 | 91.09 | 92.00 | 94.00 | 91.54 | 94.63 |
| VGG19 | Flair | 381 | 19 | 584 | 16 | 96.50 | 95.97 | 95.25 | 97.33 | 95.61 | 96.85 |
| VGG19 | T2 | 378 | 22 | 581 | 19 | 95.90 | 95.21 | 94.50 | 96.83 | 94.86 | 96.35 |
| VGG19 | T1C | 370 | 30 | 574 | 26 | 94.40 | 93.43 | 92.50 | 95.67 | 92.96 | 95.03 |
| ResNet50 | Flair | 371 | 29 | 567 | 33 | 93.80 | 91.83 | 92.75 | 94.50 | 92.29 | 95.13 |
| ResNet50 | T2 | 358 | 42 | 565 | 35 | 92.30 | 91.09 | 89.50 | 94.17 | 90.29 | 93.08 |
| ResNet50 | T1C | 362 | 38 | 566 | 34 | 92.80 | 91.41 | 90.50 | 94.33 | 90.95 | 93.71 |
| ResNet101 | Flair | 374 | 26 | 583 | 17 | 95.70 | 95.65 | 93.50 | 97.17 | 94.56 | 95.73 |
| ResNet101 | T2 | 361 | 39 | 568 | 32 | 92.90 | 91.85 | 90.25 | 94.67 | 91.05 | 93.57 |
| ResNet101 | T1C | 366 | 34 | 565 | 35 | 93.10 | 91.27 | 91.50 | 94.17 | 91.39 | 94.32 |
Table 3. Performance values obtained with VGG19 with chosen classifiers.

| Classifier | Modality | TP | FN | TN | FP | ACC | PRE | SEN | SPE | F1S | NPV |
|---|---|---|---|---|---|---|---|---|---|---|---|
| DT | Flair | 379 | 21 | 587 | 13 | 96.60 | 96.68 | 94.75 | 97.83 | 95.71 | 96.55 |
| DT | T2 | 377 | 23 | 584 | 16 | 96.10 | 95.93 | 94.25 | 97.33 | 95.08 | 96.21 |
| DT | T1C | 368 | 32 | 571 | 29 | 93.90 | 92.69 | 92.00 | 95.17 | 92.35 | 94.69 |
| KNN | Flair | 369 | 31 | 581 | 19 | 95.00 | 95.10 | 92.25 | 96.83 | 93.65 | 94.93 |
| KNN | T2 | 371 | 29 | 580 | 20 | 95.10 | 94.88 | 92.75 | 96.67 | 93.80 | 95.24 |
| KNN | T1C | 368 | 32 | 578 | 22 | 94.60 | 94.36 | 92.00 | 96.33 | 93.16 | 94.75 |
| SVM-Linear | Flair | 370 | 30 | 583 | 17 | 95.30 | 95.61 | 92.50 | 97.17 | 94.03 | 95.11 |
| SVM-Linear | T2 | 374 | 26 | 579 | 21 | 95.30 | 94.68 | 93.50 | 96.50 | 94.09 | 95.70 |
| SVM-Linear | T1C | 372 | 28 | 571 | 29 | 94.30 | 92.77 | 93.00 | 95.17 | 92.88 | 95.33 |
| SVM-RBF | Flair | 378 | 22 | 589 | 11 | 96.70 | 97.17 | 94.50 | 98.17 | 95.82 | 96.40 |
| SVM-RBF | T2 | 374 | 26 | 582 | 18 | 95.60 | 95.41 | 93.50 | 97.00 | 94.44 | 95.72 |
| SVM-RBF | T1C | 369 | 31 | 574 | 26 | 94.30 | 93.42 | 92.25 | 95.67 | 92.83 | 94.88 |
Table 4. Performance values obtained with the proposed VGG19 architecture.

| Dataset | Classifier | Modality | TP | FN | TN | FP | ACC | PRE | SEN | SPE | F1S | NPV |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BRATS | DT | Flair | 388 | 12 | 589 | 11 | 97.70 | 97.24 | 97.00 | 98.17 | 97.12 | 98.00 |
| BRATS | DT | T2 | 385 | 15 | 586 | 14 | 97.10 | 96.49 | 96.25 | 97.67 | 96.37 | 97.50 |
| BRATS | DT | T1C | 383 | 17 | 581 | 19 | 96.40 | 95.27 | 95.75 | 96.83 | 95.51 | 97.16 |
| BRATS | KNN | Flair | 387 | 13 | 591 | 9 | 97.80 | 97.73 | 96.75 | 98.50 | 97.24 | 97.85 |
| BRATS | KNN | T2 | 384 | 16 | 588 | 12 | 97.20 | 96.96 | 96.00 | 98.00 | 96.48 | 97.35 |
| BRATS | KNN | T1C | 385 | 15 | 589 | 11 | 97.40 | 97.22 | 96.25 | 98.17 | 96.73 | 97.52 |
| BRATS | SVM-Linear | Flair | 392 | 8 | 592 | 8 | 98.40 | 98.00 | 98.00 | 98.67 | 98.00 | 98.67 |
| BRATS | SVM-Linear | T2 | 391 | 9 | 590 | 10 | 98.10 | 97.50 | 97.75 | 98.33 | 97.63 | 98.50 |
| BRATS | SVM-Linear | T1C | 384 | 16 | 587 | 13 | 97.10 | 96.72 | 96.00 | 97.83 | 96.36 | 97.34 |
| BRATS | SVM-RBF | Flair | 395 | 5 | 596 | 4 | 99.10 | 99.00 | 98.75 | 99.33 | 98.87 | 99.17 |
| BRATS | SVM-RBF | T2 | 394 | 6 | 595 | 5 | 98.90 | 98.74 | 98.50 | 99.17 | 98.62 | 99.00 |
| BRATS | SVM-RBF | T1C | 389 | 11 | 588 | 12 | 97.70 | 97.01 | 97.25 | 98.00 | 97.13 | 98.16 |
| TCIA | SVM-RBF | T2 | 393 | 7 | 391 | 9 | 98.00 | 97.76 | 98.25 | 97.75 | 98.01 | 98.24 |
| Clinical | SVM-RBF | T2+Flair | 395 | 5 | 194 | 6 | 98.17 | 98.50 | 98.75 | 97.00 | 98.63 | 97.49 |
Table 5. Performance validation of proposed VGG19 with the existing methods.

| Reference | Approach | Accuracy (%) |
|---|---|---|
| Amin et al. [11] | Machine learning with fused features | 97 (SVM); 98 (Naïve-Bayes); 86 (Ensemble); 97 (DT); 97 (KNN) |
| Mallick et al. [28] | Deep neural network | 89 |
| Sharif et al. [30] | Deep learning with feature fusion | 97.8 (SoftMax); 93.6 (MSVM); 92.2 (KNN); 94.4 (Ensemble) |
| Gudigar et al. [17] | Shearlet transform + texture + PSO SVM | 97.38 |
| Khawaldeh et al. [42] | Convolutional neural networks | 91.16 |

Share and Cite

Rajinikanth, V.; Joseph Raj, A.N.; Thanaraj, K.P.; Naik, G.R. A Customized VGG19 Network with Concatenation of Deep and Handcrafted Features for Brain Tumor Detection. Appl. Sci. 2020, 10, 3429. https://doi.org/10.3390/app10103429
