Article

Earlier Detection of Brain Tumor by Pre-Processing Based on Histogram Equalization with Neural Network

M. Ramamoorthy, Shamimul Qamar, Ramachandran Manikandan, Noor Zaman Jhanjhi, Mehedi Masud and Mohammed A. AlZain
1 Department of Artificial Intelligence and Machine Learning, Saveetha Institute of Medical and Technical Science, Saveetha School of Engineering, Chennai 600124, India
2 Computer Science and Engineering, Faculty of Sciences & Managements, King Khalid University, Dhahran Al Janub, Abha 64351, Saudi Arabia
3 School of Computing, SASTRA Deemed University, Thanjavur 613401, India
4 School of Computer Science (SCS), Taylor’s University, Subang Jaya 47500, Malaysia
5 Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
6 Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
* Author to whom correspondence should be addressed.
Healthcare 2022, 10(7), 1218; https://doi.org/10.3390/healthcare10071218
Submission received: 11 May 2022 / Revised: 11 June 2022 / Accepted: 20 June 2022 / Published: 29 June 2022
(This article belongs to the Special Issue 2nd Edition of Cancer in Human Health and Healthcare)

Abstract

MRI is a powerful diagnostic imaging technology used to detect pathological changes in tissues and organs at an early stage, and it is a non-invasive imaging method. Medical image segmentation is a complex and challenging process due to the intrinsic nature of the images. MRI is the most significant imaging analysis approach and has long been used to detect abnormalities in tissues and human organs. In this work, the image is prepared for CAD (computer-assisted diagnosis) using image processing techniques with deep learning in order to perceive a brain tumor in a person showing early signs of one. The input image is first preprocessed using AHCN-LNQ (adaptive histogram contrast normalization with learning-based neural quantization). Compared with existing techniques, the simulation outcome shows that the proposed method achieves an accuracy of 93%, a precision of 92%, and a specificity of 94%.

1. Introduction

Cancer remains a dangerous illness with several subtypes, posing numerous obstacles in biomedical research. Images and image sequences make up about 80% of all corporate and public unstructured big data. Image fusion integrates complementary information from multiple source images and constructs a new image. The main purpose of image fusion is to produce a more distinct, reliable, and accurate image than the source images. At present, image fusion is practiced in a vast range of applications such as medical imaging, remote sensing, surveillance, and computer vision. Image fusion is performed at three distinct levels: feature level, pixel level, and decision level [1]. Among these, feature-level and decision-level fusion integrate image features in terms of feature descriptors and probabilistic variables. However, these variables do not offer detailed descriptions of the various information sources, which leads to a loss of information during the fusion process.
In this context, pixel-level fusion combines the source images pixel by pixel. The advantage of pixel-level fusion is that it can be carried out in both the transform and spatial domains of the image. Spatial-domain image fusion models investigate the spatial saliency and gradients of images, spatial frequency, dense scale-invariant features, quadtree structures, and so on. From these variables, weights associated with regions or pixels are identified, and the source images are combined to produce linear or non-linear fusion results. However, these spatial-domain features do not describe the source image accurately, and some fused image content is lost.
Transform-domain pixel-level fusion consists of a set of algorithms such as multiscale analysis, pyramid transforms, DCT, DWT, SWT, CVT, CT, NSCT, and so on. These techniques provide multiscale transformation based on basic mathematical functions suited to the applicability and role of the images. For example, DCT captures the periodic texture features of an image but fails to identify point features; DWT provides details of an image but is not efficient when various directions must be considered [2]. Automated techniques are being chosen over manual strategies. It is observed that transform-based techniques are not adequate for extracting and preserving the salient information of the source image, which in turn reduces the performance of image fusion. To overcome this limitation, sparse representation-based methods emerged that make use of the transform domain. Several multiscale analysis techniques have evolved based on projecting the source image onto an overcomplete dictionary, with image reconstruction performed through linear combination.
Generally, an image dictionary is trained on high-quality images and describes the standard structure of an image. Due to the limited traits of image pixels, a universal image feature dictionary is insufficient for representing image content with irregular details and causes over-smooth fused results. The ability to explain the discovered features in many image sets at various scales achieves optimum simplicity in defining the clinical determination and the treatment course [3]. Using MRI, radiologists can easily analyze the pathology of a patient's brain, complementing the strategy of CT (computed tomography). Image inspection is used not only for the initial diagnosis but also for planning, assessing the treatment plan's feasibility, and other aspects of clinical care. MR image processing depicts the brain's anatomy in various planes and reveals data about static structures, as well as about underlying tissue integrity and motion [4].
The rest of the paper is organized as follows: a review of the literature is presented in the Related Works subsection; Section 2 describes the AHCN-LNQ method and its methodology; Section 3 presents the performance analysis; Section 4 offers the results and discussion; and Section 5 provides the conclusion and future scope.

Related Works

Brain disease recognition is an emerging and reliable field that relies on medical imaging. Various image processing methods are used to explore brain disease. An automated segmentation method for brain tissue using an enhanced k-means technique is presented in [5]. Tumorous brain tissue varies in shape and size and can occur in various locations. Manually detecting a brain tumor is not only time-consuming but also prone to human error, since it relies on the knowledge and experience of medical pathologists. Automatic classification is therefore used to identify cancerous cells. To classify MR images, the author in [6] applies VGG-16, ResNet-34, AlexNet, ResNet-50, and ResNet-18 to sort images into inflammatory, cerebrovascular, degenerative, and neoplastic categories, and compares their classification performance to that of state-of-the-art pre-trained models. An accuracy of 95.23% ± 0.6 is achieved with the ResNet-50 model among the five models. Large anomalies were detected using brain MRI images, and the model outputs can support clinicians in evaluating the manual analysis of MRI images. Researchers presented a technique for locating and identifying malignant cells in the brain in [7], which involves five steps: pre-processing, histogram clustering, morphological operations, image capture, and edge detection. Out of a total of 100 brain images, they used 50 for scheme development and 50 for scheme testing. The proposed method studies cancerous cells in the brain for identification and localization of the tumor; however, brain tumor detection through identification alone is not possible using this technique. For more accurate detection of brain tumors, MRI is used, as presented in [8]. It is difficult to find a tumor at an initial stage, and various research efforts have been carried out on more accurate tumor identification; hence, a CNN was proposed to find brain tumor intensity with high distinctness.
Another study [9] obtained filtered images as part of the pre-processing steps when processing the malignant cells of the brain, which was also necessary inside the cancerous region. Filtering before segmentation is used as an image processing method for noise removal. For cancerous cell and brain tumor identification, Otsu and cuckoo methods are introduced in [10], which use MRI to determine tumor identification and formation. The input images are processed into noise- and error-free images to detect tumors with greater accuracy, and a spatial filtering method was used for image smoothing to support better identification. Using the Otsu method, the input image pixel values are increased, and the cuckoo method finds the segmented pixels associated with the tumor and the image values. The author of [11] suggested brain tumor identification approaches that employ k-means segmentation and various MRI brain images to evaluate brain tumors. A novel method for identifying tumors using BWT and SVM techniques is proposed in [12]; this method achieves 95% accuracy in extracting non-brain tissue. In [13], the author proposed a clustering method for image segmentation and identification, where a contour method is used for tumor segmentation. In [14], the author introduces a Gaussian mixture model for feature extraction and brain tumor detection. BrainMRNet, a neural network, was introduced in [15]. This design is based on attention modules and a hypercolumn approach built on residual networks, and it includes a pre-processing stage.
Limitations of the existing techniques are as follows:
  • High cost, since the inbuilt patterns of existing methods are complex;
  • Precision is still a challenge, and the dice score is usually about 90% or less;
  • The use of a flattening layer on spatial data leads to loss of information in the feature maps;
  • The existing techniques take more time to converge;
  • With the increase in dimensionality, training time increases and the methods become more time-consuming;
  • When the number of voxels is large, network design incurs high computational complexity;
  • With a larger data set population, overfitting issues arise.

2. Research Methodology

A novel technique for detecting brain tumors is introduced in this section. The method combines an enhanced classification method with a pre-processing step and achieves better performance than existing methods. The architecture of the proposed method is as follows:
Figure 1 shows the architecture of the proposed method. Contrast adaptive histogram equalization is used to pre-process the input images and remove noise. The Otsu threshold is used to segment the images, measure the threshold value, and extract features. After extraction, the image is classified using learning-based neural quantization, and the brain tumor is identified using CAD. Image enhancement is achieved by contrast adaptive histogram equalization based on probability theory, which translates the histogram into a pixel mapping of gray levels to acquire precise, uniform gray levels. For the original image, the gray-level value of a pixel is r (0 ≤ r ≤ 1) with probability density $p_r(r)$; for the enhanced image, the gray-level value of a pixel is s (0 ≤ s ≤ 1) with probability density $p_s(s)$, and the mapping is given by s = T(r). By the physical meaning of the histogram, all equalized histogram bars have a similar height, which is expressed in Equation (1). All of the formulas below refer to Algorithm 1.
$p_s(s)\,ds = p_r(r)\,dr$  (1)
Here, s = T(r) is a monotonically increasing function on the interval, and its inverse function r = T−1(s) is also monotonic. From Equation (1), Equation (2) is given as
$p_s(s) = p_r(r)\left[\dfrac{1}{ds/dr}\right]_{r=T^{-1}(s)} = p_r(r)\,\dfrac{1}{p_r(r)} = 1$  (2)
For the conventional histogram equalization method, the mapping relationship is as follows. For the discrete case, the relation between the gray level i and $f_i$ is expressed in Equation (3):
$f_i = (m-1)\,T(r) = (m-1)\sum_{k=0}^{i}\dfrac{q_k}{Q}$  (3)
where m denotes the number of gray levels in the original image, $q_k$ is the number of image pixels with the kth gray level, and Q is the total number of image pixels. If an image has n gray levels and the probability of the ith gray level is $p_i$, its entropy is represented in Equation (4):
$e_i = -p_i \log p_i$  (4)
The entropy of the complete image is given in Equation (5):
$E = \sum_{i=0}^{n-1} e_i = -\sum_{i=0}^{n-1} p_i \log p_i$  (5)
When $p_0 = p_1 = \cdots = p_{n-1} = \frac{1}{n}$, E attains its maximum; that is, when the image histogram is distributed uniformly, the entropy of the entire image reaches its maximum level. Equalization is more effective when the dynamic range is enlarged, as represented in Equation (3), and the interval is enlarged by quantization of the equalization.
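To make the discrete mapping of Equation (3) and the entropy of Equations (4) and (5) concrete, a minimal NumPy sketch is given below; the function names and the 256-level (8-bit) assumption are ours, not part of the paper.

```python
import numpy as np

def histogram_equalize(image, n_gray=256):
    """Discrete gray-level mapping of Equation (3):
    f_i = (m - 1) * sum_{k=0..i} q_k / Q, applied pixel-wise."""
    q_k, _ = np.histogram(image.ravel(), bins=n_gray, range=(0, n_gray))
    Q = image.size                                    # total number of pixels
    mapping = np.round((n_gray - 1) * np.cumsum(q_k) / Q).astype(np.uint8)
    return mapping[image]

def image_entropy(image, n_gray=256):
    """Entropy of Equations (4)-(5); it is maximal for a uniform histogram."""
    p_i, _ = np.histogram(image.ravel(), bins=n_gray, range=(0, n_gray))
    p_i = p_i / p_i.sum()
    p_i = p_i[p_i > 0]
    return float(-np.sum(p_i * np.log2(p_i)))
```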

2.1. Pre-Processing Using Adaptive Histogram Contrast Normalization

AHCN (adaptive histogram contrast normalization) is a popular and simple method for image improvement. It improves the density distribution and contrast of the output image and mainly concentrates on dynamically stretching the image. To obtain the maximum output, the contrast of the input image is enhanced, so the visual quality and brightness of the input image change. The method does not apply to uniform-intensity images. The target image is divided into tiles using the histogram, and each tile attains its own degree of freedom in this technique. Using bilinear interpolation, boundary remapping is executed to smooth the tiles. Using the CDF (cumulative distribution function), the amplification of the histogram is limited to definite values, and counts above the clip limit of a histogram tile are redistributed into the bins. In this way, image contrast is improved using AHCN. The pixel value of the original image is a (0 ≤ a ≤ 1) with probability density p(a); the pixel value of the improved image is b (0 ≤ b ≤ 1) with probability density p(b); and b = T(a) is the mapping function, which is represented in Equation (6).
$p_a(a)\,da = p_b(b)\,db$  (6)
For the inverse mapping b = T−1(a), Equations (7) and (8) are given as
$p_a(a) = p_b(b)\left[\dfrac{1}{da/db}\right]_{b=T^{-1}(a)}$  (7)
$p_a(a) = p_b(b)\,\dfrac{1}{p_b(b)} = 1$  (8)
The relationship between the original image gray level i and the enhanced image gray level $f_i$ is given in Equations (9) and (10):
$f_i = (l-1)\,T(b)$  (9)
$f_i = (l-1)\sum_{k=0}^{i}\dfrac{t_k}{T_n}$  (10)
where $t_k$ is the number of image pixels with the kth intensity and $T_n$ is the total number of pixels. Considering an image with n intensity levels in which the probability of the ith level is $p_i$, the entropy is given in Equation (11):
$en_i = -p_i \log p_i$  (11)
For the whole image, the entropy is given in Equations (12) and (13):
$E_n = \sum_{i=0}^{n-1} en_i$  (12)
$E_n = -\sum_{i=0}^{n-1} p_i \log p_i$  (13)
where $p_0 = p_1 = \cdots = p_{n-1} = \frac{1}{n}$ at the maximum.
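The procedure described here (tiling, clip-limited histograms, bilinear interpolation at tile boundaries) closely resembles contrast-limited adaptive histogram equalization, so the pre-processing stage can be sketched with OpenCV's CLAHE. This is a stand-in under that assumption, not necessarily the authors' exact implementation, and the clip limit and tile grid values are illustrative.

```python
import cv2

def ahcn_preprocess(gray_image, clip_limit=2.0, tiles=(8, 8)):
    """Tile-based, contrast-limited histogram equalization with bilinear
    interpolation between tiles, used as a stand-in for the AHCN
    pre-processing of Section 2.1."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tiles)
    return clahe.apply(gray_image)

# Example: enhance an MRI slice loaded as an 8-bit grayscale array.
# slice_8bit = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE)
# enhanced = ahcn_preprocess(slice_8bit)
```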

2.2. Segmentation Using OTSU Thresholding Method

If the segmentation is not performed correctly, the subsequent phases of the system will produce misleading data, affecting the system's overall performance. The suggested study uses Otsu's technique, which yields greater detection accuracy and is represented in Equation (14).
$\sigma^2 = \dfrac{\sum_{i=0}^{N}\left(X_i - \mu\right)^2}{N}$  (14)
where $X_i$ denotes the image pixel value, $\mu$ is the mean, and N is the number of pixels in one image. Otsu's technique is utilized in a variety of image processing programs that perform histogram-based picture thresholding or convert grayscale images to binary. The image is divided into two classes to calculate the optimum threshold, by searching for the threshold value that minimizes the intraclass variance, defined as the weighted sum of the variances of the two classes in Equation (15):
$\sigma_\omega^2(t) = \omega_0(t)\,\sigma_0^2(t) + \omega_1(t)\,\sigma_1^2(t)$  (15)
where $\omega_0$ and $\omega_1$ are the probabilities (weights) of the two classes separated by the threshold t, and $\sigma_0^2$ and $\sigma_1^2$ are the variances of the two classes. As shown in Equations (16) and (17), Otsu's method establishes that minimizing the intraclass variance is equivalent to maximizing the interclass variance:
$\sigma_b^2(t) = \sigma^2 - \sigma_\omega^2(t)$  (16)
$= \omega_1(t)\,\omega_2(t)\left[\mu_1(t) - \mu_2(t)\right]^2$  (17)
where the class probabilities are denoted by $\omega_1$ and $\omega_2$ and the class means by $\mu_i$.
The class probability $\omega_1(t)$ is evaluated from the histogram as in Equation (18):
$\omega_1(t) = \sum_{i=0}^{t} P(i)$  (18)
The class mean $\mu_1(t)$ is represented in Equation (19):
$\mu_1(t) = \sum_{i=0}^{t} P(i)\,x(i)$  (19)
where x(i) is the center value of the ith histogram bin.
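The threshold search implied by Equations (15)–(19) can be illustrated with the NumPy sketch below, which scans all candidate thresholds and keeps the one maximizing the interclass variance; the function name is ours, and cv2.threshold with the cv2.THRESH_OTSU flag would give an equivalent result.

```python
import numpy as np

def otsu_threshold(gray_image, n_gray=256):
    """Exhaustive search for the threshold t maximizing the interclass
    variance w0(t) * w1(t) * (mu0(t) - mu1(t))^2 (Equations (15)-(19))."""
    hist, _ = np.histogram(gray_image.ravel(), bins=n_gray, range=(0, n_gray))
    p = hist / hist.sum()              # P(i): probability of gray level i
    levels = np.arange(n_gray)         # x(i): bin centers

    best_t, best_var = 0, -1.0
    for t in range(1, n_gray):
        w0, w1 = p[:t].sum(), p[t:].sum()          # class weights
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (p[:t] * levels[:t]).sum() / w0      # class means
        mu1 = (p[t:] * levels[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2   # interclass variance
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# binary_mask = gray_image > otsu_threshold(gray_image)
```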

2.3. Classification Using LNQ (Learning-Based Neural Quantization)

In general, NNs have been utilized for pattern identification and recognition more than any other approach documented in current works on ANN applications. LNQ is developed by utilizing neural theory and is enhanced based on learning quantization. To handle errors due to statistical measurement, LNQ applies number quantization to the neuron activations. Normalized triangular quantization statistics are used for every component in NL as well as for the input vector, with the largest correlation rate equal to 1. Since LNQ is used to handle neural standards, the Euclidean distance of traditional learning quantization is replaced by a similarity value derived using a max–min operation between the input and reference vectors; the max–min operation is altered to accommodate the two vectors in the network structure. The network consists of an input layer, a cluster layer, and an output layer. The input-layer neurons are connected to clusters of hidden-layer neurons, which combine the input data according to its category. As a result, each category has a larger number of neurons in each cluster of the hidden layer, where every neuron corresponds to one of the sensors. Every cluster has a neural codebook vector that identifies the category it represents. The input vector is presented to each neural cluster, and each cluster performs the same learning-quantization computation between the input vector and its reference vector using the modified max–min operation. A minimum operation is then executed on each cluster, and the propagated output is the one with the highest similarity to its reference vector. Because the reference vectors for learning quantization are established adaptively during the learning process, learning quantization can readily identify input vectors that follow the statistical distribution of the input data. Quantization of separate standards is denoted by NL, and it applies to weights or activation values. For a weight of significance w, the quantized version $\tilde{w}$ is given in Equation (20).
$\tilde{w} = w + n_w$  (20)
where $n_w$ denotes the quantization noise.
If w follows a uniform, Laplacian, Gamma, or Gaussian distribution, the SQNR $\gamma_w$ of the quantization procedure is given in Equation (21):
$10\log\gamma_w = 10\log\dfrac{\mathbb{E}\left[w^2\right]}{\mathbb{E}\left[n_w^2\right]} \approx \kappa\cdot\beta$  (21)
where $\kappa$ is the quantization efficiency and $\beta$ is the bit-width of the quantizer.
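The text does not spell out the max–min operator or the quantizer, so the following sketch is an assumed form: a min-over-max similarity between the input vector and each cluster's codebook vector, plus a generic uniform weight quantizer realizing $\tilde{w} = w + n_w$ from Equation (20) and the SQNR of Equation (21). All function names and parameters here are ours.

```python
import numpy as np

def max_min_similarity(x, ref):
    """Assumed max-min similarity between a non-negative input vector and a
    codebook (reference) vector; equals 1 when the vectors are identical."""
    denom = np.sum(np.maximum(x, ref))
    return np.sum(np.minimum(x, ref)) / denom if denom else 0.0

def lnq_classify(x, codebooks):
    """Assign the input to the category whose codebook vector is most similar."""
    return max(codebooks, key=lambda label: max_min_similarity(x, codebooks[label]))

def quantize_weights(w, beta=8):
    """Uniform symmetric quantization to beta bits: returns w_tilde and the
    quantization noise n_w so that w_tilde = w + n_w (Equation (20))."""
    w = np.asarray(w, dtype=np.float64)
    scale = max(np.max(np.abs(w)) / (2 ** (beta - 1) - 1), 1e-12)
    w_tilde = np.round(w / scale) * scale
    return w_tilde, w_tilde - w

def sqnr_db(w, n_w):
    """10*log10(E[w^2] / E[n_w^2]); grows roughly linearly with beta (Equation (21))."""
    return 10.0 * np.log10(np.mean(np.square(w)) / np.mean(np.square(n_w)))
```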
Algorithm 1: Using AHCN-LNQ, image features are pre-processed and learned.
Input: $\alpha_x$ and $\beta_y$
Output: $\delta_b$
1. In the image array, calculate the histogram of every contextual region with respect to the neighboring gray levels.
2. Using the clip-limit (CL) value $N_{avg} = \frac{N_{rX} \times N_{rY}}{N_{gray}}$, calculate the contrast-limited histogram of the contextual region, where $N_{avg}$ is the average number of pixels, $N_{gray}$ is the number of gray levels, and $N_{rX}$ and $N_{rY}$ are the numbers of pixels in the X and Y dimensions.
3. $N_{CL} = N_{clip} \times N_{avg}$ is the actual CL, where $N_{clip}$ is the normalized CL in the range [0, 1]. Pixels are clipped when $N_{CL}$ is less than the number of pixels in a bin.
4. $N_{clip}$ here denotes the total number of clipped pixels, and $N_{avggray} = N_{clip}/N_{gray}$ gives the number of clipped pixels per gray level.
5. Redistribute the remaining pixels until all of the clipped pixels have been scattered.
6. The input probability P(i) of the clipped histogram is used to create the transfer function that enhances the intensity values (see the sketch after this algorithm).
7. Within each sub-matrix contextual region, evaluate the new gray-level assignment of the pixels.
8. Find $\delta_b$.
9. End process.
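As referenced in step 6, the following NumPy sketch illustrates steps 2–6 for one contextual region under the stated formulas: clip the tile histogram at $N_{CL} = N_{clip} \times N_{avg}$, spread the clipped counts evenly over the gray levels, and build the transfer function from the clipped histogram's cumulative probabilities. The single-pass redistribution and the default clip value are our simplifications.

```python
import numpy as np

def clip_and_redistribute(tile_hist, n_clip=0.9):
    """Steps 2-5 of Algorithm 1 for one contextual region: derive the actual
    clip limit N_CL = N_clip * N_avg, clip each bin at N_CL, and spread the
    clipped counts evenly across the gray levels (single pass)."""
    hist = tile_hist.astype(np.float64)
    n_gray = hist.size
    n_avg = hist.sum() / n_gray                    # N_avg = (N_rX * N_rY) / N_gray
    n_cl = n_clip * n_avg                          # actual clip limit
    excess = np.sum(np.maximum(hist - n_cl, 0.0))  # total clipped pixels
    return np.minimum(hist, n_cl) + excess / n_gray

def transfer_function(clipped_hist, n_gray=256):
    """Step 6: intensity mapping built from the clipped histogram's CDF."""
    p = clipped_hist / clipped_hist.sum()
    return np.round((n_gray - 1) * np.cumsum(p)).astype(np.uint8)

# tile = contextual_region            # one contextual region of the image (uint8)
# hist, _ = np.histogram(tile.ravel(), bins=256, range=(0, 256))
# mapping = transfer_function(clip_and_redistribute(hist))
# remapped_tile = mapping[tile]
```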

3. Performance Analysis

The performance of the proposed architecture is explained in this section. The output images and their graphs are as follows:

3.1. Accuracy

Accuracy is the number of correct predictions divided by the total number of predictions. For binary classification, it is calculated in terms of positives and negatives, as represented in Equation (22):
$\text{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN}$  (22)
where true positives are denoted as TP, false positives as FP, true negatives as TN, and false negatives as FN.

3.2. Precision

Precision is the ratio of true positives to all predicted positives, and it is represented in Equation (23):
$\text{Precision} = \dfrac{TP}{TP + FP}$  (23)

3.3. Specificity

Specificity measures the proportion of actual negatives that are correctly identified; it is also called the true negative rate and is represented in Equation (24):
$\text{Specificity} = \dfrac{TN}{TN + FP}$  (24)
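For completeness, Equations (22)–(24) can be computed from predicted and reference labels as in the sketch below, which assumes a binary labelling (1 = tumor, 0 = no tumor); the function name is ours.

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy, precision, and specificity from binary labels,
    following Equations (22)-(24)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return accuracy, precision, specificity

# acc, prec, spec = classification_metrics([1, 0, 1, 1], [1, 0, 0, 1])
```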

3.4. Images 1–3

Figure 2, Figure 3 and Figure 4 show the processing of images 1, 2, and 3. Each input image is first pre-processed and then segmented. After that, the tumor is detected and extracted using the feature extraction technique. Finally, the brain tumor is classified and detected.

4. Results and Discussion

Table 1 presents a comparison of brain tumor detection among NN, CNN, k-means, and the proposed AHCN-LNQ.
Figure 5 shows a comparison of accuracy. The X-axis gives the number of epochs, and the Y-axis gives accuracy in %. Compared with existing methods such as NN, CNN, and k-means, the proposed method attains higher accuracy for all images.
Figure 6 represents a comparison of precision. The X-axis gives the number of epochs, and the Y-axis gives precision in %. Compared with the other methods, the proposed method achieves higher precision for all images.
Figure 7 shows the comparison of specificity. The X-axis indicates the number of epochs, and the Y-axis indicates specificity in %. Compared with the existing methods, the proposed method achieves higher specificity for all images. Table 2 shows a comparison of the recognition rate between the existing and proposed methods. Compared with the existing methods, the proposed method shows a higher recognition rate.
Figure 8 gives the training set accuracy. The X-axis gives the number of samples, and the Y-axis gives the recognition rate in percentage. Compared with existing techniques such as NN, CNN, and k-means, the proposed AHCN-LNQ method achieves optimal accuracy.

4.1. Loss Function of Training Set

Table 3 presents a comparison of the training set loss function. Compared with existing methods, the proposed method achieves a lower loss function on the training set.
Figure 9 shows the training set loss function. Compared with the other existing neural network techniques, which yield higher loss values, the proposed method achieves a lower loss function on the training set.

4.2. Loss Function of Testing Set

Table 4 gives the comparison of the testing set loss function. The proposed method achieves a lower loss function on the testing set than the existing methods.
Figure 10 represents a comparison of the testing set loss function. Compared with the other existing neural network techniques, which yield higher loss values, the proposed method achieves a lower loss function on the testing set.

5. Conclusions

MRI is an important diagnostic imaging technology that is utilized to detect pathological changes in tissues and organs early, and it is a non-invasive imaging method. This paper proposes the detection of brain tumors using pre-processing through AHCN-LNQ. Detection and performance analysis of brain tumors were conducted in an enhanced manner, and the proposed method achieves optimized outputs in terms of precision, specificity, and accuracy. The recognition rate was calculated for different sample numbers with the proposed and existing methods. Compared with other neural network techniques, the proposed method accomplishes a lower loss function for both the testing and training sets, and the loss function is maintained as the number of samples increases. For all parameters, the proposed technique obtained optimal output even for various input samples, and this method yielded optimized brain tumor detection with an accuracy of 93%, a precision of 92%, and a specificity of 94%. Future work includes the earlier detection of brain neural system diseases and their symptoms for various data sets.

Author Contributions

M.R., S.Q., R.M., N.Z.J., M.M. and M.A.A. have equal contributions. All authors have read and agreed to the published version of the manuscript.

Funding

Taif University Researchers Supporting project number (TURSP-2020/98), Taif University, Taif, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data will be shared for review based on the editorial reviewer’s request.

Acknowledgments

The authors would like to acknowledge Taif University Researchers Supporting Project number (TURSP-2020/98) Taif University, Taif, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. ALzubi, J.A.; Bharathikannan, B.; Tanwar, S.; Manikandan, R.; Khanna, A.; Thaventhiran, C. Boosted neural network ensemble classification for lung cancer disease diagnosis. Appl. Soft Comput. 2019, 80, 579–591.
  2. Rockne, R.; Alvord, E.C.; Rockhill, J.K., Jr.; Swanson, K.R. A mathematical model for brain tumor response to radiation therapy. J. Math. Biol. 2008, 58, 561–578.
  3. Alexander, D.C. Imaging brain microstructure with diffusion MRI practicality and applications. NMR Biomed. 2019, 32, e3841.
  4. Malik, H.; Fatema, N.; Alzubi, J.A. (Eds.) AI and Machine Learning Paradigms for Health Monitoring System: Intelligent Data Analytics; Springer: New York, NY, USA, 2021.
  5. Archana, K.S.; Kathiravan, M. Automatic Brain Tissue Segmentation using Modified K-Means Algorithm Based on Image Processing Techniques. Int. J. Innov. Technol. Explor. Eng. 2019, 8, 664–666.
  6. Talo, M. Convolutional neural networks for multi-class brain disease detection using MRI images. Comput. Med. Imaging Graph. 2019, 78, 102–118.
  7. Azhari, E.E.M.; Hatta, M.M.M.; Htike, Z.Z.; Win, S.L. Brain tumor detection and localization in magnetic resonance imaging. Int. J. Inf. Technol. Converg. Serv. 2014, 4, 1.
  8. Nave, O. A mathematical model for treatment using chemo-immunotherapy. Heliyon 2022, 8, e09288.
  9. Alzubi, J.A.; Kumar, A.; Alzubi, O.; Manikandan, R. Efficient Approaches for Prediction of Brain Tumor using Machine Learning Techniques. Indian J. Public Health Res. Dev. 2019, 10, 267–272.
  10. Lakshmi, S.; Swasthika, E.D. A novel approaches for detecting tumors and brain tumors in brain. ARPN J. Eng. Appl. Sci. 2016, 11, 7035–7040.
  11. Falco, J.; Agosti, A.; Vetrano, I.G.; Bizzi, A.; Restelli, F.; Broggi, M.; Schiariti, M.; DiMeco, F.; Ferroli, P.; Ciarletta, P.; et al. In Silico Mathematical Modelling for Glioblastoma: A Critical Review and a Patient-Specific Case. J. Clin. Med. 2021, 10, 2169.
  12. Bahadure, N.B.; Ray, A.K.; Thethi, H.P. Image analysis for MRI based brain tumor detection and feature extraction using biologically inspired BWT and SVM. Int. J. Biomed. Imaging 2017, 2017, 9749108.
  13. Cui, W.; Wang, Y.; Fan, Y.; Feng, Y.; Lei, T. Localized FCM clustering with spatial information for medical image segmentation and bias field estimation. Int. J. Biomed. Imaging 2013, 2013, 13.
  14. Chaddad, A. Automated feature extraction in brain tumor by magnetic resonance imaging using Gaussian mixture models. Int. J. Biomed. Imaging 2015, 2015, 8.
  15. Mesut, T.; Ergen, B.; Cömert, Z. BrainMRNet: Brain tumor detection using magnetic resonance images with a novel convolutional neural network model. Med. Hypotheses 2020, 134, 102–117.
Figure 1. Proposed architecture of AHCN-LNQ.
Figure 2. Operation of input image 1.
Figure 3. Operation of input image 2.
Figure 4. Operation of input image 3.
Figure 5. Comparison of accuracy.
Figure 6. Comparison of precision.
Figure 7. Comparison of specificity.
Figure 8. Accuracy of training sets.
Figure 9. Loss function of training set.
Figure 10. Loss function of testing set.
Table 1. Comparative analysis for brain tumor detection.

Parameters     NN     CNN     K-Means     AHCN-LNQ
Accuracy       92     93      94          95
Precision      85     86.5    89          89.5
Specificity    89     89.3    89.5        89.9

Table 2. Sample rate calculation on basis of recognition rate.

Samples    NN     CNN     K-Means     AHCN-LNQ
1          70     73      76          80
2          81     84      84          93
3          83     87      93          94
4          87     88      91          94
5          85     89      92          94

Table 3. Loss function for training set.

Samples    NN      CNN     K-Means     AHCN-LNQ
1          1.27    1.24    1.10        1.05
2          1.68    0.96    0.84        0.82
3          1.68    0.96    0.83        0.82
4          1.68    0.97    0.85        0.81
5          1.69    0.97    0.84        0.82

Table 4. Loss function for testing set.

Samples    NN      CNN     K-Means     AHCN-LNQ
1          1.27    1.20    0.98        0.95
2          0.95    0.98    0.94        0.92
3          0.95    0.98    0.94        0.92
4          0.95    0.98    0.95        0.93
5          0.96    0.97    0.95        0.91
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

