Article

Intelligent Deep-Learning-Enabled Decision-Making Medical System for Pancreatic Tumor Classification on CT Images

1 Department of Computer Sciences, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
2 Department of Computer Science and Information Systems, College of Applied Sciences, AlMaarefa University, Ad Diriyah, Riyadh 13713, Saudi Arabia
3 Department of Computer Science and Engineering, Sphoorthy Engineering College, Telangana, Hyderabad 501510, India
4 Department of Electrical and Electronics Engineering, J. B. Institute of Engineering and Technology, Telangana, Hyderabad 500075, India
5 Department of Information Systems, College of Computing and Information System, Umm Al-Qura University, Mecca 21911, Saudi Arabia
6 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
7 Research Centre, Future University in Egypt, New Cairo, Cairo 11745, Egypt
8 Department of Computer Science, College of Science & Art at Mahayil, King Khalid University, Abha 61421, Saudi Arabia
* Author to whom correspondence should be addressed.
Healthcare 2022, 10(4), 677; https://doi.org/10.3390/healthcare10040677
Submission received: 18 February 2022 / Revised: 26 March 2022 / Accepted: 28 March 2022 / Published: 3 April 2022
(This article belongs to the Special Issue Advances of Decision-Making Medical System in Healthcare)

Abstract:
Decision-making medical systems (DMS) refer to the design of decision techniques in the healthcare sector. They involve a procedure of employing ideas and decisions related to certain processes such as data acquisition, processing, judgment, and conclusion. Pancreatic cancer is a lethal type of cancer, and its prediction is ineffective with current techniques. Automated detection and classification of pancreatic tumors can be provided by a computer-aided diagnosis (CAD) model using radiological images such as computed tomography (CT) and magnetic resonance imaging (MRI). Recently developed machine learning (ML) and deep learning (DL) models can be utilized for the automated and timely detection of pancreatic cancer. In light of this, this article introduces an intelligent deep-learning-enabled decision-making medical system for pancreatic tumor classification (IDLDMS-PTC) using CT images. The major intention of the IDLDMS-PTC technique is to examine the CT images for the existence of pancreatic tumors. The IDLDMS-PTC model derives an emperor penguin optimizer (EPO) with multilevel thresholding (EPO-MLT) technique for pancreatic tumor segmentation. Additionally, the MobileNet model is applied as a feature extractor with an optimal autoencoder (AE) for pancreatic tumor classification. In order to optimally adjust the weight and bias values of the AE technique, the multileader optimization (MLO) technique is utilized. The design of the EPO algorithm for optimal threshold selection and the MLO algorithm for parameter tuning shows the novelty of the work. A wide range of simulations was executed on benchmark datasets, and the outcomes reported the promising performance of the IDLDMS-PTC model over the existing methods.

1. Introduction

Pancreatic cancer is relatively rare, but it is a leading cause of cancer death [1,2]. The survival rate of pancreatic cancer remains low, with a 5-year survival rate of only 11% [3]. Surgical resection of the primary tumor is possible in less than 20% of patients, in whom 5-year survival increases to 20–37% [4]. The evidence on factors that impact survival outcome is sparse and heterogeneous, although delays in diagnosis can also affect the social well-being and quality of life of patients and their families, in part owing to the concern that delays across the numerous investigations and consultations would affect treatment options and prognosis [5]. Earlier diagnosis provides better opportunities to reduce the death rate of pancreatic cancer, but a systematic method for earlier diagnosis remains unknown.
Radiomics, as a newly emerging technology, has provided a large amount of data on healthcare images that can expose hidden features of diseases that are not clearly visible to the naked eye. This technique has been investigated to promote and advance cancer management for carcinomas of the lung, colorectum, breast, and bladder [6]. Furthermore, radiomics extracts a large number of high-dimensional quantitative features from medical images, involving magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT). CT images are extensively utilized for pancreatic tumor diagnosis. However, since roughly 40% of tumors smaller than 2 cm in diameter evade diagnosis by CT, there is a requirement for a new method to augment radiologist analysis and enhance the sensitivity of pancreatic cancer diagnosis [7].
The application of artificial intelligence (AI) to medical diagnostics was initiated in the early 1980s, and computer-assisted diagnosis (CAD) systems with deep learning (DL) have recently been utilized to assist physicians in enhancing the efficiency of interpreting different medical imaging data [8]. The usage of AI in image detection mostly plays two significant roles: computer-assisted diagnosis and computer-assisted detection of lesions for targeting optical lesions and biopsies. The DL method using convolutional neural networks (CNNs) has shown considerable potential in analyzing medical images. The neural network construction depends on a stack of neurons comprising activation functions and parameters to integrate and extract features from the image and establish a model that captures complicated relationships between diagnoses and images [9]. CNNs have been reported to accomplish a higher performance in the imaging diagnosis of different conditions, involving diabetic retinopathy, liver masses, and skin cancer. However, the potential advantages of CNNs for the diagnosis and detection of pancreatic cancer have not been broadly studied [10]. Mostly, pancreatic cancer presents with ill-defined margins and irregular contours on CT and is frequently obscure at an earlier phase, which poses considerable problems even for trained radiologists.
This article introduces an intelligent deep-learning-enabled decision-making medical system for pancreatic tumor classification (IDLDMS-PTC) using CT images. The IDLDMS-PTC technique intends to investigate the CT images for the existence of pancreatic tumors. The IDLDMS-PTC model designs an emperor penguin optimizer (EPO) with multilevel thresholding (EPO-MLT) technique for pancreatic tumor segmentation. Additionally, the MobileNet technique was implemented as a feature extractor with optimal auto encoder (AE) for pancreatic tumor classification. In order to optimally adjust the weight and bias values of the AE method, the multileader optimization (MLO) algorithm was utilized. To assess the effectiveness of the IDLDMS-PTC technique, a comprehensive experimental analysis was carried out on a benchmark dataset.

2. Literature Review

In Sujatha et al.'s research [11], a diagnosis technique is presented for the recognition of pancreatic tumors using image texture characteristics that are estimated statistically; diagnoses were completed by deep wavelet neural networks (DWNN). A DL-based hierarchical CNN (HCNN) is presented for pancreatic tumor diagnosis in [12], where an RNN was proposed for addressing the problem of spatially discrepant segmentation over adjacent image slices. Liang et al. [13] designed an approach that allows automated segmentation of pancreatic GTV based on multi-parametric MRI with a DNN.
Asadpour et al. [14] presented a cascaded architecture for extracting the tumor and the volumetric shape of the pancreas in adenocarcinoma patients. This method integrates an elastic atlas that is able to fit three-dimensional volumetric shapes extracted from CT slices with a CNN that uses three forward paths to label the image patches at coarse-to-fine resolutions in a multi-resolution architecture.
The capacity of DL to distinguish pancreatic diseases on contrast-enhanced magnetic resonance (MR) images was estimated in [15] with the help of a generative adversarial network (GAN). The classification accuracy of the trained InceptionV4 architecture was computed on the validation set for each patient and each patch, correspondingly. Iwasa et al. [16] estimated the ability of DL for the automated segmentation of pancreatic tumors on CE-EUS video images and the probable factors that affect the automated segmentation. Automated segmentation was implemented by U-Net with 100 epochs and evaluated using four-fold cross-validation. The tumor boundary (TB) and degree of respiratory movement (RM) were classified into three-degree intervals per patient and evaluated as feasible factors that affect the segmentation.

3. The Proposed Model

In this study, a novel IDLDMS-PTC technique was derived for examining CT images for the existence of pancreatic tumors. The proposed IDLDMS-PTC technique comprises several subprocesses, namely GF-based pre-processing, EPO-MLT-based segmentation, MobileNet-based feature extraction, AE-based classification, and MLO-based parameter optimization. The utilization of the EPO approach for optimal threshold selection and the MLO algorithm for parameter tuning assists in accomplishing improved classification results. Figure 1 illustrates the overall process of the IDLDMS-PTC technique.

3.1. Gabor Filtering Based Pre-Processing

At the initial stage, the pre-processing of the CT images is performed by the use of the GF technique. The Gabor filter is a linear filter whose impulse response is a sinusoidal function multiplied by a Gaussian function; it is essentially a bandpass function. The major benefit of the Gaussian envelope is that the Gabor function is localized in both the spatial and frequency domains, unlike the sinusoidal function, which is entirely delocalized in the spatial domain (the sinusoid covers the whole space) and localized only in the frequency domain [17]. Thus, this function is highly suited for representing a signal jointly in these domains. Gabor filtering acts as a 2D bandpass filter; when allocated a frequency and direction, the detail of the original image is preserved while noise is reduced.
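To make the filtering step concrete, the sketch below builds the impulse response described above (a sinusoid under a Gaussian envelope) in NumPy and applies it by direct convolution. The kernel size and parameter values are illustrative assumptions, not the settings used in this work; in practice a library routine such as OpenCV's `getGaborKernel` would replace the hand-rolled version.

```python
import numpy as np

def gabor_kernel(size=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5, psi=0.0):
    """Real part of a 2D Gabor filter: a sinusoid modulated by a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_r = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r**2 + gamma**2 * y_r**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_r / lambd + psi)
    return envelope * carrier

def apply_filter(image, kernel):
    """Naive 'same'-size 2D correlation, for illustration only."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="reflect")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out
```

Changing `theta` and `lambd` tunes the direction and frequency to which the filter responds, which is what lets the pre-processing step suppress noise while preserving oriented structures.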

3.2. EPO-MLT-Based Segmentation

In the image segmentation process, the EPO-MLT approach is applied to determine the tumor regions in the CT images. The MLT problem is formulated by assuming a gray-level image $I$ that is to be segmented into $K + 1$ classes [18]. Therefore, $K$ thresholds $t_1, t_2, \ldots, t_K$ are needed to divide the image into sub-regions as follows:

$C_0 = \{ g(u,v) \in I \mid 0 \le g(u,v) \le t_1 - 1 \}$

$C_1 = \{ g(u,v) \in I \mid t_1 \le g(u,v) \le t_2 - 1 \}$

$C_K = \{ g(u,v) \in I \mid t_K \le g(u,v) \le L - 1 \}$

where $C_k$ implies the $k$th class of the image, $t_k\ (k = 1, \ldots, K)$ refers to the $k$th threshold value, $g(u,v)$ stands for the gray level of pixel $(u,v)$, and $L$ signifies the number of gray levels of $I$; these levels lie in the range $[0, L-1]$. The vital aim of multilevel thresholding is to locate the threshold values that divide the pixels into groups by maximizing the following formula:

$(t_1^*, t_2^*, \ldots, t_K^*) = \arg\max_{t_1, \ldots, t_K} F(t_1, \ldots, t_K)$

where $F(t_1, \ldots, t_K)$ refers to Otsu's function, which is determined as:

$F = \sum_{u=0}^{K} A_u (\eta_u - \eta_I)^2,$

$A_u = \sum_{v=t_u}^{t_{u+1}-1} P_v,$

$\eta_u = \frac{1}{A_u} \sum_{v=t_u}^{t_{u+1}-1} v\,P_v, \quad \text{where } P_v = h_v / N_P$

where $\eta_I$ refers to the mean intensity of $I$, with $t_0 = 0$ and $t_{K+1} = L$. Here, $h_u$ and $P_u$ imply the frequency and probability of the $u$th gray level, respectively, and $N_P$ stands for the total number of pixels in $I$.
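A minimal sketch of Otsu's objective as defined above, computed directly from an image histogram. The bimodal test histogram and the exhaustive single-threshold search in the usage note are for illustration only; the paper instead searches the multilevel threshold space with the EPO.

```python
import numpy as np

def otsu_objective(hist, thresholds, L=256):
    """Between-class variance F = sum_u A_u * (eta_u - eta_I)^2 for thresholds t_1..t_K."""
    p = hist / hist.sum()                      # P_v = h_v / N_P
    levels = np.arange(L)
    eta_I = np.sum(levels * p)                 # global mean intensity of I
    bounds = [0] + sorted(thresholds) + [L]    # t_0 = 0 and t_{K+1} = L
    F = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        A_u = p[lo:hi].sum()                   # class probability A_u
        if A_u == 0:
            continue                           # empty class contributes nothing
        eta_u = np.sum(levels[lo:hi] * p[lo:hi]) / A_u   # class mean eta_u
        F += A_u * (eta_u - eta_I) ** 2
    return F
```

For a single threshold, `max(range(1, 256), key=lambda t: otsu_objective(hist, [t]))` recovers the classical Otsu split; for several thresholds the search space grows combinatorially, which motivates the metaheuristic search below.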
In order to define the optimum threshold values for the MLT approach, the EPO algorithm is applied. The EPO is stimulated by the huddling behavior of emperor penguins (EPs) observed in the Antarctic [19]. In order to forage, the EPs usually travel from rafts/colonies. Therefore, an initial function defines an effective mover in the swarm mathematically. To achieve this, the distances amongst the EPs $X_{ep}$ are calculated, followed by the temperature profile $\theta'$. The temperature profile of the EPs is measured as:

$\theta' = \theta - \dfrac{Iter_{\max}}{C - Iter_{\max}}$

$\theta = \begin{cases} 0 & \text{if } R > 0.5 \\ 1 & \text{if } R < 0.5 \end{cases}$

where $C$ implies the current iteration, $Iter_{\max}$ denotes the maximum number of iterations, and $R$ implies an arbitrary number between zero and one.

As EPs usually huddle together to preserve temperature, careful precaution is taken to protect them from neighborhood collisions. Consequently, two vectors $U$ and $V$ are presented, whose values are estimated as:

$U = M \times (\theta' + X_{grid\,accuracy}) \times Rand() - \theta'$

$V = Rand()$

$X_{grid\,accuracy} = |X - X_{ep}|$

where $M$ indicates a movement parameter with the fixed value of 2, $X$ represents the optimum solution, $X_{ep}$ signifies the positions of the other EPs, $Rand()$ is a random number in $[0, 1]$, and $|\cdot|$ denotes the absolute value.

$D_{ep} = |S(U) \cdot X(x) - V \cdot X_{ep}(x)|$

$S(U) = \left( f\,e^{-C/v} - e^{-C} \right)^2$

These two equations approximately calculate the distance $D_{ep}$ between an EP and the optimally fittest search agent. $S(\cdot)$ illustrates the social force by which an optimal search agent guides the EPs, $e$ represents the exponential function, and $f$ and $v$ are control parameters.

At this point, based on the optimum agent, the positions of the EPs are upgraded as:

$X_{ep}(x+1) = X(x) - U \cdot D_{ep}$

It must be noted that the selected parameter ranges are equal to those of existing works. The EPO technique is thus employed to obtain the global optimum value of the objective. In the EPO, the population of emperor penguins is initialized with arbitrarily created individual EPs.
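The update rules above can be sketched as a small search loop. This is a hedged, minimal reading of the EPO equations: the control values `M`, `f`, and `v`, the bound clipping, and the elitist bookkeeping are illustrative assumptions, not the authors' settings, and in the proposed model the objective would be the multilevel Otsu function rather than the toy objective used here.

```python
import numpy as np

rng = np.random.default_rng(0)

def epo_maximize(objective, dim, bounds, pop=20, iters=100):
    """Minimal EPO-style search (sketch): penguins huddle toward the best agent."""
    lo, hi = bounds
    X = rng.uniform(lo, hi, (pop, dim))
    best = X[int(np.argmax([objective(x) for x in X]))].copy()
    M, f, v = 2.0, 2.0, 1.5                          # assumed control parameters
    for c in range(iters):                           # c < iters keeps theta_p finite
        theta = 1.0 if rng.random() < 0.5 else 0.0
        theta_p = theta - iters / (c - iters)        # temperature profile
        S = (f * np.exp(-c / v) - np.exp(-c)) ** 2   # social force S(U)
        for i in range(pop):
            grid = np.abs(best - X[i])               # collision-avoidance term
            U = M * (theta_p + grid) * rng.random(dim) - theta_p
            V = rng.random(dim)
            D = np.abs(S * best - V * X[i])          # distance to the fittest agent
            X[i] = np.clip(best - U * D, lo, hi)     # position update
        cand = X[int(np.argmax([objective(x) for x in X]))]
        if objective(cand) > objective(best):        # elitist bookkeeping (assumed)
            best = cand.copy()
    return best
```

For the segmentation step, `objective` would score a candidate threshold vector with Otsu's between-class variance, and the returned `best` would be the selected thresholds.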

3.3. MobileNet-Based Feature Extraction

At the time of feature extraction, the segmented images are passed into the MobileNet model to generate feature vectors. AI has bridged the gap between the abilities of machines and humans. Computer vision is a field of AI that allows machines to observe the world as humans do, and much of the advancement in this field has come through one particular approach, the CNN. A CNN contains input, hidden, and output layers [20]. To design a diagnostic system for pancreatic tumor classification, in this study, the MobileNet model is utilized for feature extraction. MobileNet is faster when compared to standard convolutional networks because it filters each input channel separately. The method is built on the depth-wise separable convolution, which has two successive operations: a depth-wise convolution at the filtering phase, which applies a single convolutional filter to one input channel at a time, and a point-wise convolution at the combining phase, which implements a linear combination of the outputs of the depth-wise convolution. ReLU and batch normalization (BN) layers come after each convolutional operation. The computational cost is phenomenally reduced in the depth-wise separable model because the filtering and combining stages are split, minimizing its complexity and size. The version applied here employs MobileNetV2 with 3.47 million parameters.
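The depth-wise separable convolution at the heart of MobileNet can be illustrated directly: one spatial kernel per input channel, followed by a 1 × 1 point-wise mix across channels. This is a naive sketch (no stride, BN, or ReLU), with shapes chosen for illustration rather than taken from the MobileNetV2 architecture.

```python
import numpy as np

def depthwise_separable_conv(x, depth_kernels, point_weights):
    """Depthwise conv (one k x k kernel per channel) followed by a 1x1 pointwise conv."""
    H, W, C = x.shape
    kh, kw, _ = depth_kernels.shape          # (kh, kw, C): one kernel per channel
    ph, pw = kh // 2, kw // 2
    padded = np.pad(x, ((ph, ph), (pw, pw), (0, 0)))
    depth_out = np.zeros_like(x)
    for c in range(C):                       # filter each channel independently
        for i in range(H):
            for j in range(W):
                depth_out[i, j, c] = np.sum(
                    padded[i:i + kh, j:j + kw, c] * depth_kernels[:, :, c])
    # pointwise 1x1 conv: linear mix across channels via (C, C_out) weights
    return depth_out @ point_weights
```

The cost saving the text describes falls out of the parameter count: a 3 × 3 separable layer with `C` input and `C_out` output channels needs `3*3*C + C*C_out` weights, versus `3*3*C*C_out` for a full convolution.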

3.4. Optimal AE-Based Classification

Finally, the AE model is used to detect and classify the presence of pancreatic cancer. An AE employs a set of recognition weights for encoding an input vector $x$ into a representation vector $h$, also called the latent parameters [21]. Then, it employs a set of generative weights for decoding the representation vector into an approximate reconstruction $x'$ of the input vector $x$. The aim of the AE is to recreate the input information in an unsupervised manner, that is, without utilizing any labels, where the dimensionality of the input and the output must be identical.

The encoder phase of the AE takes $x \in \mathbb{R}^m$ as input and maps it to the latent parameters $h \in \mathbb{R}^n$:

$h = f(Wx + b)$

where $f$ denotes an activation function, namely the sigmoid $s(x) = 1/(1 + e^{-x})$ or ReLU, $W$ indicates a weight matrix, and $b$ represents a bias vector. Then, the decoder phase maps $h$ to $x'$, which is a reconstruction of $x$ with the same dimensionality:

$x' = f'(W'h + b')$

Here, $f'$, $W'$, and $b'$ indicate the respective parameters of the decoder, which may be distinct from those of the encoder. The AE is trained to minimize the reconstruction error, such as the mean squared error (MSE):

$E(x, x') = \lVert x - x' \rVert^2$
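A minimal NumPy sketch of the encoder/decoder mappings and reconstruction error above. The layer sizes, the use of the sigmoid in both phases, and the random initialization are illustrative assumptions; in the proposed model these weights and biases are tuned by the MLO rather than fixed at initialization.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class AutoEncoder:
    """Single-hidden-layer AE: encode h = f(Wx + b), decode x' = f'(W'h + b')."""
    def __init__(self, m, n):
        self.W, self.b = rng.normal(0, 0.1, (n, m)), np.zeros(n)     # encoder
        self.W2, self.b2 = rng.normal(0, 0.1, (m, n)), np.zeros(m)   # decoder

    def encode(self, x):
        return sigmoid(self.W @ x + self.b)

    def reconstruct(self, x):
        return sigmoid(self.W2 @ self.encode(x) + self.b2)

    def mse(self, x):
        """Reconstruction error E(x, x') = ||x - x'||^2."""
        return float(np.sum((x - self.reconstruct(x)) ** 2))
```

Flattening `self.W`, `self.b`, `self.W2`, and `self.b2` into one vector gives exactly the kind of candidate solution the MLO population described next would manipulate.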
The error $E$ is usually averaged over the training instances. For determining the weight and bias values of the AE technique, the MLO approach can be utilized. It is mathematically modeled for solving optimization problems. The major concept of the presented approach is to utilize data from several members of the population: each member uses the data of different leaders when searching the problem-solving space. A member is represented as:

$X_i = \left( \chi_i^1, \ldots, \chi_i^d, \ldots, \chi_i^m \right)$

where $X_i$ represents the $i$th member of the population and $\chi_i^d$ indicates the $d$th parameter of the optimization problem. Members of the population are evaluated by placing them in the fitness function [22]. Next, the population matrix is arranged in ascending order of the fitness values:

$X^{sort} = \begin{bmatrix} X_1^{sort} \\ \vdots \\ X_N^{sort} \end{bmatrix}$

Here, $X^{sort}$ indicates the sorted population matrix, $X_1^{sort}$ represents the member with the optimal fitness value, $X_N^{sort}$ indicates the member with the worst fitness value, $fit$ indicates the fitness value, and $N$ signifies the number of members of the population. After arranging the population matrix, a certain number of the top-ranked members is chosen as leaders. The leader set is upgraded at every iteration to guide the members of the population towards the optimum solution:

$L = \left\{ X_l^{sort} \mid X_l^{sort} \in X^{sort},\ l = 1 : n_L \right\}$

where $L$ indicates the selected leader member matrix and $n_L$ shows the number of leaders. The population in the MLO is upgraded in two stages. Initially, all the members of the population are moved in the search space according to a leader position. The leader is defined according to a roulette wheel, so any of the leaders might be chosen for updating different parameters of a presented solution:

$fit_i^n = \dfrac{fit_i - \max fit}{\sum_{j=1}^{N} \left( fit_j - \max fit \right)}$

$P_l = \dfrac{fit_l^n}{\sum_{j=1}^{n_L} fit_j^n}$

$C_l = P_l + C_{l-1}, \quad C_0 = 0,\ l = 1 : n_L$

$L_{i,c}^d = \begin{cases} L_1 = X_1^{sort}, & 0 \le r \le C_1 \\ L_c = X_c^{sort}, & C_{c-1} < r \le C_c \\ L_{n_L} = X_{n_L}^{sort}, & C_{n_L - 1} < r \le C_{n_L} \end{cases}$

$\chi_{i,new}^d = \chi_i^d + rand \times \left( L_{i,c}^d - 2 \times \chi_i^d \right)$

$X_i = \begin{cases} X_{i,new}, & fit_{i,new} \le fit_i \\ X_i, & \text{else} \end{cases}$

In these equations, $fit_i^n$ indicates the normalized fitness of the $i$th population member, $P_l$ represents the probability of selecting the $l$th leader for guiding a parameter, $C_l$ indicates the cumulative probability of the $l$th leader, $\chi_{i,new}^d$ shows the new value for the $d$th dimension of the $i$th member, $L_{i,c}^d$ denotes the $d$th dimension of the selected $c$th leader guiding the $d$th parameter of the $i$th member, and $r$ shows an arbitrary value within $[0, 1]$. Next, after the initial upgrading stage, all the members of the population make a slight random move in their own neighborhood. When the new location is more relevant, the member upgrades its location to the new status; otherwise, it returns to its preceding location:

$\chi_{i,new}^d = \chi_i^d + 2 \times \left( 1 - \frac{t}{T} \right) \times \left( 0.2 + rand \times 0.4 \right) \times \chi_i^d$

$X_i = \begin{cases} X_{i,new}, & fit_{i,new} \le fit_i \\ X_i, & \text{else} \end{cases}$

Here, $t$ denotes the $t$th iteration, and $T$ represents the maximal number of iterations.

The MLO algorithm uses a fitness function for tuning the parameter values of the AE model. The fitness function returns a positive value representing the quality of the candidate solutions. Here, the objective is to minimize the classification error rate, as given below:

$fitness(x_i) = ClassifierErrorRate(x_i) = \dfrac{\text{number of misclassified images}}{\text{total number of images}} \times 100$
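The leader-selection and fitness steps above can be sketched as follows. This is a partial illustration (normalized fitness, cumulative probability, roulette pick, and the classifier-error fitness), not the full MLO loop; the population fitness values in the usage example are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def classifier_error_rate(predicted, actual):
    """Fitness: percentage of misclassified images (to be minimised by the MLO)."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    return (predicted != actual).mean() * 100.0

def select_leader(fitness, n_leaders):
    """Roulette-wheel choice among the n_L best (lowest-fitness) members."""
    fitness = np.asarray(fitness, dtype=float)
    order = np.argsort(fitness)                      # ascending: best member first
    leaders = order[:n_leaders]
    shifted = fitness[leaders] - fitness.max()       # normalised fitness (all <= 0)
    total = shifted.sum()
    probs = shifted / total if total != 0 else np.full(n_leaders, 1.0 / n_leaders)
    cum = np.cumsum(probs)                           # C_l = P_l + C_{l-1}
    return int(leaders[np.searchsorted(cum, rng.random())])
```

Because the normalization subtracts the worst fitness, better (lower) fitness values yield larger selection probabilities, so stronger leaders are picked more often without ever excluding the weaker ones.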

4. Experimental Validation

This section investigates the pancreatic tumor detection and classification performance of the IDLDMS-PTC model using test CT images collected from various sources. The dataset holds a total of 500 images, with 250 images under pancreatic tumor and 250 images under non-pancreatic tumor. The results are investigated under varying sizes of training/testing datasets. Figure 2 depicts sample CT images.

4.1. Results Analysis

Table 1 and Figure 3 report the overall classification outcomes of the IDLDMS-PTC with existing models under varying sizes of training sets (TS). The experimental outcomes indicated that the IDLDMS-PTC model obtained effectual outcomes under distinct TSs. For instance, the IDLDMS-PTC model obtained a higher average $sens_y$ of 0.9935, whereas the ODL-PTNTC, WELM, KELM, and ELM models obtained average $sens_y$ values of 0.9989, 0.9969, 0.9697, and 0.9679, respectively. Along with that, the IDLDMS-PTC model gained an increased average $spec_y$ of 0.9884, whereas the ODL-PTNTC, WELM, KELM, and ELM models provided reduced average $spec_y$ values of 0.9775, 0.9715, 0.9692, and 0.9664, respectively. Furthermore, the IDLDMS-PTC model reached a maximum average $accu_y$ of 0.9935, whereas the ODL-PTNTC, WELM, KELM, and ELM models resulted in minimum average $accu_y$ values of 0.9840, 0.9766, 0.9672, and 0.9644, respectively. Moreover, the IDLDMS-PTC model exhibited an increased average $F_{score}$ of 0.9948, whereas the ODL-PTNTC, WELM, KELM, and ELM models depicted decreased average $F_{score}$ values of 0.9882, 0.9739, 0.9688, and 0.9674, respectively.
The accuracy of the IDLDMS-PTC approach on the training set (80:20) data is portrayed in Figure 4. The results demonstrate that the IDLDMS-PTC technique accomplished improved validation accuracy compared to training accuracy. It can also be observed that the accuracy values saturate with the epoch count.
The loss outcome analysis of the IDLDMS-PTC technique on the training set (80:20) data is depicted in Figure 5. The figure shows that the IDLDMS-PTC technique attained a reduced validation loss relative to the training loss. It can additionally be observed that the loss values saturate with the epoch count.
Table 2 and Figure 6 report the overall classification outcomes of the IDLDMS-PTC with existing techniques under varying sizes of cross-validation (CV). The experimental outcomes indicate that the IDLDMS-PTC algorithm obtained effectual outcomes under distinct CVs. For instance, the IDLDMS-PTC model obtained a higher average $sens_y$ of 0.9884, whereas the ODL-PTNTC, WELM, KELM, and ELM approaches achieved lower average $sens_y$ values of 0.9788, 0.9738, 0.9571, and 0.9557, respectively. Following that, the IDLDMS-PTC model gained an increased average $spec_y$ of 0.9965, whereas the ODL-PTNTC, WELM, KELM, and ELM approaches provided decreased average $spec_y$ values of 0.9938, 0.9819, 0.9813, and 0.9789, respectively.
Additionally, the IDLDMS-PTC model reached a maximal average $accu_y$ of 0.9894, whereas the ODL-PTNTC, WELM, KELM, and ELM approaches resulted in minimum average $accu_y$ values of 0.9808, 0.9685, 0.9665, and 0.9597, respectively. Moreover, the IDLDMS-PTC model exhibited an increased average $F_{score}$ of 0.9904, whereas the ODL-PTNTC, WELM, KELM, and ELM models depicted decreased average $F_{score}$ values of 0.9863, 0.9756, 0.9727, and 0.9710, respectively.
The accuracy outcome analysis of the IDLDMS-PTC technique under a CV of 7 is illustrated in Figure 7. The results demonstrate that the IDLDMS-PTC technique accomplished improved validation accuracy compared to training accuracy. It is also observable that the accuracy values saturate with the epoch count.
The loss outcome analysis of the IDLDMS-PTC technique under a CV of 7 is displayed in Figure 8. The figure demonstrates that the IDLDMS-PTC methodology attained a reduced validation loss relative to the training loss. It is additionally noted that the loss values saturate with the epoch count.

4.2. Discussion

In order to further ensure the betterment of the proposed model, a detailed comparative study of the IDLDMS-PTC approach with recent approaches is offered in Table 3 [23,24,25,26]. Figure 9 portrays the $sens_y$ examination of the IDLDMS-PTC technique against existing approaches. The figure shows that the CNN-10x10 and CNN-30x30 models obtained lower $sens_y$ values of 0.8050 and 0.8810, respectively. The CNN-50x50 and CNN-70x70 techniques attained slightly increased $sens_y$ values of 0.9110 and 0.9150, respectively. In line with this, the WELM, KELM, and ELM models resulted in moderately closer $sens_y$ values of 0.9776, 0.9666, and 0.9627, respectively. Though the ODL-PTNTC model accomplished a near-optimum $sens_y$ of 0.9873, the presented IDLDMS-PTC methodology reached a maximum $sens_y$ of 0.9935.
Figure 10 depicts the $spec_y$ examination of the IDLDMS-PTC technique against existing approaches. The figure reports that the CNN-10x10 and CNN-30x30 methods obtained lower $spec_y$ values of 0.8180 and 0.8540, respectively. Likewise, the CNN-50x50 and CNN-70x70 models attained slightly increased $spec_y$ values of 0.8650 and 0.8670, respectively. Moreover, the WELM, KELM, and ELM models resulted in moderately closer $spec_y$ values of 0.9767, 0.9753, and 0.9727, respectively. The ODL-PTNTC model accomplished a near-optimal $spec_y$ of 0.9775, while the projected IDLDMS-PTC technique attained an increased $spec_y$ of 0.9884.
Figure 11 compares the $accu_y$ examination of the IDLDMS-PTC system with existing algorithms. The figure reveals that the CNN-10x10 and CNN-30x30 methods obtained lower $accu_y$ values of 0.8160 and 0.8590, respectively. Similarly, the CNN-50x50 and CNN-70x70 methods attained slightly increased $accu_y$ values of 0.8730 and 0.8740, respectively. Additionally, the WELM, KELM, and ELM algorithms resulted in moderately closer $accu_y$ values of 0.9726, 0.9669, and 0.9621, respectively. Eventually, the ODL-PTNTC system accomplished a near-optimal $accu_y$ of 0.9840, and the presented IDLDMS-PTC technique reached a maximum $accu_y$ of 0.9935.
By looking into the above-mentioned tables and figures, it can be confirmed that the IDLDMS-PTC model resulted in superior pancreatic tumor detection and classification performance over the other methods. The reduced network size, minimal parameter count, and faster inference of the MobileNet model contribute to the improved performance. Additionally, the utilization of the EPO technique for optimum threshold selection and the MLO algorithm for parameter tuning assists in accomplishing improved classification results.

5. Conclusions

In this study, a novel IDLDMS-PTC approach was derived for examining CT images for the existence of pancreatic tumors. The proposed IDLDMS-PTC technique comprises several subprocesses, namely GF-based pre-processing, EPO-MLT-based segmentation, MobileNet-based feature extraction, AE-based classification, and MLO-based parameter optimization. The utilization of the EPO technique for optimum threshold selection and the MLO algorithm for parameter tuning assists in accomplishing improved classification results. To assess the effectiveness of the IDLDMS-PTC technique, a comprehensive experimental analysis was carried out on a benchmark dataset. Extensive comparative outcomes exposed the promising performance of the IDLDMS-PTC model over the existing methods. Therefore, the IDLDMS-PTC technique can be utilized as an effective tool for the healthcare system. In the future, deep instance segmentation approaches will be applied to improve the classifier results of the IDLDMS-PTC model.

Author Contributions

Conceptualization, T.V. and S.S.A.; methodology, A.K.D.; software, I.S.H.P.; validation, H.A., S.S.A. and A.M.; formal analysis, H.M.; investigation, T.V.; resources, A.M.; data curation, H.M.; writing—original draft preparation, S.S.A., H.A., P.D. and A.K.D.; writing—review and editing, H.M. and I.S.H.P.; visualization, A.K.D.; supervision, H.A.; project administration, S.S.A.; funding acquisition, H.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (RGP 2/46/43). Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R303), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4210118DSR04).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article, as no datasets were generated during the current study.

Conflicts of Interest

The authors declare that they have no conflict of interest. The manuscript was written through contributions from all authors. All authors have given approval to the final version of the manuscript.

References

  1. Australian Institute of Health and Welfare. Cancer in Australia 2019; Cancer Series No. 119; Cat. No. Can 123; AIHW: Canberra, Australia, 2019. Available online: https://www.aihw.gov.au/reports/cancer/cancer-in-australia-2019/data (accessed on 23 December 2021).
  2. Siegel, R.L.; Miller, K.D.; Jemal, A. Cancer statistics, 2020. CA Cancer J. Clin. 2020, 70, 7–30. [Google Scholar]
  3. Australian Institute of Health and Welfare. Cancer Data in Australia; Cat. no. Can 122; AIHW: Canberra, Australia, 2020. Available online: https://www.Aihw.Gov.Au/reports/cancer/cancerdata-in-australia (accessed on 23 December 2021).
  4. American Cancer Society. Cancer Facts & Figures 2020. Available online: https://www.Cancer.Org/research/cancer-facts-statistics/all-cancer-facts-figures/cancer-factsfigures-2020.Html (accessed on 23 December 2021).
  5. Thompson, B.; Philcox, S.; Devereaux, B.; Metz, A.; Croagh, D.; Windsor, J.; Davaris, A.; Gupta, S.; Barlow, J.; Rhee, J.; et al. A decision support tool for the detection of pancreatic cancer in general practice: A modified Delphi consensus. Pancreatology 2021, 21, 1476–1481. [Google Scholar] [CrossRef] [PubMed]
  6. Bakator, M.; Radosav, D. Deep learning and medical diagnosis: A review of literature. Multimodal Technol. Interact. 2018, 2, 47. [Google Scholar] [CrossRef] [Green Version]
  7. Aggarwal, R.; Sounderajah, V.; Martin, G.; Ting, D.S.W.; Karthikesalingam, A.; King, D.; Ashrafian, H.; Darzi, A. Diagnostic accuracy of deep learning in medical imaging: A systematic review and meta-analysis. NPJ Digit. Med. 2021, 4, 1–23. [Google Scholar] [CrossRef] [PubMed]
  8. Fourcade, A.; Khonsari, R.H. Deep learning in medical image analysis: A third eye for doctors. J. Stomatol. Oral Maxillofac. Surg. 2019, 120, 279–288. [Google Scholar] [CrossRef]
  9. Marentakis, P.; Karaiskos, P.; Kouloulias, V.; Kelekis, N.; Argentos, S.; Oikonomopoulos, N.; Loukas, C. Lung cancer histology classification from CT images based on radiomics and deep learning models. Med. Biol. Eng. Comput. 2021, 59, 215–226. [Google Scholar] [CrossRef]
  10. Baldota, S.; Sharma, S.; Malathy, C. Deep Transfer Learning for Pancreatic Cancer Detection. In Proceedings of the 12th International Conference on Computing Communication and Networking Technologies, (ICCCNT), IEEE, West Bengal, India, 6–8 July 2021; pp. 1–7. [Google Scholar]
  11. Sujatha, K.; Krishnakumar, R.; Deepalakshmi, B.; Bhavani, N.P.G.; Srividhya, V. Soft sensors for screening and detection of pancreatic tumor using nanoimaging and deep learning neural networks. In Handbook of Nanomaterials for Sensing Applications; Elsevier: Amsterdam, The Netherlands, 2021; pp. 449–463. [Google Scholar]
  12. Xuan, W.; You, G. Detection and diagnosis of pancreatic tumor using deep learning-based hierarchical convolutional neural network on the internet of medical things platform. Future Gener. Comput. Syst. 2020, 111, 132–142. [Google Scholar] [CrossRef]
  13. Liang, Y.; Schott, D.; Zhang, Y.; Wang, Z.; Nasief, H.; Paulson, E.; Hall, W.; Knechtges, P.; Erickson, B.; Li, X.A. Auto-segmentation of pancreatic tumor in multi-parametric MRI using deep convolutional neural networks. Radiother. Oncol. 2020, 145, 193–200. [Google Scholar] [CrossRef]
  14. Asadpour, V.; Parker, R.A.; Mayock, P.R.; Sampson, S.E.; Chen, W.; Wu, B. Pancreatic cancer tumor analysis in CT images using patch-based multi-resolution convolutional neural network. Biomed. Signal Processing Control. 2021, 68, 102652. [Google Scholar] [CrossRef]
  15. Gao, X.; Wang, X. Performance of deep learning for differentiating pancreatic diseases on contrast-enhanced magnetic resonance imaging: A preliminary study. Diagn. Interv. Imaging 2020, 101, 91–100. [Google Scholar] [CrossRef]
  16. Iwasa, Y.; Iwashita, T.; Takeuchi, Y.; Ichikawa, H.; Mita, N.; Uemura, S.; Shimizu, M.; Kuo, Y.T.; Wang, H.P.; Hara, T. Automatic Segmentation of Pancreatic Tumors Using Deep Learning on a Video Image of Contrast-Enhanced Endoscopic Ultrasound. J. Clin. Med. 2021, 10, 3589. [Google Scholar] [CrossRef] [PubMed]
  17. Wang, P.; Wang, Z.; Lv, D.; Zhang, C.; Wang, Y. Low illumination color image enhancement based on Gabor filtering and Retinex theory. Multimed. Tools Appl. 2021, 80, 17705–17719. [Google Scholar] [CrossRef]
  18. Abd El Aziz, M.; Ewees, A.A.; Hassanien, A.E. Whale optimization algorithm and moth-flame optimization for multilevel thresholding image segmentation. Expert Syst. Appl. 2017, 83, 242–256. [Google Scholar] [CrossRef]
  19. Dhiman, G.; Oliva, D.; Kaur, A.; Singh, K.K.; Vimal, S.; Sharma, A.; Cengiz, K. BEPO: A novel binary emperor penguin optimizer for automatic feature selection. Knowl.-Based Syst. 2021, 211, 106560. [Google Scholar] [CrossRef]
  20. Dilshad, S.; Singh, N.; Atif, M.; Hanif, A.; Yaqub, N.; Farooq, W.A.; Ahmad, H.; Chu, Y.M.; Masood, M.T. Automated image classification of chest X-rays of COVID-19 using deep transfer learning. Results Phys. 2021, 28, 104529. [Google Scholar] [CrossRef] [PubMed]
  21. Sun, Y.; Xue, B.; Zhang, M.; Yen, G.G. A particle swarm optimization-based flexible convolutional autoencoder for image classification. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 2295–2309. [Google Scholar] [CrossRef] [Green Version]
  22. Dehghani, M.; Montazeri, Z.; Dehghani, A.; Ramirez-Mendoza, R.A.; Samet, H.; Guerrero, J.M.; Dhiman, G. MLO: Multi leader optimizer. Int. J. Intell. Eng. Syst. 2020, 13, 364–373. [Google Scholar] [CrossRef]
  23. Althobaiti, M.M.; Almulihi, A.; Ashour, A.A.; Mansour, R.F.; Gupta, D. Design of Optimal Deep Learning-Based Pancreatic Tumor and Nontumor Classification Model Using Computed Tomography Scans. J. Healthc. Eng. 2022, 2022, 1–15. [Google Scholar] [CrossRef]
  24. Liu, K.L.; Wu, T.; Chen, P.T.; Tsai, Y.M.; Roth, H.; Wu, M.S.; Liao, W.C.; Wang, W. Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: A retrospective study with cross-racial external validation. Lancet Digit. Health 2020, 2, e303–e313. [Google Scholar] [CrossRef]
  25. Si, K.; Xue, Y.; Yu, X.; Zhu, X.; Li, Q.; Gong, W.; Liang, T.; Duan, S. Fully end-to-end deep-learning-based diagnosis of pancreatic tumors. Theranostics 2021, 11, 1982. [Google Scholar] [CrossRef]
  26. Ma, H.; Liu, Z.X.; Zhang, J.J.; Wu, F.T.; Xu, C.F.; Shen, Z.; Yu, C.H.; Li, Y.M. Construction of a convolutional neural network classifier developed by computed tomography images for pancreatic cancer diagnosis. World J. Gastroenterol. 2020, 26, 5156. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Overall process of IDLDMS-PTC technique.
Figure 2. Sample images: (a) tumor; (b) non-tumor.
Figure 3. Result analysis of IDLDMS-PTC technique under different sizes of training set: (a) sensitivity, (b) specificity, (c) accuracy, and (d) F-score.
Figure 4. Accuracy of IDLDMS-PTC technique under training set (80:20).
Figure 5. Loss of IDLDMS-PTC technique under training set (80:20).
Figure 6. Result analysis of IDLDMS-PTC technique under different sizes of CV: (a) sensitivity, (b) specificity, (c) accuracy, and (d) F-score.
Figure 7. Accuracy analysis of IDLDMS-PTC technique under CV of 7.
Figure 8. Loss analysis of IDLDMS-PTC technique under CV of 7.
Figure 9. Sensitivity analysis of IDLDMS-PTC technique compared with recent approaches.
Figure 10. Specificity analysis of IDLDMS-PTC technique compared with recent approaches.
Figure 11. Accuracy analysis of IDLDMS-PTC technique compared with recent approaches.
Table 1. Results analysis of IDLDMS-PTC technique under different sizes of training set with existing approaches.

Sensitivity
Training (%)  IDLDMS-PTC  ODL-PTNTC  WELM    KELM    ELM
TS = 40       0.9995      0.9989     0.9969  0.9697  0.9679
TS = 50       0.9912      0.9855     0.9835  0.9823  0.9749
TS = 60       0.9851      0.9832     0.9720  0.9800  0.9712
TS = 70       0.9960      0.9759     0.9653  0.9702  0.9713
TS = 80       0.9958      0.9931     0.9889  0.9782  0.9628
Average       0.9935      0.9873     0.9813  0.9761  0.9696

Specificity
Training (%)  IDLDMS-PTC  ODL-PTNTC  WELM    KELM    ELM
TS = 40       0.9767      0.9696     0.9622  0.9687  0.9696
TS = 50       0.9853      0.9720     0.9684  0.9712  0.9543
TS = 60       0.9992      0.9882     0.9870  0.9578  0.9778
TS = 70       0.9959      0.9757     0.9646  0.9719  0.9531
TS = 80       0.9847      0.9818     0.9753  0.9764  0.9774
Average       0.9884      0.9775     0.9715  0.9692  0.9664

Accuracy
Training (%)  IDLDMS-PTC  ODL-PTNTC  WELM    KELM    ELM
TS = 40       0.9937      0.9834     0.9829  0.9544  0.9746
TS = 50       0.9911      0.9908     0.9760  0.9792  0.9893
TS = 60       0.9974      0.9850     0.9847  0.9487  0.9459
TS = 70       0.9876      0.9721     0.9652  0.9702  0.9432
TS = 80       0.9978      0.9886     0.9742  0.9837  0.9690
Average       0.9935      0.9840     0.9766  0.9672  0.9644

F-score
Training (%)  IDLDMS-PTC  ODL-PTNTC  WELM    KELM    ELM
TS = 40       0.9919      0.9892     0.9839  0.9859  0.9432
TS = 50       0.9940      0.9908     0.9576  0.9884  0.9651
TS = 60       0.9989      0.9984     0.9970  0.9710  0.9708
TS = 70       0.9873      0.9827     0.9671  0.9555  0.9788
TS = 80       0.9889      0.9798     0.9640  0.9431  0.9793
Average       0.9948      0.9882     0.9739  0.9688  0.9674
Table 2. Result analysis of IDLDMS-PTC technique under different sizes of CV with existing approaches.

Sensitivity
No. of Folds  IDLDMS-PTC  ODL-PTNTC  WELM    KELM    ELM
CV = 6        0.9864      0.9773     0.9757  0.9607  0.9431
CV = 7        0.9768      0.9761     0.9603  0.9654  0.9741
CV = 8        0.9931      0.9853     0.9829  0.9445  0.9748
CV = 9        0.9885      0.9670     0.9644  0.9605  0.9375
CV = 10       0.9970      0.9885     0.9859  0.9546  0.9489
Average       0.9884      0.9788     0.9738  0.9571  0.9557

Specificity
No. of Folds  IDLDMS-PTC  ODL-PTNTC  WELM    KELM    ELM
CV = 6        0.9981      0.9942     0.9927  0.9625  0.9803
CV = 7        0.9938      0.9880     0.9857  0.9672  0.9851
CV = 8        0.9967      0.9945     0.9699  0.9872  0.9937
CV = 9        0.9983      0.9975     0.9822  0.9965  0.9650
CV = 10       0.9958      0.9947     0.9788  0.9932  0.9706
Average       0.9965      0.9938     0.9819  0.9813  0.9789

Accuracy
No. of Folds  IDLDMS-PTC  ODL-PTNTC  WELM    KELM    ELM
CV = 6        0.9986      0.9977     0.9846  0.9934  0.9379
CV = 7        0.9987      0.9944     0.9698  0.9904  0.9746
CV = 8        0.9807      0.9623     0.9546  0.9610  0.9422
CV = 9        0.9831      0.9648     0.9618  0.9486  0.9609
CV = 10       0.9860      0.9847     0.9715  0.9389  0.9829
Average       0.9894      0.9808     0.9685  0.9665  0.9597

F-score
No. of Folds  IDLDMS-PTC  ODL-PTNTC  WELM    KELM    ELM
CV = 6        0.9861      0.9824     0.9799  0.9810  0.9657
CV = 7        0.9850      0.9832     0.9796  0.9744  0.9615
CV = 8        0.9864      0.9805     0.9784  0.9764  0.9692
CV = 9        0.9995      0.9992     0.9578  0.9786  0.9987
CV = 10       0.9948      0.9863     0.9821  0.9531  0.9600
Average       0.9904      0.9863     0.9756  0.9727  0.9710
Table 3. Comparative analysis of IDLDMS-PTC technique with recent approaches.

Methods     Sensitivity  Specificity  Accuracy
IDLDMS-PTC  0.9935       0.9884       0.9935
ODL-PTNTC   0.9873       0.9775       0.9840
WELM        0.9776       0.9767       0.9726
KELM        0.9666       0.9753       0.9669
ELM         0.9627       0.9727       0.9621
CNN-10x10   0.8050       0.8180       0.8160
CNN-30x30   0.8810       0.8540       0.8590
CNN-50x50   0.9110       0.8650       0.8730
CNN-70x70   0.9150       0.8670       0.8740
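The sensitivity, specificity, accuracy, and F-score values tabulated above follow their standard definitions over a binary (tumor vs. non-tumor) confusion matrix. A minimal illustrative sketch of those definitions (not the authors' code; the counts in the example are hypothetical):

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int):
    """Compute the four metrics reported in Tables 1-3 from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # true-positive rate (recall)
    specificity = tn / (tn + fp)                 # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # overall fraction correct
    precision = tp / (tp + fp)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, accuracy, f_score

# Hypothetical example: 99 tumor slices correctly flagged, 1 missed,
# 97 non-tumor slices correctly rejected, 3 false alarms.
sens, spec, acc, f1 = classification_metrics(tp=99, fp=3, tn=97, fn=1)
print(f"Sens={sens:.4f} Spec={spec:.4f} Acc={acc:.4f} F={f1:.4f}")
# → Sens=0.9900 Spec=0.9700 Acc=0.9800 F=0.9802
```

Note that sensitivity and specificity move in opposite directions as the decision threshold shifts, which is why the tables report both alongside the aggregate accuracy and F-score.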
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Vaiyapuri, T.; Dutta, A.K.; Punithavathi, I.S.H.; Duraipandy, P.; Alotaibi, S.S.; Alsolai, H.; Mohamed, A.; Mahgoub, H. Intelligent Deep-Learning-Enabled Decision-Making Medical System for Pancreatic Tumor Classification on CT Images. Healthcare 2022, 10, 677. https://doi.org/10.3390/healthcare10040677
