Article

Adaptive Aquila Optimizer with Explainable Artificial Intelligence-Enabled Cancer Diagnosis on Medical Imaging

by Salem Alkhalaf 1,*, Fahad Alturise 1, Adel Aboud Bahaddad 2, Bushra M. Elamin Elnaim 3, Samah Shabana 4, Sayed Abdel-Khalek 5 and Romany F. Mansour 6

1 Department of Computer, College of Science and Arts in Ar Rass, Qassim University, Ar Rass 58892, Saudi Arabia
2 Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
3 Department of Computer Science, College of Science and Humanities in Al-Sulail, Prince Sattam Bin Abdulaziz University, Al-Kharj 16278, Saudi Arabia
4 Pharmacognosy Department, Faculty of Pharmaceutical Sciences and Drug Manufacturing, Misr University for Science and Technology (MUST), Giza 3236101, Egypt
5 Department of Mathematics, College of Science, Taif University, Taif 21944, Saudi Arabia
6 Department of Mathematics, Faculty of Science, New Valley University, El-Kharga 1064188, Egypt
* Author to whom correspondence should be addressed.
Cancers 2023, 15(5), 1492; https://doi.org/10.3390/cancers15051492
Submission received: 28 January 2023 / Revised: 19 February 2023 / Accepted: 20 February 2023 / Published: 27 February 2023

Simple Summary

For automated cancer diagnosis on medical imaging, explainable artificial intelligence (XAI) technology uses advanced image analysis methods like deep learning to make a diagnosis and analyze medical images, as well as provide a clear explanation of how it arrived at its diagnosis. The objective of XAI is to provide patients and doctors with a better understanding of the system’s decision-making process and to increase transparency and trust in the diagnosis method. The manual classification of cancer using medical images is a tedious and tiresome process, which necessitates the design of automated tools for the decision-making process. In this study, we explored the application of explainable artificial intelligence and an ensemble of deep-learning models for automated cancer diagnosis. To demonstrate the enhanced performance of the proposed model, a comprehensive comparison study is made with recent models, and the results exhibit the significance of the proposed model on benchmark test images. Therefore, the proposed model has potential as an automated, accurate, and rapid tool for supporting the detection and classification of cancer.

Abstract

Explainable Artificial Intelligence (XAI) is a branch of AI that mainly focuses on developing systems that provide understandable and clear explanations for their decisions. In the context of cancer diagnosis on medical imaging, an XAI technology uses advanced image analysis methods like deep learning (DL) to make a diagnosis and analyze medical images, as well as provide a clear explanation of how it arrived at its diagnosis. This includes highlighting the specific areas of the image that the system recognized as indicative of cancer while also providing data on the underlying AI algorithm and the decision-making process used. The objective of XAI is to provide patients and doctors with a better understanding of the system’s decision-making process and to increase transparency and trust in the diagnosis method. Therefore, this study develops an Adaptive Aquila Optimizer with Explainable Artificial Intelligence Enabled Cancer Diagnosis (AAOXAI-CD) technique for medical imaging. The proposed AAOXAI-CD technique intends to accomplish an effectual colorectal and osteosarcoma cancer classification process. To achieve this, the AAOXAI-CD technique initially employs the Faster SqueezeNet model for feature vector generation, and the hyperparameter tuning of the Faster SqueezeNet model takes place with the use of the AAO algorithm. For cancer classification, a majority weighted voting ensemble model with three DL classifiers, namely recurrent neural network (RNN), gated recurrent unit (GRU), and bidirectional long short-term memory (BiLSTM), is used. Furthermore, the AAOXAI-CD technique incorporates the XAI approach LIME for better understanding and explainability of the black-box method for accurate cancer detection. The simulation evaluation of the AAOXAI-CD methodology was tested on medical cancer imaging databases, and the outcomes confirmed the promising performance of the AAOXAI-CD methodology over other current approaches.

1. Introduction

Diagnosis of cancer is a crucial problem in the medical sector. Early identification of cancer is vital for better chances of treatment and the best course of action [1]. Therefore, cancer has become a major research topic, with numerous authors carrying out studies to attain higher performance in prevention, diagnosis, and treatment. Early identification of tumors can increase treatment options and patients’ chances of survival. Medical images such as magnetic resonance imaging (MRI), mammograms, microscopic images, and ultrasound are the typical modalities for diagnosing cancer [2].
In recent times, computer-aided diagnosis (CAD) mechanisms have been utilized to help doctors diagnose tumors so that diagnostic accuracy is enhanced. CAD helps reduce cancer lesions missed because of medical practitioner fatigue, minimizes data overload and work pressure, and reduces intra- and inter-reader variability in image interpretation [3]. Technical problems related to imaging quality and human errors have increased the misdiagnosis of breast cancer in radiologists’ interpretations. To address these limitations, CAD mechanisms were developed to automate breast cancer diagnosis and categorize malignant and benign lesions [4]. The CAD mechanism enhances the performance of radiologists in discriminating and finding abnormal and normal tissues. Such a process can be executed only as a second reader, while the final decisions are made by radiologists [5]. Figure 1 represents the structure of explainable artificial intelligence.
Recent advancements in the resolution of medical imaging modalities have enhanced diagnostic accuracy [6]. Effective use of imaging data for enhancing the diagnosis has therefore become significant. Currently, computer-aided diagnosis (CAD) systems have opened a novel context in radiology to make use of data in the diagnosis of different diseases and across different imaging modalities [7,8,9,10]. The efficacy of radiologists’ analysis can be enhanced in terms of consistency and accuracy in diagnosis or detection, while productivity can be enhanced by minimizing the hours needed to read the images. The results can be extracted through several computer vision (CV) methods to present important variables such as the likelihood of malignancy and the location of suspicious lesions [11]. DL technology has now advanced significantly, again raising expectations for computer software relevant to tumor screening. Deep learning (DL) is a type of neural network (NN) with an input layer, an output layer, and hidden layers; a DL model is an NN with many hidden layers. DL has achieved incredible performance improvements, particularly in speech recognition and image classification [12]. Recently, DL has been utilized in various areas. As they can solve complicated issues, DNNs are now common in the healthcare field. However, decision-making by these methods is fundamentally a black-box procedure, making it problematic for doctors to determine whether the decisions are dependable. The usage of explainable artificial intelligence (XAI) can be recommended as the key to this issue [13].

1.1. Related Works

Van der Velden et al. [14] presented an overview of explainable AI (XAI) in DL-based medical image analysis. A framework of XAI criteria was presented for classifying DL-based medical image analysis techniques, and studies on XAI mechanisms in medical image analysis were categorized and surveyed according to the framework and anatomical location. Esmaeili et al. [15] intended to assess the performance of selective DL methods in localizing cancer lesions and differentiating lesions from healthy areas in MRI contrasts. Despite an important correlation between lesion localization accuracy and classification, the familiar AI techniques inspected in this study categorize certain cancerous brains based on other non-related attributes. The outcomes suggest that the abovementioned XAI methods can provide an intuition for method interpretability and play a significant role in the performance assessment of DL methods.
In [16], a new automatic classification system merging several DL methods was devised for identifying prostate cancer from MRI and ultrasound (US) images. To enrich the performance of the model, particularly on the MRI data, a fusion model was developed by integrating the optimal pretrained methods as feature extractors with shallow ML techniques (e.g., K-NN, SVM, RF, and AdaBoost). Finally, the fusion model was inspected with explainable AI to identify why it classifies samples as malignant or benign stages of prostate tumors. Kobylińska et al. [17] modeled selective techniques from the XAI domain for methods implemented to assess lung cancer risk in lung cancer screening using low-dose CT. The usage of such methods offers a good understanding of the differences and similarities among three typically used lung cancer screening models, namely LCART, BACH, and PLCOm2012.
In [18], an explainable AI (XAI) framework was devised for presenting local and global analysis of the auxiliary identification of hepatitis while maintaining good predictive outcomes. Firstly, a public hepatitis classification benchmark from UCI was utilized to test the feasibility of the framework. Afterward, transparent and black-box ML methods were used to predict the deterioration of hepatitis: transparent methods such as KNN, LR, and DT were selected, while black-box methods such as RF, XGBoost, and SVM were selected. Watson and Al Moubayed [19] devised a model-agnostic, explainability-based technique for the precise identification of adversarial instances on two datasets with various properties and complexity: chest X-ray (CXR) data and Electronic Health Records (EHR). In [20], an XAI tool was applied to a breast cancer (BC) dataset and offered a graphical analysis. The medical implications and molecular processes behind circulating adiponectin, HOMA, leptin, and BC resistance were explored, and XAI techniques were utilized for constructing methods for the diagnosis of new BC biomarkers.

1.2. Paper Contributions

This study develops an Adaptive Aquila Optimizer with Explainable Artificial Intelligence Enabled Cancer Diagnosis (AAOXAI-CD) technique for medical imaging. The proposed AAOXAI-CD technique uses the Faster SqueezeNet model for feature vector generation, and the hyperparameter tuning of the Faster SqueezeNet model is performed with the AAO algorithm. For cancer classification, a majority weighted voting ensemble model with three DL classifiers, namely recurrent neural network (RNN), gated recurrent unit (GRU), and bidirectional long short-term memory (BiLSTM), is applied. Furthermore, the AAOXAI-CD technique incorporates the XAI approach LIME for better understanding and explainability of the black-box method for accurate cancer detection. The simulation evaluation of the AAOXAI-CD technique is tested on medical cancer imaging databases.

2. Materials and Methods

In this article, we have developed an automated cancer diagnosis approach using the AAOXAI-CD technique on medical images. The proposed AAOXAI-CD system targets an effectual colorectal and osteosarcoma cancer classification process. It encompasses Faster SqueezeNet-based feature vector generation, AAO-based parameter tuning, ensemble classification, and XAI modeling. Figure 2 defines the overall flow of the AAOXAI-CD approach. The overall process involved in the proposed model is given in Algorithm 1.
Algorithm 1: Process Involved in AAOXAI-CD Technique
Step 1: Input Dataset (Training Images)
Step 2: Image Pre-Processing
Step 3: Feature Extraction Using Faster SqueezeNet Model
Step 4: Parameter Tuning Process
   Step 4.1: Initialize the Population and Its Parameters
   Step 4.2: Calculate the Fitness Values
   Step 4.3: Exploration Process and Exploitation Process
   Step 4.4: Update the Fitness Values
   Step 4.5: Obtain Best Solution
Step 5: Ensemble of Classifiers (RNN, GRU, and Bi-LSTM)
Step 6: Classification Output

2.1. Feature Extraction Using Faster SqueezeNet

Primarily, the AAOXAI-CD technique employed the Faster SqueezeNet method for feature vector generation. Faster SqueezeNet was proposed to enrich the real-time performance and accuracy of classification [21]. BatchNorm and a residual structure are added to prevent overfitting. Simultaneously, as in DenseNet, concat is employed to interconnect dissimilar layers to increase the expressiveness of the first few layers in the network. Figure 3 represents the architecture of the Faster SqueezeNet method.
Faster SqueezeNet comprises a global average pooling layer, 1 BatchNorm layer, 3 block layers, and 4 convolutional layers. Faster SqueezeNet is improved in the following ways:
(1) To further enrich the information flow among layers, DenseNet is imitated and a distinct connection mode is devised. It covers a fire module and a pooling layer, and lastly, two concat layers are interconnected to the following convolution layer.
The present layer receives every feature map of the previous layers; taking $x_0, \ldots, x_{l-1}$ as input, $x_l$ is expressed as
$x_l = H_l([x_0, x_1, \ldots, x_{l-1}]),$
where $[x_0, x_1, \ldots, x_{l-1}]$ represents the concatenation of the feature maps produced in layers $0, 1, \ldots, l-1$ and $H_l(\cdot)$ concatenates more than one input. Here, $x_0$ characterizes the max pooling layer, $x_1$ designates the Fire layers, and $x_l$ indicates the concat layer.
In this way, the performance of the network is improved without excessively raising the number of network parameters, and simultaneously, any two layers of the network can directly transmit data.
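As a concrete illustration of this connection mode, the following minimal NumPy sketch shows how feature maps from different layers can be concatenated along the channel dimension; the tensor shapes are illustrative assumptions, not the actual Faster SqueezeNet dimensions.

import numpy as np

# Feature maps from two earlier layers (batch, height, width, channels); sizes are assumed.
x0 = np.random.rand(1, 28, 28, 16)
x1 = np.random.rand(1, 28, 28, 32)

# H_l([x_0, x_1]) realized as a channel-wise concatenation, as in DenseNet-style connections.
x_l = np.concatenate([x0, x1], axis=-1)
print(x_l.shape)  # (1, 28, 28, 48): the current layer sees the feature maps of both previous layers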
(2) We learned from the ResNet structure and devised constituent elements comprising a fire module and a pooling layer to ensure improved network convergence. Lastly, the outputs of the two layers are summed and connected to the next convolution layer.
In ResNet, the shortcut connection employs identity mapping, which implies that the input of a convolutional stack is added directly to the output of the convolutional stack. Formally, the underlying mapping can be represented as $H(x)$; considering that the stacked non-linear layers fit another mapping $F(x) := H(x) - x$, the original mapping is rewritten as $F(x) + x$. $F(x) + x$ is realized by the structure named the shortcut connection in the encoding process.
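The identity shortcut can be sketched as follows; the two-layer conv_stack below is a toy stand-in for the stacked non-linear layers, assuming matching input and output widths so that x can be added directly.

import numpy as np

def conv_stack(x, w1, w2):
    """A toy stand-in F(x) for the stacked non-linear layers (ReLU after each)."""
    h = np.maximum(0.0, x @ w1)
    return np.maximum(0.0, h @ w2)

def residual_block(x, w1, w2):
    """Shortcut connection: the block learns F(x) = H(x) - x and returns F(x) + x."""
    return conv_stack(x, w1, w2) + x  # identity mapping added to the stack output

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                        # batch of 4 feature vectors of width 8
w1, w2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
print(residual_block(x, w1, w2).shape)             # (4, 8): same width, so x adds directly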
In this work, the hyperparameter tuning of the Faster SqueezeNet method is carried out by employing the AAO algorithm. This algorithm is based on the distinct hunting strategies the Aquila uses for different prey [22]. For faster-moving prey, the Aquila needs to catch the prey in a precise and fast manner, which reflects the global exploration capability of the model. The optimizer technique is characterized by mimicking four hunting behaviors of the Aquila. Firstly, the population is randomly generated between the lower bound (LB) and upper bound (UB) of the problem, as given in Equation (2). The approximate optimal solution at each iteration is defined as the optimum solution. The present set of candidate solutions $X$ is generated at random by using the following expression:
$X = \begin{bmatrix} x_1^1 & \cdots & x_1^D \\ \vdots & \ddots & \vdots \\ x_n^1 & \cdots & x_n^D \end{bmatrix}$
$X_{i,j} = rand \times (UB_j - LB_j) + LB_j, \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, D$
where $n$ signifies the overall number of candidate solutions, $D$ indicates the dimensionality of the problem, and $x_n^D$ represents the location of the $n$-th solution in the $D$-dimensional space. $rand$ denotes a randomly generated value, and $UB_j$ and $LB_j$ signify the $j$-th dimensional upper and lower boundaries of the problem.
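The random initialization above can be written compactly in NumPy; the population size, dimensionality, and hyperparameter bounds in the example are illustrative assumptions.

import numpy as np

def initialize_population(n, dim, lb, ub, seed=None):
    """X[i, j] = rand * (UB_j - LB_j) + LB_j for i = 1..n, j = 1..dim."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    return rng.random((n, dim)) * (ub - lb) + lb

# Example: 20 candidate solutions over two assumed hyperparameters (learning rate, dropout).
X = initialize_population(n=20, dim=2, lb=[1e-4, 0.1], ub=[1e-1, 0.6], seed=1)
print(X.shape)  # (20, 2)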
Initially, the Aquila chooses the search space by soaring high with vertical bends; it hovers above to identify the prey area and rapidly chooses the better prey region as follows:
$X_1(t+1) = X_{best}(t) \times \left(1 - \frac{t}{T}\right) + (X_M(t) - X_{best}(t)) \times rand$
$X_M(t) = \frac{1}{N}\sum_{i=1}^{N} X_i(t), \quad \forall j = 1, 2, \ldots, D$
where $X_1(t+1)$ symbolizes the location of the individual at time $t+1$, $X_{best}(t)$ signifies the present global optimum site at the $t$-th iteration, $T$ and $t$ symbolize the maximal number of iterations and the current iteration, correspondingly, $X_M(t)$ represents the average location of the individuals at the current iteration, and $rand$ represents a randomly generated value within $[0, 1]$ from a Gaussian distribution. The next strategy is a short gliding attack in isometric flight: the Aquila flies over the targeted prey to prepare for the assault after finding the prey region from a higher altitude. This can be formulated as
$X_2(t+1) = X_{best}(t) \times levy(D) + X_R(t) + (y - x) \times rand$
$levy(D) = s \times \frac{u \times \sigma}{|v|^{1/\beta}}$
$\sigma = \frac{\Gamma(1+\beta) \times \sin\left(\frac{\pi\beta}{2}\right)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\left(\frac{\beta-1}{2}\right)}}$
where $X_2(t+1)$ denotes the new solution for the following iteration of $t$, $D$ means the spatial dimension, $levy(D)$ denotes the Lévy flight distribution function, $X_R(t)$ indicates an arbitrary location of the Aquila taken within $[1, N]$, $u$ and $v$ are random values, $s$ takes the value of 1.5, and $y$ and $x$ present the spiral positions in the search region, as follows:
$y = r \times \cos(\theta)$
$x = r \times \sin(\theta)$
$r = r_1 + 0.00565 \times D_1$
$\theta = -0.005 \times D_1 + \frac{3\pi}{2}$
where $r_1$ takes a fixed value between 1 and 20, and $D_1$ denotes integer values from 1 to the length of the search region. The third strategy is a slow-descent attack with low flying: the Aquila locks onto the hunting target in the hunting region and, with the attack ready, makes an initial attack in vertical descent to test the prey’s response. This behavior is given as follows:
$X_3(t+1) = (X_{best}(t) - X_M(t)) \times \alpha - rand + ((UB - LB) \times rand + LB) \times \delta$
where $X_3(t+1)$ denotes the solution for the following iteration of $t$, $\delta$ and $\alpha$ denote the mining adjustment parameters within $(0, 1)$, and $LB$ and $UB$ represent the lower and upper boundaries of the problem. The fourth strategy is grabbing and walking toward the prey: once the Aquila approaches the prey, it starts to attack based on the arbitrary movements of the prey. This behavior can be described as follows:
$X_4(t+1) = QF \times X_{best}(t) - (G_1 \times X(t) \times rand) - G_2 \times levy(D)$
$QF(t) = t^{\frac{2 \times rand - 1}{(1 - T)^2}}$
$G_1 = 2 \times rand - 1$
$G_2 = 2 \times \left(1 - \frac{t}{T}\right)$
where $X_4(t+1)$ denotes the new solution for the following iteration of $t$; $QF$ represents the mass function leveraged for balancing the search process, lying within $(0, 1)$; $G_1$ represents the various strategies utilized by the Aquila to track the escaping prey; $G_2$ signifies the slope value from the initial location to the final location during the chase of the Aquila’s prey, which takes values from 2 to 0; $rand$ denotes a random number within $[0, 1]$ from a Gaussian distribution; and $T$ and $t$ denote the maximal number of iterations and the current iteration, correspondingly. Niche thought comes from biology, in which microhabitats represent the roles or functions of organisms in a specific environment, and organisms with common features are named species. In the AAO algorithm, niche thought is applied through a sharing model that compares the distance among individuals in a habitat. A specific threshold is set to increase the fitness of the individual with the highest fitness, ensuring that this individual’s state is optimal. For the individual with the lowest fitness, a penalty is applied to make it update and search for the optimum value in another region, guaranteeing the diversity of the population across iterations and the attainment of the optimum solution. Here, the distance among individuals of the smallest habitat population is evaluated as follows:
$d_{ij} = |X_i - X_j|$
The data-sharing function between individuals $X_i$ and $X_j$ is given below:
$sh(d_{ij}) = \begin{cases} 1 - \frac{d_{ij}}{\rho}, & d_{ij} < \rho \\ 0, & d_{ij} \geq \rho \end{cases}$
where $\rho$ denotes the radius of data sharing in the microhabitat, and $d_{ij} < \rho$ guarantees that the individuals live in the same microhabitat environment. After sharing the data, the optimum adaptation is adjusted in time, as follows:
$F_i^{best} = F_i \cdot sh(d_{ij}), \quad i = 1, 2, \ldots, N$
where $F_i^{best}$ means the optimum adaptation after sharing, and $F_i$ denotes the original adaptation.
The AAO method derives a fitness function to attain superior classification performance, defining a positive value to symbolize the quality of each candidate solution. The minimization of the classification error rate is treated as the fitness function:
$fitness(x_i) = ClassifierErrorRate(x_i) = \frac{\text{number of misclassified samples}}{\text{total number of samples}} \times 100$
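A minimal sketch of this fitness function is given below; it simply compares predicted and true labels and is the quantity the AAO search would seek to minimize.

def classification_error_rate(y_true, y_pred):
    """fitness(x_i) = (misclassified samples / total samples) * 100, to be minimized by AAO."""
    misclassified = sum(1 for t, p in zip(y_true, y_pred) if t != p)
    return misclassified / len(y_true) * 100.0

# Toy check: 2 of 8 predictions are wrong, so the error rate (fitness) is 25.0.
print(classification_error_rate([0, 1, 1, 0, 1, 0, 1, 1],
                                [0, 1, 0, 0, 1, 0, 1, 0]))  # 25.0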

2.2. Ensemble Learning-Based Classification

In this work, the DL paradigms are integrated, and the best outcome is selected by the weighted voting method. Assuming $D$ base classification models and $n$ classes for voting, the predicted class $c_k$ of weighted voting for every instance is obtained as follows:
$c_k = \arg\max_j \sum_{i=1}^{D} (\Delta_{ji} \times w_i),$
where $\Delta_{ji}$ signifies a binary parameter. If the $i$-th base classifier classifies instance $k$ into the $j$-th class, then $\Delta_{ji} = 1$; otherwise, $\Delta_{ji} = 0$. $w_i$ represents the weight of the $i$-th base classifier in the ensemble. The ensemble accuracy is computed as
$Acc = \frac{\sum_k 1\{c_k \text{ is the true class of instance } k\}}{\text{size of the test set}} \times 100\%.$
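The weighted voting rule can be sketched as follows; the three predictions and the ensemble weights in the example are hypothetical values for illustration, not the trained weights of the RNN, GRU, and BiLSTM members.

import numpy as np

def weighted_vote(predictions, weights, n_classes):
    """c_k = argmax_j sum_i (delta_ji * w_i): each base classifier adds its weight w_i
    to the class it predicts, and the class with the largest total wins."""
    scores = np.zeros(n_classes)
    for pred, w in zip(predictions, weights):
        scores[pred] += w
    return int(np.argmax(scores))

# Hypothetical example: class indices predicted by three base classifiers and assumed weights.
preds = [1, 0, 1]
weights = [0.3, 0.4, 0.3]
print(weighted_vote(preds, weights, n_classes=2))  # 1, since 0.3 + 0.3 > 0.4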

2.2.1. RNN Model

Elman first recommended the recurrent unit as its essential block (1990). When applied to exceedingly long sequences, the elementary RNN cell has the common problems of exploding and vanishing gradients [23], so it cannot hold long-term dependencies, which demonstrates that this cell has shortcomings. The backpropagated gradient tends to shrink once the sequence is particularly long, which prevents effective updating of the weights; conversely, once the gradient is substantial, it might explode across a long sequence, which renders the weight matrix unstable. These two difficulties stem from the intractable nature of the gradient, which makes it difficult for RNN cells to identify and account for long-term relationships. Equations (24) and (25) give the mathematical expression of the RNN architecture.
$h_t = \sigma(P_h \times h_{t-1} + P_x \times x_t + B_a)$
$y_t = \tanh(P_o \times h_t + B_o)$
where $h_t$ denotes the hidden state, the only type of memory in the RNN cell; $P_h$ and $P_x$ epitomize the weight matrices for the hidden state and the input, and $P_o$ the weight matrix for the cell output, correspondingly; $x_t$ and $y_t$ characterize the input and output of the cell at time step $t$, correspondingly; and $B_a$ and $B_o$ represent the bias vectors for the hidden state and the cell output, correspondingly.
The new hidden state is conditioned on the hidden state of the previous time step and the current input. The cellular feedback loop connects the current state to the succeeding one; this bond is crucial for considering prior data while adjusting the present cell state. Here, the hyperbolic tangent function, represented by tanh, activates the overt (output) state, and the sigmoid function, represented by $\sigma$, activates the latent (hidden) state.
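A minimal NumPy sketch of Equations (24) and (25) is shown below; the dimensions and random weights are illustrative, and no training is performed.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rnn_step(x_t, h_prev, P_x, P_h, P_o, B_a, B_o):
    """h_t = sigma(P_h h_{t-1} + P_x x_t + B_a); y_t = tanh(P_o h_t + B_o)."""
    h_t = sigmoid(P_h @ h_prev + P_x @ x_t + B_a)
    y_t = np.tanh(P_o @ h_t + B_o)
    return h_t, y_t

rng = np.random.default_rng(0)
d_in, d_h, d_out = 6, 4, 2                         # assumed input, hidden, and output widths
P_x, P_h = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))
P_o = rng.normal(size=(d_out, d_h))
B_a, B_o = np.zeros(d_h), np.zeros(d_out)
h_t, y_t = rnn_step(rng.normal(size=d_in), np.zeros(d_h), P_x, P_h, P_o, B_a, B_o)
print(h_t.shape, y_t.shape)                        # (4,) (2,)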

2.2.2. GRU Model

The RNN is a kind of ANN with a cyclic structure and is appropriate for processing sequential data. However, its gradient vanishes and its learning ability is greatly reduced once the time interval becomes large [24]. Hochreiter and Schmidhuber resolved these problems and developed the LSTM. The LSTM has been extensively applied to time-series data, and its basic concept is that the cell state is interconnected like a conveyor belt, so the gradient propagates even as the distance between states increases. In LSTM cells, the cell state is controlled by three gating functions: the forget, input, and output gates. In 2014, the GRU was developed as a network that enhanced the learning efficiency of the LSTM by adjusting the LSTM model. Different from the LSTM, the GRU has a fast learning speed and encompasses two gating functions. Furthermore, it has fewer parameters than the LSTM since the hidden and cell states are incorporated into a single hidden state. Accordingly, the GRU shows outstanding performance for long-term dependencies in time-series data processing and takes less computational time compared to the LSTM. The GRU equations that determine the hidden state are shown below:
$r_t = \sigma(W_r x_t + U_r h_{t-1} + b_r)$
$z_t = \sigma(W_z x_t + U_z h_{t-1} + b_z)$
$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tanh(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h)$
In these expressions, $r_t$ denotes the reset gate and $z_t$ the update gate at time $t$; $x_t$ represents the input value at time $t$; $W$ and $U$ indicate weights and $b$ refers to the bias; $h_t$ denotes the hidden state at time $t$; and $\odot$ denotes component-wise (Hadamard) multiplication.
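The three GRU equations above translate directly into the following NumPy sketch; the gate weights are random placeholders and the input and hidden widths are assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, W, U, b):
    """One GRU update: reset gate r_t, update gate z_t, then the blended hidden state h_t."""
    r_t = sigmoid(W["r"] @ x_t + U["r"] @ h_prev + b["r"])
    z_t = sigmoid(W["z"] @ x_t + U["z"] @ h_prev + b["z"])
    h_tilde = np.tanh(W["h"] @ x_t + U["h"] @ (r_t * h_prev) + b["h"])
    return (1.0 - z_t) * h_prev + z_t * h_tilde

rng = np.random.default_rng(1)
d_in, d_h = 6, 4                                   # assumed input and hidden widths
W = {k: rng.normal(size=(d_h, d_in)) for k in "rzh"}
U = {k: rng.normal(size=(d_h, d_h)) for k in "rzh"}
b = {k: np.zeros(d_h) for k in "rzh"}
h_t = gru_step(rng.normal(size=d_in), np.zeros(d_h), W, U, b)
print(h_t.shape)                                   # (4,)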

2.2.3. BiLSTM Model

The RNN has the structural feature of nodes connected in a loop, making it appropriate for sequential data processing; however, it frequently confronts the problem of vanishing gradients [25]. The GRU and long short-term memory (LSTM) improved on the RNN by adding several threshold gates to mitigate the gradient vanishing problem and enhance classification accuracy. In particular, the LSTM has a memory unit that prevents the network from facing gradient vanishing problems.
The LSTM can remedy the deficiencies of the RNN; in general, the output at the present time is relevant not only to the state information of the past but also to the state information of the future. The Bi-LSTM network was established to address the problem of integrating historical and future data by interconnecting two LSTMs. The architecture of the BiLSTM network comprises back-to-front and front-to-back LSTM layers. The forward and backward layers process the input dataset, and lastly, the outputs of the two layers are integrated to obtain the output of the BiLSTM network as follows:
$\overrightarrow{o}_t = g(\omega_1 i_t + \omega_2 \overrightarrow{o}_{t-1}), \quad \overleftarrow{o}_t = g(\omega_3 i_t + \omega_5 \overleftarrow{o}_{t+1}), \quad y_t = f(\omega_4 \overrightarrow{o}_t + \omega_6 \overleftarrow{o}_t)$
In Equation (29), $\omega_1, \ldots, \omega_6$ denote the weight parameters in the BiLSTM network, $i_t$ shows the input at time $t$, $\overrightarrow{o}_t$ indicates the output of the forward hidden layer at time $t$, $\overleftarrow{o}_t$ represents the output of the backward hidden layer at time $t$, and $y_t$ represents the final output of the network.
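For completeness, a BiLSTM classifier head of the kind described above can be assembled in a few lines with Keras; the sequence length, feature width, unit count, and class count below are assumptions for illustration and not the authors’ exact configuration.

import tensorflow as tf

timesteps, n_features, n_classes = 16, 64, 3       # assumed shapes, e.g., three osteosarcoma classes
model = tf.keras.Sequential([
    tf.keras.Input(shape=(timesteps, n_features)),
    # Forward and backward LSTM passes are run and their outputs are combined.
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()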

2.3. Modeling of XAI Using the LIME Approach

The AAOXAI-CD technique incorporates the XAI approach LIME for a better understanding and explainability of the black-box method for accurate cancer detection [26]. Local interpretable model-agnostic explanations (LIME) can explain the predictions of various ML approaches by perturbing the feature values of a data sample and converting the resulting changes into the contribution of each feature to the prediction; the explainer thus gives a local interpretation of the data sample. The interpretable surrogate model in LIME is often linear regression (LR) or a decision tree (DT), trained on small perturbations of the input (removing specific words, hiding part of the image, or adding random noise). Although increasingly sophisticated models tend to perform better on practical datasets, there is a persistent tradeoff between model accuracy and interpretability: performance can be improved by applying techniques such as decision trees, random forests, boosting, and SVMs, which are “black-box” techniques. LIME provides a clear explanation of the decisions of such black-box classifiers. LIME is a way of understanding an ML black-box method by perturbing the input dataset and observing how the prediction changes, and it can be used with any ML black-box model. The fundamental steps are shown as follows (a minimal code sketch is given after the list):
First, a TabularExplainer is initialized with the training data, the feature names, and the class names.
Then, the explain_instance method accepts a reference to the instance for which the explanation is required, the number of features to be included in the explanation, and the trained model’s prediction function.
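The two steps above map onto the public lime library’s tabular API as in the following sketch; a scikit-learn random forest on the built-in breast cancer dataset stands in for the black-box classifier, purely for illustration (for image inputs, lime also provides lime_image.LimeImageExplainer).

from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train any black-box classifier; a random forest is used here as a stand-in.
data = load_breast_cancer()
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

# Step 1: initialize the explainer with the training data, feature names, and class names.
explainer = LimeTabularExplainer(data.data,
                                 feature_names=list(data.feature_names),
                                 class_names=list(data.target_names),
                                 discretize_continuous=True)

# Step 2: explain_instance takes the instance, the model's prediction function,
# and the number of features to include in the local explanation.
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=5)
print(exp.as_list())   # top local feature contributions for this prediction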

3. Results and Discussion

The proposed model is simulated using the Python 3.6.5 tool on a PC with an i5-8600K CPU, GeForce 1050 Ti 4 GB GPU, 16 GB RAM, 250 GB SSD, and 1 TB HDD. The parameter settings are as follows: learning rate 0.01, dropout 0.5, batch size 5, epoch count 50, and ReLU activation. In this section, the simulation values of the AAOXAI-CD technique are tested on two datasets: the colorectal cancer dataset (dataset-1) and the osteosarcoma dataset (dataset-2). Figure 4 shows sample images of colorectal cancer. For experimental validation, 70:30 and 80:20 splits of the training set (TRS) and testing set (TSS) are used. Dataset-1 (Warwick-QU dataset) [27] comprises 165 images, with 91 malignant and 74 benign tumor images. The data were collected using the Zeiss MIRAX MIDI scanner, with image file sizes in the range of 716 to 1187 kilobytes and image resolutions ranging from 567 × 430 pixels to 775 × 522 pixels, with each pixel corresponding to an actual distance of 0.6 µm. Next, dataset-2 [28] contains 1144 images across 3 classes: 536 images in the non-tumor (NT) class, 345 images in the viable tumor (VT) class, and 263 images in the non-viable tumor (NVT) class. Figure 5 shows sample images of osteosarcoma.
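The 80:20 and 70:30 splits can be reproduced, for example, with scikit-learn’s train_test_split; the feature matrix below is a dummy placeholder with dataset-1’s class counts, not the actual extracted features.

import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(165, 512)                 # 165 dummy feature vectors (width assumed)
y = np.array([0] * 74 + [1] * 91)            # 74 benign (0) and 91 malignant (1) labels
X_tr, X_ts, y_tr, y_ts = train_test_split(X, y, test_size=0.2,   # use 0.3 for the 70:30 split
                                          stratify=y, random_state=42)
print(len(X_tr), len(X_ts))                  # 132 33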
Figure 6 presents the cancer classifier outcomes of the AAOXAI-CD method in terms of classification performance on dataset-1. The outcomes demonstrate that the AAOXAI-CD system identified the benign and malignant samples.
Table 1 reports the overall classifier results of the AAOXAI-CD method on dataset-1. The results demonstrate that the AAOXAI-CD method identified the benign and malignant samples. For instance, with 80% of TRS, the AAOXAI-CD technique reaches an average accuracy of 98.65%, precision of 98.33%, recall of 98.65%, specificity of 98.65%, F-score of 98.47%, and MCC of 96.98%. Meanwhile, with 20% of TSS, the AAOXAI-CD system reaches an average accuracy of 97.06%, precision of 97.06%, recall of 97.06%, specificity of 97.06%, F-score of 96.97%, and MCC of 94.12%. Furthermore, with 70% of TRS, the AAOXAI-CD algorithm reaches an average accuracy of 99%, precision of 99.24%, recall of 99%, specificity of 99%, F-score of 99.11%, and MCC of 98.24%.
The training loss (TLOS) and validation loss (VLOS) of the AAOXAI-CD model on dataset-1 are shown in Figure 8. The figure infers that the AAOXAI-CD approach has superior performance with minimal TLOS and VLOS values. Notably, the AAOXAI-CD model attains minimal VLOS outcomes.
Table 2 and Figure 9 present the comparative analysis of the AAOXAI-CD system with recent methods on dataset-1 [29,30,31]. The figures show that the ResNet-18 (60–40), ResNet-50 (60–40), and CP-CNN models result in the least performance. Although the AAI-CCDC technique results in moderately improved outcomes, the AAOXAI-CD technique accomplishes the maximum performance, with a precision of 99.24%, recall of 99%, and accuracy of 99%.
Figure 10 presents the cancer classification outcomes of the AAOXAI-CD system in terms of classification performance on dataset-2. The results demonstrate that the AAOXAI-CD technique identified the different tumor classes.
Table 3 reports the overall classifier results of the AAOXAI-CD system on dataset-2. The results demonstrate that the AAOXAI-CD method identified the different tumor classes. For instance, with 80% of TRS, the AAOXAI-CD technique reaches an average accuracy of 98.11%, precision of 97.60%, recall of 96.77%, specificity of 98.37%, F-score of 97.16%, and MCC of 95.66%. Meanwhile, with 20% of TSS, the AAOXAI-CD algorithm reaches an average accuracy of 99.42%, precision of 99.16%, recall of 98.61%, specificity of 99.49%, F-score of 98.87%, and MCC of 98.44%. Furthermore, with 70% of TRS, the AAOXAI-CD technique reaches an average accuracy of 98.67%, precision of 97.70%, recall of 97.26%, specificity of 99.07%, F-score of 97.42%, and MCC of 96.56%.
The TLOS and VLOS of the AAOXAI-CD model on dataset-2 are shown in Figure 12. The figure infers that the AAOXAI-CD system has better outcomes with minimal TLOS and VLOS values. Visibly, the AAOXAI-CD model attains minimal VLOS outcomes.
Table 4 and Figure 13 show a brief comparison of the AAOXAI-CD method with recent methods on dataset-2 [32,33]. The experimental values show that the CNN-Xception, CNN-EfficientNet, CNN-ResNet-50, and CNN-MobileNet-V2 models result in the least performance. Although the WDODTL-ODC and HBODL-AOC techniques result in moderately improved outcomes, the AAOXAI-CD technique accomplishes the maximum performance, with a precision of 99.05%, recall of 98.91%, and accuracy of 99.42%.
From the above results, it is evident that the proposed model achieves effectual classification performance over other DL models. The enhanced performance of the proposed model is due to the inclusion of AAO-based hyperparameter tuning and ensemble classification processes. In addition, the use of LIME helps to build an effective predictive modeling technique for cancer diagnosis. Without transparency, it is hard to gain the trust of healthcare professionals and to employ predictive approaches in their daily operations. XAI has received considerable interest in recent times; it enables clients to generate instances and comprehend how the classification model accomplishes its results. Healthcare institutions are keenly designing predictive models for supporting operations, and XAI can be combined with them to improve the transparency of healthcare predictive modeling. The interactions between healthcare professionals and the AI system are important for transferring knowledge and adopting models in healthcare operations.

4. Conclusions

In this study, we developed an automated cancer diagnosis method using the AAOXAI-CD technique on medical images. The proposed AAOXAI-CD system targets an effectual colorectal and osteosarcoma cancer classification process. Primarily, the AAOXAI-CD technique utilized the Faster SqueezeNet model for feature vector generation, and the hyperparameter tuning of the Faster SqueezeNet model took place with the AAO algorithm. For cancer classification, the majority-weighted voting ensemble model with three DL classifiers, namely RNN, GRU, and BiLSTM, was applied. Furthermore, the AAOXAI-CD technique incorporates the XAI approach LIME for better understanding and explainability of the black-box method for accurate cancer detection. The experimental evaluation of the AAOXAI-CD approach was tested on medical cancer imaging databases, and the outcomes confirmed the promising performance of the AAOXAI-CD method over other recent methods. In the future, a feature-fusion-based classification model can be designed to boost the performance of the AAOXAI-CD technique.

Author Contributions

Conceptualization, S.A.-K.; Methodology, S.A. and R.F.M.; Validation, F.A. and B.M.E.E.; Formal analysis, R.F.M.; Investigation, S.S.; Resources, A.A.B. and B.M.E.E.; Data curation, F.A. and A.A.B.; Writing—original draft, S.A.-K.; Writing—review & editing, S.A.; Supervision, S.A. and S.A.-K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This article does not contain any studies with human participants performed by any of the authors.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article as no datasets were generated during the current study.

Acknowledgments

Researchers would like to thank the Deanship of Scientific Research, Qassim University for funding publication of this project.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Cordova, C.; Muñoz, R.; Olivares, R.; Minonzio, J.G.; Lozano, C.; Gonzalez, P.; Marchant, I.; González-Arriagada, W.; Olivero, P. HER2 classification in breast cancer cells: A new explainable machine learning application for immunohistochemistry. Oncol. Lett. 2023, 25, 44. [Google Scholar] [CrossRef] [PubMed]
  2. Hauser, K.; Kurz, A.; Haggenmüller, S.; Maron, R.C.; von Kalle, C.; Utikal, J.S.; Meier, F.; Hobelsberger, S.; Gellrich, F.F.; Sergon, M.; et al. Explainable artificial intelligence in skin cancer recognition: A systematic review. Eur. J. Cancer 2022, 167, 54–69. [Google Scholar] [CrossRef] [PubMed]
  3. Farmani, A.; Soroosh, M.; Mozaffari, M.H.; Daghooghi, T. Optical nanosensors for cancer and virus detections. In Nanosensors for Smart Cities; Elsevier: Amsterdam, The Netherlands, 2020; pp. 419–432. [Google Scholar]
  4. Salehnezhad, Z.; Soroosh, M.; Farmani, A. Design and numerical simulation of a sensitive plasmonic-based nanosensor utilizing MoS2 monolayer and graphene. Diam. Relat. Mater. 2023, 131, 109594. [Google Scholar] [CrossRef]
  5. Amoroso, N.; Pomarico, D.; Fanizzi, A.; Didonna, V.; Giotta, F.; La Forgia, D.; Latorre, A.; Monaco, A.; Pantaleo, E.; Petruzzellis, N.; et al. A roadmap towards breast cancer therapies supported by explainable artificial intelligence. Appl. Sci. 2021, 11, 4881. [Google Scholar] [CrossRef]
  6. Eminaga, O.; Loening, A.; Lu, A.; Brooks, J.D.; Rubin, D. Detection of prostate cancer and determination of its significance using explainable artificial intelligence. J. Clin. Oncol. 2020, 38, 5555. [Google Scholar] [CrossRef]
  7. Sakai, A.; Komatsu, M.; Komatsu, R.; Matsuoka, R.; Yasutomi, S.; Dozen, A.; Shozu, K.; Arakaki, T.; Machino, H.; Asada, K.; et al. Medical professional enhancement using explainable artificial intelligence in fetal cardiac ultrasound screening. Biomedicines 2022, 10, 551. [Google Scholar] [CrossRef]
  8. Ragab, M.; Albukhari, A.; Alyami, J.; Mansour, R.F. Ensemble deep-learning-enabled clinical decision support system for breast cancer diagnosis and classification on ultrasound images. Biology 2022, 11, 439. [Google Scholar] [CrossRef]
  9. Escorcia-Gutierrez, J.; Mansour, R.F.; Beleño, K.; Jiménez-Cabas, J.; Pérez, M.; Madera, N.; Velasquez, K. Automated deep learning empowered breast cancer diagnosis using biomedical mammogram images. Comput. Mater. Contin. 2022, 71, 3–4221. [Google Scholar] [CrossRef]
  10. Mansour, R.F.; Alfar, N.M.; Abdel-Khalek, S.; Abdelhaq, M.; Saeed, R.A.; Alsaqour, R. Optimal deep learning based fusion model for biomedical image classification. Expert Syst. 2022, 39, e12764. [Google Scholar] [CrossRef]
  11. Davagdorj, K.; Bae, J.W.; Pham, V.H.; Theera-Umpon, N.; Ryu, K.H. Explainable artificial intelligence based framework for non-communicable diseases prediction. IEEE Access 2021, 9, 123672–123688. [Google Scholar] [CrossRef]
  12. Severn, C.; Suresh, K.; Görg, C.; Choi, Y.S.; Jain, R.; Ghosh, D. A Pipeline for the Implementation and Visualization of Explainable Machine Learning for Medical Imaging Using Radiomics Features. Sensors 2022, 22, 5205. [Google Scholar] [CrossRef] [PubMed]
  13. Pintelas, E.; Liaskos, M.; Livieris, I.E.; Kotsiantis, S.; Pintelas, P. Explainable machine learning framework for image classification problems: Case study on glioma cancer prediction. J. Imaging 2020, 6, 37. [Google Scholar] [CrossRef] [PubMed]
  14. Van der Velden, B.H.; Kuijf, H.J.; Gilhuijs, K.G.; Viergever, M.A. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med. Image Anal. 2022, 79, 102470. [Google Scholar] [CrossRef] [PubMed]
  15. Esmaeili, M.; Vettukattil, R.; Banitalebi, H.; Krogh, N.R.; Geitung, J.T. Explainable artificial intelligence for human-machine interaction in brain tumor localization. J. Pers. Med. 2021, 11, 1213. [Google Scholar] [CrossRef]
  16. Hassan, M.R.; Islam, M.F.; Uddin, M.Z.; Ghoshal, G.; Hassan, M.M.; Huda, S.; Fortino, G. Prostate cancer classification from ultrasound and MRI images using deep learning based Explainable Artificial Intelligence. Future Gener. Comput. Syst. 2022, 127, 462–472. [Google Scholar] [CrossRef]
  17. Kobylińska, K.; Orłowski, T.; Adamek, M.; Biecek, P. Explainable machine learning for lung cancer screening models. Applied Sciences 2022, 12, 1926. [Google Scholar] [CrossRef]
  18. Peng, J.; Zou, K.; Zhou, M.; Teng, Y.; Zhu, X.; Zhang, F.; Xu, J. An explainable artificial intelligence framework for the deterioration risk prediction of hepatitis patients. J. Med. Syst. 2021, 45, 61. [Google Scholar] [CrossRef]
  19. Watson, M.; Al Moubayed, N. Attack-agnostic adversarial detection on medical data using explainable machine learning. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 8180–8187. [Google Scholar]
  20. Idrees, M.; Sohail, A. Explainable machine learning of the breast cancer staging for designing smart biomarker sensors. Sens. Int. 2022, 3, 100202. [Google Scholar] [CrossRef]
  21. Xu, Y.; Yang, G.; Luo, J.; He, J. An Electronic component recognition algorithm based on deep learning with a faster SqueezeNet. Math. Probl. Eng. 2020, 2020, 2940286. [Google Scholar] [CrossRef]
  22. Zhang, Y.; Xu, X.; Zhang, N.; Zhang, K.; Dong, W.; Li, X. Adaptive Aquila Optimizer Combining Niche Thought with Dispersed Chaotic Swarm. Sensors 2023, 23, 755. [Google Scholar] [CrossRef]
  23. Bowes, B.D.; Sadler, J.M.; Morsy, M.M.; Behl, M.; Goodall, J.L. Forecasting groundwater table in a flood prone coastal city with long short-term memory and recurrent neural networks. Water 2019, 11, 1098. [Google Scholar] [CrossRef]
  24. Kim, A.R.; Kim, H.S.; Kang, C.H.; Kim, S.Y. The Design of the 1D CNN–GRU Network Based on the RCS for Classification of Multiclass Missiles. Remote Sens. 2023, 15, 577. [Google Scholar] [CrossRef]
  25. Wang, Q.; Cao, D.; Zhang, S.; Zhou, Y.; Yao, L. The Cable Fault Diagnosis for XLPE Cable Based on 1DCNNs-BiLSTM Network. J. Control. Sci. Eng. 2023, 2023, 1068078. [Google Scholar] [CrossRef]
  26. Zafar, M.R.; Khan, N.M. DLIME: A deterministic local interpretable model-agnostic explanations approach for computer-aided diagnosis systems. arXiv 2019, arXiv:1906.10263. [Google Scholar]
  27. Sirinukunwattana, K.; Snead, D.R.J.; Rajpoot, N.M. A Stochastic Polygons Model for Glandular Structures in Colon Histology Images. IEEE Trans. Med. Imaging 2015, 34, 2366–2378. [Google Scholar] [CrossRef]
  28. Leavey, P.; Sengupta, A.; Rakheja, D.; Daescu, O.; Arunachalam, H.B.; Mishra, R. Osteosarcoma data from UT Southwestern/UT Dallas for Viable and Necrotic Tumor Assessment [Data set]. Cancer Imaging Arch. 2019, 14. [Google Scholar] [CrossRef]
  29. Ragab, M.; Albukhari, A. Automated Artificial Intelligence Empowered Colorectal Cancer Detection and classification Model. Comput. Mater. Contin. 2022, 72, 5577–5591. [Google Scholar] [CrossRef]
  30. Sarwinda, D.; Paradisa, R.H.; Bustamam, A.; Anggia, P. Deep Learning in Image Classification using Residual Network (ResNet) Variants for Detection of Colorectal Cancer. Procedia Comput. Sci. 2021, 179, 423–431. [Google Scholar] [CrossRef]
  31. Sirinukunwattana, K.; Raza, S.E.A.; Tsang, Y.W.; Snead, D.R.; Cree, I.A.; Rajpoot, N.M. Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE Trans. Med. Imaging 2016, 35, 1196–1206. [Google Scholar] [CrossRef]
  32. Vaiyapuri, T.; Jothi, A.; Narayanasamy, K.; Kamatchi, K.; Kadry, S.; Kim, J. Design of a Honey Badger Optimization Algorithm with a Deep Transfer Learning-Based Osteosarcoma Classification Model. Cancers 2022, 14, 6066. [Google Scholar] [CrossRef]
  33. Fakieh, B.; Al-Ghamdi, A.S.A.-M.; Ragab, M. Optimal Deep Stacked Sparse Autoencoder Based Osteosarcoma Detection and Classification Model. Healthcare 2022, 10, 1040. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Structure of XAI.
Figure 2. The overall flow of the AAOXAI-CD approach.
Figure 3. Architecture of Faster SqueezeNet.
Figure 4. Sample images of colorectal cancer.
Figure 5. Sample images of osteosarcoma.
Figure 6. Confusion matrices of the AAOXAI-CD system on dataset-1: (a,b) TRS/TSS of 80:20 and (c,d) TRS/TSS of 70:30.
Figure 7. TACY and VACY analysis of the AAOXAI-CD approach on dataset-1.
Figure 8. TLOS and VLOS analysis of the AAOXAI-CD approach on dataset-1.
Figure 9. Comparative analysis of the AAOXAI-CD approach on dataset-1.
Figure 10. Confusion matrices of the AAOXAI-CD system on dataset-2: (a,b) TRS/TSS of 80:20 and (c,d) TRS/TSS of 70:30.
Figure 11. TACY and VACY analysis of the AAOXAI-CD approach on dataset-2.
Figure 12. TLOS and VLOS analysis of the AAOXAI-CD method on dataset-2.
Figure 13. Comparative analysis of the AAOXAI-CD approach on dataset-2.
Table 1. Classifier outcome of the AAOXAI-CD approach on dataset-1.

Classes   | Accuracy | Precision | Recall | Specificity | F-Score | MCC
Training Phase (80%)
Benign    | 100.00   | 96.67     | 100.00 | 97.30       | 98.31   | 96.98
Malignant | 97.30    | 100.00    | 97.30  | 100.00      | 98.63   | 96.98
Average   | 98.65    | 98.33     | 98.65  | 98.65       | 98.47   | 96.98
Testing Phase (20%)
Benign    | 100.00   | 94.12     | 100.00 | 94.12       | 96.97   | 94.12
Malignant | 94.12    | 100.00    | 94.12  | 100.00      | 96.97   | 94.12
Average   | 97.06    | 97.06     | 97.06  | 97.06       | 96.97   | 94.12
Training Phase (70%)
Benign    | 98.00    | 100.00    | 98.00  | 100.00      | 98.99   | 98.24
Malignant | 100.00   | 98.48     | 100.00 | 98.00       | 99.24   | 98.24
Average   | 99.00    | 99.24     | 99.00  | 99.00       | 99.11   | 98.24
Testing Phase (30%)
Benign    | 95.83    | 100.00    | 95.83  | 100.00      | 97.87   | 96.06
Malignant | 100.00   | 96.30     | 100.00 | 95.83       | 98.11   | 96.06
Average   | 97.92    | 98.15     | 97.92  | 97.92       | 97.99   | 96.06
The training accuracy (TACY) and validation accuracy (VACY) of the AAOXAI-CD model on dataset-1 are shown in Figure 7. The figure exhibits that the AAOXAI-CD method has improved performance with increasing values of TACY and VACY. Visibly, the AAOXAI-CD model has maximum TACY outcomes.
Table 2. Analysis outcome of the AAOXAI-CD method with other systems on dataset-1.

Methods            | Precision | Recall | Accuracy
ResNet-18 (60–40)  | 82.00     | 63.00  | 72.00
ResNet-18 (80–20)  | 86.00     | 82.00  | 84.00
ResNet-50 (60–40)  | 91.00     | 59.00  | 76.00
ResNet-50 (80–20)  | 82.00     | 92.00  | 87.00
SC-CNN Model       | 80.00     | 82.00  | 81.00
CP-CNN Model       | 71.00     | 68.00  | 69.00
AAI-CCDC Model     | 96.00     | 98.00  | 97.00
AAOXAI-CD          | 99.24     | 99.00  | 99.00
Table 3. Classifier outcome of the AAOXAI-CD approach on dataset-2.

Classes | Accuracy | Precision | Recall | Specificity | F-Score | MCC
Training Phase (80%)
VT      | 98.69    | 98.94     | 96.88  | 99.52       | 97.89   | 96.95
NVT     | 98.47    | 98.55     | 94.88  | 99.57       | 96.68   | 95.72
NT      | 97.16    | 95.31     | 98.54  | 96.02       | 96.90   | 94.32
Average | 98.11    | 97.60     | 96.77  | 98.37       | 97.16   | 95.66
Testing Phase (20%)
VT      | 99.56    | 98.28     | 100.00 | 99.42       | 99.13   | 98.85
NVT     | 99.13    | 100.00    | 95.83  | 100.00      | 97.87   | 97.36
NT      | 99.56    | 99.20     | 100.00 | 99.05       | 99.60   | 99.12
Average | 99.42    | 99.16     | 98.61  | 99.49       | 98.87   | 98.44
Training Phase (70%)
VT      | 98.12    | 94.24     | 99.57  | 97.54       | 96.83   | 95.57
NVT     | 98.00    | 98.85     | 92.47  | 99.67       | 95.56   | 94.35
NT      | 99.88    | 100.00    | 99.74  | 100.00      | 99.87   | 99.75
Average | 98.67    | 97.70     | 97.26  | 99.07       | 97.42   | 96.56
Testing Phase (30%)
VT      | 99.71    | 99.14     | 100.00 | 99.56       | 99.57   | 99.35
NVT     | 99.13    | 98.68     | 97.40  | 99.63       | 98.04   | 97.48
NT      | 99.42    | 99.34     | 99.34  | 99.48       | 99.34   | 98.82
Average | 99.42    | 99.05     | 98.91  | 99.56       | 98.98   | 98.55
The TACY and VACY of the AAOXAI-CD model on dataset-2 are shown in Figure 11. The figure highlights that the AAOXAI-CD method improves its performance with increasing values of TACY and VACY. Remarkably, the AAOXAI-CD model has higher TACY outcomes.
Table 4. Comparative analysis of the AAOXAI-CD approach with other systems on dataset-2.

Methods          | Precision | Recall | Accuracy
AAOXAI-CD        | 99.05     | 98.91  | 99.42
HBODL-AOC        | 98.94     | 98.12  | 98.43
WDODTL-ODC       | 98.76     | 97.65  | 98.17
CNN-EfficientNet | 97.00     | 97.00  | 97.00
CNN-Xception     | 94.00     | 96.00  | 96.00
CNN-ResNet-50    | 98.00     | 94.00  | 97.00
CNN-MobileNet-V2 | 98.00     | 98.00  | 98.00
