Article

Dung Beetle Optimization with Deep Feature Fusion Model for Lung Cancer Detection and Classification

by Mohammad Alamgeer 1,*, Nuha Alruwais 2, Haya Mesfer Alshahrani 3, Abdullah Mohamed 4 and Mohammed Assiri 5

1 Department of Information Systems, College of Science & Art at Mahayil, King Khalid University, Abha 61421, Saudi Arabia
2 Department of Computer Science and Engineering, College of Applied Studies and Community Services, King Saud University, P.O. Box 22459, Riyadh 11495, Saudi Arabia
3 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
4 Research Centre, Future University in Egypt, New Cairo 11845, Egypt
5 Department of Computer Science, College of Sciences and Humanities-Aflaj, Prince Sattam bin Abdulaziz University, Aflaj 16273, Saudi Arabia
* Author to whom correspondence should be addressed.
Cancers 2023, 15(15), 3982; https://doi.org/10.3390/cancers15153982
Submission received: 25 May 2023 / Revised: 27 July 2023 / Accepted: 31 July 2023 / Published: 5 August 2023
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers)

Simple Summary

Medical imaging can be vital in primary-stage lung tumor analysis and in monitoring lung tumors during treatment. Many medical imaging modalities, such as computed tomography (CT), chest X-ray (CXR), molecular imaging, magnetic resonance imaging (MRI), and positron emission tomography (PET), are widely used for lung cancer detection. This article presents a new dung beetle optimization modified deep feature fusion model for lung cancer detection and classification (DBOMDFF-LCC) technique.

Abstract

Lung cancer is the leading cause of cancer deaths worldwide. An important reason for these deaths is late diagnosis and poor prognosis. With the accelerated improvement of deep learning (DL) approaches, DL can be effectively and widely applied to several real-world applications in healthcare systems, such as medical image interpretation and disease analysis. Medical imaging can be vital in primary-stage lung tumor analysis and in monitoring lung tumors during treatment. Many medical imaging modalities, such as computed tomography (CT), chest X-ray (CXR), molecular imaging, magnetic resonance imaging (MRI), and positron emission tomography (PET), are widely used for lung cancer detection. This article presents a new dung beetle optimization modified deep feature fusion model for lung cancer detection and classification (DBOMDFF-LCC) technique. The presented DBOMDFF-LCC technique mainly depends upon feature fusion and hyperparameter tuning. To accomplish this, the DBOMDFF-LCC technique uses a feature fusion process comprising three DL models, namely residual network (ResNet), densely connected network (DenseNet), and Inception-ResNet-v2. Furthermore, the DBO approach is employed for the optimal hyperparameter selection of the three DL approaches. For lung cancer detection, the DBOMDFF-LCC system utilizes a long short-term memory (LSTM) approach. The simulation results of the DBOMDFF-LCC technique on a medical dataset are investigated using different evaluation metrics. The extensive comparative results highlight the superiority of the DBOMDFF-LCC technique for lung cancer classification.

1. Introduction

Over the last few decades, lung cancer has been a major cause of mortality. One of the common symptoms of lung tumors is coughing, which requires special consideration because most patients who have a cough are smokers, the main group affected by chronic obstructive pulmonary disease, which itself causes coughing [1,2]. Thoracic computed tomography (CT) and chest X-rays (CXRs) are two common techniques for the diagnosis of lung tumors. Positron emission tomography (PET) and magnetic resonance imaging (MRI) can be utilized during staging to assess how far the cancer has spread, while CT and CXR help determine the best therapeutic management [3]. Biopsy and bronchoscopy are necessary to provide information on the histological type and to confirm the diagnosis of lung tumors [4,5]. In earlier investigations, the occurrence of a benign tumor after nodule discovery and a diagnostic operation was found to be as high as 40%, which highlights the importance of rigorous nodule screening before further invasive treatment to avoid unwanted complications or loss of pulmonary capacity and to limit surgical risk [6].
Specific characteristics should be measured and recognized to identify malignant nodules [7,8]. Cancer probability can be assessed using the recognized features and their fusion. However, this task can be highly challenging, even for medical experts, because nodule presence and a positive cancer diagnosis are not simply interrelated [9]. A computer-aided diagnosis (CAD) approach uses previously analyzed features that are in some way associated with cancer suspicion, such as shape, sphericity, volume, subtlety, spiculation, and solidity. Such approaches use machine learning (ML) systems such as support vector machines (SVMs) to categorize the nodules as benign or malignant [10,11]. Although several studies use similar ML algorithms, the problem with this method is that, for the system to perform well, various parameters must be input on an individual basis for each case, making it hard to reproduce proficient outcomes [12]. In addition, this makes the approach prone to variability among dissimilar screening parameters and different CT scans. The benefit of utilizing deep learning (DL) in CAD systems is that it can implement end-to-end recognition by learning the important features during model training [13,14]. This enables the network to work effectively when there is variation, as it captures nodule features in CT scans acquired with different parameters [15]. Once trained, the network is capable of generalizing its learning and identifying malignant nodules in new cases.
This article presents a new dung beetle optimization modified deep feature fusion model for lung cancer detection and classification (DBOMDFF-LCC) technique. The presented DBOMDFF-LCC technique mainly depends upon feature fusion and hyperparameter tuning. To accomplish this, the DBOMDFF-LCC technique uses a feature fusion process comprising three DL models, namely residual network (ResNet), densely connected network (DenseNet), and Inception-ResNet-v2. Additionally, the DBO algorithm is employed for the optimal hyperparameter selection of the three DL approaches. For lung cancer detection, the DBOMDFF-LCC system utilizes a long short-term memory (LSTM) network. The simulation results of the DBOMDFF-LCC technique on a medical dataset are investigated using different evaluation metrics.

2. Related Works

Dhivya and Sharmila [16] proposed a multimodal method named the Ensemble Deep Lung Disease Predictor (EDepLDP) architecture and developed a reliable solution for the quick recognition of different diseases using CXR and CT scans. First, the collected images are segmented using a U-Net architecture to obtain enhanced lung regions of interest (ROIs). Next, Xception and InceptionResNetV2 are used to hierarchically extract informative features from the segmented CXR scans. Yu et al. [17] developed a paediatric fine-grained diagnosis-assistant system to give precise and prompt diagnoses. This model has two phases: a test result structurization stage and a disease identification stage. The first phase structuralizes the test outcomes by extracting numeric values from medical records, and the disease detection phase offers a diagnosis based on the text-form medical records and the structured information obtained in the first step. Agarwal et al. [18] developed a DL-based multilayer multimodal fusion system which emphasizes extracting the features of various layers and their combination. The disease detection method considers discriminative information from all the layers.
Behrad and Abadeh [19] reviewed common multimodal fusion approaches and DL models. The authors also explained learning strategies such as end-to-end learning, multitask learning, and transfer learning, and then provided a summary of DL methods for multimodal medical data analysis. Ullah et al. [20] developed a robust DL model for anatomical structure segmentation in chest radiographs that exploits a dual encoder-decoder CNN. The pretrained encoder output is passed through squeeze-and-excitation (SE) blocks to increase the representational power of the network. Wang et al. [21] developed and evaluated the efficiency of a DL architecture (3D-ResNet) based on CT scans to differentiate nontuberculous mycobacterial lung disease (NTM-LD) from Mycobacterium tuberculosis lung disease (MTB-LD).
Akbulut [22] introduced a robust mechanism based on a new customized DL algorithm (ACL) that trains LSTM and attention models synchronously with a CNN model. The significant traces and stains in the CXR images are highlighted with the marker-controlled watershed (MCW) segmentation method. Moreover, the contribution of the strategy used in the presented method to classification accuracy was thoroughly assessed. Chouhan et al. [23] suggested a novel DL architecture for the diagnosis of pneumonia utilizing transfer learning (TL). The authors developed an ensemble module which integrates the outputs of all pretrained models and outperforms the individual models, obtaining a remarkable performance in pneumonia detection.
Özbey et al. [24] presented a new approach based on adversarial diffusion modeling, SynDiff, to enhance the efficiency of medical image translation. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process which gradually maps noise and source images onto the target image. Dalmaz et al. [25] proposed a novel generative adversarial approach for medical image synthesis, ResViT, that leverages the contextual sensitivity of vision transformers together with the precision of convolution operators and the realism of adversarial learning. The ResViT generator utilizes a central bottleneck containing a novel aggregated residual transformer (ART) block which synergistically integrates residual convolution and transformer elements. Yurt et al. [26] examined a multi-stream system which aggregates information across several source images using a mixture of multiple one-to-one streams and a joint many-to-one stream. The complementary mapping features created in the one-to-one streams and the shared mapping features created in the many-to-one stream are integrated with a fusion block.

3. The Proposed Model

An automated lung cancer detection tool named the DBOMDFF-LCC system is established in this study. The proposed DBOMDFF-LCC system is based on feature fusion and hyperparameter tuning. The DBOMDFF-LCC technique comprises a three-stage process, namely feature fusion, DBO-based hyperparameter selection, and LSTM classification. Figure 1 demonstrates the overall flow of the DBOMDFF-LCC system.

3.1. Feature Fusion Process

Primarily, the DBOMDFF-LCC technique uses a feature fusion process comprising three DL models, namely ResNet, DenseNet, and Inception-ResNet-v2. Entropy-based feature fusion is a procedure which integrates features from distinct sources or modalities into a single feature representation using an entropy criterion. The purpose is to capture the complementary information in the individual features and improve the overall discriminative power of the fused representation; a minimal sketch is given below.
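The following is an illustrative sketch of entropy-weighted fusion, assuming the feature vectors from the three backbones are weighted by the Shannon entropy of their normalized activations and then concatenated; the paper does not publish its exact fusion code, so the weighting scheme and feature dimensions here are assumptions.

```python
import numpy as np

def shannon_entropy(v, eps=1e-12):
    """Entropy of a feature vector treated as a probability distribution."""
    p = np.abs(v) / (np.abs(v).sum() + eps)
    return -np.sum(p * np.log2(p + eps))

def entropy_fuse(features):
    """features: list of 1D arrays, e.g., from ResNet, DenseNet, Inception-ResNet-v2."""
    weights = np.array([shannon_entropy(f) for f in features])
    weights /= weights.sum()                     # normalize weights to sum to 1
    return np.concatenate([w * f for w, f in zip(weights, features)])

# Example: fuse dummy 512-, 1920-, and 1536-dim feature vectors.
fused = entropy_fuse([np.random.rand(512), np.random.rand(1920), np.random.rand(1536)])
print(fused.shape)  # (3968,)
```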

3.1.1. ResNet

The ResNet18 model consists of five convolutional structures, an activation function (Softmax) layer, and a fully connected layer [27]. The first Conv structure comprises an activation layer, a 1D Conv layer, and a BN layer. The parameters of this structure are as follows: the activation function of the activation layer is ReLU, the number of Conv kernels in the 1D Conv layer is 64, the Conv kernel size is 7, and the padding mode keeps the feature size unchanged. The second to fifth Conv structures have a similar form: each includes a feature-mapping block and a Conv block; however, the number of Conv kernels differs per block. The numbers of Conv kernels of the second to fifth Conv structures are 64, 128, 256, and 512, respectively.

There are eight layers in each Conv block: the BN layer, the activation layer, the 1D Conv layer, the 1D short-circuit (shortcut) linking layer, the 1D Conv layer, the activation layer, and the feature fusion layer. The block parameters are as follows: ReLU is used as the activation function of the activation layers, the Conv kernel size of the 1D Conv layers is fixed at three, and the padding mode keeps the feature size unchanged. The feature-mapping block and the Conv block share a similar structure; they differ only in that the 1D short-circuit linking layer is altered into a feature-mapping layer.
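Below is a minimal PyTorch sketch of a 1D residual (Conv) block matching this description: BN, ReLU, and a kernel-3 same-padding 1D convolution applied twice, plus a shortcut connection fused by addition. The exact layer ordering is an illustrative reading of the text, not the authors' released code.

```python
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.conv1(self.relu(self.bn1(x)))
        out = self.conv2(self.relu(self.bn2(out)))
        return self.relu(out + x)      # shortcut: feature fusion by addition

x = torch.randn(8, 64, 128)           # (batch, channels, length)
print(ResBlock1D(64)(x).shape)        # torch.Size([8, 64, 128])
```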

3.1.2. DenseNet

The DenseNet201 structure is pretrained on the ImageNet database and contains three transition layers, four dense blocks, max-pooling, and convolutional layers [28]. Each preceding layer is directly connected to the subsequent layers of the network, which allows the feature maps of the preceding layers to be concatenated with those of later layers, enhances the information flow among the layers, and permits the model to effectively extract and capture the relevant features:

f_l = H_l([f_0, f_1, …, f_{l−1}])    (1)

In Equation (1), l indexes the layer and [f_0, f_1, …, f_{l−1}] represents the concatenation of the feature maps of all preceding layers. H_l signifies the composite function, which contains a 3 × 3 convolution, BN, and ReLU activation. A bottleneck layer is added to the model for adjusting the dimensionality of the feature maps; its objective is to diminish the number of input features and thus the network's computational cost. A transition layer is inserted after every dense block except the final one to halve the size of the feature maps; it carries out a 1 × 1 convolution followed by 2 × 2 average pooling. The ability of every layer to add new information to the network's combined knowledge is determined by the growth rate, which is kept small.
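A minimal sketch of the dense connectivity in Equation (1) follows: each layer's composite function H_l (BN, ReLU, 3 × 3 convolution) is applied to the concatenation of all preceding feature maps. The channel sizes and growth rate here are illustrative, not DenseNet201's exact configuration.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.h = nn.Sequential(                      # composite function H_l
            nn.BatchNorm2d(in_channels),
            nn.ReLU(),
            nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1),
        )

    def forward(self, features):                     # features: [f_0, ..., f_{l-1}]
        return self.h(torch.cat(features, dim=1))    # concatenate, then apply H_l

f0 = torch.randn(2, 64, 28, 28)
layer1 = DenseLayer(64, growth_rate=32)
f1 = layer1([f0])
layer2 = DenseLayer(64 + 32, growth_rate=32)
f2 = layer2([f0, f1])                                # uses all preceding maps
print(f2.shape)                                      # torch.Size([2, 32, 28, 28])
```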

3.1.3. Inception-ResNet-v2

In the Inception-ResNet-v2 model, the pretrained topmost layer is removed, since it is specific to the original training task [29]. This model utilizes the design decisions of the Inception model combined with residual connections. No further preprocessing is conducted: the image is resized to 244 × 244, the input size for the DCNN, and the pixel intensities are rescaled to [0, 1]. The resizing of images does not affect the shape of the cellular structure or the accuracy, and it reduces the computational cost. The new topmost layers consist of a global average pooling layer, an FC layer of 256 neurons (with ReLU activation), and, lastly, the output neurons that perform the classification (with Softmax activation). In the first stage, only the FC layers are trained. In the second stage, the DCNN is retrained with the new top layers, fine-tuning the weights of the pretrained network layers. It is not uncommon to freeze the weights of the bottom layers (to avoid over-fitting issues) and only fine-tune the high-level features; the most generic features (blobs and edges) are thus retained.
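The two-stage recipe can be sketched as follows. The framework and the timm model name are assumptions, as the paper does not state its implementation; stage 1 trains only the new head (GAP, FC-256 with ReLU, class scores), and stage 2 fine-tunes the pretrained layers, optionally keeping the bottom layers frozen.

```python
import timm
import torch.nn as nn

backbone = timm.create_model("inception_resnet_v2", pretrained=True,
                             num_classes=0, global_pool="avg")  # drop original top
head = nn.Sequential(
    nn.Linear(backbone.num_features, 256), nn.ReLU(),
    nn.Linear(256, 3),              # class scores; Softmax is applied in the loss
)
model = nn.Sequential(backbone, head)

# Stage 1: freeze the backbone, train only the head.
for p in backbone.parameters():
    p.requires_grad = False

# Stage 2: unfreeze (all or only the top blocks of) the backbone and fine-tune
# with a small learning rate; keeping the bottom layers frozen retains the most
# generic features (edges and blobs) and limits over-fitting.
for p in backbone.parameters():
    p.requires_grad = True
```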

3.2. Hyperparameter Tuning Process

In this work, the DBO algorithm is employed for the optimal hyperparameter selection of the three DL approaches. DBO is a recent swarm intelligence (SI) method based on dung beetle (DB) behaviors, namely dancing, ball rolling, stealing, breeding, and foraging, and the DBO method includes four optimization operators: ball rolling, breeding, foraging, and stealing [30]. Unobstructed and obstructed modes are the two rolling behaviors of a DB.

3.2.1. Obstacle-Free Mode

When moving forward without obstacles, the DB uses the sun to orient itself while rolling the dung ball. In the DBO algorithm, as the light intensity changes, the location of the DB is updated as follows:

x_i^{t+1} = x_i^t + α × k × x_i^{t−1} + b × |x_i^t − x_worst^t|    (2)

In Equation (2), t denotes the current iteration number, k ∈ (0, 0.2] represents a fixed parameter signifying the deflection coefficient, and x_i^t represents the position of the i-th DB of the population at the t-th iteration. b denotes a constant within (0, 1), and α is the natural coefficient taking the value −1 or 1, with −1 representing a deviation from the original direction and 1 signifying no deviation. x_worst^t denotes the worst position in the current population, and the change in light intensity is simulated using |x_i^t − x_worst^t|.
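A minimal numpy sketch of the rolling update in Equation (2); the parameter values are assumptions within the stated ranges.

```python
import numpy as np

def roll(x_t, x_prev, x_worst, k=0.1, b=0.3):
    """Obstacle-free rolling update, Eq. (2)."""
    alpha = 1 if np.random.rand() > 0.1 else -1   # natural coefficient in {-1, 1}
    return x_t + alpha * k * x_prev + b * np.abs(x_t - x_worst)

x_new = roll(np.array([0.4, 0.7]), np.array([0.3, 0.6]), np.array([0.9, 0.1]))
```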

3.2.2. Barrier Mode

When the DB meets an obstacle which prevents it from moving forward, it dances to reorient itself and find a new route. A tangent function is used to simulate the dancing behavior and obtain a new rolling direction, with the angle θ drawn from the range [0, π]; the beetle continues rolling the dung ball as soon as it finds a new direction. The formula for updating the location is:

x_i^{t+1} = x_i^t + tan(θ) × |x_i^t − x_i^{t−1}|    (3)

If θ = 0, π/2, or π, the location of the DB does not change.
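A sketch of the dancing update in Equation (3); θ values of 0, π/2, or π leave the position unchanged, as stated above.

```python
import numpy as np

def dance(x_t, x_prev):
    """Barrier-mode dancing update, Eq. (3)."""
    theta = np.random.uniform(0, np.pi)
    if np.isclose(theta, [0.0, np.pi / 2, np.pi]).any():
        return x_t                                    # no position change
    return x_t + np.tan(theta) * np.abs(x_t - x_prev)
```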
The female DB rolls the dung ball to a safe region for laying eggs and hides it to provide a proper habitat for the progeny. The study presents a boundary selection strategy for modelling the brood-ball position of a female DB:

L_f^* = max(x_gbest^t × (1 − R), L_f)
U_f^* = min(x_gbest^t × (1 + R), U_f)    (4)

In Equation (4), L_f and U_f are the lower and upper boundaries of the optimization problem, respectively, R = 1 − t/T_max, and T_max denotes the maximum number of iterations. The current population attains its global optimum at x_gbest^t. L_f^* and U_f^* define the lower and upper boundaries of the spawning region, which implies that the spawning position of the DB is adjusted dynamically with the iteration count.
After a female DB finds the spawning region, she lays her eggs in it. Since the region is adjusted dynamically with the iteration count, the location of the brood balls is also dynamic over the iterations:

B_i^{t+1} = x_gbest^t + b_1 × (B_i^t − L_f^*) + b_2 × (B_i^t − U_f^*)    (5)

In Equation (5), B_i^{t+1} denotes the position of the i-th brood ball at the t-th iteration, and D denotes the number of parameters in the optimization problem. b_1 and b_2 represent two independent random vectors with D components, and the location of the brood balls is restricted to the spawning region.
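A sketch of the dynamic spawning region (Equation (4)) and brood-ball update (Equation (5)): the bounds shrink toward the global best as t grows, and the brood ball is clipped back into the region.

```python
import numpy as np

def spawn(B_t, x_gbest, t, T_max, L, U):
    """Brood-ball update, Eqs. (4)-(5)."""
    R = 1 - t / T_max
    Lf = np.maximum(x_gbest * (1 - R), L)                 # Eq. (4), lower bound
    Uf = np.minimum(x_gbest * (1 + R), U)                 # Eq. (4), upper bound
    b1, b2 = np.random.rand(*B_t.shape), np.random.rand(*B_t.shape)
    B_new = x_gbest + b1 * (B_t - Lf) + b2 * (B_t - Uf)   # Eq. (5)
    return np.clip(B_new, Lf, Uf)                         # restrict to spawning region
```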
These behaviors mainly concern the small DBs. Some mature DBs emerge from the ground to look for food, and the optimal foraging region for the small DBs is updated dynamically:

L_f^l = max(x_lbest^t × (1 − R), L_f)
U_f^l = min(x_lbest^t × (1 + R), U_f)    (6)

In Equation (6), R is defined as before, and x_lbest^t signifies the local optimal location of the present population. L_f^l and U_f^l determine the lower and upper bounds of the foraging area of the small DBs, respectively. The position update is given below:

x_i^{t+1} = x_i^t + C_1 × (x_i^t − L_f^l) + C_2 × (x_i^t − U_f^l)    (7)

In Equation (7), C_1 is a random number drawn from the standard normal distribution, C_1 ~ N(0, 1), and C_2 is a random vector within (0, 1) of size 1 × D.
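A sketch of the foraging update (Equations (6)-(7)); the foraging region is centered on the local best and shrinks over the iterations.

```python
import numpy as np

def forage(x_t, x_lbest, t, T_max, L, U):
    """Small-DB foraging update, Eqs. (6)-(7)."""
    R = 1 - t / T_max
    Lfl = np.maximum(x_lbest * (1 - R), L)            # Eq. (6), lower bound
    Ufl = np.minimum(x_lbest * (1 + R), U)            # Eq. (6), upper bound
    C1 = np.random.randn()                            # C1 ~ N(0, 1)
    C2 = np.random.rand(*x_t.shape)                   # C2 in (0, 1)^D
    return x_t + C1 * (x_t - Lfl) + C2 * (x_t - Ufl)  # Eq. (7)
```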
Finally, some DBs, called thieves, steal dung balls from other individuals. The position of a thieving DB is updated as follows:

x_i^{t+1} = x_lbest^t + S × g × (|x_i^t − x_gbest^t| + |x_i^t − x_lbest^t|)    (8)

In Equation (8), S indicates a constant value and g denotes a randomly selected vector of dimension D that obeys the standard normal distribution.
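A sketch of the stealing update in Equation (8); the value of S is an assumption.

```python
import numpy as np

def steal(x_t, x_gbest, x_lbest, S=0.5):
    """Thief-DB update, Eq. (8)."""
    g = np.random.randn(*x_t.shape)   # standard-normal vector of dimension D
    return x_lbest + S * g * (np.abs(x_t - x_gbest) + np.abs(x_t - x_lbest))
```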
The DBO algorithm derives a fitness function (FF) to accomplish better classification results. It returns a positive value representing the quality of a candidate solution. In this study, minimization of the classification error rate is taken as the FF, as depicted in Equation (9):

fitness(x_i) = Classifier Error Rate(x_i) = (no. of misclassified instances / total no. of instances) × 100    (9)
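A sketch of the fitness evaluation in Equation (9): a candidate hyperparameter vector is scored by the resulting classification error rate on held-out data (lower is better). `train_and_predict` is a hypothetical helper standing in for training the fused model with the candidate hyperparameters.

```python
import numpy as np

def fitness(candidate, X_val, y_val, train_and_predict):
    """Eq. (9): percentage of misclassified validation instances."""
    y_pred = train_and_predict(candidate, X_val)   # hypothetical training helper
    return np.mean(y_pred != y_val) * 100          # classifier error rate (%)
```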

3.3. Lung Cancer Detection Process

To detect and classify lung cancer, the fused feature vectors are passed into the LSTM network [31]. In a standard RNN, as the time interval grows, the recurrent hidden-state gradient approaches zero. This leads to the vanishing gradient problem encountered when applying RNNs to long-term sequence modeling. In an LSTM, the memory cell has a node with a recurrent edge of fixed weight, which guarantees that the gradient survives many time steps without vanishing. The multiplicative gates allow the model to store information over long periods, thus removing the vanishing gradient problem usually observed in traditional NN models.
Assume the input sequence is denoted as x = (x_1, x_2, x_3, …, x_t) and the output sequence as y = (y_1, y_2, y_3, …, y_τ), where τ denotes the forecast horizon. The LSTM computes the forecast for the next time step automatically from the prior data, without predefining the lag observations to use:
i_t = σ(W_xi x_t + W_hi h_{t−1} + W_ci c_{t−1} + b_i)
f_t = σ(W_xf x_t + W_hf h_{t−1} + W_cf c_{t−1} + b_f)
c_t = f_t c_{t−1} + i_t g(W_xc x_t + W_hc h_{t−1} + b_c)
o_t = σ(W_xo x_t + W_ho h_{t−1} + W_co c_t + b_o)
h_t = o_t h(c_t)
where σ represents the standard logistic sigmoid function and W and b denote the weight matrices and bias vectors, respectively, defined as:
σ(x) = 1/(1 + e^{−x})
g(x) = 4/(1 + e^{−x}) − 2
h(x) = 2/(1 + e^{−x}) − 1
where i, f, o, and c indicate the input gate, forget gate, output gate, and cell activation vectors, respectively, and g(·) and h(·) denote the centered sigmoid transformations defined above. These features make the LSTM an accurate and reliable method for lung cancer detection.
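A minimal PyTorch sketch of the LSTM classification head follows. The fused feature vector is treated as a length-1 sequence and the final hidden state feeds a linear classifier over the three classes; the input arrangement and hidden size are assumptions, as the paper does not specify them.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, feat_dim, hidden=128, num_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, fused):                        # fused: (batch, feat_dim)
        _, (h_n, _) = self.lstm(fused.unsqueeze(1))  # add a length-1 sequence axis
        return self.fc(h_n[-1])                      # class logits

logits = LSTMClassifier(feat_dim=3968)(torch.randn(8, 3968))
print(logits.shape)                                  # torch.Size([8, 3])
```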

4. Experimental Validation

In this section, the results of the DBOMDFF-LCC approach are examined on the lungdb database [32], comprising 100 samples and 3 classes, as demonstrated in Table 1. Figure 2 shows sample images. For experimental validation, 80:20 and 70:30 training/testing splits are used.
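The evaluation protocol can be sketched as below: a stratified 80:20 (or 70:30) train/test split, and per-class accuracy, precision, sensitivity, specificity, and F-score computed from the confusion matrix. The feature and label arrays here are placeholders, not the lungdb data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X, y = np.random.rand(100, 3968), np.random.randint(0, 3, 100)  # placeholders
X_tr, X_ts, y_tr, y_ts = train_test_split(X, y, test_size=0.20, stratify=y)

def per_class_metrics(y_true, y_pred, cls):
    cm = confusion_matrix(y_true, y_pred)
    tp = cm[cls, cls]
    fn = cm[cls].sum() - tp
    fp = cm[:, cls].sum() - tp
    tn = cm.sum() - tp - fn - fp
    acc = (tp + tn) / cm.sum()            # per-class accuracy
    prec = tp / (tp + fp)                 # precision
    sens = tp / (tp + fn)                 # sensitivity (recall)
    spec = tn / (tn + fp)                 # specificity
    f1 = 2 * prec * sens / (prec + sens)  # F-score
    return acc, prec, sens, spec, f1

print(per_class_metrics(y_ts, y_ts, cls=0))  # perfect predictions as a smoke test
```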
The confusion matrices of the DBOMDFF-LCC approach for the lung cancer recognition process are demonstrated in Figure 3. The outcomes state that the DBOMDFF-LCC system recognizes the three classes proficiently.
In Table 2 and Figure 4, the overall lung cancer detection results of the DBOMDFF-LCC technique are exemplified on the 80:20 TRP/TSP split. The outcomes exhibit that the DBOMDFF-LCC system recognizes all three classes efficiently. With 80% of TRP, the DBOMDFF-LCC system gains an average accu_y, prec_n, sens_y, spec_y, and F_score of 99.17%, 98.81%, 98.72%, 99.37%, and 98.74%, respectively. Also, with 20% of TSP, the DBOMDFF-LCC method reaches an average accu_y, prec_n, sens_y, spec_y, and F_score of 96.67%, 95.83%, 95.83%, 97.44%, and 95.56%, respectively.
In Table 3 and Figure 5, the overall lung cancer detection results of the DBOMDFF-LCC system are demonstrated on the 70:30 TRP/TSP split. The outcomes exhibit that the DBOMDFF-LCC system recognizes all three classes efficiently. With 70% of TRP, the DBOMDFF-LCC method reaches an average accu_y, prec_n, sens_y, spec_y, and F_score of 99.05%, 98.72%, 98.48%, 99.26%, and 98.57%, respectively. In addition, with 30% of TSP, the DBOMDFF-LCC approach attains an average accu_y, prec_n, sens_y, spec_y, and F_score of 95.56%, 92.96%, 94.87%, 96.90%, and 93.51%, respectively.
Figure 6 demonstrates the classifier outcomes of the DBOMDFF-LCC method on the 80:20/70:30 splits. Figure 6a,c presents the accuracy curves of the DBOMDFF-LCC model. The results state that the DBOMDFF-LCC technique attains maximum accuracy values over increasing epochs, and the validation accuracy tracking the training accuracy illustrates that the DBOMDFF-LCC method learns capably on the test database. Finally, Figure 6b,d presents the loss curves of the DBOMDFF-LCC approach. The outcome implies that the DBOMDFF-LCC approach attains close training and validation loss values, indicating that the system learns effectively on the test database.
Figure 7 demonstrates the classifier results of the DBOMDFF-LCC algorithm on the 80:20/70:30 splits. Figure 7a,c presents the PR curves of the DBOMDFF-LCC approach. The results imply that the DBOMDFF-LCC technique attains superior PR values, and it is clear that the DBOMDFF-LCC methodology reaches higher PR values in all classes. Lastly, Figure 7b,d illustrates the ROC curves of the DBOMDFF-LCC model. The outcome implies that the DBOMDFF-LCC system attains improved ROC values and extends enhanced ROC values to all classes.
In Table 4 and Figure 8, a comparison of the DBOMDFF-LCC method with existing systems [33] is offered. The outcome highlights that the DBOMDFF-LCC approach reaches enhanced results. Based on accu_y, the DBOMDFF-LCC technique obtains a higher accu_y of 99.17%, while the ODNN, KNN, DNN, YOLO-DLN, DBN-LND, and AGFLCC-DGM models accomplish a lower accu_y of 92.12%, 96.52%, 95.45%, 94.75%, 95%, and 98.91%, respectively. Meanwhile, based on prec_n, the DBOMDFF-LCC approach gains a superior prec_n of 98.81%, while the ODNN, KNN, DNN, YOLO-DLN, DBN-LND, and AGFLCC-DGM approaches achieve a lesser prec_n of 91.29%, 97.03%, 96.95%, 96.49%, 97.92%, and 96.88%, respectively. Furthermore, with respect to sens_y, the DBOMDFF-LCC technique obtains a higher sens_y of 98.72%, while the ODNN, KNN, DNN, YOLO-DLN, DBN-LND, and AGFLCC-DGM systems accomplish a lower sens_y of 88.56%, 86.45%, 92.85%, 94.70%, 93.50%, and 98.46%, respectively. These results show the maximum lung cancer detection efficiency of the DBOMDFF-LCC technique. The enhanced performance of the proposed model is due to the feature fusion and hyperparameter tuning processes.

5. Conclusions

An automated lung cancer detection tool named the DBOMDFF-LCC system was established in this study. The proposed DBOMDFF-LCC algorithm is based on feature fusion and hyperparameter tuning. Primarily, the DBOMDFF-LCC technique uses a feature fusion process comprising three DL models, namely ResNet, DenseNet, and Inception-ResNet-v2. Additionally, the DBO algorithm was employed for the optimal hyperparameter selection of the three DL models. For lung cancer detection purposes, the DBOMDFF-LCC technique utilized the LSTM approach. The simulation results of the DBOMDFF-LCC system on the medical dataset were investigated using different evaluation metrics. The extensive comparative results highlighted the superiority of the DBOMDFF-LCC technique for lung cancer classification.

Author Contributions

Conceptualization, M.A. (Mohammad Alamgeer); Methodology, N.A.; Validation, A.M.; Formal analysis, M.A. (Mohammed Assiri); Data curation, H.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through a Large Group Research Project under grant number RGP2/134/44. This work was also supported by the Princess Nourah bint Abdulrahman University Researchers Supporting Project, number PNURSP2023R237, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; by the Research Supporting Project, number RSPD2023R608, King Saud University, Riyadh, Saudi Arabia; and by Prince Sattam bin Abdulaziz University, project number PSAU/2023/R/1444. This study is partially funded by the Future University in Egypt (FUE).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in this article.

Conflicts of Interest

The authors declare no conflicts of interest. The manuscript was written through the contributions of all authors. All authors have approved the final version of the manuscript.

References

  1. Yang, Y.; Yang, J.; Shen, L.; Chen, J.; Xia, L.; Ni, B.; Ge, L.; Wang, Y.; Lu, S. A multi-omics-based serial deep learning approach to predict clinical outcomes of single-agent anti-PD-1/PD-L1 immunotherapy in advanced stage non-small-cell lung cancer. Am. J. Transl. Res. 2021, 13, 743. [Google Scholar]
  2. Yin, M.; Liang, X.; Wang, Z.; Zhou, Y.; He, Y.; Xue, Y.; Gao, J.; Lin, J.; Yu, C.; Liu, L.; et al. Identification of Asymptomatic COVID-19 Patients on Chest CT Images Using Transformer-Based or Convolutional Neural Network–Based Deep Learning Models. J. Digit. Imaging 2023, 36, 827–836. [Google Scholar] [CrossRef] [PubMed]
  3. Wang, Z.; Yin, Z.; Argyris, Y.A. Detecting medical misinformation on social media using multimodal deep learning. IEEE J. Biomed. Health Inform. 2020, 25, 2193–2203. [Google Scholar] [CrossRef] [PubMed]
  4. Karaddi, S.H.; Sharma, L.D. Automated multi-class classification of lung diseases from CXR-images using pre-trained convolutional neural networks. Expert Syst. Appl. 2023, 211, 118650. [Google Scholar] [CrossRef]
  5. Sait, U.; KV, G.L.; Shivakumar, S.; Kumar, T.; Bhaumik, R.; Prajapati, S.; Bhalla, K.; Chakrapani, A. A deep-learning based multimodal system for COVID-19 diagnosis using breathing sounds and chest X-ray images. Appl. Soft Comput. 2021, 109, 107522. [Google Scholar] [CrossRef] [PubMed]
  6. Khan, M.A.; Khan, A.; Alhaisoni, M.; Alqahtani, A.; Alsubai, S.; Alharbi, M.; Malik, N.A.; Damaševičius, R. Multimodal brain tumor detection and classification using deep saliency map and improved dragonfly optimization algorithm. Int. J. Imaging Syst. Technol. 2023, 33, 572–587. [Google Scholar] [CrossRef]
  7. Xu, C.; Wang, Y.; Zhang, D.; Han, L.; Zhang, Y.; Chen, J.; Li, S. BMAnet: Boundary mining with adversarial learning for semi-supervised 2D myocardial infarction segmentation. IEEE J. Biomed. Health Inform. 2022, 27, 87–96. [Google Scholar] [CrossRef]
  8. Zhang, D.; Xu, C.; Li, S. Heuristic multi-modal integration framework for liver tumor detection from multi-modal non-enhanced MRIs. Expert Syst. Appl. 2023, 221, 119782. [Google Scholar] [CrossRef]
  9. Li, S.; Xie, Y.; Wang, G.; Zhang, L.; Zhou, W. Adaptive multimodal fusion with attention guided deep supervision net for grading hepatocellular carcinoma. IEEE J. Biomed. Health Inform. 2022, 26, 4123–4131. [Google Scholar] [CrossRef]
  10. Barrett, J.; Viana, T. EMM-LC Fusion: Enhanced Multimodal Fusion for Lung Cancer Classification. AI 2022, 3, 659–682. [Google Scholar] [CrossRef]
  11. Zhang, X.; Zhang, Y.; Zhang, G.; Qiu, X.; Tan, W.; Yin, X.; Liao, L. Deep learning with radiomics for disease diagnosis and treatment: Challenges and potential. Front. Oncol. 2022, 12, 773840. [Google Scholar] [CrossRef] [PubMed]
  12. Chassagnon, G.; Vakalopoulou, M.; Régent, A.; Sahasrabudhe, M.; Marini, R.; Hoang-Thi, T.N.; Dinh-Xuan, A.T.; Dunogué, B.; Mouthon, L.; Paragios, N.; et al. Elastic registration–driven deep learning for longitudinal assessment of systemic sclerosis interstitial lung disease at CT. Radiology 2021, 298, 189–198. [Google Scholar] [CrossRef] [PubMed]
  13. Naz, Z.; Khan, M.U.G.; Saba, T.; Rehman, A.; Nobanee, H.; Bahaj, S.A. An Explainable AI-Enabled Framework for Interpreting Pulmonary Diseases from Chest Radiographs. Cancers 2023, 15, 314. [Google Scholar] [CrossRef]
  14. Moujahid, H.; Cherradi, B.; Gannour, O.E.; Bahatti, L.; Terrada, O.; Hamida, S. Convolutional neural network based classification of patients with pneumonia using X-ray lung images. Adv.Sci. Technol. Eng. Syst. J. 2020, 5, 167–175. [Google Scholar] [CrossRef]
  15. Verma, P.; Dumka, A.; Singh, R.; Ashok, A.; Singh, A.; Aljahdali, H.M.; Kadry, S.; Rauf, H.T. A deep learning based approach for patient pulmonary CT image screening to predict coronavirus (SARS-CoV-2) infection. Diagnostics 2021, 11, 1735. [Google Scholar] [CrossRef] [PubMed]
  16. Dhivya, N.; Sharmila, P. Multimodal Feature and Transfer Learning in Deep Ensemble Model for Lung Disease Prediction. J. Data Acquis. Process. 2023, 38, 271. [Google Scholar]
  17. Yu, G.; Yu, Z.; Shi, Y.; Wang, Y.; Liu, X.; Li, Z.; Zhao, Y.; Sun, F.; Yu, Y.; Shu, Q. Identification of pediatric respiratory diseases using a fine-grained diagnosis system. J. Biomed. Inform. 2021, 117, 103754. [Google Scholar] [CrossRef]
  18. Agarwal, S.; Arya, K.V.; Meena, Y.K. MutliFusionNet: Multilayer Multimodal Fusion of Deep Neural Networks for Chest X-ray Image Classification. 2023. [CrossRef]
  19. Behrad, F.; Abadeh, M.S. An overview of deep learning methods for multimodal medical data mining. Expert Syst. Appl. 2022, 200, 117006. [Google Scholar] [CrossRef]
  20. Ullah, I.; Ali, F.; Shah, B.; El-Sappagh, S.; Abuhmed, T.; Park, S.H. A deep learning based dual encoder–decoder framework for anatomical structure segmentation in chest X-ray images. Sci. Rep. 2023, 13, 791. [Google Scholar] [CrossRef]
  21. Wang, L.; Ding, W.; Mo, Y.; Shi, D.; Zhang, S.; Zhong, L.; Wang, K.; Wang, J.; Huang, C.; Zhang, S.; et al. Distinguishing nontuberculous mycobacteria from Mycobacterium tuberculosis lung disease from CT images using a deep learning framework. Eur. J. Nucl. Med. Mol. Imaging 2021, 48, 4293–4306. [Google Scholar] [CrossRef]
  22. Akbulut, Y. Automated Pneumonia Based Lung Diseases Classification with Robust Technique Based on a Customized Deep Learning Approach. Diagnostics 2023, 13, 260. [Google Scholar] [CrossRef] [PubMed]
  23. Chouhan, V.; Singh, S.K.; Khamparia, A.; Gupta, D.; Tiwari, P.; Moreira, C.; Damaševičius, R.; De Albuquerque, V.H.C. A novel transfer learning based approach for pneumonia detection in chest X-ray images. Appl. Sci. 2020, 10, 559. [Google Scholar] [CrossRef]
  24. Özbey, M.; Dalmaz, O.; Dar, S.U.; Bedel, H.A.; Özturk, Ş.; Güngör, A.; Çukur, T. Unsupervised medical image translation with adversarial diffusion models. IEEE Trans. Med. Imaging 2023. [Google Scholar] [CrossRef] [PubMed]
  25. Dalmaz, O.; Yurt, M.; Çukur, T. ResViT: Residual vision transformers for multimodal medical image synthesis. IEEE Trans. Med. Imaging 2022, 41, 2598–2614. [Google Scholar] [CrossRef]
  26. Yurt, M.; Dar, S.U.; Erdem, A.; Erdem, E.; Oguz, K.K.; Çukur, T. mustGAN: Multi-stream generative adversarial networks for MR image synthesis. Med. Image Anal. 2021, 70, 101944. [Google Scholar] [CrossRef]
  27. Zhao, Y.; Zhang, X.; Feng, W.; Xu, J. Deep Learning Classification by ResNet-18 Based on the Real Spectral Dataset from Multispectral Remote Sensing Images. Remote Sens. 2022, 14, 4883. [Google Scholar] [CrossRef]
  28. Venu, S.K. An ensemble-based approach by fine-tuning the deep transfer learning models to classify pneumonia from chest X-ray images. arXiv 2020, arXiv:2011.05543. [Google Scholar]
  29. Ferreira, C.A.; Melo, T.; Sousa, P.; Meyer, M.I.; Shakibapour, E.; Costa, P.; Campilho, A. Classification of breast cancer histology images through transfer learning using a pre-trained inception resnet v2. In Image Analysis and Recognition, Proceedings of the 15th International Conference, ICIAR 2018, Póvoa de Varzim, Portugal, 27–29 June 2018; Springer International Publishing: Cham, Switzerland, 2018; pp. 763–770. [Google Scholar]
  30. Zhang, R.; Zhu, Y. Predicting the Mechanical Properties of Heat-Treated Woods Using Optimization-Algorithm-Based BPNN. Forests 2023, 14, 935. [Google Scholar] [CrossRef]
  31. Essien, A.; Giannetti, C. A deep learning framework for univariate time series prediction using convolutional LSTM stacked autoencoders. In Proceedings of the 2019 IEEE International Symposium on INnovations in Intelligent SysTems and Applications (INISTA), Sofia, Bulgaria, 3–5 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6. [Google Scholar]
  32. Available online: http://www.via.cornell.edu/lungdb.html (accessed on 16 February 2023).
  33. Lakshmanaprabu, S.K.; Mohanty, S.N.; Shankar, K.; Arunkumar, N.; Ramirez, G. Optimal deep learning model for classification of lung cancer on CT images. Future Gener. Comput. Syst. 2019, 92, 374–382. [Google Scholar]
Figure 1. Overall flow of the DBOMDFF-LCC system.
Figure 2. Sample images. (a) Normal, (b) benign, (c) malignant [32].
Figure 3. Confusion matrices of the DBOMDFF-LCC system. (a,b) 80:20 of TRP/TSP and (c,d) 70:30 of TRP/TSP.
Figure 4. Average outcome of the DBOMDFF-LCC approach on 80:20 of TRP/TSP.
Figure 5. Average outcome of the DBOMDFF-LCC system on 70:30 of TRP/TSP.
Figure 6. (a,c) Accuracy curve on 80:20/70:30 and (b,d) loss curve on 80:20/70:30.
Figure 7. (a,c) PR curve on 80:20/70:30 and (b,d) ROC curve on 80:20/70:30.
Figure 8. Comparative outcome of the DBOMDFF-LCC system with other approaches.
Table 1. Details of the database.

Class | No. of Samples
Normal | 35
Benign | 32
Malignant | 33
Total Samples | 100

Table 2. Lung cancer detection outcome of the DBOMDFF-LCC approach on 80:20 of TRP/TSP.

Class | Accu_y | Prec_n | Sens_y | Spec_y | F_score
Training Phase (80%)
Normal | 98.75 | 96.43 | 100.00 | 98.11 | 98.18
Benign | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
Malignant | 98.75 | 100.00 | 96.15 | 100.00 | 98.04
Average | 99.17 | 98.81 | 98.72 | 99.37 | 98.74
Testing Phase (20%)
Normal | 95.00 | 100.00 | 87.50 | 100.00 | 93.33
Benign | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
Malignant | 95.00 | 87.50 | 100.00 | 92.31 | 93.33
Average | 96.67 | 95.83 | 95.83 | 97.44 | 95.56

Table 3. Lung cancer detection outcome of the DBOMDFF-LCC system on 70:30 of TRP/TSP.

Class | Accu_y | Prec_n | Sens_y | Spec_y | F_score
Training Phase (70%)
Normal | 98.57 | 100.00 | 95.45 | 100.00 | 97.67
Benign | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
Malignant | 98.57 | 96.15 | 100.00 | 97.78 | 98.04
Average | 99.05 | 98.72 | 98.48 | 99.26 | 98.57
Testing Phase (30%)
Normal | 93.33 | 100.00 | 84.62 | 100.00 | 91.67
Benign | 96.67 | 90.00 | 100.00 | 95.24 | 94.74
Malignant | 96.67 | 88.89 | 100.00 | 95.45 | 94.12
Average | 95.56 | 92.96 | 94.87 | 96.90 | 93.51

Table 4. Comparative outcome of the DBOMDFF-LCC algorithm with other approaches [33].

Methods | Accu_y | Prec_n | Sens_y | Spec_y
ODNN Model | 92.12 | 91.29 | 88.56 | 88.54
KNN Model | 96.52 | 97.03 | 86.45 | 92.10
DNN Model | 95.45 | 96.95 | 92.85 | 89.40
YOLO-DLN | 94.75 | 96.49 | 94.70 | 95.10
DBN-LND | 95.00 | 97.92 | 93.50 | 90.20
AGFLCC-DGM | 98.91 | 96.88 | 98.46 | 98.89
DBOMDFF-LCC | 99.17 | 98.81 | 98.72 | 99.37
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
