Article

MSRNet: Multiclass Skin Lesion Recognition Using Additional Residual Block Based Fine-Tuned Deep Models Information Fusion and Best Feature Selection

1 Department of CS, COMSATS University Islamabad, Wah Campus, Islamabad 45550, Pakistan
2 Department of Computer Science and Mathematics, Lebanese American University, Beirut 1102-2801, Lebanon
3 Department of CS, HITEC University, Taxila 47080, Pakistan
4 Center of Excellence Forest 4.0, Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
5 College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
6 Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11564, Saudi Arabia
7 Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), 7034 Trondheim, Norway
* Author to whom correspondence should be addressed.
Diagnostics 2023, 13(19), 3063; https://doi.org/10.3390/diagnostics13193063
Submission received: 13 July 2023 / Revised: 19 September 2023 / Accepted: 24 September 2023 / Published: 26 September 2023

Abstract

Cancer is one of the leading causes of illness and chronic disease worldwide. Skin cancer, particularly melanoma, is becoming a severe health problem due to its rising prevalence. The considerable death rate linked with melanoma requires early detection so that immediate and successful treatment can be provided. Lesion detection and classification are made more challenging by many forms of artifacts, such as hairs and noise, and by irregularity of lesion shape and color, irrelevant features, and textures. In this work, we propose a deep-learning architecture for multiclass skin cancer classification and melanoma detection. The proposed architecture consists of four core steps: image preprocessing, feature extraction and fusion, feature selection, and classification. A novel contrast enhancement technique is proposed based on the image luminance information. After that, two pre-trained deep models, DarkNet-53 and DenseNet-201, are modified by adding a residual block at the end and trained through transfer learning. In the learning process, a genetic algorithm is applied to select the hyperparameters. The resultant features are fused using a two-step approach named serial-harmonic mean. This step increases the rate of correct classification, but some irrelevant information is also observed. Therefore, an algorithm named Marine Predators Algorithm (MPA) controlled Rényi entropy is developed to select the best features. The selected features are finally classified using machine learning classifiers. Two datasets, ISIC2018 and ISIC2019, have been selected for the experimental process. On these datasets, the proposed method obtained maximum accuracies of 85.4% and 98.80%, respectively. To prove the effectiveness of the proposed method, a detailed comparison is conducted with several recent techniques, showing that the proposed framework outperforms them.

1. Introduction

The incidence of melanoma, the deadliest kind of skin cancer, has increased dramatically worldwide. Consequently, early and prompt diagnosis is crucial for reducing the severity of the disease. The analysis of medical images of different organs of the body to detect irregular behavior plays a vital role in the medical field, for example in skin cancer [1], brain cancer [2], lung cancer [3], breast cancer [4], and retinal disease [5]. Skin cancer is one of the more prevalent diseases today [6]. Because the skin is the body's largest organ, skin cancer is one of the most common forms of cancer in humans [7]. Skin lesions are generally divided into two classes, i.e., melanoma and non-melanoma [8]. The World Health Organization (WHO) reports that there were 104,350 cases of skin cancer overall and 11,650 fatalities in the United States in 2019 [9]. For 2020, 196,060 new cases of skin cancer were anticipated, of which 40,160 and 60,190 were believed to be men and women, respectively [10]. Based on these figures, it was anticipated that the number of cases in 2020 would more than triple while the death rate would decrease by over 5.3%. In the United States, 106,110 new cases of melanoma were anticipated to be diagnosed in 2021, with 7180 people expected to die from the disease.
Melanoma develops in melanocytes when these cells overgrow and form a malignant tumor [11]. The hands, face, neck, lips, and other exposed skin parts are particularly affected by it [12]. Early detection of melanoma increases the likelihood of successful treatment; otherwise, it will spread to other body areas and cause an agonizing death [13]. Diagnosing skin cancer in its early stages by visual examination alone can be challenging for specialists [14]; therefore, modern specialized computer-aided detection (CAD) technology has been employed to identify all types of tumors since the early 2000s [15].
Melanoma includes complex patterns of multiple components and exhibits asymmetrical pigment distribution on the acral skin. The blue nevus (blue-grey region) aids in detecting malignancy, whereas pigment networks, dots, and globule distributions help identify melanocytic diseases. Any lesion that does not exhibit the traits above is said to be non-melanocytic.
Dermoscopy, a non-invasive imaging technique, was created to assist dermatologists in their clinical examination to effectively diagnose melanoma [16]. Owing to its good visual perception, the dermoscopy device can be useful for discriminating between malignant and benign skin lesions. The capacity of dermatologists to discriminate between melanoma and non-melanoma images has been improved by the development of several traditional approaches, including the ABCD rule [17], the 7-point checklist [18], the Menzies procedure [19], and CASH [20]. Due to intra-class similarity, accurate diagnosis of skin cancer is challenging even for an expert. Furthermore, melanoma and non-melanoma skin cancer types are very similar in color, size, and other characteristics.
Additionally, eye examination-based melanoma diagnosis is laborious, expensive, and time-consuming [21]. Hence, developing a computerized technique for accurately diagnosing and classifying skin cancer is very important. Several computerized techniques have been introduced in the literature for detecting and classifying skin cancer. A computerized technique is based on a few important steps such as preprocessing the dermoscopic images, lesion detection, feature extraction, and classification. Deep learning (DL) techniques have successfully detected and classified cancer diseases in medical imaging [22,23]. For the skin cancer classification, DL techniques give promising results that reveal its importance in medical imaging [24].

1.1. Motivation

The skin is the largest and most important organ in the human body. Skin cancer is currently among the most common and deadliest types of cancer, and it is an active area of research in image processing and computer vision [25]. As previously mentioned, melanoma is the cancer that causes the greatest destruction and spreads the fastest worldwide. The exceedingly complicated makeup of the lesion makes clinical diagnosis alone unreliable. Despite extensive research and the development of numerous techniques, accurately detecting and classifying skin lesions remains difficult. The primary objective of this research is to develop a trustworthy computer-based melanoma detection technique that can surpass existing computer-aided detection methods [26].

1.2. Problem Statement

Advanced machine learning techniques like deep learning are frequently applied in medical imaging for detection and classification. Skin cancer is being actively studied, and computer vision researchers have developed several strategies for it. However, many obstacles make skin lesion segmentation and classification less accurate. This work faces several such obstacles. Low-contrast skin lesions, variations in lesion shape, and irregularity degrade the performance of accurate feature extraction. Imbalanced skin classes bias predictions toward the classes with more images, which impacts the prediction performance of the other classes. Researchers have occasionally combined features from multiple sources to improve prediction accuracy, but this process significantly increases the system's computation time. Redundant and irrelevant features increase the error rate and the time required during training and testing. Furthermore, melanoma, actinic keratosis (akiec), and nevi are often mistaken for one another during the prediction process. For an accurate multiclass classification problem, adding hidden layers to a neural network or another classifier is always difficult.

1.3. Major Contributions

The major contributions of this work are as follows:
  • A contrast enhancement technique is proposed based on the luminance channel and the Retinex model. The proposed technique enhances the contrast between infected and healthy regions.
  • Two pretrained models are fine-tuned, with residual blocks added at the end, for better learning on the selected datasets.
  • A serial-harmonic mean fusion technique is proposed.
  • An optimization technique named Marine Predators Algorithm controlled Rényi entropy is developed for best feature selection.

2. Related Work

Nowadays, traditional clinical methods for melanoma diagnosis are ineffective, and there is room for a CAD system to classify skin cancer accurately [27]. Dermoscopy is the method by which colored skin lesions are examined and researched. It reveals a new aspect of skin lesions, enabling diagnostic tools to accurately differentiate between melanoma and non-melanoma lesions. A computer uses dermoscopy images to accurately diagnose and categorize skin abnormalities [28,29,30]. The four important processes that follow are preprocessing, lesion segmentation, feature extraction, and lesion classification. There are still many open issues when it comes to accurately detecting and classifying skin lesions.
Deep learning models can be used to optimize the efficiency and quality of skin cancer classification [21]. According to previous literature, the most common approach in dermoscopic image analysis (DIA) since 2015 has been a convolutional neural network used as a classifier. The latest advances in computer vision and digital image processing research have revealed the ability of deep learning techniques to attain excellent accuracy in image segmentation, detection, and classification for complex problems [31]. To identify malignant lesions, Codella et al. [32] studied and presented widely used deep neural networks, such as deep residual networks and deep convolutional neural network models. Simon et al. [33] presented a deep learning structure for skin lesion segmentation and classification. The main strength of this work was categorizing the tissues into 12 dermatologist-defined classes. They then trained a deep CNN using these characteristics for the final classification. They tested the introduced framework on dermoscopy images and compared it with clinical accuracy. During the comparison phase, the clinical method achieved an accuracy of 93.6%, whereas the computerized method attained 97.9%, which shows that computerized methods can perform better than clinical techniques. Amin et al. [34] introduced an integrated design for deep feature fusion through preprocessing, segmentation, and feature extraction: they first resized the images and converted RGB to the luminance channel; then they used the Otsu algorithm and the biorthogonal 2-D wavelet transform to segment the infected part of the skin; after that, pre-trained AlexNet and VGG16 were used to extract deep features, and the optimal features were selected using PCA for classification. Al-Masni et al. [35] suggested a frequently used deep learning framework, merging both the segmentation and skin lesion classification phases. They utilized a full resolution convolutional network (FrCN) to perform the segmentation process over dermoscopic images. After that, different classifiers, Inception-v3, ResNet-50, and Inception-ResNet-v2, were used over the segmented images. The proposed deep learning structure was evaluated on three different datasets, ISIC2016, ISIC2017, and ISIC2018, which hold two, three, and seven classes of skin lesion, respectively, with class balancing, segmentation, and augmentation. On the ISIC2016 dataset, the classifiers Inception-v3, ResNet-50, Inception-ResNet-v2, and DenseNet-201 showed predicted accuracies of 77.04%, 79.95%, 81.79%, and 81.27%, respectively. On ISIC2017 (three classes) these models achieved 81.2%, 81.5%, 81.3%, and 73.4%, and on ISIC2018 (seven classes) 88.05%, 89.28%, 87.74%, and 88.70%, respectively, with ResNet-50 performing best, indicating its better performance.
Pacheco et al. [36] evaluated the thirteen best deep learning networks and observed that the SENet convolutional neural network with Adam optimization was the best architecture; the proposed model obtained 91% performance on the ISIC2019 dataset. The research presented by Farooq et al. [37] enhanced the classification performance to 86% using two excellent neural networks, MobileNet and Inception Net, on the updated Kaggle skin cancer dataset. Pioneering CNN-based research was conducted by Esteva et al. Liu et al. [38] proposed a method for the categorization of skin lesions; they used traditional deep learning models, including DenseNet and ResNet with an MFL module, and achieved an accuracy of 87% on the ISIC2017 dataset. Pereira et al. [39] introduced a classification model based on a linear SVM and a feedforward neural network (FNN), achieving 90% accuracy on the Dermofit dataset. Milton et al. [40] proposed a comprehensive study of numerous deep learning methods for skin cancer. This study was conducted on several neural networks, such as Inception-ResNet-v2, PNASNet-5, SENet-154, and Inception-v4, on the publicly available ISIC2018 dataset; the best performance of 76% was obtained with the PNASNet-5 model. El-Khatib et al. [41] presented a ResNet-101 architecture for the classification of skin lesions. On the well-known PH2 database, the suggested model used fine-tuned CNN models to identify multiple types of skin lesions via transfer learning and achieved an accuracy of 90%. Almaraz et al. [42] used the ABCD rule based on color, shape, and texture as handcrafted features together with the MobileNetV2 neural network architecture, combined using information measures, for the classification of melanoma. The presented technique achieved an excellent accuracy of 92.4% on the HAM10000 dataset. Table 1 presents a summary of a few existing techniques.

3. Proposed Work

In this section, the proposed method for melanoma classification is presented. The proposed method comprises preprocessing, feature extraction and fusion, feature selection, and classification steps. Figure 1 shows the proposed melanoma classification framework using deep learning. As the figure shows, deep features are extracted from two pre-trained CNN models, DarkNet-53 and DenseNet-201. The extracted deep features are fused using a novel technique and later optimized using a feature selection algorithm. The selected features are finally employed for classification. Each step is described in the following subsections.

3.1. Proposed Contrast Enhancement

3.1.1. Datasets Description

In this work, two datasets have been utilized for the experimental process: ISIC2018 [40] and ISIC2019 [44]. Both datasets are publicly available for research purposes (https://challenge.isic-archive.com/data/#2019, accessed on 11 August 2023). The ISIC2018 dataset consists of 10,015 dermoscopic training images and 1512 testing images. The training images include 1113 images of melanoma (MEL), 6705 of melanocytic nevus (NV), 514 of basal cell carcinoma (BCC), 327 of actinic keratosis (AK), 1099 of benign keratosis (BKL), 115 of dermatofibroma (DF), and 142 of vascular lesions (VASC), respectively.
The ISIC2019 [44] dataset comprises 25,331 training images and 8238 test images, for a total of 33,569 images. This dataset consists of eight classes: MEL, NV, BCC, AK, BKL, DF, VASC, and squamous cell carcinoma (SCC). All images of both datasets are in RGB format with different resolutions. We resized all the images to 512 × 512 × 3, and they were later resized again according to the selected CNN models. A few sample images are shown in Figure 2.
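The resizing step described above is straightforward to script. The short Python sketch below is for illustration only (the paper's pipeline was implemented in MATLAB, and the folder names here are placeholders); it resizes the archive images to 512 × 512 × 3 and notes the typical backbone input sizes that would be used at training time.

```python
from pathlib import Path
from PIL import Image

SRC = Path("ISIC2019/train")        # placeholder input folder of dermoscopic JPEGs
DST = Path("ISIC2019/train_512")    # placeholder output folder
DST.mkdir(parents=True, exist_ok=True)

for img_path in SRC.glob("*.jpg"):
    img = Image.open(img_path).convert("RGB")        # keep three channels (512 x 512 x 3)
    img.resize((512, 512), Image.BILINEAR).save(DST / img_path.name)

# At training time the 512 x 512 images are resized again to each backbone's
# default input, typically 256 x 256 for DarkNet-53 and 224 x 224 for DenseNet-201.
```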

3.1.2. Contrast Enhancement

Contrast enhancement is the most crucial phase of the lesion diagnosis system. The issue of low contrast has been addressed in the literature using a diversity of enhancement approaches. This article uses a novel technique that relies on texture and color information for improvement, because, in contrast to patches of healthy skin, skin lesions are more likely to contain texture and color information. The textural information is calculated using the normalized luminance channel as follows:
$$\varphi_L(u,v) = \lambda \times F(Y) - 16,$$
$$F(Y) = \begin{cases} \sqrt[3]{Y} & \text{for } Y > 0.01 \\ 7.787\,Y + \dfrac{16}{\lambda} & \text{elsewhere} \end{cases}$$
where $\lambda = 116$, $Y = \tilde{Y}/100$, and $\tilde{Y} = \sum_i \omega_i \times G_i$ with $\omega_i \in \{0.212, 0.715, 0.072\}$. Here $G$ denotes the green channel, which is extracted from the original RGB image as $G = G / \sum_{j=1}^{3}\phi_j$. The whole expression is simplified as follows:
$$L(u,v) = \varphi_L\!\left(\frac{1}{3}\sum_{j=1}^{3} I_j(u,v)\right)$$
where $I(u,v)$ is the original RGB image and $\varphi_L$ is the luminance function. Then a Gaussian function is applied to the luminance image to examine the textural information in the lesion area. The Gaussian function is defined as follows:
$$\rho(u,v,\sigma) = \frac{L(u,v)}{\varphi(u,v,\sigma)} - L(u,v)$$
where $\varphi(u,v,\sigma) = L(u,v) \otimes G(\sigma)$, i.e., $L(u,v)$ is smoothed by a Gaussian filter with parameter $\sigma$ (standard deviation). The $\sigma$ is calculated as follows:
$$\sigma = \sqrt{\frac{\sum_{u}\sum_{v} L(u,v)^2}{N} - \left(\frac{\sum_{u}\sum_{v} L(u,v)}{N}\right)^2}$$
The above expression $\rho(u,v,\sigma)$ is simplified as:
$$\rho(u,v,\sigma) = \frac{L(u,v) - L(u,v)\times\varphi(u,v,\sigma)}{\varphi(u,v,\sigma)} = L(u,v)\,\frac{1 - \varphi(u,v,\sigma)}{\varphi(u,v,\sigma)} = L(u,v)\times\frac{Z}{\varphi(u,v,\sigma)}$$
where $Z = 1 - \varphi(u,v,\sigma)$. Generally, the low-intensity pixels in dermoscopic images occur in the lesion area. Hence, we apply an activation function to differentiate lesion and skin pixels in the image. The activation function is defined as:
$$F(A) = \begin{cases} \varphi_{\tilde{L}}(u,v) & \text{if } \rho(u,v,\sigma) > \varphi(u,v,\sigma) & \text{(lesion area)} \\ \varphi_{\tilde{H}}(u,v) & \text{otherwise} & \text{(healthy skin area)} \end{cases}$$
where $\varphi_{\tilde{L}}(u,v)$ and $\varphi_{\tilde{H}}(u,v)$ represent the lesion and healthy skin areas, respectively. Finally, to adjust the color intensities of the resultant pixels, we utilized the Retinex model [45], which is defined as follows:
$$\varphi_{Retinex}(u,v) = \varphi_{\tilde{L}_i}(u,v) - \varphi_{\tilde{L}_i}(u,v) \otimes G(\sigma)$$
where $i \in \{L, A, B\}$, $\otimes$ denotes the convolution operation, and $G(\sigma)$ is the Gaussian filter with standard deviation $\sigma$. Some sample results of the preprocessing step are shown in Figure 3. This figure clearly shows that the problem of poor contrast is resolved by the proposed technique. These enhanced images are further utilized in the model's learning phase.
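To make the enhancement pipeline concrete, the following NumPy/SciPy sketch strings the equations above together (luminance map, Gaussian smoothing with the data-driven σ, the lesion/skin activation, and a single-scale Retinex-style color adjustment). It is only an illustrative reading of the equations, not the authors' MATLAB implementation, and the final blending of the lesion mask with the adjusted image is an assumed choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def luminance(channel_mean, lam=116.0):
    """phi_L: lightness of the normalized channel mean Y (values in [0, 1])."""
    Y = np.clip(channel_mean, 1e-6, 1.0)
    F = np.where(Y > 0.01, np.cbrt(Y), 7.787 * Y + 16.0 / lam)
    return lam * F - 16.0

def enhance(rgb):
    """Illustrative reading of the enhancement equations; rgb is float in [0, 1], shape (H, W, 3)."""
    L = luminance(rgb.mean(axis=2))                             # L(u, v)
    sigma = float(np.sqrt(np.mean(L ** 2) - np.mean(L) ** 2))   # global std of the luminance map
    phi = gaussian_filter(L, sigma=max(sigma, 1.0)) + 1e-6      # phi(u, v, sigma): smoothed luminance
    rho = L / phi - L                                           # textural response rho(u, v, sigma)
    lesion_mask = rho > phi                                     # activation: lesion vs. healthy skin
    # Single-scale Retinex-style color adjustment applied channel-wise.
    retinex = np.empty_like(rgb)
    for c in range(3):
        ch = rgb[..., c]
        retinex[..., c] = ch - gaussian_filter(ch, sigma=max(sigma, 1.0))
    retinex = (retinex - retinex.min()) / (retinex.max() - retinex.min() + 1e-6)
    # Blend: keep Retinex-adjusted lesion pixels, soften healthy-skin pixels (assumed choice).
    return np.where(lesion_mask[..., None], retinex, 0.5 * retinex + 0.5 * rgb)
```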

3.1.3. Transfer Learning

Transfer learning (TL) is used to improve the efficiency of the learning process and reduce the resources required. When elements of a pre-trained machine learning model are reused in a new machine learning model, this is known as transfer learning. In transfer learning, the feature space and its probability distribution are defined as $A = \{f_v, P(f_v)\}$, where $f_v = \{v_1, v_2, \ldots, v_n\}$, with ground truth labels $G = \{g_1, g_2, \ldots, g_n\}$ and an objective function $O = \{G, l(x)\}$, where $l(x)$ is an unknown label class; $P(g|x)$ is a probabilistic representation of the function. The source task and source learning domain are denoted as $T_o$ and $L_o$, and the target task and target domain as $T_f$ and $L_f$. The main goal of transfer learning is to improve the learning of the recognition function $l(x)$ that predicts the target labels using the knowledge transferred from $T_o$ and $L_o$, where $T_o \neq T_f$ and $L_o \neq L_f$. Pattern recognition is improved via inductive transfer learning, which requires an annotated database for fast training and testing. A general model of TL is shown in Figure 4.
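As a concrete illustration of reusing a source model for the target lesion-classification task, the sketch below loads an ImageNet-pretrained backbone, freezes its learned representation, and retrains only a new classification head. PyTorch is assumed purely for illustration (the paper's experiments were run in MATLAB), and freezing the backbone is one common TL variant rather than the exact training regime used here; the learning rate and momentum are the values reported in Section 4.

```python
import torch
import torch.nn as nn
from torchvision import models

# Source task: ImageNet-pretrained DenseNet-201.
model = models.densenet201(weights="IMAGENET1K_V1")
for p in model.features.parameters():
    p.requires_grad = False                           # reuse the learned representation

# Target task: seven ISIC2018 lesion classes; only the new head is trained.
model.classifier = nn.Linear(model.classifier.in_features, 7)

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=2e-4, momentum=0.6557)                         # values reported in Section 4
```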

3.2. Deep Models Fine-Tuning and Feature Extraction

In this work, two pretrained deep learning models, DarkNet-53 and DenseNet-201, are fine-tuned and trained through TL for deep feature extraction.
Fine-Tuned DarkNet-53 Model: DarkNet-53 [46] is a convolutional neural network with 53 layers. The ImageNet database contains a pre-trained version of the network trained on more than a million images. This network mainly comprises 53 convolutional layers, of size 1 × 1 and 3 × 3, placed in front of the residual layers. A batch normalization (BN) layer and a LeakyReLU layer follow each convolutional layer. Several residual blocks of this network are repeated 1, 2, 4, and 8 times. For fine-tuning, we removed the last three layers of the model and added three new layers. In addition, we added a new residual block with three convolutional layers of filter size 3 × 3 and stride 1. After that, the model is trained using TL. After training, features are extracted from a deeper layer, the global average pooling layer, of dimension N × 1024.
Fine-Tuned DenseNet-201 Model: DenseNet-201 [47] is a convolutional neural network with 201 layers. A pretrained version of the model, trained on more than a million images, is available in the ImageNet database. DenseNet-201 uses a condensed network design to produce models that are easy to train and highly computationally efficient, since feature reuse by several layers increases the variety of the input to subsequent layers and improves performance.
Figure 5 shows the original architecture of DenseNet-201. In the fine-tuning process, we first replaced the last three layers with three new layers. After that, a residual block of six layers was added, including three convolutional layers of filter size 3 × 3 and stride 1. This block is added after the T3 block. The fine-tuned model is trained using TL, and the global average pooling layer is selected for deep feature extraction. From this layer, 1920 features are extracted for each image.
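A minimal sketch of this fine-tuning idea is given below, again assuming PyTorch for illustration: the pretrained DenseNet-201 feature extractor is kept, a small residual block with 3 × 3 convolutions and stride 1 is appended at the end, and the 1920-dimensional global-average-pooled activations are exposed as the deep features. The block's channel width and exact layer ordering are assumptions, since the paper only specifies the filter size and stride; an analogous modification would apply to DarkNet-53, whose pooling layer yields N × 1024 features.

```python
import torch
import torch.nn as nn
from torchvision import models

class ResidualBlock(nn.Module):
    """Small residual block (3x3 convs, stride 1), as assumed from the description above."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))

class FineTunedDenseNet201(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        backbone = models.densenet201(weights="IMAGENET1K_V1")
        self.features = backbone.features            # pretrained feature extractor (1920 channels)
        self.res_block = ResidualBlock(1920)          # extra residual block appended at the end
        self.pool = nn.AdaptiveAvgPool2d(1)           # global average pooling
        self.classifier = nn.Linear(1920, num_classes)

    def forward(self, x, return_features=False):
        x = self.res_block(self.features(x))
        feats = torch.flatten(self.pool(x), 1)        # N x 1920 deep features
        return feats if return_features else self.classifier(feats)
```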

3.3. Feature Fusion

Let the two feature vectors be $Fv_1^{Dar}$ and $Fv_2^{Den}$, and the fusion vector be $Fus_v$. The dimension of these vectors is $R \times N$, where $N$ represents the length of the extracted features and $R$ denotes the number of training images. The initial lengths of the two feature vectors are $R \times 1024$ and $R \times 1920$, respectively. The following formula is used to compute the correlation coefficient between the feature vectors $Dar$ and $Den$ of each row:
$$f(Dar, Den) = \frac{COV(Dar, Den)}{\sqrt{Var(Dar)\,Var(Den)}}$$
These values lie in the range $(-1, 1)$, where $-1$ indicates a strong negative correlation and $+1$ a strong positive correlation. The maximum correlation vector is defined as follows:
$$CV(Dar, Den) = \varphi\, f\big(m_1(Dar), m_2(Den)\big)$$
In this case, $\varphi$ denotes the supremum over all Borel functions $m_1, m_2: \omega \rightarrow \omega$ whose values lie in $(0, 1)$, and $CV(Dar, Den)$ is the maximum correlation. After that, a harmonic mean-based threshold function is designed for the final fusion as follows:
$$H = \frac{k}{\frac{1}{f_1} + \frac{1}{f_2} + \cdots + \frac{1}{f_k}}$$
where $H$ denotes the harmonic mean, $f$ denotes the features of $CV(Dar, Den)$, and $k$ denotes the number of features in a single row. The harmonic mean gives higher weight to small-valued features; the main reason is to avoid discarding the several small-valued features that are important for classification. Finally, a threshold function is employed and a fused vector is obtained:
$$Th = \begin{cases} Fusion(k) & \text{for } CV(k) \geq H \\ Extra\ features(m) & \text{for } CV(m) < H \end{cases}$$
The $Fusion(k)$ feature vector is considered for further processing. In this work, a fused vector of dimension $N \times 2012$ is obtained, where $N$ is the number of training images.
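Because the fusion rule leaves some freedom of interpretation, the sketch below shows one plausible NumPy reading of it: the two deep feature matrices are serially concatenated, a per-row harmonic mean H is computed, and columns whose average response does not reach the threshold are dropped. The function and variable names are illustrative, and the resulting dimensionality will generally differ from the N × 2012 reported above.

```python
import numpy as np

def serial_harmonic_fusion(f_dark, f_dense, eps=1e-8):
    """Illustrative sketch of the serial + harmonic-mean fusion described above.
    f_dark:  N x 1024 DarkNet-53 features;  f_dense: N x 1920 DenseNet-201 features."""
    fused = np.concatenate([f_dark, f_dense], axis=1)       # serial fusion, N x 2944
    k = fused.shape[1]
    # Harmonic mean H of the absolute feature values of each row.
    H = k / np.sum(1.0 / (np.abs(fused) + eps), axis=1)     # shape (N,)
    # Column score: average absolute response across training samples (assumed CV proxy).
    cv = np.mean(np.abs(fused), axis=0)                     # shape (k,)
    keep = cv >= H.mean()                                   # threshold against the mean H
    return fused[:, keep]

# Usage with random placeholders standing in for the deep features:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fd, fq = rng.random((100, 1024)), rng.random((100, 1920))
    print(serial_harmonic_fusion(fd, fq).shape)
```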

3.4. Feature Selection

Feature selection is an active research area in computer vision, aimed at mitigating the curse of dimensionality. Many techniques have been introduced in the literature for feature selection to improve accuracy and reduce computational time. In this work, a metaheuristic algorithm named the Marine Predators Algorithm (MPA) [48] is implemented and further modified with an entropy technique called Rényi entropy.
The MPA was proposed to mimic the behavior of marine predators in search of prey, in which the predators use Lévy and Brownian movements as their optimal foraging mechanisms. The velocity ratio $v$ of the prey to the predator is used to make a tradeoff between the Lévy and Brownian strategies. When $v$ is small (around 0.1), the best strategy for the predator is to move in Lévy steps (exploration phase), regardless of whether the prey is moving in Brownian or Lévy mode. However, if $v$ is around 1, the best approach for the predator is to move in Brownian steps if the prey is moving in Lévy steps. Finally, when $v > 10$, the predator should not move at all, regardless of whether the prey is moving in Brownian or Lévy mode, because the prey will come to it by itself (exploitation phase). The mathematical model of the MPA is as follows:
Initialization: In the first step, the initial solutions are uniformly distributed over the search space using the following formula, where $A \in Fusion(k)$:
$$\vec{x} = \vec{A}_{min} + \vec{i} \otimes (\vec{A}_{max} - \vec{A}_{min})$$
where $\vec{i}$ is a vector generated randomly within $[0, 1]$, $\otimes$ represents entry-wise multiplication, and $\vec{A}_{min}$ and $\vec{A}_{max}$ are the vectors containing the dimensions' lower and upper bounds.
Elite and prey matrix construction: Based on survival-of-the-fittest theory, the top predator is the one that is best at foraging. Thus, the top predator is used to construct a matrix called Elite:
$$Elite = \begin{bmatrix} A^1_{1,1} & A^1_{1,2} & \cdots & A^1_{1,d} \\ A^1_{2,1} & A^1_{2,2} & \cdots & A^1_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ A^1_{N,1} & A^1_{N,2} & \cdots & A^1_{N,d} \end{bmatrix}$$
where $A^1$ represents the top predator vector, replicated $N$ times to build up the Elite matrix ($N$ is the number of individuals in the population), and $d$ is the number of dimensions. This matrix is updated at the end of each iteration if the top predator changes. Another matrix, $P$, represents the prey, has the same dimensions as Elite, and is used by the predators to update their positions:
$$P = \begin{bmatrix} A_{1,1} & A_{1,2} & \cdots & A_{1,d} \\ A_{2,1} & A_{2,2} & \cdots & A_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ A_{N,1} & A_{N,2} & \cdots & A_{N,d} \end{bmatrix}$$
where $A_{i,j}$ denotes the $j$th dimension of the $i$th prey. The optimization process consists of three steps: high-velocity ratio, unit-velocity ratio, and low-velocity ratio. In the high-velocity ratio, the prey quickly searches for food, and mathematically it is defined as follows:
$$\text{if } t < \tfrac{1}{3}t_{max}:$$
$$\vec{V}_i = \vec{R}_x \otimes \left(Elite_i - \vec{R}_x \otimes P_i\right)$$
$$P_i = P_i + F \cdot \vec{N} \otimes \vec{V}_i$$
where $\vec{R}_x$ denotes a random numerical vector, $\otimes$ denotes entry-wise multiplication, $F$ denotes a fixed numerical value (0.4 in this work), $\vec{N}$ denotes a randomly generated numerical vector, $t$ is the current iteration, and $t_{max}$ denotes the maximum number of iterations, respectively.
After that, a unit-velocity-ratio-based transition stage is considered, defined for
$$\tfrac{1}{3}t_{max} < t < \tfrac{2}{3}t_{max}$$
For the first half of the population, the update is calculated as:
$$\vec{V}_i = \vec{R}_L \otimes \left(Elite_i - \vec{R}_L \otimes P_i\right)$$
$$P_i = P_i + F \cdot \vec{N} \otimes \vec{V}_i$$
For the second half of the population, the update is computed as follows:
$$\vec{V}_i = \vec{R}_B \otimes \left(\vec{R}_B \otimes Elite_i - P_i\right)$$
$$P_i = P_i + F \cdot AP \otimes \vec{V}_i$$
where $AP$ is an adaptive parameter used for the computation of the step size:
$$AP = \left(1 - \frac{t}{t_{max}}\right)^{\left(2\frac{t}{t_{max}}\right)}$$
In the last step, a low-velocity ratio is adopted [48]. Then, the fish aggregating devices (FADs) effect is computed for the final prey selection as follows:
$$P_i = \begin{cases} P_i + AP\left[\vec{x}_{min} + \vec{R} \otimes (\vec{x}_{max} - \vec{x}_{min})\right] \otimes \vec{B} & \text{if } r < 0.4 \\ P_i + \left[0.4\,(1 - r) + r\right]\left(P_{r1} - P_{r2}\right) & \text{if } r \geq 0.4 \end{cases}$$
Here, $\vec{B}$ is a binary vector with values 1 or 0, and $r$ is a uniform random number in $[0, 1]$. The Rényi entropy is computed to remove uncertainty among the selected prey $P_i$ before computing the fitness; a prey that satisfies the entropy condition is passed on for the fitness calculation:
$$Ent(P_i) = \frac{1}{1 - \alpha}\log\left(\sum_{i=1}^{n} P_i^{\alpha}\right), \quad \alpha > 0,\ \alpha \neq 1$$
Here, $Ent$ denotes the entropy value of each row of the selected $i$th prey. This value is used in the following rule for the final selection:
$$Fnc = \begin{cases} Sel(k) & \text{for } P_i \geq Ent \\ \text{ignore} & \text{elsewhere} \end{cases}$$
The selected vector $Sel(k)$ is finally employed for the fitness calculation. This process continues until the number of iterations is completed; in this work, 200 iterations have been selected. After 200 iterations, we obtained final feature vectors of dimension $N \times 1768$ for the ISIC2018 dataset and $N \times 1559$ for the ISIC2019 dataset, respectively. The selected features are finally classified using machine learning classifiers.
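The sketch below outlines, under loose assumptions, how the MPA phases and the Rényi-entropy gate can drive a wrapper-style feature selector: candidate solutions in [0, 1] are thresholded into binary feature masks, updated with the three velocity-ratio phases and the FADs jump, and only candidates passing the entropy gate are evaluated by the fitness function (for example, a classifier's validation error). The Lévy steps are approximated with heavy-tailed Cauchy draws, and the gate itself is an illustrative interpretation rather than the authors' exact rule.

```python
import numpy as np

def renyi_entropy(p, alpha=2.0, eps=1e-12):
    """Rényi entropy of a normalized, non-negative vector."""
    q = np.abs(p) / (np.abs(p).sum() + eps)
    return np.log(np.sum(q ** alpha) + eps) / (1.0 - alpha)

def mpa_feature_select(fitness, dim, n=30, t_max=200, F=0.4, fads=0.4, seed=0):
    """Compact, illustrative sketch of MPA used as a feature-mask optimizer.
    `fitness` takes a boolean mask over `dim` features and returns an error to minimize."""
    rng = np.random.default_rng(seed)
    prey = rng.random((n, dim))                       # candidate solutions in [0, 1]
    elite = prey[0].copy()
    best_mask, best_err = prey[0] > 0.5, fitness(prey[0] > 0.5)
    for t in range(t_max):
        ap = (1 - t / t_max) ** (2 * t / t_max)       # adaptive parameter AP
        rb = rng.normal(size=(n, dim))                # Brownian steps
        rl = rng.standard_cauchy(size=(n, dim))       # heavy-tailed stand-in for Levy steps
        if t < t_max / 3:                             # phase 1: high-velocity ratio
            step = rb * (elite - rb * prey)
        elif t < 2 * t_max / 3:                       # phase 2: transition
            step = rl * (elite - rl * prey)
        else:                                         # phase 3: low-velocity ratio
            step = rb * (rb * elite - prey)
        prey = np.clip(prey + F * rng.random((n, dim)) * step, 0.0, 1.0)
        jump = rng.random(n) < fads                   # FADs effect: occasional long jumps
        prey[jump] = np.clip(prey[jump] + ap * rng.random((int(jump.sum()), dim)), 0.0, 1.0)
        for p in prey:                                # Rényi-entropy gate, then fitness
            if renyi_entropy(p) >= renyi_entropy(elite):
                err = fitness(p > 0.5)
                if err < best_err:
                    best_mask, best_err, elite = p > 0.5, err, p.copy()
    return best_mask                                  # final binary feature mask
```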

4. Experiments and Results

The experimental process of the proposed method is discussed in this section. The proposed method is examined using two different datasets, ISIC2018 and ISIC2019, which are publicly available to medical imaging researchers. Ten classifiers are used to examine the classification accuracy, including Quadratic SVM (QSVM), Wide Neural Network (WNN), Cubic SVM (CSVM), Fine Tree (FT), Gaussian Naive Bayes (GNB), Weighted KNN (WKNN), Cubic KNN (CKNN), Narrow Neural Network (NNN), Bilayered Neural Network (BNN), and Trilayered Neural Network (TNN). The best classifier is selected based on the highest accuracy value and employed for the visual prediction. Each classifier's performance is computed using measures such as sensitivity, F1-score, precision rate, accuracy, FPR, and testing time (s). The training and testing sets were split 50:50 before data augmentation, meaning 50% of the images in each class were used for training, while the remaining 50% were used for testing. The validation images are merged into the testing images used for the classification results. The total number of epochs is 100, with a learning rate of 0.0002, a momentum of 0.6557, and a batch size of 128. All experiments are evaluated in MATLAB R2022b on an Intel Core i7 7th-generation CPU with 8 GB of RAM and an RTX 3060 graphics card with 8 GB of memory.
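For reference, the classifier stage can be reproduced approximately with scikit-learn, where MATLAB's Quadratic and Cubic SVMs correspond to polynomial-kernel SVMs of degree 2 and 3. The snippet below is a sketch with random placeholder arrays; in practice X and y would be the selected deep features and the lesion class labels.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.random((300, 1768))     # placeholder for the selected deep features (ISIC2018)
y = rng.integers(0, 7, 300)     # placeholder for the seven ISIC2018 class labels

# 50:50 stratified split, as used in the experiments.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)

for name, degree in [("Quadratic SVM", 2), ("Cubic SVM", 3)]:
    clf = SVC(kernel="poly", degree=degree, gamma="scale").fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    print(name,
          "acc:", accuracy_score(y_test, y_pred),
          "recall:", recall_score(y_test, y_pred, average="macro"),
          "precision:", precision_score(y_test, y_pred, average="macro"),
          "F1:", f1_score(y_test, y_pred, average="macro"))
```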

4.1. ISIC 2018 Dataset Results

The results for the ISIC2018 dataset are presented in four steps. In the first step, fine-tuned DarkNet-53 deep model features are extracted and classification is performed. The classification results are given in Table 2. This table shows that the highest noted accuracy is 79.3%, obtained by Cubic SVM. The recall rate of this classifier is 49.2%, the precision rate is 72.7%, the F1-score is 58.6%, and the FNR is 27.3%, respectively. Furthermore, the testing time of the Cubic SVM classifier is 114.2 s. The rest of the classifiers obtained accuracies in the range of 55.3–79%.
Table 3 presents the results of DenseNet-201 deep features on the ISIC2018 dataset. On this dataset, the highest accuracy, 81.5%, is obtained by Cubic SVM. The recall rate of this classifier is 53.8%, the precision rate is 74.5%, the F1-score is 62.4%, and the FNR is 25.5%. Furthermore, the computational time of the Cubic SVM is 259.6 s. The accuracies of the rest of the classifiers range between 59 and 81.3%. Comparing the accuracy and other performance measures of both tables (Table 2 and Table 3), it is observed that the accuracy of the DenseNet-201 model is better than that of the DarkNet-53 model. However, the DarkNet-53 model is computationally faster than DenseNet-201.
Table 4 shows the proposed fusion results on the ISIC2018 dataset. In this table, Quadratic SVM obtained the highest accuracy of 86.2%, while the other computed metrics, the recall rate, precision rate, F1-score, and FNR, are 61%, 80%, 69.2%, and 20%, respectively. The Cubic SVM achieved an accuracy of 86.1%. The computational time of the fusion process is increased, which is a drawback of this step; however, the improvement in accuracy is its strength. Compared with the results in Table 2 and Table 3, an almost 5% improvement in accuracy is observed for Cubic SVM; for Quadratic SVM, the improvement is also above 5%.
The classification results of the proposed feature selection method are given in Table 5. The Quadratic SVM obtained the highest accuracy of 85.4%, while the other computed measures include a recall rate of 60.8%, a precision rate of 78.1%, an F1-score of 68.3%, and an FNR of 21.9%, respectively. Compared with the fusion step, the computational time is almost halved, as shown in this table. Overall, the selection process maintains consistent accuracy while reducing computational time. The confusion matrix of Quadratic SVM is shown in Figure 6 to verify the proposed feature selection performance.

4.2. ISIC2019 Dataset Results

The results for the ISIC2019 dataset are discussed in this subsection. Results are computed in several steps: fine-tuned DarkNet-53 model features, DenseNet-201 features, fusion, and selection of the best features.
In the first step, fine-tuned DarkNet-53 deep model features are extracted and classification is performed. The classification results are given in Table 6. This table shows that the highest noted accuracy is 98.1%, obtained by Cubic SVM. The recall rate of this classifier is 98.0%, the precision rate is 98.2%, the F1-score is 98.0%, and the FNR is 1.8%, respectively. Furthermore, the testing time of the Cubic SVM classifier is 267.8 s. The rest of the classifiers obtained accuracies in the range of 56.2–98%.
Table 7 presents the results of DenseNet-201 deep features on the ISIC2019 dataset. On this dataset, the highest accuracy, 98.9%, is obtained by Cubic SVM. The recall rate of this classifier is 98.9%, the precision rate is 98.9%, the F1-score is 98.9%, and the FNR is 1.1%, respectively. Furthermore, the computational time of the Cubic SVM is 1177.6 s, which is very high. The accuracies of the rest of the classifiers range between 61.5 and 98.7%. Comparing the accuracy and other performance measures of both tables (Table 6 and Table 7), it is observed that the accuracy of the DenseNet-201 model is better than that of the DarkNet-53 model. However, the DenseNet-201 model's execution time is very high, which is challenging for this method.
After that, the fusion technique is applied, and the results are presented in Table 8, which shows the proposed fusion results on the ISIC2019 dataset. In this table, Quadratic SVM obtained the highest accuracy of 99.1%, while the other computed metrics, the recall rate, precision rate, F1-score, and FNR, are 99.02%, 99.1%, 99.0%, and 0.9%, respectively. The computational time of the fusion process is 1329.7 s, which is significantly higher than in the previous two steps. Compared with the results in Table 6 and Table 7, an almost 1% improvement in accuracy is observed for Cubic SVM; for Quadratic SVM, the improvement is also above 1%.
The classification results of the proposed feature selection method are given in Table 9. The Cubic SVM obtained the maximum accuracy of 98.9%, while the other computed measures include a recall rate of 98.8%, an F1-score of 98.8%, and an FNR of 1.1%, respectively. Furthermore, the testing time of the Cubic SVM classifier is 655.1 s. Compared with the fusion results, the feature selection results are consistent and the time is significantly reduced. Figure 7 shows the confusion matrix of Cubic SVM for the feature selection results; it can be used to verify the proposed results.

4.3. Discussion and Analysis

A detailed discussion of the proposed framework is presented in this section, along with a detailed ablation study showing the importance of each step. Figure 1 shows the proposed model, which includes several intermediate steps. The contrast of both datasets has been enhanced using the proposed technique discussed in Section 3.1.2. After that, two pre-trained models were trained and classification results were obtained. Later on, fusion is performed, and improved accuracy is obtained; however, it is also observed that the time increased during the fusion process. Therefore, a new feature selection technique is developed for better accuracy with less computational time. All numerical results are given in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9. In addition, the confusion matrices of both datasets are illustrated in Figure 6 and Figure 7; they show the proportion of correct predictions for each class.
Moreover, the time is computed for each classifier in all experiments. Based on the times noted in the tables, it is observed that the computational time of the fusion process is significantly increased and is later reduced by the feature selection technique. Figure 8 shows the accuracy-based comparison of ISIC2019 dataset results after employing the proposed feature selection technique. In this figure, the accuracy is plotted for four different training/testing ratios, 50:50, 60:40, 70:30, and 80:20, respectively. The average accuracy of all classifiers for the 50:50 split is 89.81%, whereas for the remaining combinations, the obtained accuracies are 88.68%, 89.29%, and 89.75%, respectively.
A GradCAM-based visualization is performed for the fine-tuned DenseNet-201 model. The aim of this process is to analyze the newly trained model's performance. A few sample results are shown in Figure 9, in which the brown highlighted regions mark the cancerous region. Figure 10 shows a few sample labeled images from the entire proposed framework; these images are generated using the proposed method (Cubic SVM classifier). In the end, a brief comparison of the proposed method with several existing techniques is conducted. Table 10 presents this comparison with existing methods. In [49], the authors used the ISIC2018 dataset for the experimental process and obtained an accuracy of 83%; the proposed method shows an improved accuracy of 85.4%. The authors in [50,51] used the ISIC2019 dataset and obtained accuracies of 97.1% and 97.84%, respectively; the proposed method obtained an accuracy of 98.9%, which is better than the existing techniques on the ISIC2019 dataset.
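The Grad-CAM visualization in Figure 9 can be reproduced in principle with a few hooks on the last feature block of the network. The sketch below assumes PyTorch and a stock DenseNet-201 purely for illustration (the paper's maps were generated for the fine-tuned MATLAB model), and the input tensor is a random placeholder for a preprocessed dermoscopic image.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.densenet201(weights="IMAGENET1K_V1").eval()
acts, grads = {}, {}
layer = model.features.norm5                            # last feature block (1920 channels)
layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

img = torch.rand(1, 3, 224, 224)                        # placeholder dermoscopic image
logits = model(img)
logits[0, logits.argmax()].backward()                   # gradient of the predicted class score

w = grads["g"].mean(dim=(2, 3), keepdim=True)           # channel weights (GAP of gradients)
cam = F.relu((w * acts["a"]).sum(dim=1, keepdim=True))  # weighted sum of activation maps
cam = F.interpolate(cam, size=img.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heat map in [0, 1]
```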

5. Conclusions

Skin lesion classification is vital in computer-aided melanoma detection (CAD) systems, whose accuracy depends on the intermediate steps, such as contrast enhancement of skin lesions, feature extraction, feature fusion, and selection. This work proposes a non-invasive computerized dermoscopy technique for improved classification accuracy of multiclass skin lesions. Data augmentation was performed in the initial phase, followed by the learning of fine-tuned deep learning models. Features are extracted from the global average pooling layer of both trained models. Later on, a fusion technique is employed to fuse the features of both CNN models. Finally, the fused feature vector is optimized using an improved selection algorithm and classified using machine learning classifiers. Two datasets were employed for the experimental process, ISIC2018 (seven classes) and ISIC2019 (eight classes). On these datasets, the proposed method obtained improved accuracies of 85.4% and 98.9%, respectively. Overall, we conclude the following:
  • The proposed framework can be useful in clinics as a second opinion on malignant and benign lesions.
  • The proposed framework can help dermatologists with early classification of the lesion type and is also useful for lesion localization (GradCAM).
  • The contrast enhancement step improves the visibility of cancer and healthy regions, later helpful in better learning of fine-tuned deep models.
  • Adding a new block for each network increased the learning performance and training accuracy.
  • The fusion process improved the accuracy of the proposed method compared to the fine-tuned models.
  • The selection of best features removed the redundant and irrelevant information and reduced the computational time.
This work’s limitation is the increased computational time after employing the fusion step. In the future, an attention mechanism-based network-level fused architecture will be designed and trained on the ISIC2018 and ISIC2019 datasets. In addition, a feature optimization technique will be proposed based on the location adjustment.

Author Contributions

Each contributor made an equal contribution. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through large group Research Project under grant number RGP2/249/44.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets used in this work are publicly available (https://challenge.isic-archive.com/data/#2019, accessed on 11 August 2023).

Acknowledgments

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through large group Research Project under grant number RGP2/249/44.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hasan, M.K.; Ahamad, M.A.; Yap, C.H.; Yang, G. A survey, review, and future trends of skin lesion segmentation and classification. Comput. Biol. Med. 2023, 155, 106624. [Google Scholar] [CrossRef] [PubMed]
  2. Yang, J.; Luly, K.M.; Green, J.J. Nonviral nanoparticle gene delivery into the CNS for neurological disorders and brain cancer applications. Wiley Interdiscip. Rev. Nanomed. Nanobiotechnol. 2023, 15, e1853. [Google Scholar] [CrossRef]
  3. Huang, S.; Yang, J.; Shen, N.; Xu, Q.; Zhao, Q. Artificial intelligence in lung cancer diagnosis and prognosis: Current application and future perspective. In Seminars in Cancer Biology; Academic Press: Cambridge, MA, USA, 2023. [Google Scholar]
  4. Nasser, M.; Yusof, U.K. Deep Learning Based Methods for Breast Cancer Diagnosis: A Systematic Review and Future Direction. Diagnostics 2023, 13, 161. [Google Scholar] [CrossRef] [PubMed]
  5. Zedan, M.J.; Zulkifley, M.A.; Ibrahim, A.A.; Moubark, A.M.; Kamari, N.A.M.; Abdani, S.R. Automated Glaucoma Screening and Diagnosis Based on Retinal Fundus Images Using Deep Learning Approaches: A Comprehensive Review. Diagnostics 2023, 13, 2180. [Google Scholar] [CrossRef]
  6. Ashraf, R.; Afzal, S.; Rehman, A.U.; Gul, S.; Baber, J.; Bakhtyar, M.; Mehmood, I.; Song, O.-Y.; Maqsood, M. Region-of-interest based transfer learning assisted framework for skin cancer detection. IEEE Access 2020, 8, 147858–147871. [Google Scholar] [CrossRef]
  7. Zheng, Y.; Liang, H.; Li, Z.; Tang, M.; Song, L. Skin microbiome in sensitive skin: The decrease of Staphylococcus epidermidis seems to be related to female lactic acid sting test sensitive skin. J. Dermatol. Sci. 2020, 97, 225–228. [Google Scholar] [CrossRef] [PubMed]
  8. Namozov, A.; Im Cho, Y. Convolutional neural network algorithm with parameterized activation function for melanoma classification. In Proceedings of the 2018 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Republic of Korea, 17–19 October 2018; pp. 417–419. [Google Scholar]
  9. In, T. Facts & figures 2019: US cancer death rate has dropped 27% in 25 years. Am. Cancer 2019, 4, 1–17. [Google Scholar]
  10. Chaturvedi, S.S.; Gupta, K.; Prasad, P.S. Skin lesion analyser: An efficient seven-way multi-class skin cancer classification using MobileNet. In Proceedings of the International Conference on Advanced Machine Learning Technologies and Applications, Jaipur, India, 13–15 February 2020; pp. 165–176. [Google Scholar]
  11. Tahir, M.; Naeem, A.; Malik, H.; Tanveer, J.; Naqvi, R.A.; Lee, S.-W. DSCC_Net: Multi-Classification Deep Learning Models for Diagnosing of Skin Cancer Using Dermoscopic Images. Cancers 2023, 15, 2179. [Google Scholar] [CrossRef]
  12. Mazhar, T.; Haq, I.; Ditta, A.; Mohsan, S.A.H.; Rehman, F.; Zafar, I.; Gansau, J.A.; Goh, L.P.W. The role of machine learning and deep learning approaches for the detection of skin cancer. Healthcare 2023, 11, 415. [Google Scholar] [CrossRef]
  13. Khan, M.Q.; Hussain, A.; Rehman, S.U.; Khan, U.; Maqsood, M.; Mehmood, K.; Khan, M.A. Classification of melanoma and nevus in digital images for diagnosis of skin cancer. IEEE Access 2019, 7, 90132–90144. [Google Scholar] [CrossRef]
  14. Khan, A.R.; Khan, S.; Harouni, M.; Abbasi, R.; Iqbal, S.; Mehmood, Z. Brain tumor segmentation using K-means clustering and deep learning with synthetic data augmentation for classification. Microsc. Res. Tech. 2021, 84, 1389–1399. [Google Scholar] [CrossRef] [PubMed]
  15. Tembhurne, J.V.; Hebbar, N.; Patil, H.Y.; Diwan, T. Skin cancer detection using ensemble of machine learning and deep learning techniques. Multimed. Tools Appl. 2023, 82, 27501–27524. [Google Scholar] [CrossRef]
  16. Fargnoli, M.C.; Kostaki, D.; Piccioni, A.; Micantonio, T.; Peris, K. Dermoscopy in the diagnosis and management of non-melanoma skin cancers. Eur. J. Dermatol. 2012, 22, 456–463. [Google Scholar] [CrossRef] [PubMed]
  17. Nachbar, F.; Stolz, W.; Merkle, T.; Cognetta, A.B.; Vogt, T.; Landthaler, M.; Bilek, P.; Braun-Falco, O.; Plewig, G. The ABCD rule of dermatoscopy: High prospective value in the diagnosis of doubtful melanocytic skin lesions. J. Am. Acad. Dermatol. 1994, 30, 551–559. [Google Scholar] [CrossRef] [PubMed]
  18. Kawahara, J.; Daneshvar, S.; Argenziano, G.; Hamarneh, G. Seven-point checklist and skin lesion classification using multitask multimodal neural nets. IEEE J. Biomed. Health Inform. 2018, 23, 538–546. [Google Scholar] [CrossRef]
  19. Argenziano, G.; Soyer, H.P.; Chimenti, S.; Talamini, R.; Corona, R.; Sera, F.; Binder, M.; Cerroni, L.; De Rosa, G.; Ferrara, G. Dermoscopy of pigmented skin lesions: Results of a consensus meeting via the Internet. J. Am. Acad. Dermatol. 2003, 48, 679–693. [Google Scholar] [CrossRef]
  20. Henning, J.S.; Dusza, S.W.; Wang, S.Q.; Marghoob, A.A.; Rabinovitz, H.S.; Polsky, D.; Kopf, A.W. The CASH (color, architecture, symmetry, and homogeneity) algorithm for dermoscopy. J. Am. Acad. Dermatol. 2007, 56, 45–52. [Google Scholar] [CrossRef]
  21. Keerthana, D.; Venugopal, V.; Nath, M.K.; Mishra, M. Hybrid convolutional neural networks with SVM classifier for classification of skin cancer. Biomed. Eng. Adv. 2023, 5, 100069. [Google Scholar] [CrossRef]
  22. Qasim Gilani, S.; Syed, T.; Umair, M.; Marques, O. Skin Cancer Classification Using Deep Spiking Neural Network. J. Digit. Imaging 2023, 36, 1137–1147. [Google Scholar] [CrossRef]
  23. SM, J.; P, M.; Aravindan, C.; Appavu, R. Classification of skin cancer from dermoscopic images using deep neural network architectures. Multimed. Tools Appl. 2023, 82, 15763–15778. [Google Scholar]
  24. Sukanya, S.; Jerine, S. Skin lesion analysis towards melanoma detection using optimized deep learning network. Multimed. Tools Appl. 2023, 82, 27795–27817. [Google Scholar] [CrossRef]
  25. Naqvi, M.; Gilani, S.Q.; Syed, T.; Marques, O.; Kim, H.-C. Skin Cancer Detection Using Deep Learning—A Review. Diagnostics 2023, 13, 1911. [Google Scholar] [CrossRef] [PubMed]
  26. Gururaj, H.; Manju, N.; Nagarjun, A.; Aradhya, V.N.M.; Flammini, F. DeepSkin: A Deep Learning Approach for Skin Cancer Classification. IEEE Access 2023, 11, 50205–50214. [Google Scholar] [CrossRef]
  27. Mridha, K.; Uddin, M.M.; Shin, J.; Khadka, S.; Mridha, M. An Interpretable Skin Cancer Classification Using Optimized Convolutional Neural Network for a Smart Healthcare System. IEEE Access 2023, 11, 41003–41018. [Google Scholar] [CrossRef]
  28. Abbas, Q.; Emre Celebi, M.; Garcia, I.F.; Ahmad, W. Melanoma recognition framework based on expert definition of ABCD for dermoscopic images. Ski. Res. Technol. 2013, 19, e93–e102. [Google Scholar] [CrossRef] [PubMed]
  29. Barata, C.; Ruela, M.; Francisco, M.; Mendonça, T.; Marques, J.S. Two systems for the detection of melanomas in dermoscopy images using texture and color features. IEEE Syst. J. 2013, 8, 965–979. [Google Scholar] [CrossRef]
  30. Zortea, M.; Schopf, T.R.; Thon, K.; Geilhufe, M.; Hindberg, K.; Kirchesch, H.; Møllersen, K.; Schulz, J.; Skrøvseth, S.O.; Godtliebsen, F. Performance of a dermoscopy-based computer vision system for the diagnosis of pigmented skin lesions compared with visual evaluation by experienced dermatologists. Artif. Intell. Med. 2014, 60, 13–26. [Google Scholar] [CrossRef]
  31. Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 818–833. [Google Scholar]
  32. Codella, N.C.; Nguyen, Q.-B.; Pankanti, S.; Gutman, D.A.; Helba, B.; Halpern, A.C.; Smith, J.R. Deep learning ensembles for melanoma recognition in dermoscopy images. IBM J. Res. Dev. 2017, 61, 5:1–5:15. [Google Scholar] [CrossRef]
  33. Thomas, S.M.; Lefevre, J.G.; Baxter, G.; Hamilton, N.A. Interpretable deep learning systems for multi-class segmentation and classification of non-melanoma skin cancer. Med. Image Anal. 2021, 68, 101915. [Google Scholar] [CrossRef]
  34. Amin, J.; Sharif, A.; Gul, N.; Anjum, M.A.; Nisar, M.W.; Azam, F.; Bukhari, S.A.C. Integrated design of deep features fusion for localization and classification of skin cancer. Pattern Recognit. Lett. 2020, 131, 63–70. [Google Scholar] [CrossRef]
  35. Al-Masni, M.A.; Kim, D.-H.; Kim, T.-S. Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification. Comput. Methods Programs Biomed. 2020, 190, 105351. [Google Scholar] [CrossRef] [PubMed]
  36. Pacheco, A.G.; Ali, A.-R.; Trappenberg, T. Skin cancer detection based on deep learning and entropy to detect outlier samples. arXiv 2019, arXiv:1909.04525. [Google Scholar]
  37. Farooq, M.A.; Khatoon, A.; Varkarakis, V.; Corcoran, P. Advanced deep learning methodologies for skin cancer classification in prodromal stages. arXiv 2020, arXiv:2003.06356. [Google Scholar]
  38. Liu, L.; Mou, L.; Zhu, X.X.; Mandal, M. Automatic skin lesion classification based on mid-level feature learning. Comput. Med. Imaging Graph. 2020, 84, 101765. [Google Scholar] [CrossRef]
  39. Pereira, P.M.; Fonseca-Pinto, R.; Paiva, R.P.; Assuncao, P.A.; Tavora, L.M.; Thomaz, L.A.; Faria, S.M. Skin lesion classification enhancement using border-line features–The melanoma vs nevus problem. Biomed. Signal Process. Control. 2020, 57, 101765. [Google Scholar] [CrossRef]
  40. Milton, M.A.A. Automated skin lesion classification using ensemble of deep neural networks in isic 2018: Skin lesion analysis towards melanoma detection challenge. arXiv 2019, arXiv:1901.10802. [Google Scholar]
  41. El-Khatib, H.; Popescu, D.; Ichim, L. Deep learning–based methods for automatic diagnosis of skin lesions. Sensors 2020, 20, 1753. [Google Scholar] [CrossRef]
  42. Almaraz-Damian, J.-A.; Ponomaryov, V.; Sadovnychiy, S.; Castillejos-Fernandez, H. Melanoma and nevus skin lesion classification using handcraft and deep learning feature fusion via mutual information measures. Entropy 2020, 22, 484. [Google Scholar] [CrossRef]
  43. Pacheco, A.G.; Krohling, R.A. The impact of patient clinical information on automated skin cancer detection. Comput. Biol. Med. 2020, 116, 103545. [Google Scholar] [CrossRef]
  44. Kassem, M.A.; Hosny, K.M.; Fouad, M.M. Skin lesions classification into eight classes for ISIC 2019 using deep convolutional neural network and transfer learning. IEEE Access 2020, 8, 114822–114832. [Google Scholar] [CrossRef]
  45. Terai, Y.; Goto, T.; Hirano, S.; Sakurai, M. Color image contrast enhancement by Retinex model. In Proceedings of the Consumer Electronics, ISCE’09, 2009 IEEE 13th International Symposium on Consumer Electronics, Kyoto, Japan, 25–28 May 2009; pp. 392–393. [Google Scholar]
  46. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  47. Wang, S.-H.; Zhang, Y.-D. DenseNet-201-based deep neural network with composite learning factor and precomputation for multiple sclerosis classification. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2020, 16, 1–19. [Google Scholar] [CrossRef]
  48. Abdel-Basset, M.; Mohamed, R.; Mirjalili, S.; Chakrabortty, R.K.; Ryan, M. An efficient marine predators algorithm for solving multi-objective optimization problems: Analysis and validations. IEEE Access 2021, 9, 42817–42844. [Google Scholar] [CrossRef]
  49. Budhiman, A.; Suyanto, S.; Arifianto, A. Melanoma cancer classification using resnet with data augmentation. In Proceedings of the 2019 International Seminar on Research of Information Technology and Intelligent Systems (ISRITI), Yogyakarta, Indonesia, 5–6 December 2019; pp. 17–20. [Google Scholar]
  50. Alizadeh, S.M.; Mahloojifar, A. Automatic skin cancer detection in dermoscopy images by combining convolutional neural networks and texture features. Int. J. Imaging Syst. Technol. 2021, 31, 695–707. [Google Scholar] [CrossRef]
  51. Elansary, I.; Ismail, A.; Awad, W. Efficient classification model for melanoma based on convolutional neural networks. In Medical Informatics and Bioimaging Using Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2022; pp. 15–27. [Google Scholar]
Figure 1. Main flow of proposed automated melanoma recognition using deep learning.
Figure 2. A sample image of the ISIC2019 dermoscopic dataset.
Figure 3. Lesion Contrast Enhancement Results: (a,c) Original Image; (b,d) Enhanced Image.
Figure 4. Transfer learning model for the learning of deep model for skin lesion classification.
Figure 5. Original architecture of DenseNet-201 CNN model.
Figure 6. Confusion matrix of Quadratic SVM after employing the proposed feature selection technique on ISIC2018 dataset. * Actinic keratosis (AK), Melanoma (MEL), Melanocytic nevus (NV), Basal cell carcinoma (BCC), Benign keratosis (BKL), Dermatofibroma (DF), and Vascular (VASC), respectively.
Figure 7. Confusion matrix of Cubic SVM after employing feature selection technique on ISIC2019 dataset.
Figure 8. Comparison of ISIC2019 dataset accuracy after employing proposed feature selection using different training/testing ratios.
Figure 9. GradCAM based visualization of fine-tuned DenseNet-201 model for cancer localization.
Figure 10. Proposed method prediction in terms of labeled images.
Table 1. Summary of deep learning based classification techniques.

Author | Year | Methods | Method Type | Dataset | Accuracy
Simon et al. [33] | 2021 | Interpretable deep learning framework | Detection + Classification | Private Collected | 97.1%
Amin et al. [34] | 2020 | AlexNet and VGG16 Neural Networks | Detection + Classification | Kaggle Skin Cancer | 96.0%
Al-Masni et al. [35] | 2020 | ResNet-50 and DenseNet-201 | Classification | ISIC2016, ISIC2017, and ISIC2018 | 88.0%
Pacheco et al. [36] | 2020 | SENet with Adam Optimization | Detection + Classification | ISIC2019 | 91.0%
Farooq et al. [37] | 2019 | Inception-V3 and MobileNet Neural Networks | Classification | Kaggle Skin Cancer | 86.0%
Liu et al. [38] | 2019 | DenseNet and ResNet with MFL module | Classification | ISIC2017 | 87.0%
Pereira et al. [39] | 2020 | Linear SVM and Feedforward Neural Network (FNN) | Detection + Classification | Dermofit Dataset | 90.0%
El-Khatib et al. [41] | 2020 | ResNet-101 CNN Architecture | Detection + Classification | PH2 Dataset | 90.0%
Almaraz et al. [42] | 2020 | Handcrafted features and MobileNetV2 Architecture | Detection + Classification | HAM10000 Dataset | 92.4%
Pacheco et al. [43] | 2020 | VGG-16, MobileNet, ResNet-101 using clinical features | Classification | PAD-UFES-20 | 76.4%
Table 2. Proposed classification results by employing DarkNet-53 deep features on the ISIC2018 dataset.

Sr. | Classifier | Recall (%) | Precision (%) | F1 Score (%) | FNR (%) | Accuracy (%) | Time (Sec)
1 | QSVM | 49.7 | 72.9 | 59.1 | 27.1 | 79.0 | 109.98
2 | CSVM | 49.2 | 72.7 | 58.6 | 27.3 | 79.3 | 114.2
3 | FT | 25.9 | 31.6 | 28.4 | 68.4 | 66.6 | 12.44
4 | GNB | 49.4 | 33.3 | 39.7 | 66.7 | 55.3 | 24.2
5 | WKNN | 34.9 | 64.5 | 45.2 | 35.5 | 73.7 | 27.3
6 | CKNN | 72.2 | 46.7 | 56.7 | 53.3 | 72.2 | 467.3
7 | NNN | 72.9 | 48.0 | 57.8 | 52 | 72.9 | 411.4
8 | WNN | 72.5 | 46.2 | 56.4 | 53.8 | 72.5 | 371.6
9 | BNN | 76.3 | 55 | 63.9 | 45 | 76.3 | 26.7
10 | TNN | 71.4 | 41.7 | 52.6 | 58.3 | 71.4 | 345.16
Bold denotes the max values.
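The F1 Score column in Tables 2–9 is consistent with the standard harmonic mean of the reported precision and recall; for instance, the QSVM row of Table 2 gives 2 × 49.7 × 72.9 / (49.7 + 72.9) ≈ 59.1. A one-line check (using the published row values, nothing else assumed):

```python
# Sanity check of the F1 column: harmonic mean of precision and recall (Table 2, QSVM row).
recall, precision = 49.7, 72.9
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 1))  # 59.1
```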
Table 3. Classification results of DenseNet-201 deep features using the ISIC2018 dataset.

Sr. | Classifier | Recall (%) | Precision (%) | F1 Score (%) | FNR (%) | Accuracy (%) | Time (Sec)
1 | QSVM | 53.3 | 73.9 | 61.9 | 26.1 | 81.3 | 228.24
2 | CSVM | 53.8 | 74.5 | 62.4 | 25.5 | 81.5 | 259.6
3 | FT | 27.5 | 32.6 | 29.8 | 67.4 | 66.5 | 24.6
4 | GNB | 49.9 | 31.8 | 38.8 | 68.2 | 59 | 66
5 | WKNN | 35.9 | 67.5 | 46.8 | 32.5 | 74.7 | 54.172
6 | CKNN | 32.9 | 51.1 | 40.0 | 48.9 | 73.5 | 1103.2
7 | NNN | 49.6 | 50.3 | 49.9 | 49.7 | 75.1 | 604.1
8 | WNN | 56.9 | 56.9 | 56.9 | 43.1 | 79.3 | 653.42
9 | BNN | 46.5 | 46.5 | 46.5 | 53.5 | 74.4 | 52.0
10 | TNN | 42.7 | 42.3 | 42.4 | 57.7 | 72.8 | 253.2
Bold denotes the max values.
Table 4. Classification results of the proposed feature fusion technique using the ISIC2018 dataset.

Sr. | Classifier | Recall (%) | Precision (%) | F1 Score (%) | FNR (%) | Accuracy (%) | Time (Sec)
1 | QSVM | 61 | 80 | 69.2 | 20 | 86.2 | 448.7
2 | CSVM | 59 | 81 | 68.2 | 19 | 86.1 | 552.49
3 | FT | 31 | 34 | 32.4 | 66 | 69.6 | 66.55
4 | GNB | 58 | 43 | 49.3 | 57 | 68.3 | 150.82
5 | WKNN | 39 | 48 | 43.0 | 52 | 77.1 | 96.49
6 | CKNN | 35 | 54 | 42.4 | 46 | 75.9 | 2537.2
7 | NNN | 58.4 | 60 | 59.1 | 40 | 81.8 | 545.7
8 | WNN | 66.4 | 71.7 | 68.9 | 28.3 | 85.5 | 81.6
9 | BNN | 57.4 | 57.9 | 57.6 | 42.1 | 81.4 | 1031.6
10 | TNN | 57.4 | 56 | 56.6 | 44 | 80 | 839.5
Bold denotes the max values.
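For readers unfamiliar with feature fusion, the sketch below shows only the generic serial part of fusing two deep feature sets, i.e., concatenating per-image feature vectors extracted from the two networks. The harmonic-mean step of the proposed serial-harmonic mean fusion is not reproduced here, and the feature dimensionalities are assumptions.

```python
# Generic serial (concatenation) fusion of two deep feature matrices.
# NOT the paper's serial-harmonic mean fusion; dimensionalities are assumed.
import numpy as np

def serial_fusion(darknet_feats: np.ndarray, densenet_feats: np.ndarray) -> np.ndarray:
    """Concatenate per-image feature vectors from two networks along the feature axis."""
    assert darknet_feats.shape[0] == densenet_feats.shape[0], "same number of images expected"
    return np.concatenate([darknet_feats, densenet_feats], axis=1)

if __name__ == "__main__":
    f1 = np.random.rand(10, 1024)       # stand-in for DarkNet-53 features
    f2 = np.random.rand(10, 1920)       # stand-in for DenseNet-201 features
    print(serial_fusion(f1, f2).shape)  # (10, 2944)
```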
Table 5. Classification results of the proposed feature selection technique on the ISIC2018 dataset.

Sr. | Classifier | Recall (%) | Precision (%) | F1 Score (%) | FNR (%) | Accuracy (%) | Time (Sec)
1 | QSVM | 60.8 | 78.1 | 68.3 | 21.9 | 85.4 | 277.9
2 | CSVM | 59.5 | 79.2 | 67.9 | 20.8 | 85.4 | 23.0
3 | FT | 29.3 | 32.2 | 30.6 | 67.8 | 68.8 | 36.6
4 | GNB | 57.5 | 44.4 | 50.1 | 55.6 | 68.5 | 56.5
5 | WKNN | 37.1 | 70.2 | 48.5 | 29.8 | 77.0 | 53.0
6 | CKNN | 36.8 | 69.3 | 48.0 | 30.7 | 75.9 | 821.0
7 | NNN | 58.8 | 59.4 | 59.0 | 40.6 | 81.3 | 191.5
8 | WNN | 65.4 | 70.9 | 68.0 | 29.1 | 84.6 | 40.6
9 | BNN | 54.3 | 56.95 | 55.5 | 43.05 | 80.6 | 406.9
10 | TNN | 50.8 | 52.6 | 51.6 | 47.4 | 79.2 | 393.3
Bold denotes the max values.
Table 6. Classification results of DarkNet-53 deep features using the ISIC2019 dataset.

Sr. | Classifier | Recall (%) | Precision (%) | F1 Score (%) | FNR (%) | Accuracy (%) | Time (Sec)
1 | QSVM | 96.96 | 97.1 | 97.0 | 2.9 | 97.0 | 267.3
2 | CSVM | 98 | 98.2 | 98.0 | 1.8 | 98.1 | 267.8
3 | FT | 56.1 | 59.1 | 57.5 | 40.9 | 56.2 | 24.1
4 | GNB | 59.7 | 62.7 | 61.1 | 37.3 | 58.2 | 58.2
5 | WKNN | 97.7 | 98.1 | 97.8 | 1.9 | 98.0 | 112.63
6 | CKNN | 91.3 | 92.3 | 91.7 | 7.7 | 91.9 | 2471.6
7 | NNN | 95.7 | 95.8 | 95.7 | 4.2 | 95.9 | 636
8 | WNN | 95.7 | 96 | 95.8 | 4 | 95.9 | 648.4
9 | BNN | 97 | 97.2 | 97.0 | 2.8 | 97.2 | 571.1
10 | TNN | 95 | 95.4 | 95.1 | 4.6 | 95.4 | 667.9
Bold denotes the maximum value.
Table 7. Results of DenseNet-201 deep features using the ISIC2019 dataset.

Sr. | Classifier | Recall (%) | Precision (%) | F1 Score (%) | FNR (%) | Accuracy (%) | Time (Sec)
1 | QSVM | 98.3 | 98.4 | 98.3 | 1.6 | 98.3 | 1055.2
2 | CSVM | 98.9 | 98.9 | 98.9 | 1.1 | 98.9 | 1177.6
3 | FT | 62.3 | 65.4 | 63.8 | 34.6 | 62.1 | 48.3
4 | GNB | 63.1 | 65.3 | 64.1 | 34.7 | 61.5 | 102.8
5 | WKNN | 98.5 | 98.8 | 98.6 | 1.2 | 98.7 | 306.1
6 | CKNN | 94.3 | 93.2 | 93.7 | 6.8 | 94.0 | 7719.7
7 | NNN | 96.7 | 96.9 | 96.79 | 3.1 | 96.9 | 1054.5
8 | WNN | 98.8 | 96.9 | 97.8 | 3.1 | 96.9 | 1092.7
9 | BNN | 98 | 98.1 | 98.0 | 1.9 | 98.1 | 1167
10 | TNN | 96.3 | 96.4 | 96.3 | 3.6 | 96.9 | 1092.6
Bold denotes the maximum value.
Table 8. Results of the proposed fusion technique using the ISIC2019 dataset.

Sr. | Classifier | Recall (%) | Sensitivity (%) | F1 Score (%) | FNR (%) | Accuracy (%) | Time (Sec)
1 | QSVM | 98.3 | 98.3 | 98.3 | 1.7 | 98.4 | 1053.5
2 | CSVM | 99.02 | 99.1 | 99.0 | 0.9 | 99.1 | 1329.7
3 | FT | 62.5 | 65.4 | 63.9 | 34.6 | 62.5 | 128.2
4 | GNB | 70.5 | 73 | 71.7 | 27 | 69.1 | 176.3
5 | WKNN | 97.6 | 86.06 | 91.4 | 13.94 | 98.0 | 513.1
6 | CKNN | 91.1 | 93.2 | 92.1 | 6.8 | 92.7 | 1253.3
7 | NNN | 95.9 | 95.9 | 95.9 | 4.1 | 96.0 | 266.5
8 | WNN | 97.8 | 97.9 | 97.8 | 2.1 | 98.0 | 181.26
9 | BNN | 95.6 | 95.7 | 95.6 | 4.3 | 95.7 | 414.8
10 | TNN | 95.3 | 95.3 | 95.3 | 4.7 | 95.3 | 610.83
Bold denotes the maximum value.
Table 9. Results of the proposed feature selection on the ISIC2019 dataset.

Sr. | Classifier | Recall (%) | Sensitivity (%) | F1 Score (%) | FNR (%) | Accuracy (%) | Time (Sec)
1 | QSVM | 98.1 | 98.2 | 98.1 | 1.8 | 98.2 | 488.8
2 | CSVM | 98.8 | 98.9 | 98.8 | 1.1 | 98.9 | 655.1
3 | FT | 60.6 | 63.2 | 61.8 | 36.8 | 60.4 | 39.9
4 | GNB | 70.3 | 72.5 | 71.3 | 27.5 | 68.9 | 87.39
5 | WKNN | 97.7 | 98 | 97.8 | 2 | 97.9 | 180.1
6 | CKNN | 91.8 | 91.8 | 91.8 | 8.2 | 92.1 | 3829.5
7 | NNN | 95.1 | 95.1 | 95.1 | 4.9 | 95.1 | 105.9
8 | WNN | 97.3 | 94.7 | 95.9 | 5.3 | 97.5 | 129.4
9 | BNN | 94.7 | 97.4 | 96.0 | 2.6 | 94.6 | 342.28
10 | TNN | 94.5 | 94.5 | 94.5 | 5.5 | 94.5 | 128.6
Bold denotes the maximum value.
Table 10. Comparison of the proposed technique with existing methods.

Reference | Dataset | Accuracy
[49] | ISIC 2018 | 83%
[50] | ISIC 2019 | 97.1%
[51] | ISIC 2019 | 97.84%
Proposed | ISIC 2018 | 85.4%
Proposed | ISIC 2019 | 98.9%
Bold denotes the significant outcome.