Article

DeepLabv3+-Based Segmentation and Best Features Selection Using Slime Mould Algorithm for Multi-Class Skin Lesion Classification

1 Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47040, Pakistan
2 Department of Computer Science, University of Wah, Wah Cantt 47040, Pakistan
3 National University of Technology (NUTECH), Islamabad 44000, Pakistan
4 Department of Computer Science, Shah Abdul Latif University, Khairpur 66111, Pakistan
5 Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
6 Artificial Intelligence Research Center (AIRC), Ajman University, Ajman P.O. Box 346, United Arab Emirates
7 Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 13-5053, Lebanon
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(2), 364; https://doi.org/10.3390/math11020364
Submission received: 18 September 2022 / Revised: 28 December 2022 / Accepted: 4 January 2023 / Published: 10 January 2023
(This article belongs to the Special Issue Current Research in Biostatistics)

Abstract

The development of abnormal cell growth is caused by different pathological alterations and some genetic disorders. Such alterations in skin cells are dangerous and life-threatening, and their timely identification is essential for better treatment and a safe cure. Therefore, in the present article, an approach is proposed for skin lesion segmentation and classification. In the proposed segmentation framework, a pre-trained Mobilenetv2 is utilised as the backbone of the DeepLabv3+ model and trained with optimum parameters that provide a significant improvement in the segmentation of infected skin lesions. Multi-class classification of the skin lesions is carried out through feature extraction from a pre-trained DenseNet201 with dimension N × 1000, out of which informative features are selected by the Slime Mould Algorithm (SMA) and input to SVM and KNN classifiers. The proposed method provided a mean ROC of 0.95 ± 0.03 on MED-Node, 0.97 ± 0.04 on PH2, 0.98 ± 0.02 on HAM-10000, and 0.97 ± 0.00 on ISIC-2019 datasets.

1. Introduction

Skin is the most important and largest organ of the human body, covering about 20 square feet. The skin regulates temperature, allows the sense of touch and the feeling of hot and cold, and protects the inner body from ultraviolet rays [1]. Skin accounts for 15% of the weight of the whole body, with a surface area of about 2 m2 [2]. Skin consists of three main layers [3]. In skin cancer, the growth of skin cells becomes uncontrolled [4]. In the daily routine, some skin cells die and new cells take their place [5]. Skin cancer has become common these days. According to the cancer statistics estimation report for the US in 2021, the estimated number of new skin cancer (melanoma) cases reached 34,920, of which 19,320 are male and 15,600 are female; the estimated number of deaths is 12,410, with 5570 females and 6840 males [6]. The support of computer-aided diagnosis can motivate dermatologists to develop real-time skin cancer identification algorithms. One of the most essential steps in the analysis of the problem is to extract and select the most prominent and promising features. After that, the designed algorithm must be able to provide better measures than the previous one. The limitations of existing approaches act as motivation for the presented work: semantic segmentation is required to extract the exact boundaries of the lesion, and deep features and their selection are required for more accurate classification. The main contributions of the presented work are:
Skin lesions are segmented using the proposed segmentation model, in which features are drawn out through a pre-trained Mobilenetv2 model, which acts as the base of DeepLabv3+ for boundary extraction. The model is trained with chosen hyperparameters that provide more accurate segmentation results.
A classification framework is designed in which features are taken through a pre-trained DenseNet-201 model and optimal features are picked using SMA. These optimal features are passed to the machine learning classifier along with labels to perform classification.

2. Related Work

Timely and accurate skin lesion recognition and classification [7,8] is a very important task [9,10,11,12]. In skin lesion analysis, segmentation [13,14] is the second and most important step, coming after pre-processing [15,16]. It divides the image into parts, called segments [17]. A hybrid model combining k-means with a level set has been proposed [18,19]. Another method segments the input image using k-means clustering [20,21]. An initial-contour Chan–Vese model combined with a genetic algorithm has been applied to recognise skin lesion boundaries [22,23]. Researchers have proposed a new pyramid pooling scheme for lesion segmentation [24,25], a system based on Mask R-CNN [26,27], and a dense framework for improved segmentation [28,29]. Segmentation has also been performed using an adaptive dual attention module [30,31], an algorithm using Bezier curves for global optimization [32,33], and deep learning-based methods, i.e., DeepLabv3+ and Mask R-CNN, to accurately locate the lesion [34,35,36]. An encoder has been joined with DeepLabv3 and a decoder [37,38] for lesion segmentation. Deconvolutional layers have been utilised to change the volume of input and output [39,40], and hierarchical supervision has been used to refine the prediction mask [41,42]. Fuzzy clustering has been utilised to segment the image [43], and researchers have utilised colour features to partition the image [44,45,46]. A CNN classifier with a proposed novel regularisation method provided an accuracy of 0.974 [47]. Ensembles have been utilised for melanoma classification [48,49], and the ARL-CNN classification model has been used for effectiveness [50].

3. Proposed Methodology

We developed novel segmentation and classification models. In the proposed segmentation model, a pre-trained Mobilenetv2 model and DeepLabv3+ are utilised. In the proposed classification framework, features are extracted using DenseNet-201 and optimal features are selected with SMA for multi-class classification of skin lesions, as presented in Figure 1.

3.1. Segmentation of Skin Lesion

In the proposed segmentation model, features are extracted using Mobilenetv2 [51]. Features obtained from the pre-trained Mobilenetv2 are input to the DeepLabv3+ network. DeepLabv3+ [52] is an enhanced version of atrous spatial pyramid pooling, with the addition of image-level features and batch normalization. Atrous convolutions are applied in the last few blocks of the backbone to control the feature-map size. The atrous spatial pyramid pooling is added on top of the extracted features to classify every pixel into its corresponding class. The proposed framework joins Mobilenetv2 and DeepLabv3+ and contains 186 layers: 1 input, 67 convolution, 59 batch-normalization, 40 clipped ReLU, 13 addition, 2 2-D crop, 2 depth-concatenation, 1 softmax, and 1 classification layer, as illustrated in Figure 2.
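The key mechanism named above, atrous (dilated) convolution, can be illustrated with a minimal sketch. This is not the authors' MATLAB implementation; it is a 1-D toy example showing how a dilation rate enlarges the effective receptive field without adding kernel weights or shrinking the feature map more than a standard convolution would.

```python
def dilated_conv1d(signal, kernel, rate):
    """Valid-mode 1-D convolution with dilation `rate` (rate=1 is an ordinary convolution)."""
    k = len(kernel)
    k_eff = k + (k - 1) * (rate - 1)  # effective kernel size grows with the rate
    out = []
    for i in range(len(signal) - k_eff + 1):
        # each output tap reads every `rate`-th input sample
        out.append(sum(kernel[j] * signal[i + j * rate] for j in range(k)))
    return out

# A 3-tap kernel at rate 2 covers 5 input samples per output value.
y = dilated_conv1d([1, 2, 3, 4, 5, 6], [1, 1, 1], rate=2)
```

DeepLabv3+'s ASPP applies several such convolutions (in 2-D) with different rates in parallel, so each pixel is classified using context at multiple scales.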
The parameters of segmentation are mentioned in Table 1.
The parameters in Table 1 were settled on after long experimentation: a batch size of 32, 100 epochs, and a learning rate of 0.0001 with the SGDM optimizer provide good segmentation results.

3.2. Classification of Skin Lesions

The proposed classification model consists of three phases: feature extraction using DenseNet-201, optimal feature selection using the slime mould algorithm, and classification, as in Figure 3.

3.2.1. Features Extraction and Selection

The pre-trained DenseNet-201 [53,54] model is used to obtain the features, taken from the fully-connected FC-1000 layer with dimension N × 1000, which are input into the slime mould algorithm (SMA) [55]. SMA is an optimization technique used for best feature selection. SMA [56] is naturally inspired by the oscillation of slime mould and is thus modelled on the behaviour and morphological alterations of slime mould. The individuals of the swarm are categorized into three groups: some are re-initialised at the origin, in a proportional number, to carry out exploration; some pursue their search based on their current position; and the remaining are directed towards the foremost candidate. The selected SMA parameters are described in Table 2.
Table 2 depicts the selected SMA parameters utilised for optimum feature selection: 5 neighbours, a 0.2 hold-out validation ratio, 100 solutions, and a maximum of 100 iterations. The convergence curve in terms of fitness obtained using SMA is shown in Figure 4.
The above graph shows the outcomes of the best feature selection on the PH2 dataset using the SMA algorithm. The SMA’s mathematical model is discussed below:
$$Z(iu+1)=\begin{cases}Z_b(iu)+xb\cdot\big(C\cdot Z_A(iu)-Z_B(iu)\big), & g<h\\ gc\cdot Z(iu), & g\geq h\end{cases}\tag{1}$$
where $xb$ is defined in Equation (3) and $gc$ linearly decreases from one to zero. $iu$ denotes the current iteration, $Z_b$ stands for the position with the current highest accuracy, $Z$ defines the slime mould location, $Z_A$ and $Z_B$ are two individuals randomly selected from the swarm, $C$ defines the weight of the slime mould, and $h$ is given in Equation (2):
$$h=\tanh\left|V(j)-eF\right|\tag{2}$$
where $V(j)$ shows the fitness of $Z$ with $j \in 1, 2, 3, \ldots, n$, and $eF$ defines the finest fitness over the iterations. $xb$ is defined as below:
$$xb=[-d,\,d]\tag{3}$$
where the calculation of $d$ is shown in Equation (4):
$$d=\operatorname{arctanh}\left(-\left(\frac{iu}{maximum\_iu}\right)+1\right)\tag{4}$$
where $maximum\_iu$ denotes the maximum number of iterations.
$C$ is presented as follows:
$$C(index(j))=\begin{cases}1+g\cdot\log\left(\dfrac{aF-V(j)}{aF-cF}+1\right), & \text{Cond. 1}\\[4pt] 1-g\cdot\log\left(\dfrac{aF-V(j)}{aF-cF}+1\right), & \text{others}\end{cases}\tag{5}$$
where $aF$ stands for the best fitness, $cF$ stands for the worst fitness, Cond. 1 describes that $V(j)$ ranks in the initial half of the population, and $g$ is a random value in the interval $[0, 1]$. $index$ defines the sorted fitness values and is computed as described in Equation (6):
$$index=\operatorname{Sort}(V)\tag{6}$$
The uncertainty described in Equation (5) is simulated using $g$. The $\log$ decreases the rate of change of the numerical values, so the frequency does not change too abruptly.
The slime mould changes its search pattern according to the quality of the food: when the food concentration is sufficient, the mass is greater, and the mass decreases when the food concentration becomes poor, as presented in Equation (7):
$$Z=\begin{cases}random\cdot(ub-lb)+lb, & random<k\\ Z_b(iu)+xb\cdot\big(C\cdot Z_A(iu)-Z_B(iu)\big), & g<h\\ gc\cdot Z(iu), & g\geq h\end{cases}\tag{7}$$
where $ub$ and $lb$ define the upper and lower boundaries, and $g$ is a random value between 0 and 1. Figure 5 depicts the optimization process of the feature vector.
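To make the update rules above concrete, here is a hedged sketch of one SMA iteration for scalar positions, following Equations (2)–(7). The restart probability `k`, the bounds, and the example population are illustrative values, not the paper's settings, and fitness is assumed to be an accuracy (higher is better).

```python
import math
import random

def sma_update(Z, fitness, Zb, aF, cF, eF, iu, max_iu, ub=1.0, lb=0.0, k=0.03):
    """One SMA iteration over scalar positions Z with fitness values (higher = better)."""
    n = len(Z)
    # Eq. (4); clipped slightly below 1 so atanh(1) = inf is avoided at iu = 0
    d = math.atanh(min(1 - 1e-9, 1 - iu / max_iu))
    gc = 1 - iu / max_iu                                   # linearly decreases from 1 to 0
    order = sorted(range(n), key=lambda j: -fitness[j])    # Eq. (6): index = Sort(V), best first
    new_Z = [0.0] * n
    for rank, j in enumerate(order):
        g = random.random()
        h = math.tanh(abs(fitness[j] - eF))                # Eq. (2)
        span = math.log((aF - fitness[j]) / (aF - cF + 1e-12) + 1)
        C = 1 + g * span if rank < n // 2 else 1 - g * span  # Eq. (5)
        if random.random() < k:                            # Eq. (7), branch 1: re-initialise
            z = random.random() * (ub - lb) + lb
        elif g < h:                                        # branch 2: move around the best
            xb = random.uniform(-d, d)                     # Eq. (3): xb drawn from [-d, d]
            ZA, ZB = random.choice(Z), random.choice(Z)
            z = Zb + xb * (C * ZA - ZB)
        else:                                              # branch 3: contract in place
            z = gc * Z[j]
        new_Z[j] = min(ub, max(lb, z))                     # keep within the search bounds
    return new_Z
```

In the feature-selection setting, each position would be thresholded into a binary mask over the N × 1000 feature vector and the fitness would be a hold-out classification accuracy, as described in Table 2.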
Table 3 depicts the selected feature vector dimensions after applying SMA.
Table 3 shows the best-selected features number on PH2, MED-NODE, HAM10000, and ISIC 2019 datasets in both the training and testing phases.

3.2.2. Classification Using Selected Classifiers

The classifier takes the values of numerous features to make a prediction and consists of a number of parameters that it learns from training data. The learned classifier captures the correspondence between the features and labels in the training data [57]. In the proposed methodology, using the optimal features, three classifiers are utilised to differentiate the skin lesions into the relevant classes.
The cubic kernel SVM [58] and its chosen parameters are stated in Table 4.
For classification purposes, the weighted KNN [59] and fine KNN [60] selected parameters are presented in Table 5.
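As a hedged illustration of the weighted KNN classifier named above, the sketch below uses inverse-squared-distance voting, a common choice for "weighted KNN" presets; the toy points and labels are invented for the example and are not the paper's DenseNet features.

```python
import math
from collections import defaultdict

def weighted_knn_predict(X_train, y_train, x, k=3):
    """Predict the label of point x by distance-weighted voting among its k nearest neighbours."""
    dists = sorted((math.dist(x, xi), yi) for xi, yi in zip(X_train, y_train))
    votes = defaultdict(float)
    for d, yi in dists[:k]:
        votes[yi] += 1.0 / (d * d + 1e-12)  # closer neighbours contribute larger votes
    return max(votes, key=votes.get)

# Toy 2-D "feature vectors" with two illustrative classes.
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = ["benign", "benign", "benign", "melanoma", "melanoma", "melanoma"]
```

A fine KNN would correspond to k = 1 with unweighted voting; in the paper, the inputs are the SMA-selected feature vectors rather than raw coordinates.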

4. Experimental Discussion/Setup

The performance of the proposed segmentation approach is evaluated on four public datasets: ISIC 2016 [61], 2017 [62], 2018 [63], and PH2 [64,65]. Four public datasets, ISIC 2019 [66,67], HAM10000 [66], PH2 [64], and MED-NODE [68,69], were utilised to evaluate the performance of the proposed classification framework after augmentation. MATLAB 2020b is utilised as the implementation tool, on Intel Core i5 6th-generation hardware running Windows 10.

4.1. Experiment#1: Segmentation

The performance of the proposed segmentation approach is computed based on global accuracy, mean accuracy, mean IoU, weighted IoU, and mean BF score using the ISIC 2016, 2017, 2018, and PH2 datasets, as shown in Table 6.
Table 6 depicts the proposed segmentation results, in which we achieved a global accuracy of 0.97481, 0.97297, 0.98642, and 0.95914 on ISIC 2016, 2017, 2018, and PH2, respectively. The segmentation outcomes of the proposed framework on the benchmark ISIC 2016 and PH2 datasets are shown in Figure 6 and Figure 7.
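The reported metrics can be sketched on flattened binary masks (lists of 0/1 per pixel). This is an illustrative implementation of the standard definitions of global accuracy and per-class IoU, not the authors' MATLAB evaluation code.

```python
def global_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def iou(pred, truth, cls):
    """Intersection-over-union for one class over flattened masks."""
    inter = sum(p == cls and t == cls for p, t in zip(pred, truth))
    union = sum(p == cls or t == cls for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def mean_iou(pred, truth):
    """Unweighted mean of the lesion (1) and background (0) IoU."""
    return (iou(pred, truth, 0) + iou(pred, truth, 1)) / 2
```

Weighted IoU would instead weight each class's IoU by its pixel frequency, and the BF score compares predicted and true boundary pixels within a tolerance.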
The achieved results are also compared to existing research work, as presented in Table 7.
On the 2016 challenge dataset, the existing technique provides a maximum accuracy of 96.2% using GA-based optimization [22]. On the 2017 segmentation challenge dataset, FC-DPN provides 95.14% accuracy, but some lesions are not segmented accurately due to blurry and low-contrast images [74]. The w-net model provides a segmentation accuracy of 97.39%, though improvement in the deep learning framework is required to increase the segmentation results [77]. An antialiasing convolution model utilised for skin lesion segmentation provides a 95% prediction score; its segmentation scores might be increased using an improved feature-optimization approach [70].
The method proposed in this article combines Mobilenetv2 and DeepLabv3+, which detects lesion boundaries more accurately, with accuracies of 97.48%, 97.29%, 98.64%, and 95.91% on the 2016, 2017, and 2018 challenge datasets and PH2, respectively, making it far more efficient than the existing work.

4.2. Experiment#2: Skin Lesions Classification

In the classification experiment, features are computed using pre-trained DenseNet-201, and optimum features selected by SMA are supplied to the classifiers under 5-fold cross-validation. The graphical depiction of the proposed classification results is shown in Figure 8, Figure 9, Figure 10 and Figure 11. The classification results are described in Table 8.
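The 5-fold protocol used above can be sketched as an index-splitting helper; this is a generic illustration (the shuffle seed is arbitrary), not the exact fold assignment used in the experiments.

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Split sample indices 0..n-1 into k (train, test) index pairs after a seeded shuffle."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k roughly equal, disjoint test folds
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]
```

Each classifier is then trained on the k−1 training folds and scored on the held-out fold, and the k accuracies are averaged.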
As given in Table 8, the classification of the MED-NODE dataset was performed using three classifiers, cubic SVM, weighted KNN, and fine KNN, with overall accuracies of 97.32%, 97.62%, and 99.33%, respectively. All classifiers were trained using a 5-fold cross-validation dataset distribution. The classification outcomes on PH2 are mentioned in Table 9.
In Table 9, cubic SVM achieved an accuracy of 97.87%; the weighted KNN and fine KNN classifiers achieved 98.09% and 98.88%, respectively. The classification outcomes on HAM10000 are mentioned in Table 10.
Table 10 shows the outcomes of the cubic SVM, weighted KNN, and fine KNN, which obtained overall accuracies of 90.65%, 86.90%, and 92.01%, respectively. The classification outcomes on ISIC 2019 are depicted in Table 11.
Table 11 shows the outcomes of the cubic SVM, weighted KNN, and fine KNN, with overall accuracies of 89.99%, 90.22%, and 91.7%, respectively. The results show that fine KNN performs best among the three classifiers. A comparison of classification results is given in Table 12.
In Table 12, the classification results are compared with existing methods on the ISIC 2019, HAM10000, PH2, and MED-NODE datasets. On ISIC 2019, a deep learning and entropy-based approach provided 91% accuracy; however, there is still room to improve the model for better accuracy [83]. On HAM-10000, an accuracy of 89.8% is achieved with a deep feature extraction and selection approach [89]. On PH2, 97.5% accuracy is achieved using a combination of deep and texture features; the classification results might be increased using shape and colour features [91]. On MED-NODE, the accuracy is 97.70% using a transfer learning model [69].
In this research, however, features are taken from selected layers of the pre-trained models and the best features are selected using the SMA model, which provides an accuracy of 91.7% on ISIC 2019, 92.01% on HAM10000, 98.88% on PH2, and 99.33% on MED-NODE. The experimental outcomes show that the achieved results are better than the latest works in this domain.

5. Conclusions

Skin lesion detection is a complex job due to the resemblances among the classes of skin lesions. To overcome this challenge, novel deep learning models are designed for skin lesion analysis. To perform semantic segmentation, deep features are taken through the pre-trained Mobilenetv2 model and passed to DeepLabv3+ for extraction of the exact border of the lesion. The proposed segmentation approach is evaluated based on mean accuracy, global accuracy, BF score, weighted IoU, and mean IoU on the ISIC 2016, 2017, 2018, and PH2 datasets, providing a global accuracy of 0.97481, 0.97297, 0.98642, and 0.95914, respectively.
In the proposed classification model, deep features are taken using DenseNet-201 and optimal features are selected by SMA; the model is evaluated on the MED-NODE, PH2, HAM-10000, and ISIC 2019 benchmark datasets, providing accuracies of 99.33%, 98.88%, 92.01%, and 91.7%, respectively. The achieved segmentation and classification outcomes are far better than those of existing techniques.

Author Contributions

Conceptualization, M.Z. and J.A.; Methodology, M.S., G.A.M. and S.K.; Formal analysis, M.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hameed, N.; Ruskin, A.; Hassan, K.A.; Hossain, M.A. A comprehensive survey on image-based computer aided diagnosis systems for skin cancer. In Proceedings of the 2016 10th International Conference on Software, Knowledge, Information Management & Applications (SKIMA), Chengdu, China, 15–17 December 2016.
2. Lai-Cheong, J.E.; McGrath, J.A. Structure and function of skin, hair and nails. Medicine 2013, 41, 317–320.
3. Gordon, R. Skin cancer: An overview of epidemiology and risk factors. Semin. Oncol. Nurs. 2013, 29, 160–169.
4. Javed, R.; Rahim, M.S.M.; Saba, T.; Rehman, A. A comparative study of features selection for skin lesion detection from dermoscopic images. Netw. Model. Anal. Health Inform. Bioinform. 2019, 9, 4.
5. Zhang, N.; Cai, Y.-X.; Wang, Y.-Y.; Tian, Y.-T.; Wang, X.-L.; Badami, B. Skin cancer diagnosis based on optimized convolutional neural network. Artif. Intell. Med. 2020, 102, 101756.
6. Key Statistics for Melanoma Skin Cancer. Available online: https://www.cancer.org/cancer/melanoma-skin-cancer/about/key-statistics.html (accessed on 12 October 2021).
7. Shetty, B.; Fernandes, R.; Rodrigues, A.P.; Chengoden, R.; Bhattacharya, S.; Lakshmanna, K. Skin lesion classification of dermoscopic images using machine learning and convolutional neural network. Sci. Rep. 2022, 12, 18134.
8. Liang, Y.; Sun, L.; Ser, W.; Lin, F.; Thng, S.T.G.; Chen, Q.; Lin, Z. Classification of non-tumorous skin pigmentation disorders using voting based probabilistic linear discriminant analysis. Comput. Biol. Med. 2018, 99, 123–132.
9. Amin, J.; Sharif, M.; Raza, M.; Saba, T.; Rehman, A. Brain tumor classification: Feature fusion. In Proceedings of the 2019 International Conference on Computer and Information Sciences (ICCIS), Sakaka, Saudi Arabia, 3–4 April 2019.
10. Amin, J.; Sharif, M.; Yasmin, M.; Saba, T.; Raza, M. Use of machine intelligence to conduct analysis of human brain data for detection of abnormalities in its cognitive functions. Multimed. Tools Appl. 2020, 79, 10955–10973.
11. Amin, J.; Almas-Anjum, M.; Sharif, M.; Kadry, S.; Nam, Y. Fruits and vegetable diseases recognition using convolutional neural networks. Comput. Mater. Contin. 2021, 70, 619–635.
12. Saleem, S.; Amin, J.; Sharif, M.; Anjum, M.A.; Iqbal, M.; Wang, S.-H. A deep network designed for segmentation and classification of leukemia using fusion of the transfer learning models. Complex Intell. Syst. 2022, 8, 3105–3120.
13. Wei, Z.; Shi, F.; Song, H.; Ji, W.; Han, G. Attentive boundary aware network for multi-scale skin lesion segmentation with adversarial training. Multimed. Tools Appl. 2020, 79, 27115–27136.
14. Wei, Z.; Song, H.; Li, L.; Chen, Q.; Han, G. Attention-based DenseUnet network with adversarial training for skin lesion segmentation. IEEE Access 2019, 7, 136616–136629.
15. Sharif, M.I.; Li, J.P.; Amin, J.; Sharif, A. An improved framework for brain tumor analysis using MRI based on YOLOv2 and convolutional neural network. Complex Intell. Syst. 2021, 7, 2023–2036.
16. Sharif, M.; Amin, J.; Raza, M.; Yasmin, M.; Satapathy, S.C. An integrated design of particle swarm optimization (PSO) with fusion of features for detection of brain tumor. Pattern Recognit. Lett. 2020, 129, 150–157.
17. Saba, T.; Mohamed, A.S.; El-Affendi, M.; Amin, J.; Sharif, M. Brain tumor detection using fusion of hand crafted and deep learning features. Cogn. Syst. Res. 2020, 59, 221–230.
18. Hwang, Y.N.; Seo, M.J.; Kim, S.M. A segmentation of melanocytic skin lesions in dermoscopic and standard images using a hybrid two-stage approach. BioMed Res. Int. 2021, 2021, 5562801.
19. Amin, J.; Sharif, M.; Gul, N.; Yasmin, M.; Shad, S.A. Brain tumor classification based on DWT fusion of MRI sequences using convolutional neural network. Pattern Recognit. Lett. 2020, 129, 115–122.
20. Garg, S.; Jindal, B. Skin lesion segmentation using k-mean and optimized fire fly algorithm. Multimed. Tools Appl. 2021, 80, 7397–7410.
21. Amin, J.; Sharif, A.; Gul, N.; Anjum, M.A.; Nisar, M.W.; Azam, F.; Bukhari, S.A.C. Integrated design of deep features fusion for localization and classification of skin cancer. Pattern Recognit. Lett. 2020, 131, 63–70.
22. Ashour, A.S.; Nagieb, R.M.; El-Khobby, H.A.; Abd-Elnaby, M.M.; Dey, N. Genetic algorithm-based initial contour optimization for skin lesion border detection. Multimed. Tools Appl. 2021, 80, 2583–2597.
23. Amin, J.; Sharif, M.; Gul, N.; Raza, M.; Anjum, M.A.; Nisar, M.W.; Bukhari, S.A.C. Brain tumor detection by using stacked autoencoders in deep learning. J. Med. Syst. 2020, 44, 32.
24. Kaur, R.; GholamHosseini, H.; Sinha, R.; Lindén, M. Automatic lesion segmentation using atrous convolutional deep neural networks in dermoscopic skin cancer images. BMC Med. Imaging 2022, 22, 103.
25. Sharif, M.; Amin, J.; Raza, M.; Anjum, M.A.; Afzal, H.; Shad, S.A. Brain tumor detection based on extreme learning. Neural Comput. Appl. 2020, 32, 15975–15987.
26. Bagheri, F.; Tarokh, M.J.; Ziaratban, M. Skin lesion segmentation from dermoscopic images by using Mask R-CNN, Retina-Deeplab, and graph-based methods. Biomed. Signal Process. Control 2021, 67, 102533.
27. Amin, J.; Sharif, M.; Rehman, A.; Raza, M.; Mufti, M.R. Diabetic retinopathy detection and classification using hybrid feature set. Microsc. Res. Tech. 2018, 81, 990–996.
28. Qamar, S.; Ahmad, P.; Shen, L. Dense encoder-decoder–based architecture for skin lesion segmentation. Cogn. Comput. 2021, 13, 583–594.
29. Amin, J.; Sharif, M.; Anjum, M.A.; Raza, M.; Bukhari, S.A.C. Convolutional neural network with batch normalization for glioma and stroke lesion detection using MRI. Cogn. Syst. Res. 2020, 59, 304–311.
30. Wu, H.; Pan, J.; Li, Z.; Wen, Z.; Qin, J. Automated skin lesion segmentation via an adaptive dual attention module. IEEE Trans. Med. Imaging 2020, 40, 357–370.
31. Muhammad, N.; Sharif, M.; Amin, J.; Mehboob, R.; Gilani, S.A.; Bibi, N.; Javed, H.; Ahmed, N. Neurochemical alterations in sudden unexplained perinatal deaths—A review. Front. Pediatr. 2018, 6, 6.
32. Abdulhamid, I.A.M.; Sahiner, A.; Rahebi, J. New auxiliary function with properties in nonsmooth global optimization for melanoma skin cancer segmentation. BioMed Res. Int. 2020, 2020, 5345923.
33. Sharif, M.; Amin, J.; Nisar, M.W.; Anjum, M.A.; Muhammad, N.; Shad, S.A. A unified patch based method for brain tumor detection using features fusion. Cogn. Syst. Res. 2020, 59, 273–286.
34. Goyal, M.; Oakley, A.; Bansal, P.; Dancey, D.; Yap, M.H. Skin lesion segmentation in dermoscopic images with ensemble deep learning methods. IEEE Access 2019, 8, 4171–4181.
35. Sharif, M.; Amin, J.; Siddiqa, A.; Khan, H.U.; Malik, M.S.A.; Anjum, M.A.; Kadry, S. Recognition of different types of leukocytes using YOLOv2 and optimized bag-of-features. IEEE Access 2020, 8, 167448–167459.
36. Anjum, M.A.; Amin, J.; Sharif, M.; Khan, H.U.; Malik, M.S.A.; Kadry, S. Deep semantic segmentation and multi-class skin lesion classification based on convolutional neural network. IEEE Access 2020, 8, 129668–129678.
37. Lameski, J.; Jovanov, A.; Zdravevski, E.; Lameski, P.; Gievska, S. Skin lesion segmentation with deep learning. In Proceedings of the IEEE EUROCON 2019-18th International Conference on Smart Technologies, Novi Sad, Serbia, 1–4 July 2019.
38. Sharif, M.; Amin, J.; Yasmin, M.; Rehman, A. Efficient hybrid approach to segment and classify exudates for DR prediction. Multimed. Tools Appl. 2020, 79, 11107–11123.
39. Amin, J.; Sharif, M.; Gul, E.; Nayak, R.S. 3D-semantic segmentation and classification of stomach infections using uncertainty aware deep neural networks. Complex Intell. Syst. 2022, 8, 3041–3057.
40. Amin, J.; Anjum, M.A.; Sharif, M.; Saba, T.; Tariq, U. An intelligence design for detection and classification of COVID-19 using fusion of classical and convolutional neural network and improved microscopic features selection approach. Microsc. Res. Tech. 2021, 84, 2254–2267.
41. Li, H.; He, X.; Zhou, F.; Yu, Z.; Ni, D.; Chen, S.; Wang, T.; Lei, B. Dense deconvolutional network for skin lesion segmentation. IEEE J. Biomed. Health Inform. 2018, 23, 527–537.
42. Amin, J.; Sharif, M.; Anjum, M.A.; Khan, H.U.; Malik, M.S.A.; Kadry, S. An integrated design for classification and localization of diabetic foot ulcer based on CNN and YOLOv2-DFU models. IEEE Access 2020, 8, 228586–228597.
43. Amin, J.; Sharif, M.; Yasmin, M. Segmentation and classification of lung cancer: A review. Immunol. Endocr. Metab. Agents Med. Chem. 2016, 16, 82–99.
44. Jaisakthi, S.A.; Chandrabose, A.; Mirunalini, P. Automatic skin lesion segmentation using semi-supervised learning technique. arXiv 2017, arXiv:1703.04301.
45. Lynn, N.C.; Kyu, Z.M. Segmentation and classification of skin cancer melanoma from skin lesion images. In Proceedings of the 2017 18th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT), Taipei, Taiwan, 18–20 December 2017.
46. Amin, J.; Anjum, M.A.; Sharif, M.; Kadry, S.; Nam, Y.; Wang, S. Convolutional Bi-LSTM based human gait recognition using video sequences. Comput. Mater. Contin. 2021, 68, 2693–2709.
47. Albahar, M.A. Skin lesion classification using convolutional neural network with novel regularizer. IEEE Access 2019, 7, 38306–38313.
48. Milton, M.A.A. Automated skin lesion classification using ensemble of deep neural networks in ISIC 2018: Skin lesion analysis towards melanoma detection challenge. arXiv 2019, arXiv:1901.10802.
49. Linsky, T.W.; Vergara, R.; Codina, N.; Nelson, J.W.; Walker, M.J.; Su, W.; Barnes, C.O.; Hsiang, T.-Y.; Esser-Nobis, K.; Yu, K. De novo design of potent and resilient hACE2 decoys to neutralize SARS-CoV-2. Science 2020, 370, 1208–1214.
50. Zhang, J.; Xie, Y.; Xia, Y.; Shen, C. Attention residual learning for skin lesion classification. IEEE Trans. Med. Imaging 2019, 38, 2092–2103.
51. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018.
52. Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018.
53. Nalini, M.; Radhika, K. Comparative analysis of deep network models through transfer learning. In Proceedings of the 2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Palladam, India, 7–9 October 2020.
54. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
55. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323.
56. Wazery, Y.M.; Saber, E.; Houssein, E.H.; Ali, A.A.; Amer, E. An efficient slime mould algorithm combined with k-nearest neighbor for medical classification tasks. IEEE Access 2021, 9, 113666–113682.
57. Pereira, F.; Mitchell, T.; Botvinick, M. Machine learning classifiers and fMRI: A tutorial overview. Neuroimage 2009, 45, S199–S209.
58. Singh, S.; Kumar, R. Histopathological image analysis for breast cancer detection using cubic SVM. In Proceedings of the 2020 7th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 27–28 February 2020.
59. Ali, A.; Alrubei, M.; Hassan, L.; Al-Ja'afari, M.; Abdulwahed, S. Diabetes classification based on KNN. IIUM Eng. J. 2020, 21, 175–181.
60. Subbarao, M.V.; Samundiswary, P. Performance analysis of modulation recognition in multipath fading channels using pattern recognition classifiers. Wirel. Pers. Commun. 2020, 115, 129–151.
61. Gutman, D.; Codella, N.C.; Celebi, E.; Helba, B.; Marchetti, M.; Mishra, N.; Halpern, A. Skin lesion analysis toward melanoma detection: A challenge at the international symposium on biomedical imaging (ISBI) 2016, hosted by the international skin imaging collaboration (ISIC). arXiv 2016, arXiv:1605.01397.
62. Codella, N.C.; Gutman, D.; Celebi, M.E.; Helba, B.; Marchetti, M.A.; Dusza, S.W.; Kalloo, A.; Liopyris, K.; Mishra, N.; Kittler, H. Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC). In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018.
63. Codella, N.; Rotemberg, V.; Tschandl, P.; Celebi, M.E.; Dusza, S.; Gutman, D.; Helba, B.; Kalloo, A.; Liopyris, K.; Marchetti, M. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration (ISIC). arXiv 2019, arXiv:1902.03368.
64. Mendonça, T.; Ferreira, P.M.; Marques, J.S.; Marcal, A.R.; Rozeira, J. PH2—A dermoscopic image database for research and benchmarking. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013.
65. Ünver, H.M.; Ayan, E. Skin lesion segmentation in dermoscopic images with combination of YOLO and grabcut algorithm. Diagnostics 2019, 9, 72.
66. Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161.
  67. Combalia, M.; Codella, N.C.; Rotemberg, V.; Helba, B.; Vilaplana, V.; Reiter, O.; Carrera, C.; Barreiro, A.; Halpern, A.C.; Puig, S. Bcn20000: Dermoscopic lesions in the wild. arXiv 2019, arXiv:1908.02288. [Google Scholar]
  68. Giotis, I.; Molders, N.; Land, S.; Biehl, M.; Jonkman, M.F.; Petkov, N. MED-NODE: A computer-assisted melanoma diagnosis system using non-dermoscopic images. Expert Syst. Appl. 2015, 42, 6578–6585. [Google Scholar] [CrossRef]
  69. Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Classification of skin lesions using transfer learning and augmentation with Alex-net. PloS ONE 2019, 14, e0217293. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  70. Le, P.T.; Chang, C.-C.; Li, Y.-H.; Hsu, Y.-C.; Wang, J.-C. Antialiasing Attention Spatial Convolution Model for Skin Lesion Segmentation with Applications in the Medical IoT. Wirel. Commun. Mob. Comput. 2022, 2022, 1278515. [Google Scholar] [CrossRef]
  71. Tong, X.; Wei, J.; Sun, B.; Su, S.; Zuo, Z.; Wu, P. ASCU-Net: Attention gate, spatial and channel attention u-net for skin lesion segmentation. Diagnostics 2021, 11, 501. [Google Scholar] [CrossRef] [PubMed]
  72. Xie, F.; Yang, J.; Jiang, Z.; Zheng, Y.; Wang, Y. Skin lesion segmentation using high-resolution convolutional neural network. Comput. Methods Programs Biomed. 2020, 186, 105241. [Google Scholar] [CrossRef]
  73. Bi, L.; Kim, J.; Ahn, E.; Kumar, A.; Feng, D.; Fulham, M. Step-wise integration of deep class-specific learning for dermoscopic image segmentation. Pattern Recognit. 2019, 85, 78–89. [Google Scholar] [CrossRef] [Green Version]
  74. Shan, P.; Wang, Y.; Fu, C.; Song, W.; Chen, J. Automatic skin lesion segmentation based on FC-DPN. Comput. Biol. Med. 2020, 123, 103762. [Google Scholar] [CrossRef]
  75. Kaymak, R.; Kaymak, C.; Ucar, A. Skin lesion segmentation using fully convolutional networks: A comparative experimental study. Expert Syst. Appl. 2020, 161, 113742. [Google Scholar] [CrossRef]
  76. Al-Masni, M.A.; Al-Antari, M.A.; Choi, M.-T.; Han, S.-M.; Kim, T.-S. Skin lesion segmentation in dermoscopy images via deep full resolution convolutional networks. Comput. Methods Programs Biomed. 2018, 162, 221–231. [Google Scholar] [CrossRef] [PubMed]
  77. Khouloud, S.; Ahlem, M.; Fadel, T.; Amel, S. W-net and inception residual network for skin lesion segmentation and classification. Appl. Intell. 2022, 52, 3976–3994. [Google Scholar] [CrossRef]
  78. Arora, R.; Raman, B.; Nayyar, K.; Awasthi, R. Automated skin lesion segmentation using attention-based deep convolutional neural network. Biomed. Signal Process. Control. 2021, 65, 102358. [Google Scholar] [CrossRef]
  79. Bi, L.; Feng, D.; Kim, J. Improving automatic skin lesion segmentation using adversarial learning based data augmentation. arXiv 2018, arXiv:1807.08392. [Google Scholar]
  80. Jahanifar, M.; Tajeddin, N.Z.; Koohbanani, N.A.; Gooya, A.; Rajpoot, N. Segmentation of skin lesions and their attributes using multi-scale convolutional neural networks and domain specific augmentations. arXiv 2018, arXiv:1809.10243. [Google Scholar]
  81. Chen, P.; Huang, S.; Yue, Q. Skin Lesion Segmentation Using Recurrent Attentional Convolutional Networks. IEEE Access 2022, 10, 94007–94018. [Google Scholar] [CrossRef]
  82. Zhang, G.; Shen, X.; Chen, S.; Liang, L.; Luo, Y.; Yu, J.; Lu, J. DSM: A deep supervised multi-scale network learning for skin cancer segmentation. IEEE Access 2019, 7, 140936–140945. [Google Scholar] [CrossRef]
  83. Pacheco, A.G.; Ali, A.-R.; Trappenberg, T. Skin cancer detection based on deep learning and entropy to detect outlier samples. arXiv 2019, arXiv:1909.04525. [Google Scholar]
  84. Setiawan, A.W. Effect of Color Enhancement on Early Detection of Skin Cancer using Convolutional Neural Network. In Proceedings of the 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), Doha, Qatar, 2–5 February 2020. [Google Scholar]
  85. Pacheco, A.G.; Sastry, C.S.; Trappenberg, T.; Oore, S.; Krohling, R.A. On out-of-distribution detection algorithms with deep neural skin cancer classifiers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020. [Google Scholar]
  86. Saarela, M.; Geogieva, L. Robustness, Stability, and Fidelity of Explanations for a Deep Skin Cancer Classification Model. Appl. Sci. 2022, 12, 9545. [Google Scholar] [CrossRef]
  87. Afza, F.; Sharif, M.; Mittal, M.; Khan, M.A.; Hemanth, D.J. A hierarchical three-step superpixels and deep learning framework for skin lesion classification. Methods 2022, 202, 88–102. [Google Scholar] [CrossRef]
  88. Khan, M.A.; Akram, T.; Zhang, Y.-D.; Sharif, M. Attributes based skin lesion detection and recognition: A mask RCNN and transfer learning-based deep learning framework. Pattern Recognit. Lett. 2021, 143, 58–66. [Google Scholar] [CrossRef]
  89. Khan, M.A.; Javed, M.Y.; Sharif, M.; Saba, T.; Rehman, A. Multi-model deep neural network based features extraction and optimal selection approach for skin lesion classification. In Proceedings of the 2019 International Conference on Computer and Information Sciences (ICCIS), Sakaka, Saudi Arabia, 3–4 April 2019. [Google Scholar]
  90. Hosny, K.M.; Kassem, M.A. Refined residual deep convolutional network for skin lesion classification. J. Digit. Imaging 2022, 35, 258–280. [Google Scholar] [CrossRef]
  91. Alizadeh, S.M.; Mahloojifar, A. Automatic skin cancer detection in dermoscopy images by combining convolutional neural networks and texture features. Int. J. Imaging Syst. Technol. 2021, 31, 695–707. [Google Scholar] [CrossRef]
  92. Kumar, M.; Alshehri, M.; AlGhamdi, R.; Sharma, P.; Deep, V. A de-ann inspired skin cancer detection approach using fuzzy c-means clustering. Mob. Netw. Appl. 2020, 25, 1319–1329. [Google Scholar] [CrossRef]
  93. Arora, G.; Dubey, A.K.; Jaffery, Z.A.; Rocha, A. Bag of feature and support vector machine based early diagnosis of skin cancer. Neural Comput. Appl. 2020, 34, 8385–8392. [Google Scholar] [CrossRef]
  94. Xie, Y.; Zhang, J.; Xia, Y.; Shen, C. A mutual bootstrapping model for automated skin lesion segmentation and classification. IEEE Trans. Med. Imaging 2020, 39, 2482–2493. [Google Scholar] [CrossRef]
  95. Gerges, F.; Shih, F.Y. A Convolutional Deep Neural Network Approach for Skin Cancer Detection Using Skin Lesion Images. Int. J. Electr. Comput. Eng. 2021, 15, 475–478. [Google Scholar]
  96. Mukherjee, S.; Adhikari, A.; Roy, M. Malignant melanoma detection using multi layer preceptron with visually imperceptible features and PCA components from MED-NODE dataset. Int. J. Med. Eng. Inform. 2020, 12, 151–168. [Google Scholar] [CrossRef]
Figure 1. Skin lesion recognition: segmentation using MobileNetV2 and DeepLabv3+; classification based on DenseNet-201 and SMA with SVM and KNN classifiers.
Figure 2. Skin lesion segmentation based on MobileNetV2 and DeepLabv3+.
Figure 3. Steps of the proposed classification model.
Figure 4. SMA convergence in terms of iterations and fitness value.
Figure 5. Optimal feature selection using SMA.
Figure 6. Visualisation results: (a) input image from ISIC 2016; (b) segmented output for ISIC 2016; (c) input image from ISIC 2017; (d) segmented output for ISIC 2017.
Figure 7. Visualisation results: (a) input image from ISIC 2018; (b) segmented output for ISIC 2018; (c) input image from PH2; (d) segmented output for PH2.
Figure 8. Classification results on MED-NODE: (a) ROC curve of fine KNN; (b) confusion matrix of fine KNN.
Figure 9. Classification results on PH2: (a) ROC curve of fine KNN; (b) confusion matrix of fine KNN.
Figure 10. Classification results on HAM10000: (a) ROC curve of fine KNN; (b) confusion matrix of fine KNN.
Figure 11. Classification results on ISIC 2019: (a) ROC curve of fine KNN; (b) confusion matrix of fine KNN.
Table 1. Hyperparameters of the proposed framework.

Parameter | Value
Batch size | 32
Training epochs | 100
Learning rate | 0.0001
Optimizer (solver) | SGDM
Table 2. Selected parameters of SMA.

Parameter | Value
Feature selection model | SMA
Number of nearest neighbours | opts.k = 5
Validation data ratio | ho = 0.2
Number of solutions | opts.N = 10
Maximum iterations | opts.T = 100
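The Table 2 settings describe a wrapper-style search: each candidate feature subset is scored by the hold-out error of a k-NN classifier (opts.k = 5, ho = 0.2), and SMA evolves opts.N = 10 candidate masks over opts.T = 100 iterations. The sketch below shows only the fitness evaluation for one binary mask, in plain Python; the SMA search loop itself and the exact fitness formula the authors used are not reproduced here.

```python
# Hedged sketch of a wrapper fitness for SMA feature selection:
# score a binary feature mask by the hold-out error of a plain k-NN.
import math

def knn_predict(train_X, train_y, x, k=5):
    """k-NN with Euclidean distance and majority vote."""
    dists = sorted(
        (math.dist(row, x), label) for row, label in zip(train_X, train_y)
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

def fitness(mask, X, y, k=5, ho=0.2):
    """Error rate of k-NN on a hold-out split, using only masked features."""
    cols = [i for i, bit in enumerate(mask) if bit]
    if not cols:
        return 1.0  # an empty mask is the worst possible solution
    Xm = [[row[i] for i in cols] for row in X]
    split = int(len(Xm) * (1 - ho))
    train_X, test_X = Xm[:split], Xm[split:]
    train_y, test_y = y[:split], y[split:]
    errors = sum(
        knn_predict(train_X, train_y, x, k) != t
        for x, t in zip(test_X, test_y)
    )
    return errors / len(test_X)

# Toy example: feature 0 separates the classes; feature 1 is noise.
X = [[0, 9], [0, 1], [0, 5], [0, 3], [1, 2],
     [1, 8], [1, 4], [1, 6], [0, 7], [1, 0]]
y = [0, 0, 0, 0, 1, 1, 1, 1, 0, 1]
err = fitness([1, 0], X, y, k=3)  # keeping only feature 0 -> error 0.0
```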
Table 3. Number of selected features for each dataset.

Dataset | Total Features | Selected Features
PH2 | N × 1000 | N × 50
MED-NODE | N × 1000 | N × 30
HAM10000 | N × 1000 | N × 227
ISIC 2019 | N × 1000 | N × 251
Table 4. Parameters of SVM.

Parameter | Value
Model | Cubic SVM
Kernel function | Cubic
Kernel scale | Automatic
Box constraint level | 1
Multiclass method | One-vs-One
Standardize data | True
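The "cubic" kernel in Table 4 is a third-degree polynomial kernel. A minimal sketch, assuming the common form K(x, z) = (γ⟨x, z⟩ + c)³ with γ = 1 and c = 1 (the paper does not state these constants), is:

```python
def cubic_kernel(x, z, gamma=1.0, coef0=1.0):
    """Third-degree polynomial kernel: K(x, z) = (gamma * <x, z> + coef0) ** 3."""
    dot = sum(a * b for a, b in zip(x, z))
    return (gamma * dot + coef0) ** 3

def gram_matrix(X, kernel=cubic_kernel):
    """Kernel (Gram) matrix, as consumed by a kernel SVM solver."""
    return [[kernel(a, b) for b in X] for a in X]

K = gram_matrix([[1.0, 0.0], [0.0, 1.0]])
# K[0][0] = (1 + 1) ** 3 = 8 ; K[0][1] = (0 + 1) ** 3 = 1
```

With the One-vs-One multiclass method of Table 4, one such binary kernel SVM is trained for every pair of lesion classes and the pairwise votes are aggregated at prediction time.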
Table 5. Parameters of the KNN classifiers.

Parameter | Fine KNN | Weighted KNN
Number of neighbours | 1 | 10
Distance metric | Euclidean | Euclidean
Distance weight | Equal | Squared inverse
Standardize data | True | True
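The two KNN variants in Table 5 differ only in the neighbour count and vote weighting: "fine" KNN is 1-NN with equal weights, while "weighted" KNN uses 10 neighbours weighted by the squared inverse of Euclidean distance. A plain-Python sketch (not the authors' code) of both:

```python
# Sketch of the Table 5 KNN variants. The small epsilon that guards against
# division by zero for exact duplicates is an implementation assumption.
import math
from collections import defaultdict

def knn_classify(train_X, train_y, x, k=1, weight="equal"):
    """Predict the label of x from its k nearest training neighbours."""
    neighbours = sorted(
        (math.dist(row, x), label) for row, label in zip(train_X, train_y)
    )[:k]
    scores = defaultdict(float)
    for dist, label in neighbours:
        if weight == "squared_inverse":
            scores[label] += 1.0 / (dist * dist + 1e-12)
        else:
            scores[label] += 1.0
    return max(scores, key=scores.get)

X = [[0.0], [1.0], [2.0], [10.0]]
y = ["a", "a", "a", "b"]
fine = knn_classify(X, y, [9.5], k=1)                            # 1-NN
weighted = knn_classify(X, y, [9.5], k=4, weight="squared_inverse")
# Both pick "b": its single close neighbour outweighs three distant "a"s.
```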
Table 6. Skin lesion segmentation performance on ISIC-2016, 2017, 2018, and PH2 datasets.

Dataset | Global Accuracy | Mean Accuracy | Mean IoU | Weighted IoU | Mean BF Score
ISIC 2016 | 0.97481 | 0.96253 | 0.93960 | 0.95082 | 0.88649
ISIC 2017 | 0.97297 | 0.96841 | 0.94483 | 0.94724 | 0.84741
ISIC 2018 | 0.98642 | 0.91472 | 0.88139 | 0.97390 | 0.78364
PH2 | 0.95914 | 0.96005 | 0.90477 | 0.92299 | 0.82448
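The accuracy and IoU columns of Table 6 can be computed directly from binary masks. Below is a minimal sketch for the two-class (lesion/background) case; the BF score is omitted, since it requires boundary extraction. This is standard metric arithmetic, not the authors' evaluation code.

```python
def segmentation_metrics(pred, truth):
    """Global accuracy and mean IoU for binary masks given as flat 0/1 lists."""
    assert len(pred) == len(truth)
    correct = sum(p == t for p, t in zip(pred, truth))
    global_acc = correct / len(truth)
    ious = []
    for cls in (0, 1):  # background, lesion
        inter = sum(p == cls and t == cls for p, t in zip(pred, truth))
        union = sum(p == cls or t == cls for p, t in zip(pred, truth))
        if union:
            ious.append(inter / union)
    mean_iou = sum(ious) / len(ious)
    return global_acc, mean_iou

# 4-pixel example: one false-positive lesion pixel.
acc, miou = segmentation_metrics([1, 1, 0, 0], [1, 0, 0, 0])
# acc = 3/4 ; IoU(lesion) = 1/2 ; IoU(background) = 2/3 ; mean IoU = 7/12
```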
Table 7. Comparison with existing research works on similar datasets.

Ref. | Year | Dataset | Accuracy
[70] | 2022 | ISIC 2016 | 96%
[71] | 2021 | ISIC 2016 | 95.4%
[22] | 2021 | ISIC 2016 | 96.2%
[72] | 2020 | ISIC 2016 | 93.8%
[32] | 2020 | ISIC 2016 | 95.24%
[73] | 2019 | ISIC 2016 | 95.78%
Proposed method | – | ISIC 2016 | 97.48%
[70] | 2022 | ISIC 2017 | 95%
[71] | 2021 | ISIC 2017 | 92.6%
[72] | 2020 | ISIC 2017 | 93.8%
[74] | 2020 | ISIC 2017 | 95.14%
[75] | 2020 | ISIC 2017 | 94.58%
[76] | 2018 | ISIC 2017 | 94.03%
Proposed method | – | ISIC 2017 | 97.29%
[77] | 2022 | ISIC 2018 | 97.39%
[28] | 2021 | ISIC 2018 | 96.95%
[78] | 2021 | ISIC 2018 | 95.0%
[30] | 2020 | ISIC 2018 | 94.7%
[79] | 2019 | ISIC 2018 | 96.23%
[80] | 2018 | ISIC 2018 | 96.80%
Proposed method | – | ISIC 2018 | 98.64%
[81] | 2022 | PH2 | 95.14%
[71] | 2021 | PH2 | 94.3%
[18] | 2021 | PH2 | 94.6%
[72] | 2020 | PH2 | 94.9%
[32] | 2020 | PH2 | 93.2%
[82] | 2019 | PH2 | 93.1%
Proposed method | – | PH2 | 95.91%
Table 8. Proposed classification model outcomes on the MED-NODE dataset.

Classifier | Class | Accuracy | Precision | Recall | F1 Score | Overall Accuracy
Cubic SVM | Melanoma (M) | 97.32% | 0.98 | 0.97 | 0.98 | 97.87%
Cubic SVM | Nevus (N) | 97.32% | 0.96 | 0.97 | 0.97 |
Weighted KNN | Melanoma (M) | 97.62% | 0.99 | 0.97 | 0.98 | 97.62%
Weighted KNN | Nevus (N) | 97.62% | 0.96 | 0.98 | 0.97 |
Fine KNN | Melanoma (M) | 99.33% | 0.99 | 0.99 | 0.99 | 99.33%
Fine KNN | Nevus (N) | 99.33% | 0.99 | 0.99 | 0.99 |
Table 9. Proposed classification model results on PH2. Classes: AN = atypical nevus, CN = common nevus, M = melanoma.

Classifier | Class | Accuracy | Precision | Recall | F1 Score | Overall Accuracy
Cubic SVM | AN | 97.95% | 0.97 | 0.97 | 0.97 | 97.87%
Cubic SVM | CN | 98.06% | 0.97 | 0.97 | 0.97 |
Cubic SVM | M | 99.74% | 1.00 | 1.00 | 1.00 |
Weighted KNN | AN | 98.17% | 0.97 | 0.98 | 0.97 | 98.09%
Weighted KNN | CN | 98.43% | 0.98 | 0.97 | 0.98 |
Weighted KNN | M | 99.59% | 1.00 | 0.99 | 0.99 |
Fine KNN | AN | 98.99% | 0.98 | 0.99 | 0.98 | 98.88%
Fine KNN | CN | 98.92% | 0.99 | 0.98 | 0.98 |
Fine KNN | M | 99.85% | 1.00 | 1.00 | 1.00 |
Table 10. Proposed classification model results on HAM10000. Classes: AK = actinic keratosis, BCC = basal cell carcinoma, BK = benign keratosis, D = dermatofibroma, M = melanoma, N = nevus, VL = vascular lesion.

Classifier | Class | Accuracy | Precision | Recall | F1 Score | Overall Accuracy
Cubic SVM | AK | 97.18% | 0.92 | 0.89 | 0.90 | 90.65%
Cubic SVM | BCC | 97.35% | 0.92 | 0.90 | 0.91 |
Cubic SVM | BK | 95.67% | 0.84 | 0.86 | 0.85 |
Cubic SVM | D | 98.93% | 0.97 | 0.96 | 0.96 |
Cubic SVM | M | 95.57% | 0.84 | 0.85 | 0.84 |
Cubic SVM | N | 96.91% | 0.87 | 0.91 | 0.89 |
Cubic SVM | VL | 99.68% | 0.99 | 0.99 | 0.99 |
Weighted KNN | AK | 95.89% | 0.94 | 0.80 | 0.87 | 86.96%
Weighted KNN | BCC | 96.49% | 0.89 | 0.86 | 0.88 |
Weighted KNN | BK | 94.55% | 0.71 | 0.88 | 0.79 |
Weighted KNN | D | 97.22% | 0.99 | 0.84 | 0.91 |
Weighted KNN | M | 94.73% | 0.80 | 0.82 | 0.81 |
Weighted KNN | N | 95.67% | 0.77 | 0.91 | 0.83 |
Weighted KNN | VL | 99.38% | 0.98 | 0.98 | 0.98 |
Fine KNN | AK | 97.67% | 0.95 | 0.89 | 0.92 | 92.01%
Fine KNN | BCC | 98.02% | 0.94 | 0.92 | 0.93 |
Fine KNN | BK | 96.74% | 0.87 | 0.90 | 0.88 |
Fine KNN | D | 98.36% | 0.98 | 0.91 | 0.95 |
Fine KNN | M | 96.82% | 0.89 | 0.88 | 0.89 |
Fine KNN | N | 96.69% | 0.81 | 0.96 | 0.87 |
Fine KNN | VL | 99.7% | 0.99 | 0.99 | 0.99 |
Table 11. Classification results on the ISIC 2019 dataset. Classes: AK = actinic keratosis, BCC = basal cell carcinoma, BK = benign keratosis, D = dermatofibroma, M = melanoma, MN = melanocytic nevus, SCC = squamous cell carcinoma, VL = vascular lesion.

Classifier | Class | Accuracy | Precision | Recall | F1 Score | Overall Accuracy
Cubic SVM | AK | 97.58% | 0.88 | 0.88 | 0.88 | 89.99%
Cubic SVM | BCC | 96.81% | 0.89 | 0.88 | 0.88 |
Cubic SVM | BK | 95.81% | 0.84 | 0.86 | 0.85 |
Cubic SVM | D | 98.76% | 0.96 | 0.94 | 0.95 |
Cubic SVM | M | 96.23% | 0.83 | 0.88 | 0.85 |
Cubic SVM | MN | 97.77% | 0.90 | 0.90 | 0.90 |
Cubic SVM | SCC | 97.37% | 0.91 | 0.87 | 0.89 |
Cubic SVM | VL | 97.77% | 0.91 | 0.87 | 0.89 |
Weighted KNN | AK | 96.65% | 0.86 | 0.88 | 0.87 | 90.22%
Weighted KNN | BCC | 96.44% | 0.83 | 0.91 | 0.87 |
Weighted KNN | BK | 98.17% | 0.98 | 0.88 | 0.93 |
Weighted KNN | D | 96.4% | 0.82 | 0.90 | 0.86 |
Weighted KNN | M | 97.93% | 0.90 | 0.91 | 0.91 |
Weighted KNN | MN | 97.67% | 0.94 | 0.87 | 0.90 |
Weighted KNN | SCC | 99.41% | 0.99 | 0.97 | 0.98 |
Weighted KNN | VL | 97.77% | 0.91 | 0.87 | 0.89 |
Fine KNN | AK | 98.13% | 0.93 | 0.89 | 0.91 | 91.7%
Fine KNN | BCC | 97.16% | 0.88 | 0.91 | 0.89 |
Fine KNN | BK | 96.95% | 0.88 | 0.90 | 0.89 |
Fine KNN | D | 98.66% | 0.98 | 0.92 | 0.95 |
Fine KNN | M | 96.82% | 0.85 | 0.91 | 0.88 |
Fine KNN | MN | 97.97% | 0.89 | 0.93 | 0.91 |
Fine KNN | SCC | 98.14% | 0.95 | 0.89 | 0.92 |
Fine KNN | VL | 99.57% | 0.99 | 0.98 | 0.98 |
Table 12. Comparison of the proposed classification method with existing approaches.

Ref. | Year | Dataset | Accuracy
[83] | 2019 | ISIC 2019 | 91%
[84] | 2020 | ISIC 2019 | 84.79%
[85] | 2020 | ISIC 2019 | 82.5%
Proposed | – | ISIC 2019 | 91.7%
[86] | 2022 | HAM10000 | 80%
[87] | 2021 | HAM10000 | 85.50%
[88] | 2021 | HAM10000 | 88.50%
[89] | 2019 | HAM10000 | 89.8%
Proposed | – | HAM10000 | 92.01%
[90] | 2022 | PH2 | 94.97%
[91] | 2021 | PH2 | 97.5%
[92] | 2020 | PH2 | 96.9%
[93] | 2020 | PH2 | 85.7%
[94] | 2020 | PH2 | 94.0%
Proposed | – | PH2 | 98.88%
[90] | 2022 | MED-NODE | 92%
[95] | 2021 | MED-NODE | 97%
[96] | 2020 | MED-NODE | 83.33%
[69] | 2019 | MED-NODE | 97.70%
Proposed | – | MED-NODE | 99.33%

Share and Cite

MDPI and ACS Style

Zafar, M.; Amin, J.; Sharif, M.; Anjum, M.A.; Mallah, G.A.; Kadry, S. DeepLabv3+-Based Segmentation and Best Features Selection Using Slime Mould Algorithm for Multi-Class Skin Lesion Classification. Mathematics 2023, 11, 364. https://doi.org/10.3390/math11020364

