Review

Skin Lesion Analysis and Cancer Detection Based on Machine/Deep Learning Techniques: A Comprehensive Survey

by Mehwish Zafar 1, Muhammad Imran Sharif 1, Muhammad Irfan Sharif 2, Seifedine Kadry 3,4,5,*, Syed Ahmad Chan Bukhari 6 and Hafiz Tayyab Rauf 7

1 Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47040, Pakistan
2 Department of Computer Science, University of Education, Jauharabad Campus, Khushāb 41200, Pakistan
3 Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
4 Artificial Intelligence Research Center (AIRC), Ajman University, Ajman P.O. Box 346, United Arab Emirates
5 Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 13-5053, Lebanon
6 Division of Computer Science, Mathematics and Science, Collins College of Professional Studies, St. John’s University, Queens, NY 11439, USA
7 Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
* Author to whom correspondence should be addressed.
Life 2023, 13(1), 146; https://doi.org/10.3390/life13010146
Submission received: 7 December 2022 / Revised: 25 December 2022 / Accepted: 28 December 2022 / Published: 4 January 2023
(This article belongs to the Special Issue Therapeutic Prevention and Early Detection of Melanoma)

Abstract:
The skin is the human body’s largest organ, and skin cancer is considered among the most dangerous kinds of cancer. Various pathological variations in the human body can cause abnormal cell growth due to genetic disorders, and these changes in human skin cells are very dangerous. Skin cancer slowly spreads to other parts of the body and, because of its high mortality rate, early diagnosis is essential. Visual checkups and manual examination of skin lesions make the determination of skin cancer very difficult. Considering these concerns, numerous early recognition approaches have been proposed for skin cancer. With the fast progression of computer-aided diagnosis systems, a variety of deep learning, machine learning, and computer vision approaches have been merged for the examination of medical samples and uncommon skin lesion samples. This research provides an extensive literature review of the methodologies, techniques, and approaches applied to date for the examination of skin lesions. The survey covers preprocessing, segmentation, feature extraction, selection, and classification approaches for skin cancer recognition. The results of these approaches are very impressive, but some challenges still occur in the analysis of skin lesions because of complex and rare features. Hence, the main objective is to examine the existing techniques utilized in the discovery of skin cancer and to identify the obstacles, thereby helping researchers contribute to future research.

1. Introduction

For a better understanding of skin cancer, clinicians require a basic awareness of the skin. It has three main layers: the epidermis, the dermis, and subcutaneous fat [1]. Skin cancer is the state in which abnormal growth of skin cells becomes uncontrolled [2]. Every day some old skin cells die and new cells take their place. However, when this process goes wrong, old cells survive when they should die and new cells grow when there is no need for them. The extra skin cells create a mass of tissue and become a tumor [3,4]. To enhance the diagnostic process, dermoscopy was introduced; it is a non-invasive approach that produces illuminated and magnified images of skin spots. This approach is utilized by dermatologists for skin cancer recognition, which is traditionally performed by visual inspection and manual screening, a process that is inaccurate and time-consuming [5]. However, improvements have been made on account of the latest machine learning approaches for the timely recognition of fatal cancerous diseases. Some skin tumors are benign and curable if detected at an early stage, so they hardly ever turn into cancer. Melanoma is a very serious and dangerous category of skin tumor. There are numerous further divisions of skin tumors, and each cancer type depends upon the behavior of the abnormal cells. The categories of skin cancer are melanoma [6], squamous and basal cell carcinoma (together called nonmelanocytic skin cancers) [7], and Merkel skin carcinoma [8]. Different types of changes occur in the human body, as skin cancer has several symptoms. The changes that happen are fading of skin color, an increase in the size of a mole, and ulceration. However, these symptoms can vary from type to type. Usually, the symptoms of basal cell carcinoma (BCC) include a swollen smooth surface on the shoulder and neck and blood vessels becoming visible in a tumor.
These tumors develop with bleeding from the center point of the tumor, but this type of cancer is completely treatable. Squamous cell carcinoma (SCC) presents in the form of nodules that feel hard to the touch. It can result in bleeding from the tumor and ulceration. If this type of cancer is not treated immediately, a large mass appears on the skin. Merkel cell carcinoma (MCC) presents as a painless red spot, so it is often mistaken for a cyst [9]. UV radiation is a serious but avoidable harmful agent for skin cancer and other environmentally induced skin disorders. UV is also a major source of Vitamin D; therefore, it has a mixed influence on human health. Excessive UV exposure can cause health risks such as malignancy, pigmentary modification, wrinkling, and atrophy. Both molecular and epidemiological evidence associate UV with different categories of skin cancer. Genetics is another factor that may lead to skin disorders [10]. As the rate of skin disease increases every year, it is essential to take precautionary measures to avoid such disorders. UV rays are a familiar environmental source of skin cancer, so protection of the skin from these rays is necessary. The measures one should take to avoid UV rays are avoiding direct exposure to the mid-day sun, textile protection by wearing appropriate clothing, and using sunscreen. It is also essential to conduct self-examinations and visit a dermatologist for regular check-ups to avoid such fatal diseases [11].

1.1. Motivation and Contribution

Table 1 exhibits a comparison of the presented survey with other reviews of skin cancer recognition systems. The presented article tries to cover as many aspects of the problem domain as possible. The aim of this review is to discuss the computer-aided diagnosis methods that are available to recognize skin cancer. Systematic reviews were gathered and existing methods were evaluated according to determined evaluation standards. All the related information was taken from primary sources and, after analysis, details are given about the utilized methods, their results, and the issues encountered by the researchers. Current and projected future death rates due to skin cancer are very high. These facts motivate us to contribute by covering the maximum amount of literature for a better understanding of the problem domain and its proposed solutions.

1.2. Scope and Objectives

The identification of cancer and its classes with the support of machines can open a new direction of research into early-phase detection and reduce manual effort. The rising number of skin cancer cases creates an alarming situation, so it becomes essential to find cancer at the initial phase to save the patient’s life. This research is focused on detecting skin cancer and classifying it by reviewing numerous techniques. The entire machine-based processing is performed by image processing, computer vision, and machine learning approaches.
Section 1 includes an introduction along with the scope, objectives, and contribution of the presented research article. Section 2 describes the steps utilized in a skin cancer recognition system. Section 3 covers the challenges that occur in the detection process. Section 4 gives details about the benchmark datasets, and Section 5 covers some mobile applications utilized to examine skin cancer through smartphones. Section 6 provides a discussion of the methods and the problems they encounter. Section 7 and its subsection conclude the research article and provide some future suggestions to help researchers contribute to future research.

2. Skin Cancer Recognition and Classification System

A computer-aided diagnostic system consists of different phases, which are described below:

2.1. Preprocessing

Preprocessing is a very important step when images carry unnecessary details such as hairs, low illumination, spots around the lesion, etc. These elements can degrade the performance of the system. To remove such artifacts [18,19], preprocessing is applied, and it also enhances the quality of the samples. Several preprocessing approaches are listed below:

2.1.1. Morphological Operations

Artifacts in skin lesion images can degrade the performance of the system. To remove these artifacts, various morphological operations are performed. The word morphology refers to the outline of the shape and structure of a particular object. Morphological filters are constructed using a set algebra called mathematical morphology. They operate within a window and hence are closely related to order-statistic filters. In the skin cancer recognition process, various morphological operations are performed to remove artifacts, such as morphology with thresholding [20], the top-hat approach [21], morphological closing [22,23,24], and the morphological bottom-hat transform [25].
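To illustrate the bottom-hat transform mentioned above, the following minimal NumPy sketch (illustrative only; the surveyed papers use their own implementations and structuring elements) computes a grayscale closing and subtracts the original image, which highlights thin dark structures such as hairs:

```python
import numpy as np

def gray_dilate(img, k=3):
    # grayscale dilation with a k x k square structuring element
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = p[0:h, 0:w].copy()
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, p[dy:dy + h, dx:dx + w])
    return out

def gray_erode(img, k=3):
    # grayscale erosion with a k x k square structuring element
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = p[0:h, 0:w].copy()
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, p[dy:dy + h, dx:dx + w])
    return out

def bottom_hat(img, k=3):
    # morphological closing minus the original image: large values
    # mark thin dark structures (e.g., hairs) narrower than k pixels
    closed = gray_erode(gray_dilate(img, k), k)
    return closed - img
```

Pixels with a large bottom-hat response can then be replaced by the closed-image value (a simple form of inpainting) before segmentation.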

2.1.2. Colorspace Conversion

Recognizing and discriminating color information is considered a major aspect of vision. Color becomes a very useful aspect in the preprocessing phase, as color space conversion applies a specific function to improve the quality of an image. In skin cancer recognition systems, lesion images are most commonly converted to CIELAB [26,27], HSV [28], grayscale [29,30], etc., to improve the visual effect.
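As a concrete example of such a conversion, grayscale conversion is commonly done as a weighted sum of the R, G, and B channels; the sketch below uses the standard ITU-R BT.601 luma weights (this is one common convention, not necessarily the one used by the surveyed papers):

```python
import numpy as np

def rgb_to_gray(img):
    # img: H x W x 3 array with channel values in [0, 1];
    # BT.601 luma weights emphasize green, to which the eye is most sensitive
    return img @ np.array([0.299, 0.587, 0.114])
```

HSV or CIELAB conversions follow the same pattern: a fixed per-pixel function applied over the whole image.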

2.1.3. Filtering and Other Enhancement Methods

Noise and artifacts are added to images by several factors during image acquisition, transmission, etc. Removing this noise has a great effect on the condition of the image. A noise removal [19,31] algorithm is a process that removes or reduces the noise [32] in an image by smoothing the entire image. Image enhancement [33] makes an image lighter or darker or adjusts its contrast. It boosts the sensitivity of the information in images and presents enhanced input for other procedures. For enhancement and noise removal in skin cancer recognition, research has utilized the anisotropic diffusion filter [34,35], median filter [35,36], Dull Razor [37,38,39,40], Adam Huang algorithm [41], Wiener filter [42], bilateral filter [43], Gaussian and Gaussian blur filters [44,45], Z-score transformation [46], contrast-limited adaptive histogram equalization [47,48], adaptive histogram equalization [49], global-local contrast stretching [50], color constancy with shades of gray [51], adaptive gamma correction [52], gamma correction [53], etc.
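Of the filters listed above, the median filter is among the simplest: it replaces each pixel with the median of its neighborhood, suppressing impulse noise while preserving edges. A minimal NumPy sketch (assuming a single-channel image; real pipelines would typically use a library implementation):

```python
import numpy as np

def median_filter(img, k=3):
    # replace each pixel with the median of its k x k neighborhood;
    # edge pixels are handled by replicating the border
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return np.median(windows, axis=(-2, -1))
```

A single bright speckle surrounded by uniform skin is removed entirely, whereas a Gaussian blur would only spread it out.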

2.2. Segmentation

Segmentation [54,55] of skin lesions [56] is a most important step, as it divides a single image into various small parts. Researchers have proposed several deep learning [57,58] and traditional [59] segmentation methods for skin lesions.

2.2.1. Traditional Segmentation

A hybrid segmentation framework has been proposed that combines novel hierarchical k-means with a level set technique [60]. A two-phase segmentation has been defined in which the image is first segmented with k-means clustering to localize a precise lesion region, which is then optimized using the firefly algorithm to achieve higher accuracy [61]. A histogram-based clustering algorithm has been utilized with a genetic algorithm, after which the neutrosophic sets are computed by neutrosophic c-means, and finally a graph cut is applied to the skin lesion [62]. For segmentation, region growing based on fuzzy clustering has been utilized; fuzzy c-means is an unsupervised extension of k-means clustering, and region growing incorporates spatial context by merging adjacent pixels [63]. As homogeneity is determined by texture and color features, researchers have utilized color features to partition the image, applying segmentation using k-means clustering and histogram calculation [64]. Another system segments skin images using the semi-supervised mean-shift algorithm, which does not require specifying the number of clusters [65]. Threshold-based segmentation has been utilized, in which the sample is split into many regions depending on the threshold value so that the edges of the cancerous area become clear [66]. A skin lesion segmentation technique has been designed that combines adaptive thresholding with color network normalization for dermoscopic images [67]. Researchers have also designed an algorithm that solves the problem of global optimization by establishing an auxiliary function that is smoothed using Bezier curves and constructed using a local minimizer [68]. Active contour fusion segmentation has been utilized with the main focus of segmenting low-contrast dermoscopic samples [50].
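The k-means clustering step that several of the above methods build on can be sketched in its simplest one-dimensional, two-cluster form, separating darker lesion pixels from brighter skin (a toy illustration, not any specific surveyed method):

```python
import numpy as np

def kmeans_2cluster(pixels, iters=20, seed=0):
    # two-cluster k-means on a 1-D array of pixel intensities;
    # returns per-pixel cluster labels and the two cluster centers
    rng = np.random.default_rng(seed)
    centers = rng.choice(pixels, size=2, replace=False).astype(float)
    for _ in range(iters):
        # assign each pixel to its nearest center
        labels = np.abs(pixels[:, None] - centers[None, :]).argmin(axis=1)
        # move each center to the mean of its assigned pixels
        for c in range(2):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()
    return labels, centers
```

In a real pipeline, the label map of the darker cluster serves as an initial lesion mask, which methods such as [61] then refine with an optimizer.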

2.2.2. CNN-Based Segmentation

An initial contour without edges (Chan-Vese) optimized with a genetic algorithm has been proposed for recognition of the skin lesion boundary [69]. For segmentation of dermoscopic samples, a fully convolutional encoder-decoder network was optimized with an exponential neighborhood grey wolf optimization algorithm [70]. A novel CNN-based architecture with end-to-end atrous spatial pyramid pooling has been designed for lesion segmentation [71]. Another system segments skin lesions with the help of Retina-DeepLab, graph-based techniques, and Mask R-CNN [72]. A dense encoder-decoder-based framework combines ResNet and DenseNet for improvement; moreover, ASPP is used to obtain multiscale contextual information and skip connections are used to recover information [73]. An automated lesion segmentation method using an adaptive dual attention module (ADAM) with three characteristics has been proposed: first, two global context modeling schemes are integrated with ADAM; second, multi-scale fusion is supported for better segmentation; and third, a spatially weighted technique is harnessed to reduce redundancies [74]. An effective approach based on an improved fully convolutional network (iFCN) segments the skin lesion without preprocessing or post-processing; it determines the center position of the lesion and clears the details on the edge by removing unwanted effects [75]. A method has been defined that automatically segments the skin lesion and introduces a novel segmentation topology named FC-DPN, built as a combination of a fully convolutional network and a dual path network [76].
Researchers have proposed an attentive border-aware system for segmentation of multi-scale lesions through adversarial training, consisting of different sections: ResNet34 as the encoding and decoding path, skip connections based on Scale-Att-ASPP, and PPM at the top of the last convolutional layer in the encoding path [77]. A Mask R-CNN-based technique [78] proposed for skin lesion segmentation consists of two parts: candidate object bounding boxes proposed with an RPN and a Fast R-CNN classifier, and a binary mask prediction branch [79]. An effective and novel practice of lesion segmentation uses the fusion of YOLOv3 and the GrabCut algorithm [80]. A lesion segmentation system motivated by the Pyramid Scene Parsing Network uses an encoder-decoder design with pyramid pooling blocks and a skip connection that compensates for lost spatial details and aggregates global context [81]. A skin lesion segmentation approach that locates the lesion accurately with deep learning depends on DeepLabv3+ and Mask R-CNN, which are utilized to increase performance [82]. Researchers have also investigated the relevance of deep learning by utilizing a pre-trained VGG16 encoder combined with DeeplabV3, a SegNet decoder, and TernausNet [83]. A deep learning strategy to perform and refine skin lesion segmentation utilizes a 46-layer U-Net framework and a modified U-Net framework to achieve a successful lesion segmentation rate [84]. To resolve the challenges of the varying size and appearance of skin lesions, a new dense deconvolutional framework has been designed: the deconvolutional layers keep the input/output dimensions unchanged, chained residual pooling extracts contextual background information and then fuses multi-level features, and hierarchical supervision is added to refine the prediction mask and serve as an auxiliary loss [85].
Table 2 presents a detailed summary of the segmentation methods proposed by researchers. It consists of methods, datasets, and the highest result outcome of the experiment regarding accuracy and Jaccard.
The segmentation experiments are evaluated using different parameters. The graph in Figure 1 shows a comparison of segmentation results in terms of accuracy, while Figure 2 compares segmentation outcomes in terms of the Jaccard index. However, segmentation is a tough task because of various hurdles such as low illumination and other artifacts, which make it tricky to identify the exact ROI in the samples. Although researchers have designed several algorithms, this area is still under development.
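The Jaccard index used in these comparisons is the intersection over union of the predicted and ground-truth lesion masks; the closely related Dice coefficient is also common in the literature. Both reduce to a few lines of NumPy:

```python
import numpy as np

def jaccard(pred, gt):
    # intersection over union of two binary masks
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def dice(pred, gt):
    # Dice coefficient: 2|A∩B| / (|A| + |B|)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2 * inter / total if total else 1.0
```

Dice is always at least as large as Jaccard on the same pair of masks, which is worth remembering when comparing numbers across papers that report different metrics.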

2.3. Features Extraction, Selection, and Fusion

Feature extraction is utilized to recognize different features using a variety of machine learning techniques. It is also part of dimensionality reduction, in which the initial data are first divided and then reduced to a manageable form. A large dataset has many variables that require a lot of computing, and some of the variables are not relevant to the objective. Thus, feature extraction [86] and selection [87,88] help to select the more relevant [89], prominent, and significant features [90,91] and effectively decrease the large mass of variables. Several feature extraction and fusion [92] approaches utilized by researchers in skin cancer recognition are listed below:

2.3.1. Features Extraction

In one approach, features were extracted using the ABCD rule, color features were brought out by three color spaces (HSV, LAB, RGB), and texture features by the GLCM, after which the most relevant features were selected by a genetic algorithm [93]. For feature extraction, researchers have proposed refining the bag-of-words technique by combining features gained from first-order moments and a color histogram with HOG [94]. Researchers have employed the Local Binary Pattern, Local Vector Pattern, and LTrP, which determine eight connections to the neighbors at distance D and encode them with the help of CST; the combination of LBP + LVP obtained higher results than the other features [95]. The ABCD method extracts attributes of asymmetry, border, color, and diameter, while GLCM and HOG extract the texture features of the lesions; in classification, SVM provides the best accuracy [96]. Texture is considered one of the dominant features of an image: second-order texture attributes are obtained from the GLCM of spatial and color features, and a color histogram is used to gain color attributes from three color spaces, namely OPP, HSV, and RGB [97]. Global texture is computed through the GLCM, and local features of the sample are extracted using Speeded Up Robust Features; comparisons show that SURF features perform better than GLCM and SIFT features and that SVM performs better than KNN in classification [98]. Statistical features such as standard deviation, mean texture, smoothness, skewness, and energy are extracted by the Local Binary Pattern [99]. In another feature extraction phase, color and textural features are combined with a bag of words; moreover, HL and HG were bagged separately and combined with other bagged Zernike moments and bagged color vector angles to capture the color information [100].
To extract color-based features, the standard deviation, minimum, mean, skewness, and kurtosis were calculated for each R, G, and B component, and texture-based features were extracted by the wavelet transform [101]. Researchers have also presented four approaches, ReliefF, Chi2, RFE, and CFS, after which feature normalization is performed using the standard score transformation [102].
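The GLCM that recurs throughout these methods counts how often pairs of grey levels co-occur at a fixed pixel offset; texture statistics such as contrast and energy are then derived from the normalized matrix. A minimal sketch for a single offset, assuming the image is already quantized to integer grey levels (libraries such as scikit-image offer full implementations):

```python
import numpy as np

def glcm_features(q, levels, dx=1, dy=0):
    # q: 2-D integer image with grey levels in [0, levels);
    # count co-occurrences of grey-level pairs at offset (dx, dy)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    p = glcm / glcm.sum()          # normalize to joint probabilities
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()  # high for rough texture
    energy = (p ** 2).sum()              # high for uniform texture
    return contrast, energy
```

A perfectly flat region gives zero contrast and maximal energy, while an alternating checkerboard gives high contrast, matching the intuition behind GLCM texture descriptors.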

2.3.2. Features Selection and Fusion

Hybrid features have been extracted to classify skin cancer in a machine vision approach: GLCM and first-order histogram features are combined, and the dimensionality is reduced with the help of PCA [103]. Researchers have extracted modified ABCD features using the cumulative level difference mean, with prominent attributes selected by the Eigenvector centrality feature ranking and selection approach [104]. To recognize melanoma, network training is performed on the most significant features, gained by the GLCM approach and optimized by a binary bat algorithm, which provides a relevant set of features [105]. A novel approach called an optimized framework of optimal color feature selection chooses the best features using higher-entropy-value features with PCA [106]. To distinguish cancerous and non-cancerous lesions with an accurate feature extraction method, researchers have proposed the fusion of speeded-up robust features with a bag of features [107].
Researchers have proposed several approaches to extract the most worthwhile features, but to reduce the error rate caused by complex features, there is a need to extract more optimal features along with optimization.
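The PCA-based dimensionality reduction used in [103,106] projects the feature vectors onto the directions of greatest variance; via the singular value decomposition it is only a few lines (a generic sketch, not any paper's specific pipeline):

```python
import numpy as np

def pca_reduce(X, n_components):
    # X: n_samples x n_features matrix; project onto the top
    # n_components principal components of the centered data
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

When a fused feature set is highly redundant (e.g., GLCM plus histogram features), a small number of components often retains nearly all of the variance, which is exactly why PCA follows feature fusion in these methods.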

2.4. Classification

Classification [108,109] is considered the most significant step in skin cancer recognition, as it categorizes skin lesions [110]. First, the classifier is trained on labeled data; after good training, unlabeled images are provided to the classifier for testing purposes. Many machine learning classifiers [111], their variants [112], and CNN [113] models are utilized for classification purposes.

2.4.1. Traditional Machine Learning Classifiers

In the domain of skin cancer recognition, machine learning [114] performs very well. In a method proposed to discover melanoma skin cancer using the ISIC dataset, the researchers utilized an SVM for classification and achieved an accuracy of 96.9% [115]. Other researchers utilized different classifiers, SVM, KNN, Ensemble, and Decision Tree, for melanoma discovery, with accuracies of 100%, 87.5%, 87.5%, and 75.0%, respectively [116]. A method to classify skin lesions with a decision tree-based Random Forest classifier on the ISIC 2017 and HAM10000 datasets achieved an accuracy of 97% [117]. The Naïve Bayes classifier was utilized for dermoscopic classification using the Dermatology Information System and DermQuest with an accuracy of 98.8% [118]. Researchers have utilized SVM, KNN, Ensemble, and Artificial Neural Network classifiers and their variants; an SVM variant gives the best accuracy, of 83% [119]. For melanoma, keratosis, and benign lesion discovery, Naïve Bayes gives accuracies of 91.2%, 92.9%, and 94.3%, respectively [120]. A system based on fuzzy decision ontology was proposed for the recognition of melanoma with the help of a KNN classifier on DermQuest and the Dermatology Information System, with an accuracy of 92% [121]. A system designed for the recognition of melanoma with ANN and SVM classifiers obtained accuracies of 96.2% and 97%, respectively [122].
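Of the classifiers listed above, KNN is the simplest to sketch: a lesion's feature vector is assigned the majority label among its k nearest training vectors. A minimal NumPy version on toy feature vectors (illustrative only; the surveyed papers use library implementations and real lesion features):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    # Euclidean distance from the query x to every training vector
    d = np.linalg.norm(X_train - x, axis=1)
    # labels of the k nearest neighbors
    nearest = y_train[np.argsort(d)[:k]]
    # majority vote
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[counts.argmax()]
```

SVM, Random Forest, and Naïve Bayes follow the same train-then-predict interface, differing only in the decision function they learn.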

2.4.2. Deep Learning Models

Deep learning [123] is strengthening its roots in every research domain nowadays. Various advanced deep learning [124,125] building blocks are employed by researchers to acquire worthy performance. A stacked ensemble framework based on CNNs has been designed to recognize melanoma at the initial stage, in which multiple sub-CNN models are ensembled for the classification task; the task was performed on the two-class ISIC dataset with an accuracy of 0.957 [126]. Researchers utilized three models, VGG16, VGG19, and InceptionV3, on the ISIC dataset and obtained accuracies of 77%, 76%, and 74%, respectively [127]. A methodology that classifies skin cancer samples with the help of ResNet, VGG19, and InceptionV3 on more than 24,000 samples showed in experiments that InceptionV3 performs best [128]. Classification of skin lesions was performed with EfficientNets and ensemble models using the ISIC 2020 melanoma classification dataset; the best result was recorded by an ensemble of all the EfficientNet B6 models and one EfficientNet B5, with an ROC curve result of 0.944 [129]. A CNN architecture was utilized for pigmented lesion categorization using the HAM10000 dataset with an accuracy of 91.51% [130]. VGG16 and AlexNet features were serially fused and optimized using PCA; classification was performed using the PH2, ISBI 2016, and ISBI 2017 datasets with an accuracy of 99% [131]. A system that distinguishes different classes of skin cancer through five pre-trained CNNs and four ensemble models on the HAM10000 dataset showed that pre-trained ResNetXt101 and the ensemble InceptionResNetV2 + ResNetXt101 provide the best accuracies, of 93.20% and 92.83%, respectively [132]. The GoogleNet model performed classification using ISIC 2019 with an accuracy of 94.92% [133].
A hyper-connected CNN called HcCNN was designed for the discrimination of skin lesions in a multimodality test on a seven-point checklist dataset and achieved an accuracy of 74.9% [134]. Classification using a CNN with a novel regularizer has been proposed; this binary approach distinguishes malignant and benign lesions, was tested on ISIC, and provides an accuracy of 97.49% [135]. An ensemble of CNNs for melanoma recognition, including InceptionV4, SENet154, InceptionResNetV2, and PNASNet-5-Large, was utilized, in which PNASNet-5-Large gives the best validation outcome of 0.76 on the ISIC 2018 dataset [136]. The insufficient training data problem, together with intra-class difference and inter-class similarity, was addressed by designing the ARL-CNN using ISIC 2017, with an accuracy of 85% [137]. The CNN framework MobileNet was proposed for the categorization of skin lesions using the HAM10000 dataset, in which accuracy is slightly higher without data augmentation, at 83.93% [138]. GoogLeNet, VGG, and their ensemble were utilized for the categorization of seven classes using ISIC 2018, and the models are 79.7%, 80.1%, and 81.5% accurate, respectively [139]. A summary of the existing classification approaches along with the highest obtained results is given in Table 3.
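Several of the best results above come from ensembles of CNNs. The most common combination rule, soft voting, simply averages the per-class probability outputs of the member models and takes the argmax; a minimal sketch (one of several possible combination rules, not necessarily the one each cited paper used):

```python
import numpy as np

def soft_vote(prob_list):
    # prob_list: list of (n_samples x n_classes) probability arrays,
    # one per model; average them and pick the highest-probability class
    probs = np.mean(prob_list, axis=0)
    return probs.argmax(axis=1)
```

Averaging can flip the decision relative to any single model, which is why ensembles in [129,132,139] outperform their individual members.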
In the classification process, the problems of limited training data and of overfitting and underfitting the model cause misclassification and wrong predictions. Figure 3 presents a visual representation of the outcomes of CNN classification.

3. Challenges in the Existing Literature

The processing of skin-infected images is very challenging for several reasons. The most important is low contrast, which makes it difficult to differentiate infected from normal skin. Other factors include the presence of hair, bubble, ruler, and ink artifacts, so it is important to consider these artifacts when designing algorithms [140]. Further challenges include extensive training requirements, the predominance of images of light-skinned persons in standard datasets, small inter-class variation, multi-sized and multi-shaped images, and unbalanced dataset classes.

4. Benchmark Datasets

The following standard datasets are available:
HAM10000: The dataset contains seven classes of skin lesions. The dermatoscopic images are stored in different modalities. The total number of samples is 10,015, publicly available through the ISIC archive [141].
MED-NODE: The dataset contains 170 samples [142].
PH2: It consists of 200 dermoscopic images [143] in three classes: normal, abnormal, and melanoma [144].
DermoFit: DermoFit consists of macroscopic images [145]; the repository contains 1300 images in 10 classes with corresponding binary segmentation masks [146].
ISBI 2016: The dataset is provided by the ISIC archive. The total number of samples is 1279, of which 900 are training and 379 are testing images [147].
ISBI 2017: It contains a total of 2750 images, of which 2000 are for training, 150 for validation, and 600 for testing [148].
ISIC 2018: The ISIC is the world’s largest database and provides 12,500 images across three tasks: lesion segmentation, detection of lesion attributes, and classification of lesion disease. The dataset consists of seven classes [149].
ISIC 2019: The data in ISIC 2019 come from HAM10000, BCN_20000, the ViDIR Group, and anonymous sources. The dataset consists of 25,331 dermoscopic samples in eight classes [150].
ISIC 2020: The dataset consists of 33,126 samples, provided in DICOM format with pixel data encoded as JPEG [151].

5. Mobile Apps for Skin Cancer Detection

Users can perform a self-examination of skin lesions and determine the risk of melanoma by taking a picture with a smartphone. Several such applications are discussed below:
Mole Mapper: The application collects participant-provided data and mole measurements together with behavioral and demographic information related to melanoma risk. The application can be downloaded from the Apple App Store [152].
m-Skin Doctor: Utilizing image processing and computer vision approaches, researchers described a real-time mobile healthcare system that can recognize melanoma. A Gaussian filter removes noise, segmentation is performed by the GrabCut algorithm, and finally an SVM performs the classification. The application provides a specificity of 75% and a sensitivity of 80% [153].
SkinVision: This is a smartphone application with 900,000 users worldwide. There are different versions of the app; the latest, fifth version was launched in October 2018. The app uses a conditional generative adversarial neural network and an SVM, and obtains 78% specificity and 95% sensitivity [154].
UMSkinCheck: The application was developed by the University of Michigan. A full-body survey requires taking 23 photographs in seven positions for future lesion comparison. The risk calculator estimates the risk of developing melanoma based on ten previously identified risk factors [155].
SpotMole: This is an Android application that allows direct submission of an image captured in the app or indirect submission from the gallery. The application provides a specificity of 80% and a sensitivity of 43% [156].
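The sensitivity and specificity figures quoted for these apps come straight from the confusion matrix: sensitivity is the fraction of malignant cases correctly flagged, and specificity the fraction of benign cases correctly cleared. With 1 denoting malignant and 0 benign:

```python
import numpy as np

def sens_spec(pred, truth):
    # sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)
    pred, truth = np.asarray(pred), np.asarray(truth)
    tp = ((pred == 1) & (truth == 1)).sum()
    fn = ((pred == 0) & (truth == 1)).sum()
    tn = ((pred == 0) & (truth == 0)).sum()
    fp = ((pred == 1) & (truth == 0)).sum()
    return tp / (tp + fn), tn / (tn + fp)
```

For a screening app, high sensitivity matters most: a missed melanoma (false negative) is far costlier than an unnecessary referral (false positive), which is why SkinVision's 95% sensitivity at 78% specificity is a deliberate trade-off.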

6. Discussion

Melanoma has become one of the most rapidly increasing cancers worldwide, but it can be cured if detected at an early stage. It should be noted that many state-of-the-art techniques suffer from various problems that degrade their performance and generate faulty outcomes when recognizing skin cancer. This article presented a comprehensive literature review of automated skin cancer identification based on deep learning and machine learning approaches, analyzing and comparing different methods on benchmark datasets. However, issues such as noise, poor contrast, and border irregularity [126] reduce the effectiveness of these approaches, so the algorithms must be able to handle such hurdles. Existing standard datasets mostly contain samples of light-skinned persons [157]; datasets with enough images of both light- and dark-skinned people are necessary to increase the accuracy of skin cancer recognition systems. The classes of a dataset are sometimes unbalanced [138] because of deficient training data, so the dataset must be augmented to balance each class in order to draw strong generalizations.
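The class-balancing step recommended above is commonly realized by oversampling minority classes. A minimal sketch using plain duplication is shown below; real pipelines would replace each duplicate with an augmented copy (flip, rotation, color jitter), and the label names are illustrative:

```python
import random
from collections import Counter

def oversample_to_balance(samples, labels, seed=0):
    """Replicate minority-class samples until every class matches the
    largest class. In practice each replica would be an augmented copy
    rather than an exact duplicate."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_s, out_l = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [s for s, lab in zip(samples, labels) if lab == cls]
        for _ in range(target - n):
            out_s.append(rng.choice(pool))  # augmentation would happen here
            out_l.append(cls)
    return out_s, out_l

# Imbalanced toy set: 6 benign vs. 2 melanoma samples
s, l = oversample_to_balance(list(range(8)), ["ben"] * 6 + ["mel"] * 2)
print(Counter(l))
```

After balancing, both classes contribute equally to each training epoch, which prevents the classifier from collapsing onto the majority class.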

7. Conclusions

The examination of a lesion by the naked eye is not accurate, and while the clinical procedure for skin cancer diagnosis is reliable, it takes a great deal of time. Accurate lesion segmentation in dermoscopy samples is an important and difficult job, so computer-aided diagnosis systems are utilized to perform these tasks more accurately. This comprehensive survey article has discussed several noninvasive machine and deep learning approaches for skin cancer recognition and classification. A skin cancer recognition system requires multiple steps, with preprocessing and segmentation followed by feature extraction and classification. This survey focused on the latest approaches designed for the identification of skin cancer. Every algorithm has its own merits and limitations, so selecting an algorithm suited to the specific problem yields the best outcome. However, CNNs provide better outcomes than other traditional approaches for classifying and segmenting image samples because they are more closely connected to computer vision. This article thus provides extensive literature on the techniques utilized in the skin cancer detection process.

Future Direction

In the future, researchers may survey skin cancer detection systems with a finer categorization of sections: traditional segmentation can be further divided into region-based, clustering-based, and threshold-based methods, and feature extraction can be categorized into color features, shape features, and texture features. Although the results obtained from CNNs are impressive, some limitations, such as optimization and generalization, still exist and could be addressed in the future using quantum computing.
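Of the traditional segmentation families named above, threshold-based methods are the simplest to illustrate. The sketch below implements Otsu’s classic method, which picks the grey level maximizing between-class variance, on a toy bimodal intensity list (the pixel values are illustrative, standing in for a dark lesion against brighter surrounding skin):

```python
def otsu_threshold(gray):
    """Threshold-based segmentation: choose the grey level that maximizes
    the between-class variance of background vs. foreground (Otsu)."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b, sum_b = 0, 0.0
    for t in range(256):
        w_b += hist[t]          # background weight (pixels <= t)
        if w_b == 0:
            continue
        w_f = total - w_b       # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b       # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Toy bimodal "image": dark lesion pixels (~30) vs. bright skin (~200)
pixels = [28, 30, 32, 29, 31] * 4 + [198, 200, 202, 199, 201] * 6
t = otsu_threshold(pixels)
print(t)
```

The returned threshold falls between the two intensity modes, so labeling pixels below it as lesion and above it as skin separates the toy clusters cleanly; region- and clustering-based methods pursue the same goal with spatial or feature-space groupings instead of a single global cutoff.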

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gordon, R. Skin Cancer: An Overview of Epidemiology and Risk Factors. Semin. Oncol. Nurs. 2013, 29, 160–169. [Google Scholar] [CrossRef] [PubMed]
  2. Javed, R.; Rahim, M.S.M.; Saba, T.; Rehman, A. A comparative study of features selection for skin lesion detection from dermoscopic images. Netw. Model. Anal. Health Inform. Bioinform. 2020, 9, 4. [Google Scholar] [CrossRef]
  3. Zhang, N.; Cai, Y.-X.; Wang, Y.-Y.; Tian, Y.-T.; Wang, X.-L.; Badami, B. Skin cancer diagnosis based on optimized convolutional neural network. Artif. Intell. Med. 2020, 102, 101756. [Google Scholar] [CrossRef] [PubMed]
  4. Khan, M.A.; Sharif, M.I.; Raza, M.; Anjum, A.; Saba, T.; Shad, S.A. Skin lesion segmentation and classification: A unified framework of deep neural network features fusion and selection. Expert Syst. 2022, 39, e12497. [Google Scholar] [CrossRef]
  5. Yu, L.; Chen, H.; Dou, Q.; Qin, J.; Heng, P.A. Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks. IEEE Trans. Med. Imaging 2016, 36, 994–1004. [Google Scholar] [CrossRef]
  6. Rezvantalab, A.; Safigholi, H.; Karimijeshni, S. Dermatologist level dermoscopy skin cancer classification using different deep learning convolutional neural networks algorithms. arXiv 2018, arXiv:1810.10348. [Google Scholar]
  7. Orthaber, K.; Pristovnik, M.; Skok, K.; Perić, B.; Maver, U. Skin cancer and its treatment: Novel treatment approaches with emphasis on nanotechnology. J. Nanomater. 2017, 2017, 2606271. [Google Scholar] [CrossRef] [Green Version]
  8. Foote, M.; Harvey, J.; Porceddu, S.; Dickie, G.; Hewitt, S.; Colquist, S.; Zarate, D.; Poulsen, M. Effect of Radiotherapy Dose and Volume on Relapse in Merkel Cell Cancer of the Skin. Int. J. Radiat. Oncol. 2010, 77, 677–684. [Google Scholar] [CrossRef]
  9. Qadir, M.I. Skin cancer: Etiology and management. Pak. J. Pharm. Sci. 2016, 29, 999–1003. [Google Scholar]
  10. D’Orazio, J.; Jarrett, S.; Amaro-Ortiz, A.; Scott, T. UV radiation and the skin. Int. J. Mol. Sci. 2013, 14, 12222–12248. [Google Scholar] [CrossRef] [Green Version]
  11. Seebode, C.; Lehmann, J.; Emmert, S. Photocarcinogenesis and Skin Cancer Prevention Strategies. Anticancer Res. 2016, 36, 1371–1378. [Google Scholar] [PubMed]
  12. Mohammed, S.S.; Al-Tuwaijari, J.M. Skin Disease Classification System Based on Machine Learning Technique: A Survey. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1076, 012045. [Google Scholar] [CrossRef]
  13. Sharma, V.; Garg, A.; Thenmalar, S. A survey on Classification of malignant melanoma and Benign Skin Lesion by Using Machine Learning Techniques. Easy Chair Prepr. 2020, 2611, 2314–2516. [Google Scholar]
  14. Saherish, F.; Megha, J. A Survey on Melanoma Skin Cancer Detection Using CNN. Int. J. Sci. Res. Eng. Manag. (IJSREM) 2020, 4, 1–4. [Google Scholar]
  15. Goswami, T.; Dabhi, V.K.; Prajapati, H.B. Skin Disease Classification from Image-A Survey. In Proceedings of the 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 6–7 March 2020; pp. 599–605. [Google Scholar]
  16. Sreelatha, T.; Subramanyam, M.V.; Prasad, M.N.G. A Survey work on Early Detection methods of Melanoma Skin Cancer. Res. J. Pharm. Technol. 2019, 12, 2589. [Google Scholar] [CrossRef]
  17. DurgaRao, N.; Sudhavani, G. A Survey on Skin Cancer Detection System. J. Eng. Res. Appl. 2017, 7, 59–64. [Google Scholar] [CrossRef]
  18. Irum, I.; Sharif, M.; Raza, M.; Yasmin, M. Salt and Pepper Noise Removal Filter for 8-Bit Images Based on Local and Global Occurrences of Grey Levels as Selection Indicator. Nepal J. Sci. Technol. 2014, 15, 123–132. [Google Scholar] [CrossRef] [Green Version]
  19. Sharif, M.; Irum, I.; Yasmin, M.; Raza, M. Salt & pepper noise removal from digital color images based on mathematical morphology and fuzzy decision. Nepal J. Sci. Technol. 2017, 18, 1–7. [Google Scholar]
  20. Reis, H.C.; Turk, V.; Khoshelham, K.; Kaya, S. InSiNet: A deep convolutional approach to skin cancer detection and segmentation. Med. Biol. Eng. Comput. 2022, 60, 643–662. [Google Scholar] [CrossRef] [PubMed]
  21. Sikkandar, M.Y.; Alrasheadi, B.A.; Prakash, N.B.; Hemalakshmi, G.R.; Mohanarathinam, A.; Shankar, K. Deep learning based an automated skin lesion segmentation and intelligent classification model. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 3245–3255. [Google Scholar] [CrossRef]
  22. Zghal, N.S.; Kallel, I.K. An effective approach for the diagnosis of melanoma using the sparse auto-encoder for features detection and the SVM for classification. In Proceedings of the 2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), Sousse, Tunisia, 2–5 September 2020; pp. 1–6. [Google Scholar]
  23. Guarracino, M.R.; Maddalena, L. SDI+: A Novel Algorithm for Segmenting Dermoscopic Images. IEEE J. Biomed. Health Inform. 2018, 23, 481–488. [Google Scholar] [CrossRef] [PubMed]
  24. Nida, N.; Irtaza, A.; Javed, A.; Yousaf, M.H.; Mahmood, M.T. Melanoma lesion detection and segmentation using deep region based convolutional neural network and fuzzy C-means clustering. Int. J. Med. Inf. 2019, 124, 37–48. [Google Scholar] [CrossRef] [PubMed]
  25. Victor, A.; Ghalib, M. Automatic detection and classification of skin cancer. Int. J. Intell. Eng. Syst. 2017, 10, 444–451. [Google Scholar] [CrossRef]
  26. Shyma, A. A Comparative Study between Content-Adaptive Superpixel and Semantic Segmentation for Skin Cancer. Int. J. Innov. Sci. Res. Technol. 2021, 6, 1028–1033. [Google Scholar]
  27. Pezhman Pour, M.; Seker, H. Transform domain representation-driven convolutional neural networks for skin lesion segmentation. Expert Syst. Appl. 2020, 144, 113129. [Google Scholar] [CrossRef]
  28. Ottom, M.A. Convolutional Neural Network for Diagnosing Skin Cancer. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 333–338. [Google Scholar] [CrossRef]
  29. Mane, S.; Shinde, S. A method for melanoma skin cancer detection using dermoscopy images. In Proceedings of the 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 16–18 August 2018; pp. 1–6. [Google Scholar]
  30. Ansari, U.B.; Sarode, T. Skin cancer detection using image processing. Int. Res. J. Eng. Technol. 2017, 4, 2875–2881. [Google Scholar]
  31. Irum, I.; Sharif, M.; Raza, M.; Mohsin, S. A Nonlinear Hybrid Filter for Salt & Pepper Noise Removal from Color Images. J. Appl. Res. Technol. 2015, 13, 79–85. [Google Scholar]
  32. Irum, I.; Sharif, M.; Yasmin, M.; Raza, M.; Azam, F. A Noise Adaptive Approach to Impulse Noise Detection and Reduction. Nepal J. Sci. Technol. 2014, 15, 67–76. [Google Scholar] [CrossRef]
  33. Shah, G.A.; Khan, A.; Shah, A.A.; Raza, M.; Sharif, M. A Review on Image Contrast Enhancement Techniques Using Histogram Equalization. Sci. Int. 2015, 27, 1297–1302. [Google Scholar]
  34. Janney, J.B.; Roslin, S. Classification of melanoma from Dermoscopic data using machine learning techniques. Multimed. Tools Appl. 2020, 79, 3713–3728. [Google Scholar] [CrossRef]
  35. Alasadi, A.H.H.; Alsafy, B.M. Diagnosis of Malignant Melanoma of Skin Cancer Types. Int. J. Interact. Multimed. Artif. Intell. 2017, 4, 44. [Google Scholar] [CrossRef] [Green Version]
  36. Murugan, A.; Nair, S.A.H.; Kumar, K.P. Detection of skin cancer using SVM, random forest and kNN classifiers. J. Med. Syst. 2019, 43, 1–9. [Google Scholar] [CrossRef] [PubMed]
  37. Shawon, M.; Abedin, K.F.; Majumder, A.; Mahmud, A.; Mishu, M.C. Identification of Risk of Occurring Skin Cancer (Melanoma) Using Convolutional Neural Network (CNN). AIUB J. Sci. Eng. 2021, 20, 47–51. [Google Scholar] [CrossRef]
  38. Kamboj, A. A color-based approach for melanoma skin cancer detection. In Proceedings of the 2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC), Jalandhar, India, 15–17 December 2018; pp. 508–513. [Google Scholar]
  39. Farooq, M.A.; Khatoon, A.; Varkarakis, V.; Corcoran, P. Advanced deep learning methodologies for skin cancer classification in prodromal stages. arXiv 2020, arXiv:2003.06356. [Google Scholar]
  40. Ahn, E.; Kim, J.; Bi, L.; Kumar, A.; Li, C.; Fulham, M.; Feng, D.D. Saliency-Based Lesion Segmentation Via Background Detection in Dermoscopic Images. IEEE J. Biomed. Health Inform. 2017, 21, 1685–1693. [Google Scholar] [CrossRef]
  41. Qian, Y.; Zhao, S. Detection and Recognition of Skin Cancer in Dermatoscopy Images. In Proceedings of the 2020 International Conference on Pattern Recognition and Intelligent Systems, Athens, Greece, 30 July–2 August 2020; pp. 1–5. [Google Scholar]
  42. Malibari, A.A.; Alzahrani, J.S.; Eltahir, M.M.; Malik, V.; Obayya, M.; Al Duhayyim, M.; Neto, A.V.L.; de Albuquerque, V.H.C. Optimal deep neural network-driven computer aided diagnosis model for skin cancer. Comput. Electr. Eng. 2022, 103, 108318. [Google Scholar] [CrossRef]
  43. Montaha, S.; Azam, S.; Rafid, A.K.M.R.H.; Islam, S.; Ghosh, P.; Jonkman, M. A shallow deep learning approach to classify skin cancer using down-scaling method to minimize time and space complexity. PLoS ONE 2022, 17, e0269826. [Google Scholar] [CrossRef]
  44. Salamaa, W.M.; Aly, M.H. Deep learning design for benign and malignant classification of skin lesions: A new approach. Multimed. Tools Appl. 2021, 80, 26795–26811. [Google Scholar] [CrossRef]
  45. Pham, T.C.; Tran, G.S.; Nghiem, T.P.; Doucet, A.; Luong, C.M.; Hoang, V.D. A comparative study for classification of skin cancer. In Proceedings of the 2019 International Conference on System Science and Engineering (ICSSE), Dong Hoi, Vietnam, 20–21 July 2019; pp. 267–272. [Google Scholar]
  46. Rajput, A.S.; Tanwar, V.K.; Raman, B. Z-Score-Based Secure Biomedical Model for Effective Skin Lesion Segmentation Over eHealth Cloud. ACM Trans. Multimed. Comput. Commun. Appl. 2021, 17, 1–19. [Google Scholar] [CrossRef]
  47. Okuboyejo, D.A.; Olugbara, O.O.; Odunaike, S.A. CLAHE Inspired Segmentation of Dermoscopic Images Using Mixture of Methods. In Transactions on Engineering Technologies; Springer: Berlin/Heidelberg, Germany, 2014; pp. 355–365. [Google Scholar]
  48. Ibraheem, M.R.; Elmogy, M. A Non-invasive Automatic Skin Cancer Detection System for Characterizing Malignant Melanoma from Seborrheic Keratosis. In Proceedings of the 2020 2nd International Conference on Computer and Information Sciences (ICCIS), Sakaka, Saudi Arabia, 13–15 October 2020; pp. 1–5. [Google Scholar]
  49. Hoshyar, A.N.; Jumaily, A.A.; Hoshyar, A.N. Pre-Processing of Automatic Skin Cancer Detection System: Comparative Study. Int. J. Smart Sens. Intell. Syst. 2014, 7, 1364–1377. [Google Scholar] [CrossRef] [Green Version]
  50. Javed, R.; Rahim, M.S.M.; Saba, T.; Rashid, M. Region-based active contour JSEG fusion technique for skin lesion segmentation from dermoscopic images. Biomed. Res. 2019, 30, 1–10. [Google Scholar]
  51. Ali, R.; Hardie, R.C.; Narayanan Narayanan, B.; De Silva, S. Deep learning ensemble methods for skin lesion analysis towards melanoma detection. In Proceedings of the IEEE National Aerospace and Electronics Conference (NAECON), Dayton, OH, USA, 15–19 July 2019; pp. 311–316. [Google Scholar]
  52. Olugbara, O.O.; Taiwo, T.B.; Heukelman, D. Segmentation of Melanoma Skin Lesion Using Perceptual Color Difference Saliency with Morphological Analysis. Math. Probl. Eng. 2018, 2018, 1524286. [Google Scholar] [CrossRef]
  53. Okuboyejo, D.; Olugbara, O.O. Segmentation of Melanocytic Lesion Images Using Gamma Correction with Clustering of Keypoint Descriptors. Diagnostics 2021, 11, 1366. [Google Scholar] [CrossRef]
  54. Iqbal, A.; Sharif, M.; Yasmin, M.; Raza, M.; Aftab, S. Generative adversarial networks and its applications in the biomedical image segmentation: A comprehensive survey. Int. J. Multimed. Inf. Retr. 2022, 11, 333–368. [Google Scholar] [CrossRef]
  55. Masood, S.; Sharif, M.; Masood, A.; Yasmin, M.; Raza, M. A Survey on Medical Image Segmentation. Curr. Med. Imaging 2015, 11, 3–14. [Google Scholar] [CrossRef]
  56. Khan, M.A.; Akram, T.; Sharif, M.; Saba, T.; Javed, K.; Lali, I.U.; Tanik, U.J.; Rehman, A. Construction of saliency map and hybrid set of features for efficient segmentation and classification of skin lesion. Microsc. Res. Technol. 2019, 82, 741–763. [Google Scholar] [CrossRef]
  57. Anjum, M.A.; Amin, J.; Sharif, M.; Khan, H.U.; Malik, M.S.A.; Kadry, S. Deep Semantic Segmentation and Multi-Class Skin Lesion Classification Based on Convolutional Neural Network. IEEE Access 2020, 8, 129668–129678. [Google Scholar] [CrossRef]
  58. Iqbal, A.; Sharif, M.; Khan, M.A.; Nisar, W.; Alhaisoni, M. FF-UNet: A U-Shaped Deep Convolutional Neural Network for Multimodal Biomedical Image Segmentation. Cogn. Comput. 2022, 14, 1287–1302. [Google Scholar] [CrossRef]
  59. Shahzad, A.; Sharif, M.; Raza, M.; Hussain, K. Enhanced watershed image processing segmentation. J. Inf. Commun. Technol. 2008, 2, 1–9. [Google Scholar]
  60. Na Hwang, Y.; Seo, M.J.; Kim, S.M. A Segmentation of Melanocytic Skin Lesions in Dermoscopic and Standard Images Using a Hybrid Two-Stage Approach. BioMed Res. Int. 2021, 2021, 5562801. [Google Scholar] [CrossRef] [PubMed]
  61. Garg, S.; Jindal, B. Skin lesion segmentation using k-mean and optimized fire fly algorithm. Multimed. Tools Appl. 2021, 80, 7397–7410. [Google Scholar] [CrossRef]
  62. Hawas, A.R.; Guo, Y.; Du, C.; Polat, K.; Ashour, A.S. OCE-NGC: A neutrosophic graph cut algorithm using optimized clustering estimation algorithm for dermoscopic skin lesion segmentation. Appl. Soft Comput. 2020, 86, 105931. [Google Scholar] [CrossRef]
  63. Mohamed, A.A.I.; Ali, M.M.; Nusrat, K.; Rahebi, J.; Sayiner, A.; Kandemirli, F. Melanoma skin cancer segmentation with image region growing based on fuzzy clustering mean. Int. J. Eng. Innov. Res. 2017, 6, 91–95. [Google Scholar]
  64. Jaisakthi, S.M.; Chandrabose, A.; Mirunalini, P. Automatic skin lesion segmentation using semi-supervised learning technique. arXiv 2017, arXiv:1703.04301. [Google Scholar]
  65. Lynn, N.C.; Kyu, Z.M. Segmentation and Classification of Skin Cancer Melanoma from Skin Lesion Images. In Proceedings of the 2017 18th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT), Taipei, Taiwan, 18–20 December 2017; pp. 117–122. [Google Scholar]
  66. Saravanan, S.; Heshma, B.; Shanofer, A.A.; Vanithamani, R. Skin cancer detection using dermoscope images. Mater. Today: Proc. 2020, 33, 4823–4827. [Google Scholar] [CrossRef]
  67. Thanh, D.N.H.; Erkan, U.; Prasath, V.S.; Kumar, V.; Hien, N.N. A Skin Lesion Segmentation Method for Dermoscopic Images Based on Adaptive Thresholding with Normalization of Color Models. In Proceedings of the 2019 6th International Conference on Electrical and Electronics Engineering (ICEEE), Istanbul, Turkey, 16–17 April 2019. [Google Scholar]
  68. Abdulhamid, I.A.M.; Sahiner, A.; Rahebi, J. New Auxiliary Function with Properties in Nonsmooth Global Optimization for Melanoma Skin Cancer Segmentation. BioMed Res. Int. 2020, 2020, 5345923. [Google Scholar] [CrossRef] [Green Version]
  69. Ashour, A.S.; Nagieb, R.M.; El-Khobby, H.A.; Elnaby, M.M.A.; Dey, N. Genetic algorithm-based initial contour optimization for skin lesion border detection. Multimed. Tools Appl. 2021, 80, 2583–2597. [Google Scholar] [CrossRef]
  70. Mohakud, R.; Dash, R. Skin cancer image segmentation utilizing a novel EN-GWO based hyper-parameter optimized FCEDN. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 9889–9904. [Google Scholar] [CrossRef]
  71. Kaur, R.; Gholam, H.; Sinha, R.; Lindén, M. Automatic lesion segmentation using atrous convolutional deep neural networks in dermoscopic skin cancer images. BMC Med. Imaging 2021, 22, 1–13. [Google Scholar] [CrossRef]
  72. Bagheri, F.; Tarokh, M.J.; Ziaratban, M. Skin lesion segmentation from dermoscopic images by using Mask R-CNN, Retina-Deeplab, and graph-based methods. Biomed. Signal Process. Control. 2021, 67, 102533. [Google Scholar] [CrossRef]
  73. Qamar, S.; Ahmad, P.; Shen, L. Dense Encoder-Decoder–Based Architecture for Skin Lesion Segmentation. Cogn. Comput. 2021, 13, 583–594. [Google Scholar] [CrossRef]
  74. Wu, H.; Pan, J.; Li, Z.; Wen, Z.; Qin, J. Automated Skin Lesion Segmentation Via an Adaptive Dual Attention Module. IEEE Trans. Med. Imaging 2020, 40, 357–370. [Google Scholar] [CrossRef]
  75. Öztürk, Ş.; Özkaya, U. Skin Lesion Segmentation with Improved Convolutional Neural Network. J. Digit. Imaging 2020, 33, 958–970. [Google Scholar] [CrossRef] [PubMed]
  76. Shan, P.; Wang, Y.; Fu, C.; Song, W.; Chen, J. Automatic skin lesion segmentation based on FC-DPN. Comput. Biol. Med. 2020, 123, 103762. [Google Scholar] [CrossRef] [PubMed]
  77. Wei, Z.; Shi, F.; Song, H.; Ji, W.; Han, G. Attentive boundary aware network for multi-scale skin lesion segmentation with adversarial training. Multimed. Tools Appl. 2020, 79, 27115–27136. [Google Scholar] [CrossRef]
  78. Khan, M.A.; Akram, T.; Zhang, Y.-D.; Sharif, M. Attributes based skin lesion detection and recognition: A mask RCNN and transfer learning-based deep learning framework. Pattern Recognit. Lett. 2021, 143, 58–66. [Google Scholar] [CrossRef]
  79. Huang, C.; Yu, A.; Wang, Y.; He, H. Skin Lesion Segmentation Based on Mask R-CNN. In Proceedings of the 2020 International Conference on Virtual Reality and Visualization (ICVRV), Recife, Brazil, 13–14 November 2020. [Google Scholar]
  80. Ünver, H.M.; Ayan, E. Skin Lesion Segmentation in Dermoscopic Images with Combination of YOLO and GrabCut Algorithm. Diagnostics 2019, 9, 72. [Google Scholar] [CrossRef] [Green Version]
  81. Shahin, A.H.; Amer, K.; Elattar, M.A. Deep Convolutional Encoder-Decoders with Aggregated Multi-Resolution Skip Connections for Skin Lesion Segmentation. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 451–454. [Google Scholar]
  82. Goyal, M.; Oakley, A.; Bansal, P.; Dancey, D.; Yap, M.H. Skin Lesion Segmentation in Dermoscopic Images with Ensemble Deep Learning Methods. IEEE Access 2019, 8, 4171–4181. [Google Scholar] [CrossRef]
  83. Lameski, J.; Jovanov, A.; Zdravevski, E.; Lameski, P.; Gievska, S. Skin lesion segmentation with deep learning. In Proceedings of the IEEE EUROCON 2019-18th International Conference on Smart Technologies, Novi Sad, Serbia, 1–4 July 2019; pp. 1–5. [Google Scholar]
  84. Hasan, S.N.; Gezer, M.; Azeez, R.A.; Gulsecen, S. Skin Lesion Segmentation by using Deep Learning Techniques. In Proceedings of the 2019 Medical Technologies Congress (TIPTEKNO), Izmir, Turkey, 3–5 October 2019; pp. 1–4. [Google Scholar]
  85. Li, H.; He, X.; Zhou, F.; Yu, Z.; Ni, D.; Chen, S.; Wang, T.; Lei, B. Dense Deconvolutional Network for Skin Lesion Segmentation. IEEE J. Biomed. Health Inform. 2018, 23, 527–537. [Google Scholar] [CrossRef]
  86. Khan, M.A.; Javed, M.Y.; Sharif, M.; Saba, T.; Rehman, A. Multi-Model Deep Neural Network based Features Extraction and Optimal Selection Approach for Skin Lesion Classification. In Proceedings of the 2019 International Conference on Computer and Information Sciences (ICCIS), Sakaka, Saudi Arabia, 3–4 April 2019; pp. 1–7. [Google Scholar]
  87. Afza, F.; Sharif, M.; Khan, M.A.; Tariq, U.; Yong, H.-S.; Cha, J. Multiclass Skin Lesion Classification Using Hybrid Deep Features Selection and Extreme Learning Machine. Sensors 2022, 22, 799. [Google Scholar] [CrossRef] [PubMed]
  88. Khan, M.A.; Sharif, M.; Akram, T.; Bukhari, S.A.C.; Nayak, R.S. Developed Newton-Raphson based deep features selection framework for skin lesion recognition. Pattern Recognit. Lett. 2020, 129, 293–303. [Google Scholar] [CrossRef]
  89. Khan, M.A.; Akram, T.; Sharif, M.; Javed, K.; Rashid, M.; Bukhari, S.A.C. An integrated framework of skin lesion detection and recognition through saliency method and optimal deep neural network features selection. Neural Comput. Appl. 2020, 32, 15929–15948. [Google Scholar] [CrossRef]
  90. Khan, M.A.; Sharif, M.; Akram, T.; Damaševičius, R.; Maskeliūnas, R. Skin lesion segmentation and multiclass classification using deep learning features and improved moth flame optimization. Diagnostics 2021, 11, 811. [Google Scholar] [CrossRef]
  91. Afza, F.; Khan, M.A.; Sharif, M.; Rehman, A. Microscopic skin laceration segmentation and classification: A framework of statistical normal distribution and optimal feature selection. Microsc. Res. Technol. 2019, 82, 1471–1488. [Google Scholar] [CrossRef]
  92. Khan, M.A.; Muhammad, K.; Sharif, M.; Akram, T.; Kadry, S. Intelligent fusion-assisted skin lesion localization and classification for smart healthcare. Neural Comput. Appl. 2021, 1–16. [Google Scholar] [CrossRef]
  93. Tan, T.Y.; Zhang, L.; Jiang, M. An intelligent decision support system for skin cancer detection from dermoscopic images. In Proceedings of the 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), Changsha, China, 13–15 August 2016; pp. 2194–2199. [Google Scholar]
  94. Alfed, N.; Khelifi, F.; Bouridane, A. Improving a bag of words approach for skin cancer detection in dermoscopic images. In Proceedings of the 2016 International Conference on Control, Decision and Information Technologies (CoDIT), Saint Julian’s, Malta, 6–8 April 2016; pp. 24–27. [Google Scholar]
  95. Durgarao, N.; Sudhavani, G. Diagnosing skin cancer via C-means segmentation with enhanced fuzzy optimization. IET Image Process. 2021, 15, 2266–2280. [Google Scholar] [CrossRef]
  96. Vidya, M.; Karki, M.V. Skin Cancer Detection using Machine Learning Techniques. In Proceedings of the 2020 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, 2–4 July 2020; pp. 1–5. [Google Scholar]
  97. Kavitha, J.C.; Suruliandi, A. Texture and color feature extraction for classification of melanoma using SVM. In Proceedings of the 2016 International Conference on Computing Technologies and Intelligent Data Engineering (ICCTIDE’16), Kovilpatti, India, 7–9 January 2016; pp. 1–6. [Google Scholar]
  98. Kavitha, J.C.; Suruliandi, A.; Nagarajan, D.; Nadu, T. Melanoma detection in dermoscopic images using global and local feature extraction. Int. J. Multimed. Ubiquitous Eng. 2017, 12, 19–28. [Google Scholar] [CrossRef]
  99. Putri, H.S.K.A.; Sari, C.A.; Setiadi, D.R.I.M.; Rachmawanto, E.H. Classification of Skin Diseases Types using Naïve Bayes Classifier based on Local Binary Pattern Features. In Proceedings of the 2020 International Seminar on Application for Technology of Information and Communication (iSemantic), Semarang, Indonesia, 19–20 September 2020; pp. 61–66. [Google Scholar]
  100. Alfed, N.; Khelifi, F. Bagged textural and color features for melanoma skin cancer detection in dermoscopic and standard images. Expert Syst. Appl. 2017, 90, 101–110. [Google Scholar] [CrossRef] [Green Version]
  101. Nezhadian, F.K.; Rashidi, S. Melanoma skin cancer detection using color and new texture features. In Proceedings of the 2017 Artificial Intelligence and Signal Processing Conference (AISP), Shiraz, Iran, 25–27 October 2017; pp. 1–5. [Google Scholar]
  102. Filali, Y.; Ennouni, A.; Sabri, M.A.; Aarab, A. A study of lesion skin segmentation, features selection and classification approaches. In Proceedings of the 2018 International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 2–4 April 2018; pp. 1–7. [Google Scholar]
  103. Zareen, S.S.; Guangmin, S.; Li, Y.; Kundi, M.; Qadri, S.; Qadri, S.F.; Ahmad, M.; Khan, A.H. A Machine Vision Approach for Classification of Skin Cancer Using Hybrid Texture Features. Comput. Intell. Neurosci. 2022, 2022, 4942637. [Google Scholar] [CrossRef]
  104. Melbin, K.; Raj, Y.J.V. Integration of modified ABCD features and support vector machine for skin lesion types classification. Multimed. Tools Appl. 2021, 80, 8909–8929. [Google Scholar] [CrossRef]
  105. Gaonkar, R.; Singh, K.; Prashanth, G.; Kuppili, V. Lesion analysis towards melanoma detection using soft computing techniques. Clin. Epidemiol. Glob. Health 2020, 8, 501–508. [Google Scholar] [CrossRef] [Green Version]
  106. Afza, F.; Khan, M.A.; Sharif, M.; Saba, T.; Rehman, A.; Javed, M.Y. Skin Lesion Classification: An Optimized Framework of Optimal Color Features Selection. In Proceedings of the 2020 2nd International Conference on Computer and Information Sciences (ICCIS), Sakaka, Saudi Arabia, 13–15 October 2020; pp. 1–6. [Google Scholar]
  107. Arora, G.; Dubey, A.K.; Jaffery, Z.A.; Rocha, A. Bag of feature and support vector machine based early diagnosis of skin cancer. Neural Comput. Appl. 2020, 34, 8385–8392. [Google Scholar] [CrossRef]
  108. Khan, M.A.; Akram, T.; Sharif, M.; Shahzad, A.; Aurangzeb, K.; Alhussein, M.; Haider, S.I.; Altamrah, A. An implementation of normal distribution based segmentation and entropy controlled features selection for skin lesion detection and classification. BMC Cancer 2018, 18, 638. [Google Scholar] [CrossRef] [PubMed]
  109. Nasir, M.; Khan, M.A.; Sharif, M.; Lali, I.U.; Saba, T.; Iqbal, T. An improved strategy for skin lesion detection and classification using uniform segmentation and feature selection based approach. Microsc. Res. Technol. 2018, 81, 528–543. [Google Scholar] [CrossRef]
  110. Khan, M.A.; Muhammad, K.; Sharif, M.; Akram, T.; de Albuquerque, V.H.C. Multi-Class Skin Lesion Detection and Classification via Teledermatology. IEEE J. Biomed. Health Inform. 2021, 25, 4267–4275. [Google Scholar] [CrossRef] [PubMed]
  111. Hameed, M.; Sharif, M.; Raza, M.; Haider, S.W.; Iqbal, M. Framework for the comparison of classifiers for medical image segmentation with transform and moment based features. Res. J. Recent Sci. 2012, 2277, 2502. [Google Scholar]
  112. Akram, T.; Khan, M.A.; Sharif, M.; Yasmin, M. Skin lesion segmentation and recognition using multichannel saliency estimation and M-SVM on selected serially fused features. J. Ambient. Intell. Humaniz. Comput. 2018, 1–20. [Google Scholar] [CrossRef]
  113. Khan, M.A.; Zhang, Y.-D.; Sharif, M.; Akram, T. Pixels to Classes: Intelligent Learning Framework for Multiclass Skin Lesion Localization and Classification. Comput. Electr. Eng. 2021, 90, 106956. [Google Scholar] [CrossRef]
  114. Amin, J.; Sharif, M.; Almas Anjum, M. Skin lesion detection using recent machine learning approaches. In Prognostic Models in Healthcare: AI and Statistical Approaches; Springer: Berlin/Heidelberg, Germany, 2022; pp. 193–211. [Google Scholar]
  115. Banasode, P.; Patil, M.; Ammanagi, N. A Melanoma Skin Cancer Detection Using Machine Learning Technique: Support Vector Machine. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1065, 012039. [Google Scholar] [CrossRef]
  116. Shahi, P.; Yadav, S.; Singh, N.; Singh, N.P. Melanoma skin cancer detection using various classifiers. In Proceedings of the 2018 5th IEEE Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON), Gorakhpur, India, 2–4 November 2018; pp. 1–5.
  117. Dhivyaa, C.R.; Sangeetha, K.; Balamurugan, M.; Amaran, S.; Vetriselvi, T.; Johnpaul, P. Skin lesion classification using decision trees and random forest algorithms. J. Ambient Intell. Humaniz. Comput. 2020, 1–13.
  118. Arasi, M.A.; El-Horbaty, E.S.M.; El-Sayed, A. Classification of Dermoscopy Images Using Naïve Bayesian and Decision Tree Techniques. In Proceedings of the 2018 1st Annual International Conference on Information and Sciences (AiCIS), Fallujah, Iraq, 20–21 November 2018; pp. 7–12.
  119. Hameed, N.; Shabut, A.; Hossain, M.A. A computer-aided diagnosis system for classifying prominent skin lesions using machine learning. In Proceedings of the 2018 10th Computer Science and Electronic Engineering (CEEC), Colchester, UK, 19–21 September 2018; pp. 186–191.
  120. Balaji, V.; Suganthi, S.; Rajadevi, R.; Kumar, V.K.; Balaji, B.S.; Pandiyan, S. Skin disease detection and segmentation using dynamic graph cut algorithm and classification through Naive Bayes classifier. Measurement 2020, 163, 107922.
  121. Abbes, W.; Sellami, D.; Marc-Zwecker, S.; Zanni-Merk, C. Fuzzy decision ontology for melanoma diagnosis using KNN classifier. Multimed. Tools Appl. 2021, 1–22.
  122. Praveena, H.D.; Sudha, K.; Geetha, P. Support Vector Machine Based Melanoma Skin Cancer Detection. J. Univ. Shanghai Sci. Technol. 2020, 22, 1075–1081.
  123. Afza, F.; Sharif, M.; Mittal, M.; Khan, M.A.; Jude Hemanth, D. A hierarchical three-step superpixels and deep learning framework for skin lesion classification. Methods 2022, 202, 88–102.
  124. Khan, M.A.; Sharif, M.; Akram, T.; Kadry, S.; Hsu, C.-H. A two-stream deep neural network-based intelligent system for complex skin cancer types classification. Int. J. Intell. Syst. 2022, 37, 10621–10649.
  125. Khan, M.A.; Akram, T.; Sharif, M.; Kadry, S.; Nam, Y. Computer Decision Support System for Skin Cancer Localization and Classification. Comput. Mater. Contin. 2021, 68, 1041–1064.
  126. Shorfuzzaman, M. An explainable stacked ensemble of deep learning models for improved melanoma skin cancer detection. Multimed. Syst. 2022, 28, 1309–1323.
  127. Jasil, S.P.G.; Ulagamuthalvi, V. Deep learning architecture using transfer learning for classification of skin lesions. J. Ambient Intell. Humaniz. Comput. 2021, 1–8.
  128. Mijwil, M.M. Skin cancer disease images classification using deep learning solutions. Multimed. Tools Appl. 2021, 1–17.
  129. Karki, S.; Kulkarni, P.; Stranieri, A. Melanoma classification using EfficientNets and ensemble of models with different input resolution. In Proceedings of the Australasian Computer Science Week Multiconference (ACSW), Dunedin, New Zealand, 1–5 February 2021; pp. 1–5.
  130. Sevli, O. A deep convolutional neural network-based pigmented skin lesion classification application and experts evaluation. Neural Comput. Appl. 2021, 33, 12039–12050.
  131. Amin, J.; Sharif, A.; Gul, N.; Anjum, M.A.; Nisar, M.W.; Azam, F.; Bukhari, S.A.C. Integrated design of deep features fusion for localization and classification of skin cancer. Pattern Recognit. Lett. 2020, 131, 63–70.
  132. Chaturvedi, S.S.; Tembhurne, J.V.; Diwan, T. A multi-class skin cancer classification using deep convolutional neural networks. Multimed. Tools Appl. 2020, 79, 28477–28498.
  133. Kassem, M.A.; Hosny, K.M.; Fouad, M.M. Skin Lesions Classification into Eight Classes for ISIC 2019 Using Deep Convolutional Neural Network and Transfer Learning. IEEE Access 2020, 8, 114822–114832.
  134. Bi, L.; Feng, D.D.; Fulham, M.; Kim, J. Multi-label classification of multi-modality skin lesion via hyper-connected convolutional neural network. Pattern Recognit. 2020, 107, 107502.
  135. Albahar, M.A. Skin Lesion Classification Using Convolutional Neural Network with Novel Regularizer. IEEE Access 2019, 7, 38306–38313.
  136. Milton, M.A.A. Automated skin lesion classification using ensemble of deep neural networks in ISIC 2018: Skin lesion analysis towards melanoma detection challenge. arXiv 2019, arXiv:1901.10802.
  137. Zhang, J.; Xie, Y.; Xia, Y.; Shen, C. Attention Residual Learning for Skin Lesion Classification. IEEE Trans. Med. Imaging 2019, 38, 2092–2103.
  138. Sae-Lim, W.; Wettayaprasit, W.; Aiyarak, P. Convolutional neural networks using MobileNet for skin lesion classification. In Proceedings of the 16th International Joint Conference on Computer Science and Software Engineering (JCSSE), Chonburi, Thailand, 10–12 July 2019; pp. 242–247.
  139. Majtner, T.; Bajić, B.; Yildirim, S.; Hardeberg, J.Y.; Lindblad, J.; Sladoje, N. Ensemble of convolutional neural networks for dermoscopic images classification. arXiv 2018, arXiv:1808.05071.
  140. Mishra, N.K.; Celebi, M.E. An overview of melanoma detection in dermoscopy images using image processing and machine learning. arXiv 2016, arXiv:1601.07843.
  141. Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161.
  142. Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Classification of skin lesions using transfer learning and augmentation with Alex-net. PLoS ONE 2019, 14, e0217293.
  143. Mendonça, T.; Celebi, M.; Mendonca, T.; Marques, J. PH2: A public database for the analysis of dermoscopic images. In Dermoscopy Image Analysis; CRC Press: Boca Raton, FL, USA, 2015.
  144. Ozkan, I.A.; Koklu, M. Skin lesion classification using machine learning algorithms. Int. J. Intell. Syst. Appl. Eng. 2017, 5, 285–289.
  145. Nedelcu, T.; Vasconcelos, M.; Carreiro, A. Multi-Dataset Training for Skin Lesion Classification on Multimodal and Multitask Deep Learning. In Proceedings of the 6th World Congress on Electrical Engineering and Computer Systems and Sciences (EECSS'20), Prague, Czech Republic, 13–15 August 2020; pp. 13–15.
  146. Kumar, A.; Hamarneh, G.; Drew, M.S. Illumination-based Transformations Improve Skin Lesion Segmentation in Dermoscopic Images. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 3132–3141.
  147. Bi, L.; Kim, J.; Ahn, E.; Kumar, A.; Fulham, M.; Feng, D. Dermoscopic Image Segmentation via Multistage Fully Convolutional Networks. IEEE Trans. Biomed. Eng. 2017, 64, 2065–2074.
  148. Codella, N.C.; Gutman, D.; Celebi, M.E.; Helba, B.; Marchetti, M.A.; Dusza, S.W.; Kalloo, A.; Liopyris, K.; Mishra, N.; Kittler, H.; et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC). In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 168–172.
  149. Codella, N.; Rotemberg, V.; Tschandl, P.; Celebi, M.E.; Dusza, S.; Gutman, D.; Helba, B.; Kalloo, A.; Liopyris, K.; Marchetti, M.; et al. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the International Skin Imaging Collaboration (ISIC). arXiv 2019, arXiv:1902.03368.
  150. Pacheco, A.G.; Ali, A.R.; Trappenberg, T. Skin cancer detection based on deep learning and entropy to detect outlier samples. arXiv 2019, arXiv:1909.04525.
  151. Rotemberg, V.; Kurtansky, N.; Betz-Stablein, B.; Caffery, L.; Chousakos, E.; Codella, N.; Combalia, M.; Dusza, S.; Guitera, P.; Gutman, D.; et al. A patient-centric dataset of images and metadata for identifying melanomas using clinical context. Sci. Data 2021, 8, 1–8.
  152. Webster, D.; Suver, C.; Doerr, M.; Mounts, E.; Domenico, L.; Petrie, T.; Leachman, S.A.; Trister, A.D.; Bot, B.M. The Mole Mapper Study, mobile phone skin imaging and melanoma risk data collected using ResearchKit. Sci. Data 2017, 4, sdata20175.
  153. Taufiq, M.A.; Hameed, N.; Anjum, A.; Hameed, F. m-Skin Doctor: A Mobile Enabled System for Early Melanoma Skin Cancer Detection Using Support Vector Machine. In eHealth 360°; Springer: Berlin/Heidelberg, Germany, 2016; pp. 468–475.
  154. De Carvalho, T.M.; Noels, E.; Wakkee, M.; Udrea, A.; Nijsten, T. Development of Smartphone Apps for Skin Cancer Risk Assessment: Progress and Promise. JMIR Dermatol. 2019, 2, e13376.
  155. Cook, S.E.; Palmer, L.C.; Shuler, F.D. Smartphone Mobile Application to Enhance Diagnosis of Skin Cancer: A Guide for the Rural Practitioner. W. Va. Med. J. 2015, 111, 22–29.
  156. Ngoo, A.; Finnane, A.; McMeniman, E.; Tan, J.-M.; Janda, M.; Soyer, H.P. Efficacy of smartphone applications in high-risk pigmented lesions. Australas. J. Dermatol. 2018, 59, e175–e182.
  157. Brinker, T.J.; Hekler, A.; Utikal, J.S.; Grabe, N.; Schadendorf, D.; Klode, J.; Berking, C.; Steeb, T.; Enk, A.H.; von Kalle, C. Skin Cancer Classification Using Convolutional Neural Networks: Systematic Review. J. Med. Internet Res. 2018, 20, e11936.
Figure 1. Segmentation Results in Terms of Accuracy [50,60,61,62,63,66,68,72,73,74,75,77,80,85].
Figure 2. Segmentation Outcomes in Terms of the Jaccard Index [65,67,69,70,71,76,81,82,83,84].
Figure 3. Graphical Comparative Analysis of Classification Outcomes [126,127,128,130,131,132,133,134,135,137,138,139].
Table 1. Comparison of the Presented Survey with Others.
Surveys compared: the presented survey; [12] (2021); [13] (2020); [14] (2020); [15] (2020); [16] (2019); [17] (2017).
Contents compared: 1. Traditional methods; 2. Deep learning methods; 3. Benchmark datasets; 4. Challenges; 5. Mobile apps; 6. Discussion/findings.
Table 2. Summary of Segmentation Approaches.

Ref # | Year | Methods | Datasets | Results
[50] | 2019 | Local and global contrast stretching, Dull razor, region-based active contour, JSEG fusion | PH2; ISIC 2017 | 0.917 (ACC); 0.953 (ACC)
[60] | 2021 | Histogram equalization, max filter, morphological operations, hierarchical k-means with level set | PH2; DermoFit | 0.946 (ACC); 0.942 (ACC)
[61] | 2021 | Threshold and morphological operations, k-means clustering, firefly algorithm | ISIC 2017; PH2 | 0.991 (ACC); 0.989 (ACC)
[85] | 2018 | Adam approach, dense deconvolutional network | ISBI 2016; ISBI 2017 | 0.959 (ACC); 0.939 (ACC)
[80] | 2019 | Dull razor, morphological operations, YOLO, GrabCut | PH2; ISBI 2017 | 0.929 (ACC); 0.933 (ACC)
[72] | 2021 | Retina-DeepLab, R-CNN, and graph-related approaches | ISBI 2017; PH2; DermQuest | 0.942 (ACC); 0.898 (ACC); 0.992 (ACC)
[73] | 2021 | Data augmentation (flipping, rotation, translation, etc.), dense encoder–decoder framework | ISIC 2018 | 0.969 (ACC)
[74] | 2020 | Bicubic interpolation, data augmentation, adaptive dual attention module | ISBI 2017; ISIC 2018 | 0.957 (ACC); 0.947 (ACC)
[68] | 2020 | Median filter, histogram, auxiliary function, global optimization algorithm | PH2; ISBI 2016; ISBI 2017 | 0.932 (ACC); 0.952 (ACC); 0.976 (ACC)
[66] | 2020 | Median filter, contrast stretching, ABCD, threshold-based segmentation | DermIS; DermQuest | 1.00 (ACC)
[62] | 2020 | GA, OCE-NGC | ISIC 2016 | 0.976 (ACC)
[63] | 2017 | Fuzzy c-means clustering | HPH | 0.968 (ACC)
[75] | 2020 | iFCN | PH2; ISBI 2017 | 0.969 (ACC); 0.953 (ACC)
[77] | 2020 | Resizing, augmentation, ResNet34, Scale-Att-ASPP, PPM, GAN | ISBI 2016; ISBI 2017; PH2 | 0.964 (ACC); 0.931 (ACC); 0.112 (DV)
[79] | 2020 | LabelMe, R-CNN | ISIC | 0.910 (recall)
[67] | 2019 | Gaussian filter, OTSU, SegRNorm, SegXNorm | ISIC 2017 | 0.800 (JAC)
[81] | 2019 | Resizing, encoder–decoder deep convolutional network with aggregated multi-resolution skip connections | ISIC 2018 | 0.837 (JAC)
[82] | 2019 | Morphological operations, DeepLabV3+ and Mask R-CNN | ISIC 2017; PH2 | 0.793 (JAC); 0.839 (JAC)
[83] | 2019 | Data augmentation, VGG16 encoder, DeepLabV3, SegNet, threshold with dilations | ISIC 2018 | 0.876 (JAC)
[84] | 2019 | Linear filter, restoration, enhancement, U-Net (46 layers), U-Net (32 layers) | ISIC 2018 | 0.933 (JAC)
[64] | 2017 | Illumination correction, histogram calculation, Frangi vesselness, k-means clustering | ISIC 2017 | 0.548 (validation set)
[65] | 2017 | Dull razor, ABCD, mean shift algorithm | ISIC | 0.972 (JAC)
[70] | 2022 | Image resizing, FCEDN, EN-GWO | ISIC 2016; ISIC 2017 | 0.964 (JAC); 0.868 (JAC)
[69] | 2021 | Dull razor, initial contour optimization, GA | ISIC 2016 | 0.831 (JAC)
[71] | 2021 | Downsampling, translation, rotation and scaling, atrous dilation CNN | ISIC 2016; ISIC 2017; ISIC 2018 | 0.904 (JAC); 0.818 (JAC); 0.891 (JAC)
[76] | 2020 | Augmentation, resizing, FC-DPN | ISBI 2017; PH2 | 0.800 (JAC); 0.835 (JAC)
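The segmentation results in Table 2 are reported almost exclusively as pixel accuracy (ACC) or the Jaccard index (JAC, intersection over union). As a minimal illustrative sketch (not code from any of the surveyed papers), both measures can be computed from a predicted binary lesion mask and its ground-truth mask with NumPy:

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """ACC: fraction of pixels where the predicted mask agrees with the ground truth."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    return float((pred == gt).mean())

def jaccard_index(pred, gt):
    """JAC: area of the intersection of the two masks divided by the area of their union."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    inter = np.logical_and(pred, gt).sum()
    return float(inter / union)

# Tiny worked example: one false-positive pixel out of four.
pred = [[1, 1], [0, 0]]
gt = [[1, 0], [0, 0]]
print(pixel_accuracy(pred, gt))  # 0.75 (3 of 4 pixels agree)
print(jaccard_index(pred, gt))   # 0.5 (intersection 1, union 2)
```

The example shows why the two columns of Table 2 are not interchangeable: accuracy counts the (usually large) background as correct, so it is systematically higher than the Jaccard index on the same prediction.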
Table 3. Summary of Classification Techniques.

Ref # | Year | CNN Models | Datasets | No. of Classes | Results (ACC)
[126] | 2022 | Stacked ensemble model | ISIC 2020 | 2 | 0.957
[127] | 2021 | VGG16, VGG19, InceptionV3 | ISIC 2018 | 7 | 0.770; 0.760; 0.740
[128] | 2021 | ResNet, InceptionV3, VGG19 | ISIC archive | 2 | 0.753; 0.869; 0.731
[129] | 2021 | Ensemble of EfficientNet B6 models and EfficientNet B5 | ISIC 2020 | 2 | 0.941 (ROC curve)
[130] | 2021 | CNN architecture | HAM10000 | 7 | 0.915
[131] | 2020 | VGG16 + AlexNet | PH2 + ISBI 2016 + ISBI 2017 | 2 | 0.990
[132] | 2020 | ResNeXt101, InceptionResNetV2 + ResNeXt101 | HAM10000 | 7 | 0.932; 0.928
[133] | 2020 | GoogleNet | ISIC 2019 | 8 | 0.949; 0.798 (SEN); 0.970 (SPE)
[134] | 2020 | HcCNN | 7-point checklist | 7 | 0.749
[135] | 2019 | CNN + novel regularizer | ISIC archive | 2 | 0.974
[136] | 2019 | PNASNet-5-Large | ISIC 2018 | 7 | 0.76 (validation score)
[137] | 2019 | ARL-CNN | ISIC 2017 | 3 | 0.850
[138] | 2019 | MobileNet | HAM10000 | 7 | 0.839
[139] | 2018 | GoogleNet, VGG16, and their ensemble | ISIC 2018 | 7 | 0.797; 0.801; 0.815

Cite as: Zafar, M.; Sharif, M.I.; Sharif, M.I.; Kadry, S.; Bukhari, S.A.C.; Rauf, H.T. Skin Lesion Analysis and Cancer Detection Based on Machine/Deep Learning Techniques: A Comprehensive Survey. Life 2023, 13, 146. https://doi.org/10.3390/life13010146