Systematic Review

Machine Learning and Deep Learning Methods for Skin Lesion Classification and Diagnosis: A Systematic Review

by Mohamed A. Kassem 1, Khalid M. Hosny 2,*, Robertas Damaševičius 3,* and Mohamed Meselhy Eltoukhy 4

1 Department of Robotics and Intelligent Machines, Faculty of Artificial Intelligence, Kaferelshiekh University, Kaferelshiekh 33511, Egypt
2 Department of Information Technology, Faculty of Computers and Informatics, Zagazig University, Zagazig 44519, Egypt
3 Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania
4 Computer Science Department, Faculty of Computers and Informatics, Suez Canal University, Ismailia 41522, Egypt
* Authors to whom correspondence should be addressed.
Diagnostics 2021, 11(8), 1390; https://doi.org/10.3390/diagnostics11081390
Submission received: 12 June 2021 / Revised: 25 July 2021 / Accepted: 27 July 2021 / Published: 31 July 2021

Abstract
Computer-aided diagnosis of skin lesions is a growing area of research, and interest in developing such systems has increased markedly in recent years. This paper reviews, synthesizes, and evaluates the quality of evidence for the diagnostic accuracy of computer-aided systems, covering papers published in the last five years in the ScienceDirect, IEEE, and SpringerLink databases: 53 articles using traditional machine learning methods and 49 articles using deep learning methods. The studies are compared based on their contributions, the methods used, and the achieved results. This work also identifies the main challenges in evaluating skin lesion segmentation and classification methods, such as small datasets, ad hoc image selection, and racial bias.

1. Introduction

The annual incidence of melanoma has increased by 53%, due in part to increased ultraviolet (UV) exposure [1]. Although melanoma is one of the deadliest types of skin cancer, early identification leads to a high chance of survival.
Cancer develops when cells in the body begin to proliferate uncontrollably. Cancerous cells may form in practically any part of the body and spread to other regions (metastasize) [2]. In this regard, skin cancer is the uncontrolled proliferation of abnormal skin cells. Unrepaired DNA damage to skin cells, most often caused by UV radiation from the sun or tanning beds, creates mutations, or genetic flaws, that cause skin cells to reproduce rapidly and form malignant tumors.
The several varieties of benign and malignant skin lesions make diagnosis complex. Squamous Cell Carcinoma (SCC), Basal Cell Carcinoma (BCC), and melanoma are the major forms of irregular skin cells seen in clinical practice [3]. Further, the Skin Cancer Foundation (SCF) [4] distinguishes three less common types of abnormal cells, namely Merkel cell carcinoma, Actinic Keratosis (AKIEC), and atypical moles. The six forms of skin lesions are depicted in Figure 1. After melanoma, atypical moles are the second most harmful. According to the SCF [4], the abnormal tissues are distinguished as follows:
  • Actinic Keratosis (AKIEC), or solar keratosis: This is a crusty, scaly growth on the skin. It is classified as a pre-cancer because it has the potential to turn into skin cancer if left untreated;
  • Atypical moles: Also known as dysplastic nevi, these are benign moles with an irregular appearance. They may look like melanoma, and people who have them are at greater risk of developing melanoma, whether in a mole or elsewhere on the body;
  • Basal Cell Carcinoma (BCC): This is the most common form of skin cancer, and it rarely spreads. Its common symptoms are open sores, shiny bumps, red spots, pink growths, or scars;
  • Melanoma: This is the most lethal kind of skin cancer. It is often black or brown, although it can also be pink, red, purple, blue, or white. Cancerous tumors are caused by UV radiation from the sun or tanning beds. Melanoma is frequently treatable if detected and treated early; if not, the disease can spread to other parts of the body, making treatment complicated and the disease potentially deadly;
  • Merkel cell carcinoma: This is an uncommon and aggressive kind of skin cancer that has a high chance of metastasizing. However, it is 40 times less prevalent than melanoma;
  • Squamous Cell Carcinoma (SCC): This is the second most frequent kind of skin cancer. Scaly red spots, open sores, raised growths with a central depression, or warts are frequent signs.
The dominant cause of these skin cancer forms is skin tissue damage caused by UV radiation [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21]. A dermatologist’s visual examination is a common clinical procedure for melanoma diagnosis [18], but the precision of clinical diagnosis is somewhat deceptive [19]. Dermoscopy is a non-invasive diagnostic method that bridges clinical dermatology and dermatopathology by displaying morphological characteristics that are not discernible to the naked eye. The visualized morphological details can be significantly improved with techniques such as solar scans [20], epiluminescence microscopy (ELM), cross-polarization epiluminescence (XLM), and side transillumination [21,22,23,24], providing the dermatologist with further diagnostic criteria. Dermoscopy improves diagnostic performance by 10–30% relative to the naked eye [25,26,27,28]. Nevertheless, [29,30,31] reported that the diagnostic accuracy of dermoscopy decreased with novice dermatologists compared with expert dermatologists, because the process requires a great deal of experience to identify lesions [32].
According to ref. [33], professional dermatologists have achieved 90% sensitivity and 59% specificity in the identification of skin lesions, whereas the corresponding figures for less qualified general practitioners declined significantly, to about 62–63%.
A visual inspection by a dermatologist of the suspicious skin region is the first step in diagnosing a malignant lesion. A correct diagnosis is essential because certain types of lesions resemble each other; moreover, the accuracy of a Computer-Aided Diagnosis (CAD) system is close to that of an experienced dermatologist [34,35,36]. Without technological assistance, dermatologists diagnose melanoma with a 65–80% accuracy rate [37]. For suspect cases, a dermatoscopic image is taken with a very high-resolution camera to complete the visual examination. The lighting is controlled and a filter is used during the recording to reduce skin reflections and thus visualize the deeper layers of skin. This technical assistance led to a 49% improvement in skin lesion diagnosis [31]. Ultimately, the combination of visual examination and dermatoscopic images yields an absolute accuracy of 75–84% for melanoma detection [38,39].
Classifying skin lesions has been an aim of the machine learning community for some time. Automated classification of lesions is used in clinical examination to help physicians and to allow rapid and affordable access to lifesaving diagnoses [40]; outside of the hospital environment, smartphone apps have been used [41]. Before 2016, most research adopted the traditional machine learning workflow of preprocessing (enhancement), segmentation, feature extraction, and classification [41,42,43]. These phases are explained as follows:
  • Image enhancement: This phase aims to eliminate all noise and artifacts such as hair and blood vessels in dermoscopic images;
  • Segmentation: Segmenting the Region of Interest (ROI) is a crucial step in CAD systems. The process of segmenting skin cancer images is made more complex by a large number of different skin lesions. It quickly became one of the most complex and tedious tasks in the CAD system;
  • Feature extraction: After defining the ROI, the goal of the feature extraction step is to identify the best set of features that have high discrimination capability to classify the dataset into two or more classes;
  • Classification and detection: The proposed system is evaluated according to its capability to classify the dataset into different classes. Hence, the choice of classifier is critical for good performance; it depends on the set of extracted features and the required number of classes. The classification performance measures are accuracy, specificity, sensitivity, precision, and the Receiver Operating Characteristic (ROC).
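The four threshold-based measures above can be computed directly from binary confusion-matrix counts; as a minimal illustration (the counts below are hypothetical, not taken from any reviewed study), while ROC analysis additionally sweeps the decision threshold:

```python
def binary_metrics(tp, fp, tn, fn):
    """Common CAD evaluation measures from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)  # fraction of correct decisions
    sensitivity = tp / (tp + fn)                   # recall on the malignant class
    specificity = tn / (tn + fp)                   # recall on the benign class
    precision   = tp / (tp + fp)                   # reliability of positive calls
    return accuracy, sensitivity, specificity, precision

# Hypothetical example: 90 melanomas found, 10 missed,
# 160 benign lesions correctly rejected, 40 false alarms.
acc, sens, spec, prec = binary_metrics(tp=90, fp=40, tn=160, fn=10)
```

Reporting sensitivity and specificity together matters here because, as noted above for dermatologists [33], a high sensitivity can coexist with a much lower specificity.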
High-performance CAD systems are essential for lesion detection and diagnosis, and feature selection is a crucial task in their development. The choice of appropriate features has been a challenge since the earliest work on automatic recognition of pigmented skin lesion images in 1987 [44]. Likewise, errors and data loss have a significant influence on the classification rate: an inaccurate segmentation result often leads to poor feature extraction and, thus, low classification accuracy. Machine vision and computer analysis are becoming increasingly important for building a successful automatic melanoma diagnosis system [45,46,47,48,49,50]. An accurate CAD system would help doctors and dermatologists make better and more dependable diagnoses.
Many CAD systems have been built using different border detection, feature extraction, feature selection, and classification algorithms. Some studies [51,52,53,54,55,56,57,58] have analyzed image processing techniques for diagnosing skin cancer and compared Artificial Intelligence (AI) and CAD system performance against the diagnostic accuracy of experienced dermatologists. However, further work is required to define and reduce ambiguity in automated decision support systems and to enhance diagnostic accuracy. There is no comprehensive and up-to-date review of automatic skin lesion diagnosis models; given the constant development of new classification algorithms and techniques for dermoscopic images in recent years, such a review is needed.

2. Methods

2.1. Systematic Review

We searched for systematic reviews and original research papers written in English in the ScienceDirect, IEEE, and SpringerLink databases. In this analysis, only papers published in journals following a proper scientific review process were considered.
Papers were included based on the following inclusion criteria: (i) binary or multi-class classification or segmentation of skin lesions, (ii) traditional machine learning methods, (iii) deep learning models, (iv) digital image modality, (v) publication in well-defined journals, and (vi) publication in English.
The following exclusion criteria were used to remove irrelevant studies: (i) review articles, (ii) papers published in a language other than English, (iii) conference papers, (iv) books, and (v) book chapters.
The PRISMA flow diagram in Figure 2 shows the selection procedure [59]. The initial search identified 111,701 literature sources satisfying the search criteria, supplemented with 5757 records identified by other methods (forward and backward snowballing). After the removal of duplicates, 106,398 records remained. Applying the inclusion criteria yielded 801 full-text articles, which were further inspected against the exclusion criteria. Finally, 53 articles using traditional machine learning methods and 49 articles using deep learning were selected; their results are analyzed and discussed in this study. For studies that tested several models, we list only the best-scoring model in each sample.

2.2. Datasets

The performance of melanoma diagnosis has improved with the dermoscopic method [21]. Dermoscopy is a non-invasive skin imaging technique that captures illuminated and magnified images of skin lesions to improve the clarity of the spots; removing the reflection from the skin surface improves the visualization of the deeper skin layers [60]. Automatic identification of melanoma in dermoscopic images is challenging for several reasons: firstly, intraclass variability in lesions, such as texture, scale, and color; secondly, the high resemblance between melanoma and non-melanoma lesions; and finally, surrounding conditions such as hair, veins, color calibration charts, and ruler marks.
This section describes the datasets most used in this area of research. A broad range of datasets has been employed: freely available online datasets such as MED-NODE, DermIS, DermQuest, ISIC 2016, 2017, 2018, and 2019, and PH2, as well as paid datasets such as Dermofit.
MED-NODE consists of 170 images of melanoma and nevus, divided into 70 and 100 images, respectively. The dataset came from the Digital Image Archive of the Department of Dermatology of a university medical center in the Netherlands and was used to develop and test a system for detecting skin cancer in macroscopic images [61]. DermIS [62] is the Dermatology Information System skin dataset. It is divided into two classes, nevus and melanoma, and contains 69 images: 26 nevus and 43 melanoma. DermQuest is a dataset consisting of 137 images divided into two classes, melanoma and nevus, with 76 and 61 images, respectively [63].
The PH2 [64] database was created in collaboration between Porto University, the Technical University of Lisbon, and Hospital Pedro Hispano in Matosinhos. It comprises 200 RGB color images at a 768 × 560 pixel resolution, in three groups: melanoma, common nevus, and atypical nevus, with 40, 80, and 80 images, respectively.
The skin colors represented in the PH2 database range from white to creamy white. As illustrated in Figure 3, the images were carefully selected, taking into account their resolution, quality, and dermoscopic features.
The International Skin Imaging Collaboration (ISIC) 2016 [65] dataset, used in the 2016 ISIC-ISBI challenge, provides 900 training images; participants produce and submit automated results on a separate test dataset of 350 images. The training dataset consists of two classes, melanoma and benign, containing 173 and 727 dermoscopic images, respectively.
The International Skin Imaging Collaboration (ISIC) 2017 dataset [66] is also referred to as the 2017 ISBI Challenge on Skin Lesion Analysis Towards Melanoma Detection. This challenge provides training data (2000 images), a separate validation dataset (150 images), and a blind held-out test dataset (600 images). The training dataset consists of three classes, with 374, 254, and 1372 dermoscopic images for melanoma, seborrheic keratosis, and nevus, respectively. Figure 4 shows examples of different skin lesions from ISIC 2017.
The International Skin Imaging Collaboration (ISIC) 2018 dataset [67,68], also referred to as the HAM10000 (“Human Against Machine with 10,000 training images”) dataset, was divided into a training dataset of 10,015 images and a test dataset of 1512 images. This dataset was compiled using a variety of dermatoscopy techniques on all anatomic sites (except mucosa and nails) from a retrospective sample of patients who had undergone skin cancer screening at multiple institutions. The training dataset consists of seven classes: AKIEC, BCC, Benign Keratosis (BKL), Dermatofibroma (DF), Melanocytic nevus (NV), Melanoma (MEL), and Vascular lesion (VASC), with varying numbers of images per class: MEL has 1113, NV has 6705, BCC has 514, AKIEC has 327, BKL has 1099, DF has 115, and VASC has 142. Classifying images into seven groups makes this dataset one of the most difficult challenges.
Another dataset from the International Skin Imaging Collaboration, ISIC 2019 (BCN_20000) [69], consists of eight known classes and one class for outlier images. The known classes are MEL, NV, BCC, AKIEC, BKL, DF, VASC, and SCC. ISIC 2019 comprises 25,331 images: AKIEC has 867, BKL has 2624, BCC has 3323, DF has 239, NV has 12,875, MEL has 4522, SCC has 628, and VASC has 253. Figure 5 depicts the several types of skin cancer. With eight classes and an uneven number of images per class, this dataset is one of the hardest to categorize; the hardest task is detecting outliers, i.e., images whose diagnosis lies outside the known classes (“out-of-distribution” samples).
The Dermofit Image Library dataset is composed of 1300 skin images with corresponding class labels and lesion segmentations. There are ten lesion categories: nevus, MEL, seborrhoeic keratosis, BCC, DF, AKIEC, hemangioma, SCC, intraepithelial carcinoma, and pyogenic granuloma, with 331, 76, 257, 239, 65, 45, 97, 88, 78, and 24 images, respectively [70].
The Interactive Atlas of Dermoscopy (EDRA) [71] is another skin image dataset, with 20 labels including melanoma with several subtypes, BCC, blue nevus, Clark’s nevus, combined nevus, congenital nevus, dermal nevus, DF, lentigo, melanosis, recurrent nevus, Reed nevus, seborrheic keratosis (SK), and VASC.
The ISIC challenge 2020 dataset consists of 33,126 dermoscopic images acquired from over 2000 patients. Images in the dataset are divided into nine classes plus an unknown image class [72]. Table 1 summarizes the total number of images and the number of images per class for all of these datasets.
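The class counts above make the imbalance in these datasets concrete. As a hedged sketch (a common remedy in the literature rather than a technique prescribed by any particular reviewed paper), one can weight each class inversely to its frequency when training a classifier, illustrated here with the ISIC 2018 (HAM10000) training counts listed earlier:

```python
# Training-image counts per class in ISIC 2018 (HAM10000), as listed above.
ham10000 = {"MEL": 1113, "NV": 6705, "BCC": 514, "AKIEC": 327,
            "BKL": 1099, "DF": 115, "VASC": 142}

total = sum(ham10000.values())  # 10,015 training images

# Weight each class inversely to its frequency so that rare classes
# (DF, VASC) contribute as much to the loss as the dominant NV class.
weights = {c: total / (len(ham10000) * n) for c, n in ham10000.items()}
```

With these weights, the expected per-image weight over the training set equals one, so the overall loss scale is preserved while rare classes are up-weighted.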
For more than 30 years, skin cancer detection by CAD systems has been a hot topic of research [73]. For example, several methods for melanoma identification, classification, and segmentation have been developed and tested [74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99]. The following sections survey the state of the art reported in journal papers published in the ScienceDirect, IEEE, and SpringerLink databases during the last five years.

2.3. Traditional Machine Learning

The asymmetry, border, color, diameter (ABCD) rule was used by the authors in [100] to analyze the border and color of skin lesions. They classified the features using a multilayer perceptron (MLP) network trained with backpropagation. In [101], a Gabor filter and geodesic active contours were used to enhance the image and remove hair; the ABCD scoring method was then used to extract features, and a combination of existing methods was used to classify lesions. In [102], the authors classified melanoma based on lesion thickness using two classification schemata: the first classified lesions as thin or thick, and the second as thin, intermediate, or thick. They combined logistic regression with artificial neural networks for classification. In [103], lesions were enhanced using a median filter applied separately to each channel of the RGB space and then segmented with a deformable model. A segmentation method based on the Chan–Vese model was proposed in [104]: images were first enhanced using an isotropic diffusion filter, then ABCD was used to extract features from the segmented regions, and these features were classified using a support vector machine (SVM). In [105], the authors proposed a classification system for BCC and MEL using Paraconsistent Logic (PL) with two-valued annotation; they extracted degrees of evidence, the formation pattern, and a diagnosis comparison. The spectra values used to differentiate between normal, BCC, and MEL were 30, 96, and 19, respectively. A Delaunay triangulation was used in [106] to extract the binary mask of ROIs. The authors of [107] segmented histopathological images by extracting the granular layer boundary and then used intensity profiles to classify only two lesion types.
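For reference, ABCD scoring as used in dermoscopy is conventionally summarized by a Total Dermoscopy Score (TDS), a weighted sum of the four criteria with standard cut-offs; the sketch below uses the conventional weights and thresholds, though individual studies such as [100,101,104] may adapt them:

```python
def total_dermoscopy_score(a, b, c, d):
    """ABCD rule of dermoscopy: weighted sum of the four criteria.

    a: asymmetry score (0-2), b: border score (0-8),
    c: number of colors present (1-6),
    d: number of differential structures (1-5).
    """
    return 1.3 * a + 0.1 * b + 0.5 * c + 0.5 * d

def interpret_tds(tds):
    # Conventional cut-offs: < 4.75 benign, > 5.45 suspicious for melanoma.
    if tds < 4.75:
        return "benign"
    if tds <= 5.45:
        return "suspicious"
    return "highly suspicious for melanoma"
```

In a CAD pipeline, the A, B, C, and D scores themselves are estimated automatically from the segmented lesion, which is why accurate segmentation (Section 2 workflow) matters so much for ABCD-based methods.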
A comparison of four classification methods for skin lesion diagnosis is summarized in [108] to determine the best method. Based on the histopathology of BCC, the authors of [109] combined two or three Fourier transform features to form one Z-transform feature. A CAD system using principal component analysis (PCA) and SVM was proposed in [110] to classify skin psoriasis. A framework for the segmentation of BCC was proposed in [111], in which the hemoglobin component was clustered using k-means. In [112], a skin cancer classification system was based on lesion depth using 3D reconstruction; the authors utilized adaptive snakes, stereo vision, structure from motion, and depth from focus for segmentation and classification.
In [113], the lesion was extracted using a self-generated neural network, and descriptive border, texture, and color features were then extracted. Finally, an ensemble classifier combining fuzzy neural networks with a backpropagation (BP) neural network was used to classify the lesions based on the extracted features. A fixed grid wavelet network was used in [114] to enhance and segment skin lesions; the resulting features were classified by D-optimality orthogonal matching pursuit. Based on a chaotic time series analysis of the boundary, Khodadadi et al. [115] characterized the irregular boundary of an infected skin lesion by the Lyapunov exponent and Kolmogorov–Sinai entropy.
In [116], a segmentation technique for skin lesions was proposed based on ant colony optimization, using three types of lesion features: texture, relative colors, and geometrical properties. These features were classified by two classifiers: an artificial neural network (ANN) and k-nearest neighbors (KNN). Based on shape and color, the authors of [117] combined several features after ABCD-based segmentation; these features were classified and tested both individually and combined. The Histogram of Gradients (HOG) and the Histogram of Lines (HL) were used in [118] to create a separate bag of features for each, from which texture and color features were extracted for skin lesion detection; color features were extracted using third-order Zernike moments.
Roberta et al. [119] proposed a skin lesion diagnosis based on an ensemble model for feature manipulation. Przystalski et al. [120] proposed multispectral lesion analysis by fractal methods. Jaisakthi et al. [121] proposed a segmentation method for skin lesions using the GrabCut algorithm and k-means. Do et al. [122] introduced a melanoma detection system for smartphones: images were acquired with a smartphone camera, and the authors searched for the processing method best suited to smartphones, starting with a hierarchical segmentation approach and numerical features to classify a skin lesion. Adjed et al. [123] proposed a feature extraction method using the wavelet transform, curvelet transforms, and local binary patterns (LBP); the extracted features were classified using an SVM. Hosseinzade [124] segmented lesion images based on texture characteristics, which were described by the Gabor filter and then clustered by k-means.
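The LBP descriptor used by Adjed et al. [123] encodes local texture by thresholding each pixel’s neighbors against the pixel itself. A minimal pure-Python sketch of the basic 3 × 3 operator follows; real implementations vectorize this and histogram the codes over the lesion region to form the feature vector:

```python
def lbp_code(patch):
    """Basic 3x3 local binary pattern: threshold the 8 neighbors
    against the center pixel and read them as an 8-bit code."""
    center = patch[1][1]
    # Clockwise neighbor order starting at the top-left corner.
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:
            code |= 1 << bit
    return code

# A flat patch yields the all-ones code 255 (every neighbor >= center).
flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
```

The resulting 256-bin histogram is invariant to monotonic illumination changes, which is one reason LBP remains popular for dermoscopic texture.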
Akram et al. [125] proposed an automatic skin lesion segmentation and classification system using several diverse techniques: ABCD, fuzzy C-means, pair-threshold binary decomposition, HOG, and linear discriminant analysis (LDA). Jamil et al. [126] proposed a technique for skin lesion detection that classified lesions based on color, shape, Gabor wavelet, and gray-intensity features. Khan et al. [127] combined the Bhattacharyya distance and variance in an entropy-based method for skin lesion detection and classification.
Tan et al. [128] enhanced Particle Swarm Optimization (PSO) for skin lesion feature optimization. They modified two PSO models for discriminative feature selection: the first performed a global search over combined lesion features and an in-depth local search over lesions separated into specific areas, while the second used random acceleration coefficients. The selected features were then classified using different classification methods. Tajeddin et al. [129] classified skin melanoma based on highly discriminative features. They started with contour propagation for lesion segmentation; to extract features from the peripheral area, lesions were mapped to log-polar space using Daugman’s transformation. Finally, different classifiers were compared.
To classify skin lesions, the authors of [130] utilized the structural co-occurrence matrix of frequencies extracted from dermoscopic images. Peñaranda et al. [131] classified skin lesions by analyzing skin cells using Fourier transform infrared spectroscopy, and additionally studied which perturbations influenced the results. Wahba et al. [132] proposed a skin lesion classification system based on the gray-level difference method, discriminating between four lesion types by extracting features with ABCD and the cumulative level-difference mean and classifying them with an SVM. Zakeri et al. [133] proposed a CAD system to differentiate between melanoma and dysplastic lesions, enhancing the gray-level co-occurrence matrix to extract features, which were classified using an SVM. Pathan et al. [134] proposed a detection system for pigment networks that differentiated between typical and atypical network patterns. In [135], laser-induced breakdown spectroscopy combined with statistical methods was used to distinguish between soft tissues of the skin.
Chatterjee et al. [136] utilized non-invasive images of skin lesions to distinguish melanoma from nevi, obtaining the texture pattern of the skin using 2D wavelet packet decomposition. Qasim et al. [137] proposed a skin lesion classification system in which KNN was applied to images enhanced with a Gaussian filter to extract the ROI, and the segmented ROI was classified using an SVM. Madooei et al. [138] utilized the blue-whitish structure to differentiate melanoma from nevus lesions. Saez et al. [139] classified lesions as melanoma or nevi based on the lesion color itself and its neighborhood color values. Navarro et al. [140] proposed a segmentation system for skin lesions and classified the segmented lesions into melanoma and nevi. Riaz et al. [141] proposed a CAD system for skin lesions that started with lesion segmentation to extract the ROI, utilizing the Kullback–Leibler divergence to detect lesion boundaries.
Sabbaghi et al. [142] presented a QuadTree based on the perception of lesion color, finding that the three most common colors of melanoma were blue-grey, black, and pink; they then used different classifiers, such as SVM, ANN, LDA, and random forests (RFs). Murugan et al. [143] utilized watershed segmentation to extract the ROI; features were extracted using ABCD and the Gray-Level Co-occurrence Matrix (GLCM) and classified using KNN, RF, and SVM. Khalid et al. [144] suggested a segmentation method for dermoscopic skin lesion images using a combination of the wavelet transform with morphological operations. Majumder et al. [145] proposed three features for melanoma classification based on the ABCD rule. Chatterjee et al. [146] introduced fractal-based regional texture analysis (FRTA) to extract shape, fractal dimension, texture, and color features and classified three lesion types using an SVM with a radial basis function (RBF) kernel.
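The GLCM features used by Murugan et al. [143] and Zakeri et al. [133] count how often pairs of gray levels co-occur at a given spatial offset; Haralick statistics such as contrast are then derived from the normalized matrix. A minimal sketch for the horizontal (0, 1) offset follows (library implementations additionally support arbitrary offsets, angles, and symmetric counting):

```python
def glcm(image, levels):
    """Gray-level co-occurrence matrix for the horizontal (0, 1) offset."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):  # each horizontally adjacent pair
            m[a][b] += 1
    return m

def glcm_contrast(m):
    """Haralick contrast: (i - j)^2 weighted by the normalized counts."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * m[i][j] / total
               for i in range(len(m)) for j in range(len(m)))

# Checkerboard image: only (0, 1) and (1, 0) transitions occur horizontally.
m = glcm([[0, 1], [1, 0]], levels=2)
```

High contrast indicates coarse, heterogeneous texture; a uniform region gives a contrast of zero, which is why GLCM statistics discriminate between lesion textures.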
Chatterjee et al. [147] proposed a classification system for four kinds of lesions. Features were extracted using cross-correlation techniques based on frequency domain analysis, and the system differentiated, in a binary manner, between benign and malignant lesions of the epidermal and melanocytic classes. Upadhyay et al. [148] extracted color, orientation histogram, and gradient location features of skin lesions; these features were fused and classified as benign or malignant using an SVM. Pathan et al. [149] proposed a skin lesion CAD system. Garcia-Arroyo et al. [150] proposed a skin lesion detection system in which lesions were segmented using fuzzy histogram thresholding. Hu et al. [151] suggested a skin lesion classification approach that introduced codewords for a bag of features to measure the similarity between features. Moradi et al. [152] suggested a skin lesion segmentation and classification system based on sparse kernel representation. Pereira et al. [153] proposed a skin lesion classification system based on characteristics of the lesion borderline, combining LBP with gradients.

2.4. Deep Learning

Kawahara et al. [154] proposed a skin lesion classification system using a convolutional neural network (CNN); the modified pre-trained CNN was able to classify images at different resolutions. Yu et al. [155] proposed a novel CNN for skin lesion segmentation and classification, based on residual learning and consisting of 50 layers. Codella et al. [156] proposed a deep residual network for skin lesion classification using the benchmark ISIC 2016 dataset. Bozorgtabar et al. [157] proposed a decision support system that localized skin cancer automatically using deep convolutional learning for pixel-wise image segmentation. Yuan et al. [158] proposed a skin lesion segmentation system leveraging a 19-layer CNN; instead of the traditional loss function, they utilized the Jaccard distance as the loss function.
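The Jaccard-distance loss used by Yuan et al. [158] directly optimizes the overlap between the predicted and ground-truth masks, making it robust to the foreground/background imbalance typical of lesion images. A minimal sketch of the soft (differentiable) form over flattened pixel probabilities follows; the exact formulation in [158] may differ in smoothing details:

```python
def soft_jaccard_loss(pred, target, eps=1e-7):
    """Differentiable Jaccard (IoU) loss: 1 - |P∩T| / |P∪T|,
    computed with predicted probabilities in place of hard masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(p + t - p * t for p, t in zip(pred, target))
    return 1.0 - inter / (union + eps)
```

Unlike pixel-wise cross-entropy, this loss is unaffected by the large number of easy background pixels, since they contribute to neither the intersection nor the union.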
Sultana et al. [159] proposed a skin lesion detection system using deep residual learning with a regularized framework. Rundo et al. [160] utilized the ABCD rule to analyze skin lesions; ad hoc clustering was then performed using a pre-trained deep Levenberg–Marquardt neural network. Creswell et al. [161] proposed denoising adversarial autoencoders to classify limited and imbalanced skin lesion images. Harangi [162] ensembled four different CNNs to investigate the impact on performance. Guo et al. [163] ensembled multiple ResNets to analyze skin lesions; the training images for each ResNet were preprocessed in different ways while the labels remained unchanged.
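Harangi [162] and Guo et al. [163] both fuse several networks; their exact fusion rules vary, but the simplest common scheme averages the per-class probability outputs of the member networks, sketched here with hypothetical outputs of three networks over three classes (MEL, NV, BCC):

```python
def ensemble_average(prob_lists):
    """Average per-class probabilities across several CNN outputs."""
    n = len(prob_lists)
    return [sum(p[i] for p in prob_lists) / n
            for i in range(len(prob_lists[0]))]

def argmax(probs):
    """Index of the predicted class: the largest fused probability."""
    return max(range(len(probs)), key=probs.__getitem__)

# Hypothetical softmax outputs of three networks over (MEL, NV, BCC):
nets = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.5, 0.4, 0.1]]
fused = ensemble_average(nets)
```

Averaging tends to cancel the uncorrelated errors of individual networks, which is why ensembles usually outperform their best single member.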
Monedero et al. [164] utilized lesion thickness to detect melanoma using the Breslow index; the extracted texture, shape, pigment network, and color features of lesions were classified into five types using GoogLeNet. Hagerty et al. [165] extracted skin lesion features with both deep learning and conventional image processing, combined and fused the two feature sets, and used the newly generated features to classify lesions. Polap [166] proposed a skin lesion classification system based on deep learning over IoT. The proposed model classified images quickly, but with a low classification rate. Such a system could be used in a smart home as part of an intelligent monitoring system [167].
Sarkar et al. [168] proposed a depth-wise separable residual deep convolutional network to classify skin cancer; a non-local means filter was followed by contrast-limited adaptive histogram equalization (CLAHE) applied over the discrete wavelet transform (DWT). Zhang et al. [169] proposed a CNN with attention residual learning to classify skin lesions into three classes; the model comprises four attention residual blocks with 50 layers in total. Albahar [170] introduced a skin lesion detection system for binary classification into malignant or benign; he proposed a seven-layer CNN together with a regularization method that controls the classifier's complexity using the standard deviation of the weight matrix.
González-Díaz [171] introduced a skin lesion CAD system based on a CNN called DermaKNet. The network was based on ResNet50, with a modulation block applied over the outputs of the convolutional res5c layer. Two pooling layers (AVG and Polar AVG) operated in parallel, and three fully connected layers were placed at the end of the CNN. Because melanoma grows in different patterns, the third fully connected layer precedes an asymmetry block, which was used to detect these different growth patterns.
Kawahara et al. [172] proposed a CNN-based skin lesion classification system that worked on multiple tasks simultaneously; the network classified the seven-point melanoma checklist criteria and used both skin lesion images and patient metadata for diagnosis. Yu et al. [173] proposed a skin lesion classification system using a CNN with a local descriptor encoding method. Lesion features were extracted using ResNet50 and ResNet101, a Fisher vector (FV) was used to build a global image representation from the extracted features, and an SVM with a chi-squared kernel performed the final classification. Dorj et al. [174] proposed a skin cancer classification system in which the pre-trained AlexNet extracted features and an Error-Correcting Output Codes (ECOC) SVM classified them.
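The chi-squared SVM kernel used in [173] has the general form sketched below. This is a generic illustration assuming non-negative feature vectors (e.g., normalized histograms); the kernel parameters and exact variant used in the paper are not specified here.

```python
import math

def chi2_kernel(x, y, gamma=1.0, eps=1e-12):
    """Exponential chi-squared kernel for non-negative features:
    k(x, y) = exp(-gamma * sum((xi - yi)^2 / (xi + yi))).
    Returns 1.0 for identical inputs and decays toward 0 as vectors diverge."""
    d = sum((xi - yi) ** 2 / (xi + yi + eps) for xi, yi in zip(x, y))
    return math.exp(-gamma * d)

print(chi2_kernel([0.2, 0.8], [0.2, 0.8]))  # -> 1.0
```

In practice such a kernel is plugged into an SVM as a precomputed Gram matrix; the chi-squared distance is a natural fit for histogram-like descriptors because it normalizes each bin's squared difference by the bin's mass.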
Gavrilov et al. [175] proposed a skin neoplasm (cancer) classification system by applying transfer learning to Inception V3 (GoogleNet). Web and mobile applications were also developed to allow patients to assess their lesions and obtain a preliminary diagnosis from self-captured images. Chen et al. [176] proposed a classification system for facial skin diseases, using three CNN models with transfer learning to classify five diseases of the face; the model was deployed on a cloud platform. Mahbod et al. [177] used four CNN models (AlexNet, VGG, ResNet-18, and ResNet-101) to classify three skin lesion types. SVM, RF, and MLP classifiers were applied to the CNN-extracted features, and their outputs were ensembled into a single classification for each input lesion. Brinker et al. [178] proposed a skin lesion classification system based on a modified pre-trained ResNet50, which outperformed expert dermatologists in classifying lesions into melanoma and nevus.
Tan et al. [179] used PSO for skin lesion segmentation, optimizing it with methods such as the Firefly Algorithm (FA), spiral search actions, probability distributions, crossover, and mutation; K-means was used to enhance lesion segmentation. The resulting hybrid learning PSO (HLPSO) was used in developing a CNN that classified lesions into melanoma and nevus. Khan et al. [180] used a custom ten-layer CNN for image segmentation and a deep pre-trained CNN for feature extraction; an improved moth flame optimization (IMFO) algorithm then selected features, which were fused using multiset maximum correlation analysis and classified with a Kernel Extreme Learning Machine (ELM) classifier.
Tschandl et al. [181] proposed combining and expanding different CNNs for the segmentation and classification of skin lesions, using three well-known benchmark datasets; they found that post-processing on a small, noisy dataset decreased the Jaccard loss. Vasconcelos et al. [182] proposed a skin lesion segmentation system using morphological geodesic active contours. Different CNN models were used, such as full resolution convolutional networks (FrCN) and deep class-specific learning with probability-based step-wise integration (DCL-PSI); the proposed model classified skin lesions into melanoma and nevus.
Burlina et al. [183] proposed a CNN for detecting acute Lyme disease from erythema migrans images, even under varying acquisition and quality conditions; they fine-tuned ResNet50 and replaced its final layers, and the model classified four different lesion types. Maron et al. [184] proposed a system that classified five types of skin lesions by applying transfer learning to ResNet50 and fine-tuning the CNN weights; they compared the CNN's classification rate against that of 112 expert dermatologists.
Goyal et al. [185] proposed a skin lesion segmentation system that ensembled the segmentation outputs of Mask R-CNN and DeepLabV3+. Albert [186] proposed Predict-Evaluate-Correct K-fold (PECK), which trains CNNs from limited data, for skin lesion classification, together with a segmentation method, Synthesis and Convergence of Intermediate Decaying Omnigradients (SCIDOG), to detect lesion contours; the segmented lesions were then classified using a CNN combined with SVM and RF. Ahmad et al. [187] employed three loss functions while fine-tuning ResNet152 and InceptionResNet-V2 layers; the L2 distance between images was computed in Euclidean space and used to adapt the weights for classification.
Kwasigroch et al. [188] proposed a skin lesion classifier built by a hill-climbing search over the network architecture space; the search progressively increased the network size while keeping the computational cost of architecture selection low. Adegun et al. [189] proposed an encoder–decoder network with sub-networks connected by skip connections, used for skin lesion segmentation and pixel-wise classification. Song et al. [190] showed that CNNs could segment, detect, and classify skin lesions; to handle imbalanced datasets, they used a loss function based on the Jaccard distance and the focal loss. Wei et al. [191] proposed a skin lesion recognition system based on fine-grained classification to learn discriminative features; different lightweight CNNs were used for segmentation and classification.
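The focal loss that Song et al. combined with the Jaccard distance can be illustrated with a minimal binary sketch following the standard formulation of Lin et al.; the hyperparameters below (gamma, alpha) are the commonly used defaults, not necessarily those of [190].

```python
import math

def focal_loss(y_true, p_pred, gamma=2.0, alpha=0.25, eps=1e-12):
    """Binary focal loss: down-weights well-classified examples by (1 - pt)^gamma
    so training focuses on hard, often minority-class, samples."""
    total = 0.0
    for t, p in zip(y_true, p_pred):
        pt = p if t == 1 else 1.0 - p          # probability assigned to the true class
        a = alpha if t == 1 else 1.0 - alpha   # class-balancing weight
        total += -a * (1.0 - pt) ** gamma * math.log(pt + eps)
    return total / len(y_true)

# A confident correct prediction costs far less than a confident wrong one
print(focal_loss([1], [0.9]) < focal_loss([1], [0.1]))  # -> True
```

The `(1 - pt)^gamma` factor is what distinguishes this from plain cross-entropy: easy examples with `pt` near 1 contribute almost nothing, so rare lesion classes dominate the gradient.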
Gong et al. [192] proposed a dermoscopic skin image classification system using deep learning models. The authors enhanced skin images using StyleGANs, and the enhanced images were classified by an ensemble of 43 CNNs; the CNNs were divided into three groups with different fusion methods, and the final classification decision was generated by max voting.
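The max-voting fusion step can be sketched as below. This is a generic illustration of majority voting over per-model predictions; Gong et al.'s three-group fusion scheme is more elaborate.

```python
from collections import Counter

def max_vote(predictions):
    """Fuse per-model class predictions by majority (max) voting;
    ties resolve to the class appearing first among the inputs."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical model outputs fused into one decision
print(max_vote(["melanoma", "nevus", "melanoma"]))  # -> melanoma
```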
Nasiri et al. [193] proposed a skin lesion classification system based on deep learning with case-based reasoning (CBR). Öztürk et al. [194] proposed a segmentation system for skin lesions. Hosny et al. [195] proposed a new deep CNN classification system for skin lesions, performing three experiments on three datasets; comparing traditional machine learning classifiers with the CNN, they found that the conventional classifiers led to a lower classification rate.
Amin et al. [196] proposed a skin lesion classification system: images were first enhanced, then segmented using biorthogonal 2-D wavelet transforms and the Otsu algorithm; finally, two pre-trained models were fused serially to extract features, and PCA was applied before classification. Mahbod et al. [197] investigated the effect of different skin lesion image sizes on transfer learning with pre-trained models.
Hameed et al. [198] proposed a skin lesion classification system based on a multiclass, multilevel algorithm combining traditional machine learning and deep learning methods. Zhang et al. [199] proposed an optimization algorithm for selecting weights that minimize the network output error in skin lesion classification. Hasan et al. [200] proposed a semantic segmentation network for skin lesions called DSNet, using depth-wise separable convolutions to reduce the number of parameters and produce a lightweight network. Al-masni et al. [201] proposed a diagnostic framework that combined segmentation of lesion boundaries with multiple classification stages; lesions were segmented using a full resolution convolutional network (FrCN) and classified with four CNNs, and the system was evaluated and tested on three benchmark datasets.
Pour et al. [202] proposed a CNN-based segmentation method for skin lesions, trained from scratch on a small dataset with augmentation. Olusola et al. [203] used image augmentation (a variant of SMOTE) and then classified skin lesions into benign and malignant using SqueezeNet. Hosny et al. [204] used AlexNet with transfer learning to classify the challenging ISIC2018 dataset, applying different approaches for lesion segmentation. Hosny et al. [205] proposed a CAD system for skin lesions on the challenging ISIC2019 dataset, which poses difficulties such as imbalanced classes and unknown images. The authors used a bootstrap weighted classifier with a multiclass SVM that changed the weights according to the image class. Unknown images were handled in two ways: first, GoogleNet was trained with an additional class containing unknown images collected from various sources; second, a similarity score was used, and if the highest similarity score of a test image against the eight known classes was below 0.5, the image was labeled as unknown (out of distribution). The number of classes used for skin disease recognition in the analyzed works is summarized in Figure 6. The vast majority of works used two-class recognition only, while just one study [172] used 10 classes.
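The similarity-score rejection rule described for ISIC2019 can be sketched as below. This is a simplified illustration with hypothetical class names and scores; the actual similarity measure of [205] is not reproduced here.

```python
def classify_with_unknown(scores, threshold=0.5):
    """Given similarity scores of a test image against each known class,
    return the best-matching class, or 'unknown' when even the highest
    score falls below the threshold (out-of-distribution rejection)."""
    best_class = max(scores, key=scores.get)
    return best_class if scores[best_class] >= threshold else "unknown"

# No known class is similar enough, so the image is rejected as unknown
scores = {"melanoma": 0.31, "nevus": 0.42, "bcc": 0.27}
print(classify_with_unknown(scores))  # -> unknown
```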

3. Discussion and Conclusions

This paper is an analytical survey of the literature on skin lesion image classification and disease recognition: a comprehensive review of the methods and algorithms used for the processing, segmentation, and classification of skin lesion images. Both classical machine learning methods and deep convolutional neural network models were explored, and the available, well-known datasets were discussed and compared. At the end of the systematic survey, a table compares state-of-the-art methods in a novel way: the column "Simple" in Table 2 indicates that the proposed method is uncomplicated, easy to apply, and does not require specialized hardware, whereas the column "Contribution Achieved" indicates whether the paper achieved its stated contribution.
The comparison of skin lesion classification methods shows that the problem formulations of the individual works vary slightly. An efficient melanoma detection process has five core elements: data acquisition (collection), fine-tuning, feature selection, deep learning, and final model development. The first step is data acquisition, drawing on publicly available benchmarks as well as unlisted and non-public databases, such as melanoma images collected from the internet.
Several deep learning approaches were based on transfer learning, others on ensembles, and some on hybrid techniques combining neural networks with fully convolutional networks. Pre-trained deep learning models and handcrafted methods built on a deep learning approach have already shown promising results for high-accuracy melanoma detection.
A limited range of images is available for training, testing, and comparison, as most of the datasets are small. On small datasets, the proposed methods do well but are prone to over-fitting, and when tested on a large image set, they behave unpredictably. For example, the PH2 dataset contains just 200 images. The problem of training with a small dataset can be mitigated by data augmentation, image generation with generative adversarial networks, and transfer learning. Some researchers use non-public databases and internet images; this makes findings harder to replicate because the dataset is unavailable, and the selection of images from the internet may be biased.
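Basic geometric augmentation of the kind used to mitigate small datasets can be sketched without any imaging library, representing an image as a 2D list of pixel values. This is a minimal illustration; practical pipelines also use random crops, color jitter, and generative methods.

```python
def augment(image):
    """Generate simple geometric augmentations of an image given as a 2D
    list of pixel rows: horizontal flip, vertical flip, and 90-degree rotation.
    Each variant preserves the lesion's appearance but changes its orientation."""
    hflip = [row[::-1] for row in image]              # mirror left-right
    vflip = image[::-1]                               # mirror top-bottom
    rot90 = [list(r) for r in zip(*image[::-1])]      # rotate 90 deg clockwise
    return [hflip, vflip, rot90]

img = [[1, 2],
       [3, 4]]
for variant in augment(img):
    print(variant)
```

Because lesion diagnosis is invariant to orientation, each such transform yields a valid new training sample, effectively multiplying the dataset size at no labeling cost.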
Another major task in this research field is the creation of large public image datasets that are as representative of the world's population as possible, in order to avoid racial bias [206]. Bias based on gender and race means that models and algorithms fail to give optimal results for people of an under-represented gender or ethnicity.
Current datasets mostly contain skin lesions from light-colored skin; for example, the ISIC dataset images were mostly obtained in the USA, Europe, and Australia. CNNs can only learn to account for skin color, and thus classify lesions on dark skin properly, if the training dataset contains sufficient images of dark-skinned people. Lesion size also matters: if a lesion is smaller than 6 mm, melanoma cannot be detected easily.
Adding clinical data such as race, age, gender, and skin type as classifier inputs may help to increase classification accuracy, and this supplemental data could also benefit dermatologists' decision making; these aspects should be included in future work. Finally, in the authors' view, deep learning outperforms traditional machine learning when the dataset contains a large number of images per class, and even on datasets with few images it can overcome the limitation through various augmentation methods, making intelligent decisions on its own with a higher accuracy rate.

Author Contributions

Methodology, M.A.K.; formal analysis, K.M.H., M.M.E., R.D.; investigation, M.A.K., K.M.H., M.M.E.; resources, M.A.K., M.M.E.; data curation, M.A.K.; writing—original draft preparation, M.A.K., M.M.E.; writing—review and editing, K.M.H., R.D.; project administration, K.M.H.; funding acquisition, M.M.E., R.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Capdehourat, G.; Corez, A.; Bazzano, A.; Alonso, R.; Musé, P. Toward a combined tool to assist dermatologists in melanoma detection from dermoscopic images of pigmented skin lesions. Pattern Recognit. Lett. 2011, 32, 2187–2196. [Google Scholar] [CrossRef]
  2. American Cancer Society. Statistics 2013. Available online: https://www.cancer.org/research/cancer-facts-statistics/all-cancer-facts-figures/cancer-facts-figures-2013.html?fbclid=IwAR2gMmnaky1m3LdETjBwoTiRkaxDiaKvWss9UlSVx6YqWmR-rrehUjBMpvs (accessed on 10 May 2021).
  3. Korotkov, K.; Garcia, R. Computerized analysis of pigmented skin lesions: A review. Artif. Intell. Med. 2012, 56, 69–90. [Google Scholar] [CrossRef] [PubMed]
  4. Skin Cancer Foundation. Skin Cancer Information. 2016. Available online: http://www.skincancer.org/skin-cancer-information (accessed on 20 August 2016).
  5. AlZubi, S.; Islam, N.; Abbod, M. Multiresolution analysis using wavelet, ridgelet, and curvelet transforms for medical image segmentation. J. Biomed. Imaging 2011, 4, 2011. Available online: http://www.dermquest.com (accessed on 20 July 2015). [CrossRef] [Green Version]
  6. Arroyo, J.L.G.; Zapirain, B.G. Detection of pigment 131 network in dermoscopy images using supervised machine learning and structural analysis. Comput. Biol. Med. 2014, 44, 144–157. [Google Scholar] [CrossRef]
  7. Lee, H.Y.; Chay, W.Y.; Tang, M.B.; Chio, M.T.; Tan, S.H. Melanoma: Differences between asian and caucasian patients. Ann. Acad. Med. Singap. 2012, 41, 17–20. [Google Scholar] [PubMed]
  8. Rigel, D.S.; Russak, J.; Friedman, R. The evolution of melanoma diagnosis: 25 years beyond the abcds. CA Cancer J. Clin. 2010, 60, 301–316. [Google Scholar] [CrossRef]
  9. Laikova, K.V.; Oberemok, V.V.; Krasnodubets, A.M.; Gal’chinsky, N.V.; Useinov, R.Z.; Novikov, I.A.; Temirova, Z.Z.; Gorlov, M.V.; Shved, N.A.; Kumeiko, V.V.; et al. Advances in the Understanding of Skin Cancer: Ultraviolet Radiation, Mutations, and Antisense Oligonucleotides as Anticancer Drugs. Molecules 2019, 24, 1516. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Apalla, Z.; Lallas, A.; Sotiriou, E.; Lazaridou, E.; Ioannides, D. Epidemiological trends in skin cancer. Dermatol. Pract. Concept. 2017, 7, 1–6. [Google Scholar] [CrossRef] [Green Version]
  11. Karimkhani, C.; Green, A.C.; Nijsten, T.; Weinstock, M.A.; Dellavalle, R.P.; Naghavi, M.; Fitzmaurice, C. The global burden of melanoma: Results from the Global Burden of Disease Study 2015. Br. J. Dermatol. 2017, 177, 134–140. [Google Scholar] [CrossRef] [Green Version]
  12. Schadendorf, D.; Lebbé, C.; Hausenzur, A.; Avril, M.F.; Hariharan, S.; Bharmal, M.; Becker, J.C. Merkel cell carcinoma: Epidemiology, prognosis, therapy and unmet medical needs. Eur. J. Cancer 2017, 71, 53–69. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Timerman, D.; McEnery-Stonelake, M.; Joyce, C.J.; Nambudiri, V.E.; Hodi, F.S.; Claus, E.B.; Ibrahim, N.; Lin, J.Y. Vitamin D deficiency is associated with a worse prognosis in metastatic melanoma. Oncotarget 2017, 8, 6873–6882. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Feller, L.; Khammissa, R.A.; Kramer, B.; Altini, M.; Lemmer, J. Basal cell carcinoma, squamous cell carcinoma and melanoma of the head and face. Head Face Med. 2016, 12, 11. [Google Scholar] [CrossRef] [Green Version]
  15. Becker, J.C.; Stang, A.; DeCaprio, J.A.; Cerroni, L.; Lebbé, C.; Veness, M.; Nghiem, P. Merkel cell carcinoma. Nat. Rev. Dis. Primers 2017, 3, 170–177. [Google Scholar] [CrossRef] [PubMed]
  16. Glazer, A.M.; Winkelmann, R.R.; Farberg, A.S.; Rigel, D.S. Analysis of trends in US melanoma incidence and mortality. JAMA Dermatol. 2016, 153, 225–226. [Google Scholar] [CrossRef] [Green Version]
  17. Lv, R.; Sun, Q. A Network Meta-Analysis of Non-Melanoma Skin Cancer (NMSC) Treatments: Efficacy and Safety Assessment. J. Cell. Biochem. 2017, 118, 3686–3695. [Google Scholar] [CrossRef]
  18. Lindelof, B.; Hedblad, M.-A. Accuracy in the clinical diagnosis and pattern of malignant melanoma at a dermatological clinic. J. Dermatol. 1994, 21, 461–464. [Google Scholar] [CrossRef] [PubMed]
  19. Morton, C.A.; Mackie, R.M. Clinical accuracy of the diagnosis of cutaneous malignant melanoma. Br. J. Dermatol. 1998, 138, 283–287. [Google Scholar] [CrossRef]
  20. Menzies, S.W.; Bischof, L.; Talbot, H.; Gutenev, A.; Avramidis, M.; Wong, L.; Lo, S.K.; Mackellar, G.; Skladnev, V.; McCarthy, W.; et al. The performance of SolarScan: An automated dermoscopy image analysis instrument for the diagnosis of primary melanoma. Arch. Dermatol. 2005, 141, 1388–1396. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Binder, M.; Schwarz, M.; Winkler, A.; Steiner, A.; Kaider, A.; Wolff, K.; Pehamberger, H. Epiluminescence microscopy: A useful tool for the diagnosis of pigmented skin lesions for formally trained dermatologists. Arch. Dermatol. 1995, 131, 286–291. [Google Scholar] [CrossRef]
  22. Pehamberger, H.; Binder, M.; Steiner, A.; Wolff, K. In vivo epiluminescence microscopy: Improvement of early diagnosis of melanoma. J. Investig. Dermatol. 1993, 100, 3. [Google Scholar] [CrossRef] [Green Version]
  23. Dhawan, A.P.; Gordon, R.; Rangayyan, R.M. Nevoscopy: Three-dimensional computed tomography of nevi and melanomas in situ by transillumination. IEEE Trans. Onmedical Imaging 1984, 3, 54–61. [Google Scholar] [CrossRef]
  24. Zouridakis, G.; Duvic, M.D.M.; Mullani, N.A. Transillumination Imaging for Early Skin Cancer Detection; Technol Report 2005; Biomedical Imaging Lab., Department of Computer Science, University of Houston: Houston, TX, USA, 2005. [Google Scholar]
  25. Vestergaard, M.E.; Macaskill, P.; Holt, P.E.; Menzies, S.W. Dermoscopy compared with naked eye examination for the diagnosis of primary melanoma: A meta-analysis of studies performed in a clinical setting. Br. J. Dermatol. 2008, 159, 669–676. [Google Scholar] [CrossRef]
  26. Carli, P.; Giorgi, V.D.; Crocetti, E.; Mannone, F.; Massi, D.; Chiarugi, A.; Giannotti, B. Improvement of malignant/benign ratio in excised melanocytic lesions in the “dermoscopy era”: A retrospective study 1997–2001. Br. J. Dermatol. 2004, 150, 687–692. [Google Scholar] [CrossRef]
  27. Carli, P.; Giorgi, V.D.; Chiarugi, A.; Nardini, P.; Weinstock, M.A.; Crocetti, E.; Stante, M.; Giannotti, B. Addition of dermoscopy to conventional naked-eye examination in melanoma screening: A randomized study. J. Am. Acad. Dermatol. 2004, 50, 683–689. [Google Scholar] [CrossRef] [PubMed]
  28. Mayer, J. Systematic review of the diagnostic accuracy of dermatoscopy in detecting malignant melanoma. Med. J. Aust. 1997, 167, 206–210. [Google Scholar] [CrossRef] [PubMed]
  29. Piccolo, D.; Ferrari, A.; Peris, K.; Daidone, R.; Ruggeri, B.; Chimenti, S. Dermoscopic diagnosis by a trained clinician vs. a clinician with minimal dermoscopy training vs. computeraided diagnosis of 341 pigmented skin lesions: A comparative study. Br. J. Dermatol. 2002, 147, 481–486. [Google Scholar] [CrossRef]
  30. Braun, R.P.; Rabinovitz, H.S.; Oliviero, M.; Kopf, A.W.; Saurat, J.-H. Dermoscopy of pigmented skin lesions. J. Am. Acad. Dermatol. 2005, 52, 109–121. [Google Scholar] [CrossRef]
  31. Kittler, H.; Pehamberger, H.; Wolff, K.; Binder, M. Diagnostic accuracy of dermoscopy. Lancet Oncol. 2002, 3, 159–165. [Google Scholar] [CrossRef]
  32. Whited, J.D.; Grichnik, J.M. Does this patient have a mole or a melanoma? J. Am. Med. Assoc. 1998, 279, 696–701. [Google Scholar] [CrossRef]
  33. Burroni, M.; Corona, R.; Dell’Eva, G.; Sera, F.; Bono, R.; Puddu, P.; Perotti, R.; Nobile, F.; Andreassi, L.; Rubegni, P. Melanoma computer-aided diagnosis: Reliability and feasibility study. Clin. Cancer Res. 2004, 10, 1881–1886. [Google Scholar] [CrossRef] [Green Version]
  34. Nami, N.; Giannini, E.; Burroni, M.; Fimiani, M.; Rubegni, P. Teledermatology: State-of-the-art and future perspectives. Expert Rev. Dermatol. 2014, 7, 1–3. [Google Scholar] [CrossRef] [Green Version]
  35. Fabbrocini, G.; Triassi, M.; Mauriello, M.C.; Torre, G.; Annunziata, M.C.; De Vita, V.; Pastore, F.; D’Arco, V.; Monfrecola, G. Epidemiology of skin cancer: Role of some environmental factors. Cancers 2010, 2, 1980–1989. [Google Scholar] [CrossRef] [Green Version]
  36. Haenssle, H.; Fink, C.; Schneiderbauer, R.; Toberer, F.; Buhl, T.; Blum, A.; Reader Study Level-I and Level-II Groups. Man against machine: Diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann. Oncol. 2018, 29, 1836–1842. [Google Scholar] [CrossRef]
  37. Argenziano, G.; Soyer, H.P. Dermoscopy of pigmented skin lesions: A valuable tool for early diagnosis of melanoma. Lancet Oncol. 2001, 2, 443–449. [Google Scholar] [CrossRef]
  38. Ali, A.R.A.; Deserno, T.M. A systematic review of automated melanoma detection in dermatoscopic images and its ground truth data. Proc. SPIE Int. Soc. Opt. Eng. 2012, 8318, 1–6. [Google Scholar] [CrossRef]
  39. Fabbrocini, G.; De Vita, V.; Pastore, F.; D’Arco, V.; Mazzella, C.; Annunziata, M.C.; Cacciapuoti, S.; Mauriello, M.C.; Monfrecola, A. Teledermatology: From prevention to diagnosis of nonmelanoma and melanoma skin cancer. Int. J. Telemed. Appl. 2011, 2011, 125762. [Google Scholar] [CrossRef]
  40. Foraker, R.; Kite, B.; Kelley, M.M.; Lai, A.M.; Roth, C.; Lopetegui, M.A.; Shoben, A.B.; Langan, M.; Rutledge, N.; Payne, P.R.O. EHR-based visualization tool: Adoption rates, satisfaction, and patient outcomes. EGEMS 2015, 3, 1159. [Google Scholar] [CrossRef] [Green Version]
  41. Fabbrocini, G.; Betta, G.; Di Leo, G.; Liguori, C.; Paolillo, A.; Pietrosanto, A.; Sommella, P.; Rescigno, O.; Cacciapuoti, S.; Pastore, F.; et al. Epiluminescence image processing for melanocytic skin lesion diagnosis based on 7-point check-list: A preliminary discussion on three parameters. Open Dermatol. J. 2010, 4, 110–115. [Google Scholar] [CrossRef] [Green Version]
  42. Hart, P.E.; Stork, D.G.; Duda, R.O. Pattern Classification, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2000. [Google Scholar]
  43. Oliveira, R.B.; Papa, J.P.; Pereira, A.S.; Tavares, J.M.R.S. Computational methods for pigmented skin lesion classification in images: Review and future trends. Neural Comput. Appl. 2016, 29, 613–636. [Google Scholar] [CrossRef] [Green Version]
  44. Cascinelli, N.; Ferrario, M.; Tonelli, T.; Leo, E. Apossible new tool for clinical diagnosis of melanoma: The computer. J. Am. Acad. Dermatol. 1987, 16, 361–367. [Google Scholar] [CrossRef]
  45. Hall, P.N.; Claridge, E.; Smith, J.D.M. Computer screening for early detection of melanoma—Is there a future? Br. J. Dermatol. 1995, 132, 325–338. [Google Scholar] [CrossRef]
  46. Cristofolini, M.; Bauer, P.; Boi, S.; Cristofolini, P.; Micciolo, R.; Sicher, M.C. Diagnosis of cutaneous melanoma: Accuracy of a computerized image analysis system(SkinView). Ski. Res. Technol. 1997, 3, 23–27. [Google Scholar] [CrossRef] [PubMed]
  47. Umbaugh, S.E. Computer Vision in Medicine: Color Metrics and Image Segmentation Methods for Skin Cancer Diagnosis; Electrical Engineering Department, University of Missouri: Rolla, MO, USA, 1990. [Google Scholar]
  48. Stanganelli, I.; Brucale, A.; Calori, L.; Gori, R.; Lovato, A.; Magi, S.; Kopf, B.; Bacchilega, R.; Rapisarda, V.; Testori, A.; et al. Computer-aided diagnosis of melanocytic lesions. Anticancer Res. 2005, 25, 4577–4582. [Google Scholar] [PubMed]
  49. Rubegni, P.; Cevenini, G.; Burroni, M.; Perotti, R.; Dell’Eva, G.; Sbano, P.; Miracco, C.; Luzi, P.; Tosi, P.; Barbini, P.; et al. Automated diagnosis of pigmented skin lesions. Int. J. Cancer 2002, 101, 576–580. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Sober, A.J.; Burstein, J.M. Computerized digital image analysis: An aid for melanoma diagnosis—Preliminary investigations and brief review. J. Dermatol. 1994, 21, 885–890. [Google Scholar] [CrossRef] [PubMed]
  51. Rajpara, S.M.; Botello, A.P.; Townend, J.; Ormerod, A.D. Systematic review of dermoscopy and digital dermoscopy/artificial intelligence for the diagnosis of melanoma. Br. J. Dermatol. 2009, 161, 591–604. [Google Scholar] [CrossRef] [PubMed]
  52. Rosado, B.; Menzies, S.; Harbauer, A.; Pehamberger, H.; Wolff, K.; Binder, M.; Kittler, H. Accuracy of computer diagnosis of melanoma: A quantitative meta-analysis. Arch. Dermatol. 2003, 139, 361–367. [Google Scholar] [CrossRef]
  53. Bauer, P.; Cristofolini, P.; Boi, S.; Burroni, M.; Dell’Eva, G.; Micciolo, R.; Cristofolini, M. Digital epiluminescence microscopy: Usefulness in the differential diagnosis of cutaneous pigmentary lesions. A statistical comparison between visual and computer inspection. Melanoma Res. 2000, 10, 345–349. [Google Scholar] [CrossRef]
  54. Maglogiannis, I.; Doukas, C.N. Overview of advanced computer vision systems for skin lesions characterization. IEEE Trans. Inf. Technol. Biomed. 2009, 13, 721–733. [Google Scholar] [CrossRef]
  55. Friedman, R.J.; Gutkowicz-Krusin, D.; Farber, M.J.; Warycha, M.; Schneider-Kels, L.; Papastathis, N.; Mihm, M.C.; Googe, P.; King, R.; Prieto, V.G.; et al. The diagnostic performance of expert dermoscopists vs a computervision system on small-diameter melanomas. Arch. Dermatol. 2008, 144, 476–482. [Google Scholar] [CrossRef] [Green Version]
  56. Blum, A.; Zalaudek, I.; Argenziano, G. Digital image analysis for diagnosis of skin tumors. Semin. Cutan. Med. Surgery 2008, 27, 11–15. [Google Scholar] [CrossRef] [PubMed]
  57. Maruthamuthu, M.K.; Raffiee, A.H.; Oliveira, D.M.D.; Ardekani, A.M.; Verma, M.S. Raman spectra-based deep learning: A tool to identify microbial contamination. MicrobiologyOpen 2020, 9, e1122. [Google Scholar] [CrossRef] [PubMed]
  58. Zhao, Z.; Wu, C.M.; Zhang, S.; He, F.; Liu, F.; Wang, B.; Huang, Y.; Shi, W.; Jian, D.; Xie, H.; et al. A Novel Convolutional Neural Network for the Diagnosis and Classification of Rosacea: Usability Study. Jmir Med. Inform. 2021, 9, e23415. [Google Scholar] [CrossRef]
  59. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; The PRISMA Group. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med. 2009, 6, E1000097. [Google Scholar] [CrossRef] [Green Version]
  60. Siegel, R.L.; Miller, K.D.; Jemal, A. Cancer statistics, 2018. CA Cancer J. Clin. 2018, 68, 7–30. [Google Scholar] [CrossRef] [PubMed]
  61. Giotis, I.; Molders, N.; Land, S.; Biehl, M.; Jonkman, M.F.; Petkov, N. MED-NODE: A computer-assisted melanoma diagnosis system using non-dermoscopic images. Expert Syst. Appl. 2018, 42, 6578–6585. [Google Scholar] [CrossRef]
  62. Dermatology Information System. Available online: http://www.dermis.net (accessed on 25 January 2016).
  63. DermQuest. Available online: http://www.dermquest.com (accessed on 25 January 2016).
  64. Mendonça, T.; Ferreira, P.M.; Marques, J.S.; Marcal, A.R.S.; Rozeira, J. PH2- A dermoscopic image database for research and benchmarking. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 5437–5440. [Google Scholar]
  65. Gutman, D.; Codella, N.C.F.; Emre, C.; Brian, H.; Michael, M.; Nabin, M.; Allan, H. Skin Lesion Analysis toward Melanoma Detection: A Challenge at the International Symposium on Biomedical Imaging (ISBI) 2016, hosted by the International Skin Imaging Collaboration (ISIC). arXiv 2016, arXiv:1605.01397. [Google Scholar]
  66. Codella, N.; Gutman, D.; Celebi, M.E.; Helba, D.; Marchetti, M.A.; Dusza, S.; Kalloo, A.; Liopyris, K.; Mishra, N.; Kittler, H.; et al. Skin Lesion Analysis Toward Melanoma Detection: A Challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), Hosted by the International Skin Imaging Collaboration (ISIC). arXiv 2017, arXiv:1710.05006. [Google Scholar]
  67. Codella, N.; Rotemberg, V.; Tschandl, P.; Celebi, M.E.; Dusza, S.; Gutman, D.; Helba, B.; Kalloo, A.; Liopyris, K.; Marchetti, M.; et al. Skin Lesion Analysis Toward Melanoma Detection 2018: A Challenge Hosted by the International Skin Imaging Collaboration (ISIC). arXiv 2018, arXiv:1902.03368. [Google Scholar]
  68. Tsch, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-sources dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161. [Google Scholar] [CrossRef]
  69. Combalia, M.; Codella, N.C.F.; Rotemberg, V.; Helba, B.; Vilaplana, V.; Reiter, O.; Halpern, A.C.; Puig, S.; Malvehy, J. BCN20000: Dermoscopic Lesions in the Wild. arXiv 2019, arXiv:1908.02288. [Google Scholar]
70. Ballerini, L.; Fisher, R.B.; Aldridge, B.; Rees, J. A Color and Texture Based Hierarchical K-NN Approach to the Classification of Non-melanoma Skin Lesions. In Color Medical Image Analysis; Celebi, M., Schaefer, G., Eds.; Lecture Notes in Computational Vision and Biomechanics; Springer: Dordrecht, The Netherlands, 2013; Volume 6. [Google Scholar]
  71. Lio, P.A.; Nghiem, P. Interactive atlas of dermoscopy. J. Am. Acad. Dermatol. 2004, 50, 807–808. [Google Scholar] [CrossRef]
  72. Rotemberg, V.; Kurtansky, N.; Betz-Stablein, B.; Caffery, L.; Chousakos, E.; Codella, N.; Combalia, M.; Dusza, S.; Guitera, P.; Gutman, D.; et al. A patient-centric dataset of images and metadata for identifying melanomas using clinical context. Sci. Data 2021, 8, 34. [Google Scholar] [CrossRef] [PubMed]
  73. Korotkov, K. Automatic Change Detection in Multiple Pigmented Skin Lesions. Ph.D. Thesis, Universitat de Girona, Girona, Spain, 2014. [Google Scholar]
  74. Kadry, S.; Taniar, D.; Damasevicius, R.; Rajinikanth, V.; Lawal, I.A. Extraction of abnormal skin lesion from dermoscopy image using VGG-SegNet. In Proceedings of the 2021 IEEE 7th International Conference on Bio Signals, Images and Instrumentation, ICBSII 2021, Chennai, India, 25–27 March 2021. [Google Scholar] [CrossRef]
  75. Mishra, N.K.; Celebi, M.E. An overview of melanoma detection in dermoscopy images using image processing and machine learning. arXiv 2016, arXiv:1601.07843. [Google Scholar]
  76. Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Classification of skin lesions using transfer learning and augmentation with Alex-net. PLoS ONE 2019, 14, e0217293. [Google Scholar] [CrossRef] [Green Version]
  77. Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Skin Cancer Classification using Deep Learning and Transfer Learning. In Proceedings of the 9th Cairo International Biomedical Engineering, Cairo, Egypt, 20–22 December 2018; pp. 90–93. [Google Scholar]
  78. Hosny, K.M.; Kassem, M.A.; Fouad, M.M. Skin Melanoma Classification Using Deep Convolutional Neural Networks. In Deep Learning in Computer Vision: Theories and Applications; CRC: Boca Raton, FL, USA, 2020; ISBN 9781138544420. [Google Scholar] [CrossRef]
  79. Glaister, J.; Wong, A.; Clausi, D.A. Segmentation of Skin Lesions From Digital Images Using Joint Statistical Texture Distinctiveness. IEEE Trans. Biomed. Eng. 2014, 61, 1220–1230. [Google Scholar] [CrossRef] [PubMed]
  80. Celebi, M.E.; Zornberg, A. Automated Quantification of Clinically Significant Colors in Dermoscopy Images and Its Application to Skin Lesion Classification. IEEE Syst. J. 2014, 8, 980–984. [Google Scholar] [CrossRef]
  81. Barata, C.; Ruela, M.; Francisco, M.; Mendonça, T.; Marques, J.S. Two Systems for the Detection of Melanomas in Dermoscopy Images Using Texture and Color Features. IEEE Syst. J. 2014, 8, 965–979. [Google Scholar] [CrossRef]
  82. Sáez, A.; Serrano, C.; Acha, B. Model-Based Classification Methods of Global Patterns in Dermoscopic Images. IEEE Trans. Med. Imaging 2014, 33, 1137–1147. [Google Scholar] [CrossRef] [PubMed]
  83. Abuzaghleh, O.; Barkana, B.D.; Faezipour, M. Automated skin lesion analysis based on color and shape geometry feature set for melanoma early detection and prevention. In Proceedings of the IEEE Long Island Systems, Applications and Technology (LISAT) Conference 2014, Farmingdale, NY, USA, 2 May 2014; pp. 1–6. [Google Scholar] [CrossRef]
  84. Abuzaghleh, O.; Barkana, B.D.; Faezipour, M. SKINcure: A real time image analysis system to aid in the malignant melanoma prevention and early detection. In Proceedings of the Southwest Symposium on Image Analysis and Interpretation, San Diego, CA, USA, 6–8 April 2014; pp. 85–88. [Google Scholar] [CrossRef]
  85. Surówka, G.; Ogorzałek, M. On optimal wavelet bases for classification of skin lesion images through ensemble learning. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), Beijing, China, 6–11 July 2014; pp. 165–170. [Google Scholar] [CrossRef]
  86. Lezoray, O.; Revenu, M.; Desvignes, M. Graph-based skin lesion segmentation of multispectral dermoscopic images. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 897–901. [Google Scholar] [CrossRef] [Green Version]
  87. Sheha, M.A.; Sharwy, A.; Mabrouk, M.S. Pigmented skin lesion diagnosis using geometric and chromatic features. In Proceedings of the Cairo International Biomedical Engineering Conference (CIBEC), Giza, Egypt, 11–13 December 2014; pp. 115–120. [Google Scholar] [CrossRef]
  88. Dhinagar, N.J.; Celenk, M. Analysis of regularity in skin pigmentation and vascularity by an optimized feature space for early cancer classification. In Proceedings of the 7th International Conference on Biomedical Engineering and Informatics, Dalian, China, 14–16 October 2014; pp. 709–713. [Google Scholar] [CrossRef]
  89. Haider, S.; Cho, D.; Amelard, R.; Wong, A.; Clausi, D.A. Enhanced classification of malignant melanoma lesions via the integration of physiological features from dermatological photographs. In Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 6455–6458. [Google Scholar] [CrossRef]
  90. Masood, A.; Al-Jumaily, A.A. Integrating soft and hard threshold selection algorithms for accurate segmentation of skin lesion. In Proceedings of the 2nd Middle East Conference on Biomedical Engineering, Doha, Qatar, 17–20 February 2014; pp. 83–86. [Google Scholar] [CrossRef]
  91. Abuzaghleh, O.; Barkana, B.D.; Faezipour, M. Noninvasive Real-Time Automated Skin Lesion Analysis System for Melanoma Early Detection and Prevention. IEEE J. Transl. Eng. Health Med. 2015, 3, 1–12. [Google Scholar] [CrossRef]
  92. Harmouche, R.; Subbanna, N.K.; Collins, D.L.; Arnold, D.L.; Arbel, T. Probabilistic Multiple Sclerosis Lesion Classification Based on Modeling Regional Intensity Variability and Local Neighborhood Information. IEEE Trans. Biomed. Eng. 2015, 62, 1281–1292. [Google Scholar] [CrossRef] [PubMed]
  93. Lu, C.; Ma, Z.; Mandal, M. Automated segmentation of the epidermis area in skin whole slide histopathological images. IET Image Process. 2015, 9, 735–742. [Google Scholar] [CrossRef] [Green Version]
  94. Jiji, G.W.; Raj, P.S.J.D. Content-based image retrieval in dermatology using intelligent technique. IET Image Process. 2015, 9, 306–317. [Google Scholar] [CrossRef]
  95. Barata, C.; Celebi, M.E.; Marques, J.S. Improving Dermoscopy Image Classification Using Color Constancy. IEEE J. Biomed. Health Inform. 2015, 19, 1146–1152. [Google Scholar] [CrossRef] [PubMed]
  96. Valavanis, I.; Maglogiannis, I.; Chatziioannou, A.A. Exploring Robust Diagnostic Signatures for Cutaneous Melanoma Utilizing Genetic and Imaging Data. IEEE J. Biomed. Health Inform. 2015, 19, 190–198. [Google Scholar] [CrossRef] [PubMed]
  97. Amelard, R.; Glaister, J.; Wong, A.; Clausi, D.A. High-Level Intuitive Features (HLIFs) for Intuitive Skin Lesion Description. IEEE Trans. Biomed. Eng. 2015, 62, 820–831. [Google Scholar] [CrossRef] [PubMed]
  98. Shimizu, K.; Iyatomi, H.; Celebi, M.E.; Norton, K.; Tanaka, M. Four-Class Classification of Skin Lesions With Task Decomposition Strategy. IEEE Trans. Biomed. Eng. 2015, 62, 274–283. [Google Scholar] [CrossRef] [PubMed]
  99. Schaefer, G.; Krawczyk, B.; Celebi, M.E.; Iyatomi, H. An ensemble classification approach for melanoma diagnosis. Memetic Comp. 2014, 6, 233–240. [Google Scholar] [CrossRef]
  100. Alencar, F.E.S.; Lopes, D.C.; Neto, F.M.M. Development of a System Classification of Images Dermoscopic for Mobile Devices. IEEE Lat. Am. Trans. 2016, 14, 325–330. [Google Scholar] [CrossRef]
  101. Kasmi, R.; Mokrani, K. Classification of malignant melanoma and benign skin lesions: Implementation of automatic ABCD rule. IET Image Process. 2016, 10, 448–455. [Google Scholar] [CrossRef]
  102. Sáez, A.; Sánchez-Monedero, J.; Gutiérrez, P.A.; Hervás-Martínez, C. Machine Learning Methods for Binary and Multiclass Classification of Melanoma Thickness From Dermoscopic Images. IEEE Trans. Med. Imaging 2016, 35, 1036–1045. [Google Scholar] [CrossRef]
  103. Ma, Z.; Tavares, J.M.R.S. A Novel Approach to Segment Skin Lesions in Dermoscopic Images Based on a Deformable Model. IEEE J. Biomed. Health Inform. 2016, 20, 615–623. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  104. Oliveira, R.B.; Marranghello, N.; Pereira, A.S.; Tavares, J.M.R.S. A computational approach for detecting pigmented skin lesions in macroscopic images. Expert Syst. Appl. 2016, 61, 53–63. [Google Scholar] [CrossRef] [Green Version]
  105. Inácio, D.F.; Célio, V.N.; Vilanova, G.D.; Conceição, M.M.; Fábio, G.; Minoro, A.J.; Tavares, P.M.; Landulfo, S. Paraconsistent analysis network applied in the treatment of Raman spectroscopy data to support medical diagnosis of skin cancer. Med. Biol. Eng. Comput. 2016, 54, 1453–1467. [Google Scholar]
  106. Pennisi, A.; Bloisi, D.D.; Nardi, D.A.; Giampetruzzi, R.; Mondino, C.; Facchiano, A. Skin lesion image segmentation using Delaunay Triangulation for melanoma detection. Comput. Med. Imaging Graph. 2016, 52, 89–103. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  107. Noroozi, N.; Zakerolhosseini, A. Differential diagnosis of squamous cell carcinoma in situ using skin histopathological images. Comput. Biol. Med. 2016, 70, 23–39. [Google Scholar] [CrossRef]
  108. Odeh, S.M.; Baareh, A.K.M. A comparison of classification methods as diagnostic system: A case study on skin lesions. Comput. Methods Programs Biomed. 2016, 137, 311–319. [Google Scholar] [CrossRef]
  109. Noroozi, N.; Zakerolhosseini, A. Computer assisted diagnosis of basal cell carcinoma using Z-transform features. J. Vis. Commun. Image Represent. 2016, 40 Pt A, 128–148. [Google Scholar] [CrossRef]
  110. Shrivastava, V.K.; Londhe, N.D.; Sonawane, R.S.; Suri, J.S. Computer-aided diagnosis of psoriasis skin images with HOS, texture and color features: A first comparative study of its kind. Comput. Methods Programs Biomed. 2016, 126, 98–109. [Google Scholar] [CrossRef]
  111. Kharazmi, P.; AlJasser, M.I.; Lui, H.; Wang, Z.J.; Lee, T.K. Automated Detection and Segmentation of Vascular Structures of Skin Lesions Seen in Dermoscopy, With an Application to Basal Cell Carcinoma Classification. IEEE J. Biomed. Health Inform. 2017, 21, 1675–1684. [Google Scholar] [CrossRef]
  112. Satheesha, T.Y.; Satyanarayana, D.; Prasad, M.N.G.; Dhruve, K.D. Melanoma Is Skin Deep: A 3D Reconstruction Technique for Computerized Dermoscopic Skin Lesion Classification. IEEE J. Transl. Eng. Health Med. 2017, 5, 1–17. [Google Scholar] [CrossRef]
  113. Xie, F.; Fan, H.; Li, Y.; Jiang, Z.; Meng, R.; Bovik, A. Melanoma Classification on Dermoscopy Images Using a Neural Network Ensemble Model. IEEE Trans. Med. Imaging 2017, 36, 849–858. [Google Scholar] [CrossRef] [PubMed]
  114. Sadri, A.R.; Azarianpour, S.; Zekri, M.; Celebi, M.E.; Sadri, S. WN-based approach to melanoma diagnosis from dermoscopy images. IET Image Process. 2017, 11, 475–482. [Google Scholar] [CrossRef]
  115. Hamed, K.; Khaki, S.A.; Mohammad, A.; Jahed, M.; Ali, H. Nonlinear Analysis of the Contour Boundary Irregularity of Skin Lesion Using Lyapunov Exponent and K-S Entropy. J. Med. Biol. Eng. 2017, 37, 409–419. [Google Scholar]
  116. Dalila, F.; Zohra, A.; Reda, K.; Hocine, C. Segmentation and classification of melanoma and benign skin lesions. Optik 2017, 140, 749–761. [Google Scholar] [CrossRef]
  117. Ma, Z.; Tavares, J.M.R.S. Effective features to classify skin lesions in dermoscopic images. Expert Syst. Appl. 2017, 84, 92–101. [Google Scholar] [CrossRef] [Green Version]
  118. Alfed, N.; Khelifi, F. Bagged textural and color features for melanoma skin cancer detection in dermoscopic and standard images. Expert Syst. Appl. 2017, 90, 101–110. [Google Scholar] [CrossRef] [Green Version]
  119. Oliveira, R.B.; Pereira, A.S.; Tavares, J.M.R.S. Skin lesion computational diagnosis of dermoscopic images: Ensemble models based on input feature manipulation. Comput. Methods Programs Biomed. 2017, 149, 43–53. [Google Scholar] [CrossRef] [Green Version]
  120. Przystalski, K.; Ogorzałek, M.J. Multispectral skin patterns analysis using fractal methods. Expert Syst. Appl. 2017, 88, 318–326. [Google Scholar] [CrossRef]
  121. Jaisakthi, S.M.; Mirunalini, P.; Aravindan, C. Automated skin lesion segmentation of dermoscopic images using GrabCut and k-means algorithms. IET Comput. Vis. 2018, 12, 1088–1095. [Google Scholar] [CrossRef]
  122. Do, T.; Hoang, T.; Pomponiu, V.; Zhou, Y.; Chen, Z.; Cheung, N.; Koh, D.; Tan, A.; Tan, S. Accessible Melanoma Detection Using Smartphones and Mobile Image Analysis. IEEE Trans. Multimed. 2018, 20, 2849–2864. [Google Scholar] [CrossRef] [Green Version]
  123. Adjed, F.; Gardezi, S.J.S.; Ababsa, F.; Faye, I.; Dass, S.C. Fusion of structural and textural features for melanoma recognition. IET Comput. Vis. 2018, 12, 185–195. [Google Scholar] [CrossRef]
  124. Hosseinzadeh, H. Automated skin lesion division utilizing Gabor filters based on shark smell optimizing method. Evol. Syst. 2018, 11, 1–10. [Google Scholar] [CrossRef]
  125. Akram, T.; Khan, M.A.; Sharif, M.; Yasmin, M. Skin lesion segmentation and recognition using multichannel saliency estimation and M-SVM on selected serially fused features. J. Ambient. Intell. Humaniz. Comput. 2018. [Google Scholar] [CrossRef]
  126. Jamil, U.; Khalid, S.; Akram, M.U.; Ahmad, A.; Jabbar, S. Melanocytic and nevus lesion detection from diseased dermoscopic images using fuzzy and wavelet techniques. Soft Comput. 2018, 22, 1577–1593. [Google Scholar] [CrossRef]
  127. Khan, M.; Akram, T.; Sharif, M.; Shahzad, A.; Aurangzeb, K.; Alhussein, M.; Haider, S.I.; Altamrah, A. An implementation of normal distribution based segmentation and entropy controlled features selection for skin lesion detection and classification. BMC Cancer 2018, 18, 638. [Google Scholar] [CrossRef] [PubMed]
  128. Tan, T.Y.; Zhang, L.; Neoh, S.C.; Lim, C.P. Intelligent skin cancer detection using enhanced particle swarm optimization. Knowl. Based Syst. 2018, 158, 118–135. [Google Scholar] [CrossRef]
  129. Tajeddin, N.Z.; Asl, B.M. Melanoma recognition in dermoscopy images using lesion’s peripheral region information. Comput. Methods Programs Biomed. 2018, 163, 143–153. [Google Scholar] [CrossRef]
  130. Filho, P.P.R.; Peixoto, S.A.; Nóbrega, R.V.M.; Hemanth, D.J.; Medeiros, A.G.; Sangaiah, A.K.; Albuquerque, V.H.C. Automatic histologically-closer classification of skin lesions. Comput. Med. Imaging Graph. 2018, 68, 40–54. [Google Scholar] [CrossRef]
  131. Peñaranda, F.; Naranjo, V.; Lloyd, G.; Kastl, L.; Kemper, B.; Schnekenburger, J.; Nallala, J.; Stone, N. Discrimination of skin cancer cells using Fourier transform infrared spectroscopy. Comput. Biol. Med. 2018, 100, 50–61. [Google Scholar] [CrossRef]
  132. Wahba, M.A.; Ashour, A.S.; Guo, Y.; Napoleon, S.A.; Elnaby, M.M.A. A novel cumulative level difference mean based GLDM and modified ABCD features ranked using eigenvector centrality approach for four skin lesion types classification. Comput. Methods Programs Biomed. 2018, 165, 163–174. [Google Scholar] [CrossRef] [PubMed]
  133. Zakeri, A.; Hokmabadi, A. Improvement in the diagnosis of melanoma and dysplastic lesions by introducing ABCD-PDT features and a hybrid classifier. Biocybern. Biomed. Eng. 2018, 38, 456–466. [Google Scholar] [CrossRef]
  134. Pathan, S.; Prabhu, K.G.; Siddalingaswamy, P.C. A methodological approach to classify typical and atypical pigment network patterns for melanoma diagnosis. Biomed. Signal Process. Control 2018, 44, 25–37. [Google Scholar] [CrossRef]
  135. Li, X.; Yang, S.; Fan, R.; Yu, X.; Chen, D.G. Discrimination of soft tissues using laser-induced breakdown spectroscopy in combination with k nearest neighbors (kNN) and support vector machine (SVM) classifiers. Opt. Laser Technol. 2018, 102, 233–239. [Google Scholar] [CrossRef]
  136. Chatterjee, S.; Dey, D.; Munshi, S. Optimal selection of features using wavelet fractal descriptors and automatic correlation bias reduction for classifying skin lesions. Biomed. Signal Process. Control 2018, 40, 252–262. [Google Scholar] [CrossRef]
  137. Khan, M.Q.; Hussain, A.; Rehman, S.U.; Khan, U.; Maqsood, M.; Mehmood, K.; Khan, M.A. Classification of Melanoma and Nevus in Digital Images for Diagnosis of Skin Cancer. IEEE Access 2019, 7, 90132–90144. [Google Scholar] [CrossRef]
  138. Madooei, A.; Drew, M.S.; Hajimirsadeghi, H. Learning to Detect Blue–White Structures in Dermoscopy Images With Weak Supervision. IEEE J. Biomed. Health Inform. 2019, 23, 779–786. [Google Scholar] [CrossRef] [Green Version]
  139. Sáez, A.; Acha, B.; Serrano, A.; Serrano, C. Statistical Detection of Colors in Dermoscopic Images With a Texton-Based Estimation of Probabilities. IEEE J. Biomed. Health Inform. 2019, 23, 560–569. [Google Scholar] [CrossRef]
  140. Navarro, F.; Escudero-Viñolo, M.; Bescós, J. Accurate Segmentation and Registration of Skin Lesion Images to Evaluate Lesion Change. IEEE J. Biomed. Health Inform. 2019, 23, 501–508. [Google Scholar] [CrossRef]
  141. Riaz, F.; Naeem, S.; Nawaz, R.; Coimbra, M. Active Contours Based Segmentation and Lesion Periphery Analysis for Characterization of Skin Lesions in Dermoscopy Images. IEEE J. Biomed. Health Inform. 2019, 23, 489–500. [Google Scholar] [CrossRef]
  142. Mahmouei, S.S.; Aldeen, M.; Stoecker, W.V.; Garnavi, R. Biologically Inspired QuadTree Color Detection in Dermoscopy Images of Melanoma. IEEE J. Biomed. Health Inform. 2019, 23, 570–577. [Google Scholar] [CrossRef]
  143. Murugan, A.; Nair, S.H.; Kumar, K.P.S. Detection of Skin Cancer Using SVM, Random Forest and kNN Classifiers. J. Med. Syst. 2019, 43, 269. [Google Scholar] [CrossRef]
  144. Khalid, S.; Jamil, U.; Saleem, K.; Akram, M.U.; Manzoor, W.; Ahmed, W.; Sohail, A. Segmentation of skin lesion using Cohen–Daubechies–Feauveau biorthogonal wavelet. SpringerPlus 2016, 5, 1603. [Google Scholar] [CrossRef] [Green Version]
  145. Majumder, S.; Ullah, M.A. Feature extraction from dermoscopy images for melanoma diagnosis. SN Appl. Sci. 2019, 1, 753. [Google Scholar] [CrossRef] [Green Version]
  146. Chatterjee, S.; Dey, D.; Munshi, S. Integration of morphological preprocessing and fractal based feature extraction with recursive feature elimination for skin lesion types classification. Comput. Methods Programs Biomed. 2019, 178, 201–218. [Google Scholar] [CrossRef]
  147. Chatterjee, S.; Dey, D.; Munshi, S.; Gorai, S. Extraction of features from cross correlation in space and frequency domains for classification of skin lesions. Biomed. Signal Process. Control 2019, 53, 101581. [Google Scholar] [CrossRef]
  148. Upadhyay, P.K.; Chandra, S. An improved bag of dense features for skin lesion recognition. J. King Saud Univ. Comput. Inf. Sci. 2019. [Google Scholar] [CrossRef]
  149. Pathan, S.; Prabhu, K.G.; Siddalingaswamy, P.C. Automated detection of melanocytes related pigmented skin lesions: A clinical framework. Biomed. Signal Process. Control 2019, 51, 59–72. [Google Scholar] [CrossRef]
  150. Garcia-Arroyo, J.L.; Garcia-Zapirain, B. Segmentation of skin lesions in dermoscopy images using fuzzy classification of pixels and histogram thresholding. Comput. Methods Programs Biomed. 2019, 168, 11–19. [Google Scholar] [CrossRef]
  151. Hu, K.; Niu, X.; Liu, S.; Zhang, Y.; Cao, C.; Xiao, F.; Yang, W.; Gao, X. Classification of melanoma based on feature similarity measurement for codebook learning in the bag-of-features model. Biomed. Signal Process. Control 2019, 51, 200–209. [Google Scholar] [CrossRef]
  152. Moradi, N.; Mahdavi-Amiri, N. Kernel sparse representation based model for skin lesions segmentation and classification. Comput. Methods Programs Biomed. 2019, 182, 105038. [Google Scholar] [CrossRef]
  153. Pereira, P.M.M.; Fonseca-Pinto, R.; Paiva, R.P.; Assuncao, P.A.A.; Tavora, L.M.N.; Thomaz, L.A.; Faria, S.M.M. Skin lesion classification enhancement using border-line features—The melanoma vs nevus problem. Biomed. Signal Process. Control 2020, 57, 2020. [Google Scholar] [CrossRef]
  154. Kawahara, J.; Hamarneh, G. Multi-resolution-Tract CNN with Hybrid Pretrained and Skin-Lesion Trained Layers. Mach. Learn. Med. Imaging 2016, 10019, 164–171. [Google Scholar]
  155. Yu, L.; Chen, H.; Dou, Q.; Qin, J.; Heng, P. Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks. IEEE Trans. Med. Imaging 2017, 36, 994–1004. [Google Scholar] [CrossRef] [PubMed]
156. Codella, N.C.F.; Nguyen, Q.-B.; Pankanti, S.; Gutman, D.A.; Helba, B.; Halpern, A.C.; Smith, J.R. Deep learning ensembles for melanoma recognition in dermoscopy images. IBM J. Res. Dev. 2017, 61, 5–15. [Google Scholar]
  157. Bozorgtabar, B.; Sedai, S.; Roy, P.K.; Garnavi, R. Skin lesion segmentation using deep convolution networks guided by local unsupervised learning. IBM J. Res. Dev. 2017, 61, 6–14. [Google Scholar] [CrossRef]
  158. Yuan, Y.; Chao, M.; Lo, Y. Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance. IEEE Trans. Med. Imaging 2017, 36, 1876–1886. [Google Scholar] [CrossRef]
  159. Sultana, N.N.; Mandal, B.; Puhan, N.B. Deep residual network with regularised fisher framework for detection of melanoma. IET Comput. Vis. 2018, 12, 1096–1104. [Google Scholar] [CrossRef] [Green Version]
  160. Rundo, F.; Conoci, S.; Banna, G.L.; Ortis, A.; Stanco, F.; Battiato, S. Evaluation of Levenberg–Marquardt neural networks and stacked autoencoders clustering for skin lesion analysis, screening and follow-up. IET Comput. Vis. 2018, 12, 957–962. [Google Scholar] [CrossRef]
  161. Creswell, A.; Pouplin, A.; Bharath, A.A. Denoising adversarial autoencoders: Classifying skin lesions using limited labelled training data. IET Comput. Vis. 2018, 12, 1105–1111. [Google Scholar] [CrossRef] [Green Version]
  162. Harangi, B. Skin lesion classification with ensembles of deep convolutional neural networks. J. Biomed. Inform. 2018, 86, 25–32. [Google Scholar] [CrossRef]
  163. Guo, S.; Yang, Z. Multi-Channel-ResNet: An integration framework towards skin lesion analysis. Inform. Med. Unlocked 2018, 12, 67–74. [Google Scholar] [CrossRef]
  164. Sánchez-Monedero, J.; Pérez-Ortiz, M.; Sáez, A.; Gutiérrez, P.A.; Hervás-Martínez, C. Partial order label decomposition approaches for melanoma diagnosis. Appl. Soft Comput. 2018, 64, 341–355. [Google Scholar] [CrossRef] [Green Version]
  165. Hagerty, J.R.; Stanley, R.J.; Almubarak, H.A.; Lama, N.; Kasmi, R.; Guo, P.; Drugge, R.J.; Rabinovitz, H.S.; Oliviero, M.; Stoecker, W.V. Deep Learning and Handcrafted Method Fusion: Higher Diagnostic Accuracy for Melanoma Dermoscopy Images. IEEE J. Biomed. Health Inform. 2019, 23, 1385–1391. [Google Scholar] [CrossRef]
  166. Połap, D. Analysis of Skin Marks Through the Use of Intelligent Things. IEEE Access 2019, 7, 149355–149363. [Google Scholar] [CrossRef]
  167. Połap, D.; Winnicka, A.; Serwata, K.; Kęsik, K.; Woźniak, M. An intelligent system for monitoring skin diseases. Sensors 2018, 18, 2552. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  168. Sarkar, R.; Chatterjee, C.C.; Hazra, A. Diagnosis of melanoma from dermoscopic images using a deep depthwise separable residual convolutional network. IET Image Process. 2019, 13, 2130–2142. [Google Scholar] [CrossRef]
  169. Zhang, J.; Xie, Y.; Xia, Y.; Shen, C. Attention Residual Learning for Skin Lesion Classification. IEEE Trans. Med. Imaging 2019, 38, 2092–2103. [Google Scholar] [CrossRef] [PubMed]
  170. Albahar, M.A. Skin Lesion Classification Using Convolutional Neural Network With Novel Regularizer. IEEE Access 2019, 7, 38306–38313. [Google Scholar] [CrossRef]
  171. González-Díaz, I. DermaKNet: Incorporating the Knowledge of Dermatologists to Convolutional Neural Networks for Skin Lesion Diagnosis. IEEE J. Biomed. Health Inform. 2019, 23, 547–559. [Google Scholar] [CrossRef] [PubMed]
  172. Kawahara, J.; Daneshvar, S.; Argenziano, G.; Hamarneh, G. Seven-Point Checklist and Skin Lesion Classification Using Multitask Multimodal Neural Nets. IEEE J. Biomed. Health Inform. 2019, 23, 538–546. [Google Scholar] [CrossRef]
  173. Yu, Z.; Jiang, X.; Zhou, F.; Qin, J.; Ni, D.; Chen, S.; Lei, B.; Wang, T. Melanoma Recognition in Dermoscopy Images via Aggregated Deep Convolutional Features. IEEE Trans. Biomed. Eng. 2019, 66, 1006–1016. [Google Scholar] [CrossRef] [PubMed]
  174. Dorj, U.; Lee, K.; Choi, J.; Lee, M. The skin cancer classification using deep convolutional neural network. Multimed. Tools Appl. 2018, 77, 9909–9924. [Google Scholar] [CrossRef]
  175. Gavrilov, D.A.; Melerzanov, A.V.; Shchelkunov, N.N.; Zakirov, E.I. Use of Neural Network-Based Deep Learning Techniques for the Diagnostics of Skin Diseases. Biomed. Eng. 2019, 52, 348–352. [Google Scholar] [CrossRef]
  176. Chen, M.; Zhou, P.; Wu, D.; Hu, L.; Hassan, M.M.; Alamri, A. AI-Skin: Skin disease recognition based on self-learning and wide data collection through a closed-loop framework. Inf. Fusion 2020, 54, 1–9. [Google Scholar] [CrossRef] [Green Version]
  177. Mahbod, A.; Schaefer, G.; Ellinger, I.; Ecker, R.; Pitiot, A.; Wang, C. Fusing fine-tuned deep features for skin lesion classification. Comput. Med. Imaging Graph. 2019, 71, 19–29. [Google Scholar] [CrossRef] [Green Version]
  178. Brinker, T.J.; Hekler, A.; Enk, A.H.; Klode, J.; Hauschild, A.; Berking, C.; Schilling, B.; Haferkamp, S.; Schadendorf, D.; Holland-Letz, T.; et al. Deep learning outperformed 136 of 157 dermatologists in a head-to-head dermoscopic melanoma image classification task. Eur. J. Cancer 2019, 113, 47–54. [Google Scholar] [CrossRef] [Green Version]
  179. Tan, T.Y.; Zhang, L.; Lim, C.P. Adaptive melanoma diagnosis using evolving clustering, ensemble and deep neural networks. Knowl. Based Syst. 2020, 187, 104807. [Google Scholar] [CrossRef]
  180. Khan, M.A.; Sharif, M.; Akram, T.; Damaševičius, R.; Maskeliūnas, R. Skin lesion segmentation and multiclass classification using deep learning features and improved moth flame optimization. Diagnostics 2021, 11, 811. [Google Scholar] [CrossRef]
  181. Tschandl, P.; Sinz, C.; Kittler, H. Domain-specific classification-pretrained fully convolutional network encoders for skin lesion segmentation. Comput. Biol. Med. 2019, 104, 111–116. [Google Scholar] [CrossRef]
  182. Vasconcelos, F.F.X.; Medeiros, A.G.; Peixoto, S.A.; Filho, P.P.R. Automatic skin lesions segmentation based on a new morphological approach via geodesic active contour. Cogn. Syst. Res. 2019, 55, 44–59. [Google Scholar] [CrossRef]
  183. Burlina, P.M.; Joshi, N.J.; Ng, E.; Billings, S.D.; Rebman, A.W.; Aucott, J.N. Automated detection of erythema migrans and other confounding skin lesions via deep learning. Comput. Biol. Med. 2019, 105, 151–156. [Google Scholar] [CrossRef] [PubMed]
  184. Maron, R.C.; Weichenthal, M.; Utikal, J.S.; Hekler, A.; Berking, C.; Hauschild, A.; Enk, A.H.; Haferkamp, S.; Klode, J.; Schadendorf, D.; et al. Systematic outperformance of 112 dermatologists in multiclass skin cancer image classification by convolutional neural networks. Eur. J. Cancer 2019, 119, 57–65. [Google Scholar] [CrossRef] [Green Version]
  185. Goyal, M.; Oakley, A.; Bansal, P.; Dancey, D.; Yap, M.H. Skin Lesion Segmentation in Dermoscopic Images with Ensemble Deep Learning Methods. IEEE Access 2020, 8, 4171–4181. [Google Scholar] [CrossRef]
  186. Albert, B.A. Deep Learning from Limited Training Data: Novel Segmentation and Ensemble Algorithms Applied to Automatic Melanoma Diagnosis. IEEE Access 2020, 8, 31254–31269. [Google Scholar] [CrossRef]
  187. Ahmad, B.; Usama, M.; Huang, C.; Hwang, K.; Hossain, M.S.; Muhammad, G. Discriminative Feature Learning for Skin Disease Classification Using Deep Convolutional Neural Network. IEEE Access 2020, 8, 39025–39033. [Google Scholar] [CrossRef]
  188. Kwasigroch, A.; Grochowski, M.; Mikołajczyk, A. Neural Architecture Search for Skin Lesion Classification. IEEE Access 2020, 8, 9061–9071. [Google Scholar] [CrossRef]
  189. Adegun, A.A.; Viriri, S. Deep Learning-Based System for Automatic Melanoma Detection. IEEE Access 2020, 8, 7160–7172. [Google Scholar] [CrossRef]
  190. Song, L.; Lin, J.P.; Wang, Z.J.; Wang, H. An End-to-end Multi-task Deep Learning Framework for Skin Lesion Analysis. IEEE J. Biomed. Health Inform. 2020. [Google Scholar] [CrossRef]
  191. Wei, L.; Ding, K.; Hu, H. Automatic Skin Cancer Detection in Dermoscopy Images Based on Ensemble Lightweight Deep Learning Network. IEEE Access 2020, 8, 99633–99647. [Google Scholar] [CrossRef]
  192. Gong, A.; Yao, X.; Lin, W. Dermoscopy Image Classification Based on StyleGANs and Decision Fusion. IEEE Access 2020, 8, 70640–70650. [Google Scholar] [CrossRef]
  193. Nasiri, S.; Helsper, J.; Jung, M.; Fathi, M. DePicT Melanoma Deep-CLASS: A deep convolutional neural networks approach to classify skin lesion images. BMC Bioinform. 2020, 21, 84. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  194. Öztürk, S.; Özkaya, U. Skin Lesion Segmentation with Improved Convolutional Neural Network. J. Digit. Imaging 2020, 33, 958–970. [Google Scholar] [CrossRef] [PubMed]
  195. Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Skin Melanoma Classification Using ROI and Data Augmentation with Deep Convolutional Neural Networks. Multimed. Tools Appl. 2020, 79, 24029–24055. [Google Scholar] [CrossRef]
196. Amin, J.; Sharif, A.; Gul, N.; Anjum, M.A.; Nisar, M.W.; Azam, F.; Bukhari, S.A.C. Integrated design of deep features fusion for localization and classification of skin cancer. Pattern Recognit. Lett. 2020, 131, 63–70. [Google Scholar]
  197. Mahbod, A.; Schaefer, G.; Wang, C.; Dorffner, G.; Ecker, R.; Ellinger, I. Transfer learning using a multi-scale and multi-network ensemble for skin lesion classification. Comput. Methods Programs Biomed. 2020, 193, 105475. [Google Scholar] [CrossRef]
  198. Hameed, N.; Shabut, A.M.; Ghosh, M.K.; Hossain, M.A. Multi-class multi-level classification algorithm for skin lesions classification using machine learning techniques. Expert Syst. Appl. 2020, 141, 112961. [Google Scholar] [CrossRef]
  199. Zhang, N.; Cai, Y.; Wang, Y.; Tian, Y.; Wang, X.; Badami, B. Skin cancer diagnosis based on optimized convolutional neural network. Artif. Intell. Med. 2020, 102, 101756. [Google Scholar] [CrossRef]
  200. Hasan, K.; Dahal, L.; Samarakoon, P.N.; Tushar, F.I.; Martí, R. DSNet: Automatic dermoscopic skin lesion segmentation. Comput. Biol. Med. 2020, 120, 103738. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  201. Al-masni, M.A.; Kim, D.; Kim, T. Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification. Comput. Methods Programs Biomed. 2020, 190, 105351. [Google Scholar] [CrossRef]
  202. Pour, M.P.; Seker, H. Transform domain representation-driven convolutional neural networks for skin lesion segmentation. Expert Syst. Appl. 2020, 144, 113129. [Google Scholar] [CrossRef]
  203. Abayomi-Alli, O.O.; Damaševičius, R.; Misra, S.; Maskeliūnas, R.; Abayomi-Alli, A. Malignant skin melanoma detection using image augmentation by oversampling in non-linear lower-dimensional embedding manifold. Turk. J. Elec. Eng. Comp. Sci. 2021, in press. [Google Scholar]
  204. Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Classification of Skin Lesions into Seven Classes Using Transfer Learning with AlexNet. J. Digit. Imaging 2020, 33, 1325–1334. [Google Scholar] [CrossRef] [PubMed]
  205. Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Skin Lesions Classification into Eight Classes for ISIC 2019 Using Deep Convolutional Neural Network and Transfer learning. IEEE Access 2020, 8, 114822–114832. [Google Scholar]
  206. Ntoutsi, E.; Fafalios, P.; Gadiraju, U.; Iosifidis, V.; Nejdl, W.; Vidal, M.; Ruggieri, S.; Turini, F.; Papadopoulos, S.; Krasanakis, E.; et al. Bias in data-driven artificial intelligence systems—An introductory survey. Wires Data Min. Knowl. Discov. 2020, 10, 3. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Different kinds of skin cancer classified by the Skin Cancer Foundation [4].
Figure 2. Study selection using the PRISMA flow diagram.
Figure 3. Sample images from the PH2 database: common nevi (first row), atypical nevi (second row), and melanomas (third row). https://www.fc.up.pt/addi/ph2%20database.html, accessed on 15 April 2021.
Figure 4. Examples of different skin lesions from ISIC 2017: (a) melanoma, (b) nevus, and (c) seborrheic keratosis.
Figure 5. Examples of different skin lesions from ISIC 2019.
Figure 6. Number of classes used for skin disease recognition.
Table 1. Summary of all the discussed datasets.

| Dataset | Nevus or Atypical Nevus | Common Nevus | Melanoma | Seborrheic Keratosis | Basal Cell Carcinoma | Dermatofibroma | Actinic Keratosis | Vascular Lesion or Hemangioma | Squamous Cell Carcinoma | Intraepithelial Carcinoma | Pyogenic Granuloma | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MedNode | 100 | - | 70 | - | - | - | - | - | - | - | - | 170 |
| Dermis | 26 | - | 43 | - | - | - | - | - | - | - | - | 69 |
| DermQuest | 61 | - | 76 | - | - | - | - | - | - | - | - | 137 |
| PH2 | 80 | 80 | 40 | - | - | - | - | - | - | - | - | 200 |
| ISIC 2016 | 726 | - | 173 | - | - | - | - | - | - | - | - | 899 |
| ISIC 2017 | 1372 | - | 374 | 254 | - | - | - | - | - | - | - | 2000 |
| ISIC 2018 | 6705 | - | 1113 | 1099 | 514 | 115 | 327 | 142 | - | - | - | 10,015 |
| ISIC 2019 | 12,875 | - | 4522 | 2624 | 3323 | 239 | 867 | 253 | 628 | - | - | 25,331 |
| Dermofit | 331 | - | 76 | 257 | 239 | 65 | 45 | 97 | 88 | 78 | 24 | 1300 |
| EDRA | 560 | 55 | 196 | 45 | 42 | 20 | - | 29 | - | 64 | - | 1011 |
| ISIC 2020 | 46 | 5193 | 584 | 135 | 7 | - | 37 | - | - | - | - | 6002 |
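The class counts in Table 1 make the imbalance problem noted in this review explicit. As an illustration, the melanoma share and the majority-to-minority imbalance ratio can be computed directly from the reported counts (a minimal pure-Python sketch; the numbers are copied from the table, and only classes with reported counts are included):

```python
# Class counts from Table 1 (classes with reported counts only).
datasets = {
    "PH2":       {"atypical nevus": 80, "common nevus": 80, "melanoma": 40},
    "ISIC 2017": {"nevus": 1372, "melanoma": 374, "seborrheic keratosis": 254},
    "ISIC 2018": {"nevus": 6705, "melanoma": 1113, "seborrheic keratosis": 1099,
                  "basal cell carcinoma": 514, "dermatofibroma": 115,
                  "actinic keratosis": 327, "vascular lesion": 142},
    "ISIC 2019": {"nevus": 12875, "melanoma": 4522, "seborrheic keratosis": 2624,
                  "basal cell carcinoma": 3323, "dermatofibroma": 239,
                  "actinic keratosis": 867, "vascular lesion": 253,
                  "squamous cell carcinoma": 628},
}

def melanoma_share(counts):
    """Fraction of images labelled melanoma."""
    return counts["melanoma"] / sum(counts.values())

def imbalance_ratio(counts):
    """Largest class count divided by smallest class count."""
    return max(counts.values()) / min(counts.values())

for name, counts in datasets.items():
    print(f"{name}: melanoma share {melanoma_share(counts):.1%}, "
          f"imbalance ratio {imbalance_ratio(counts):.1f}")
```

For ISIC 2018, for example, the largest class (nevus, 6705) outnumbers the smallest (dermatofibroma, 115) by a factor of about 58, which is why so many of the reviewed methods rely on augmentation and oversampling.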
Table 2. A comparison between state-of-the-art methods and novel methods. In the Dataset column, the number of datasets is followed by the type code and, in parentheses, whether the dataset is freely available (Y/N).

| Reference | Simple | Dataset (No., Type, Free) | Colored Images | Enhancement | Segmentation | Contribution | Achieved | Methods and Tools | No. of Classes | Accuracy (%) | Sensitivity (%) | Specificity (%) | Precision (%) | Dice (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| [100] | Y | 1 D (Y) | N | Y | Y | Mobile assessment | N | ABCD, MLP | 2 | 88 | 66 | 93 | N/A | N/A |
| [101] | N | 1 A (N) | N | Y | Y | Classify lesion atlas | Y | Gabor filter, ABCD | 2 | 94 | 91.25 | 95.83 | N/A | N/A |
| [102] | N | 1 A (N) | Y | Y | Y | Classify lesion atlas based on lesion thickness | Y | Logistic regression with ANN | 2 | 64.4 | 55.2 | N/A | N/A | N/A |
| [103] | N | 1 A (N); 1 D (Y) | Y | Y | Y | Improve classification using a novel segmentation method | Y | Median filter | 2 | N/A | N/A | N/A | N/A | N/A |
| [104] | N | 3 A (N); 1 D (Y) | Y | Y | Y | Segment and classify pigmented lesions | Y | Anisotropic diffusion, Chan–Vese, ABCD, SVM | 2 | 79.01 | N/A | N/A | 80 | N/A |
| [105] | N | 1 RS (N) | - | - | - | Classification | Y | Paraconsistent analysis network | 3 | 93.79 | N/A | N/A | N/A | N/A |
| [106] | Y | 1 D (Y) | N | Y | Y | Segmentation | Y | Delaunay triangulation | 2 | 75.91 | 67.46 | 95.93 | N/A | N/A |
| [107] | N | 1 H (N) | Y | N | Y | Segmentation and classification | Y | Granular layer, intensity profiles | 2 | 92.1 | N/A | 97.6 | 88.1 | N/A |
| [108] | Y | 1 F (N) | - | - | - | Comparison of four classification methods | Y | KNN with sequential scanning selection, KNN with GA, ANN with GA, adaptive neuro-fuzzy inference | 2 | 94 | N/A | N/A | N/A | N/A |
| [109] | Y | 1 H (N) | Y | N | Y | Classification of lesions with and without segmentation | Y | Z-transforms | 2 | 85.18 | 91.66 | 80 | 78.57 | N/A |
| [110] | Y | 1 JPEG (N) | Y | N | N | Classification of psoriasis | Y | PCA, SVM | 2 | 100 | 100 | 100 | N/A | N/A |
| [111] | Y | 3 A (N) | N | N | Y | Segmentation | Y | k-means | 2 | N/A | N/A | N/A | N/A | N/A |
| [112] | Y | 1 A (N); 1 D (Y) | Y | N | Y | Classification of skin cancer based on lesion depth using 3D reconstruction | Y | Adaptive snake, stereo vision, structure from motion, depth from focus | 8; 3 | 86 | 98 | 99 | N/A | N/A |
| [113] | N | 2 D (N) | N | N | Y | Classification of melanoma | Y | Self-generated neural network, fuzzy neural networks with backpropagation (BP) | 2 | 94.17 | 95 | 93.75 | N/A | N/A |
| [114] | N | 1 JPEG (N) | Y | Y | Y | Segmentation and classification | Y | Fixed grid wavelet network, D-optimality orthogonal matching pursuit | 2 | 91.82 | 92.61 | 91 | N/A | N/A |
| [115] | N | 1 SI (N); 1 D (Y) | N | N | Y | Classification of regular vs. irregular lesion boundaries | Y | 1D time series for Lyapunov exponent and Kolmogorov–Sinai entropy | 2 | 95 | 100 | 92.5 | 86.5 | N/A |
| [116] | Y | 2 D (Y) | Y | N | Y | Segmentation and classification of lesions into melanoma and benign | Y | Ant colony, KNN, ANN | 2 | N/A | N/A | N/A | N/A | N/A |
| [117] | N | 1 A (N); 2 D (Y) | Y | N | Y | Segmentation, feature combination, and classification | Y | ABCD, SVM | 2 | 90 | 72.5 | 94.4 | N/A | N/A |
| [118] | N | 2 D (N) | N | Y | Y | Codebook generated from a bag of features | Y | Histogram of gradients, histogram of lines, Zernike moments | 2 | 92.96 | 96.04 | 84.78 | N/A | N/A |
| [119] | N | 1 D (Y) | Y | N | Y | Classification using feature subset selection | Y | Optimum-path forest integrated with majority voting | 2 | 94.3 | 91.8 | 96.7 | N/A | N/A |
| [120] | N | 1 SIAscope (N) | N | N | Y | Analysis of multispectral skin lesions | Y | Box-counting dimension and lacunarity, Hunter score pattern detection, RBF kernel, SVM | 2 | 97 | 59.6 | 97.8 | N/A | N/A |
| [121] | N | 2 D (Y) | Y | Y | Y | Automatic segmentation of skin lesions | Y | GrabCut algorithm, k-means | 2 | 96.04 | 89 | 98.79 | N/A | 91.39 |
| [122] | Y | 1 CCD (N) | Y | N | Y | Classification using a smartphone | Y | Otsu's method, minimum spanning tree, color triangle | 2 | 87.98 | 89.09 | 86.87 | N/A | N/A |
| [123] | Y | 1 D (Y) | Y | Y | N | Enhancement and fusion of lesion features for classification | Y | Wavelet transform, curvelet transform, local binary pattern, SVM | 2 | 86.17 | 78.93 | 93.25 | N/A | N/A |
| [124] | N | - | N | Y | Y | Extract texture characteristics for lesion classification | Y | Gabor filters based on shark smell optimization, k-means | - | N/A | N/A | N/A | N/A | N/A |
| [125] | N | 3 D (Y) | N | Y | Y | Segmentation and recognition based on fused features | Y | ABCD, fuzzy C-means, pair threshold binary decomposition, HOG, linear discriminant analysis, linear regression, complex tree, W-KNN, naive Bayes, ensemble boosted tree, ensemble subspace discriminant analysis | 2; 3 | 99 | 98.5 | 99 | 98.5 | N/A |
| [126] | N | 1 D (Y) | Y | Y | Y | Segmentation with several classifiers | Y | 2D Gabor wavelet, Otsu's method, median filtering, morphological operations, m-mediod classifier, SVM, Gaussian mixture modeling | 2 | 97.26 | 96.48 | 96.9 | N/A | N/A |
| [127] | N | 3 D (Y) | Y | Y | Y | Classify selected features after parallel feature fusion and segmentation | Y | Global maxima/minima, uniform and normal distributions, mean, mean deviation, HOG, Haralick method, M-SVM | 2 | 93.2 | 93 | 97 | 93.5 | N/A |
| [128] | N | 3 D (Y/N) | N | Y | Y | Optimize and classify lesion features | Y | Particle swarm optimization, KNN, SVM, decision tree | 2 | N/A | N/A | N/A | N/A | N/A |
| [129] | N | 1 D (Y) | Y | Y | Y | Highly discriminative features for melanoma classification | Y | Histogram correction, Otsu's method, corner detection, GLCM features, Daugman's rubber sheet model, RUSBoost, linear SVM | 2 | 95 | 95 | 95 | N/A | 92 |
| [130] | N | 3 D (Y) | N | Y | Y | Classify melanoma from the main frequencies of dermoscopic images | Y | ABCD, Hu moments, GLCM, structural co-occurrence matrix, MLP, LS-SVM, minimal learning machine | 2 | 89.93 | 92.15 | 89.9 | N/A | 91.05 |
| [131] | N | 1 I (N) | - | Y | Y | Differentiate infrared spectroscopy of skin lesions | Y | Fourier-transform infrared, morphological information, PCA | 2 | 85 | N/A | N/A | N/A | N/A |
| [132] | N | 1 D (Y) | Y | Y | Y | Classify four skin lesions based on the gray-level difference method | N | Gray-level difference method, ABCD, SVM | 4 | 100 | 100 | 100 | N/A | N/A |
| [133] | N | 1 CCD (N) | - | Y | Y | Segment lesions and classify them with a hybrid classifier | Y | ABCD, pigment distribution and texture, GLCM, log-linearized Gaussian mixture neural network, KNN, linear discriminant analysis (LDA), SVM, majority vote | 3 | 98.5 | 95 | 99.5 | N/A | N/A |
| [134] | N | 1 D (Y) | Y | Y | Y | Classify typical vs. atypical pigment networks to diagnose melanoma | Y | Laplacian filter, median filter, polynomial curve fitting, connected component analysis, 2D Gabor filters, GLCM, Pearson product-moment correlation, probabilistic SVM, ANN | 2 | 86.7 | 84.6 | 88.7 | N/A | N/A |
| [135] | N | 1 LIBS spectra (N) | - | Y | Y | Classification of skin tissue | Y | Laser-induced breakdown spectroscopy, PCA, KNN, SVM | 2 | 76.84 | 74.2 | 86.9 | N/A | N/A |
| [136] | Y | 2 A (N); 2 D (Y) | N | Y | Y | Melanoma vs. nevi classification with correlation bias reduction | Y | 2D wavelet packet decomposition, SVM recursive feature elimination (SVM-RFE) | 2 | 98.28 | 97.63 | 100 | N/A | N/A |
| [137] | Y | 1 D (N) | Y | Y | Y | Classification of lesions into melanoma and nevi | Y | Gaussian filter, KNN, SVM | 2 | 96 | 97 | 96 | 97 | N/A |
| [138] | Y | 1 A (N); 1 D (Y) | Y | N | Y | Use of blue-whitish structure for melanoma classification | Y | Multiple instance learning (MIL), Markov network, SVM | 2 | 84.5 | 74.42 | 87.9 | 61.54 | N/A |
| [139] | N | 1 A (N) | Y | N | Y | Classify melanoma vs. nevi based on color features | Y | k-means, pixel-based classification | 2 | 89.42 | N/A | N/A | N/A | N/A |
| [140] | Y | 1 D (Y) | - | Y | Y | Segmentation and classification of skin lesions | Y | Hough transform, ABCD, SP-SIFT, LF-SLIC region labeling | 2 | 96 | N/A | N/A | N/A | 93.8 |
| [141] | Y | 2 D (Y) | Y | Y | Y | Detect lesion boundaries to classify melanoma vs. nevi | Y | Kullback–Leibler divergence, local binary patterns, SVM, KNN | 2 | 80.7 | N/A | N/A | N/A | N/A |
| [142] | N | 1 JPEG (N) | Y | Y | Y | QuadTree-based melanoma detection from color | Y | Hybrid thresholding, adaptive histogram thresholding, Euclidean distance transform, QuadTree, Kolmogorov–Smirnov, SVM, ANN, LDA, random forests | 2 | 73.8 | 75.7 | 73.3 | N/A | N/A |
| [143] | Y | 1 D (Y) | N | Y | Y | Segmentation and classification of skin lesions | Y | Median filter, watershed segmentation, ABCD, GLCM, KNN, RF, SVM | 2 | 89.43 | 91.15 | 87.71 | N/A | N/A |
| [144] | Y | 1 D (Y) | N | Y | Y | Segmentation of enhanced dermoscopic lesion images | Y | Wavelet transform, morphological operations, gray thresholding, Cohen–Daubechies–Feauveau biorthogonal wavelet, active contour, color enhancement, adaptive thresholding, gradient vector flow | 2 | 93.87 | N/A | N/A | N/A | 92.72 |
| [145] | Y | 1 D (Y) | N | Y | Y | Three distinct features to classify melanoma | N | ABCD, Otsu, Chan–Vese, Dull-Razor, ANN | 2 | 98.2 | 98 | 98.2 | N/A | N/A |
| [146] | N | 2 A (N); 2 D (Y) | N | Y | Y | Classify three lesion types from shape, fractal dimension, texture, and color features | Y | Recursive feature elimination, GLCM, fractal-based regional texture analysis, SVM, RBF | 3 | 98.99 | 98.28 | 98.48 | N/A | 91.42 |
| [147] | N | 2 A (N); 2 D (Y) | Y | N | N | Frequency-domain feature extraction and classification | Y | Cross-spectrum-based feature extraction, spatial feature extraction, SVM-RFE with CBR | 4 | 98.72 | 98.89 | 98.83 | N/A | N/A |
| [148] | Y | 1 D (N) | Y | N | N | Improved bag of dense features for classification | Y | Gradient location and orientation histogram, color features, SVM | 2 | 78 | N/A | N/A | N/A | N/A |
| [149] | N | 2 D (Y) | N | Y | Y | CAD system for clinical assistance | N | Chroma-based deformable models, speed function, Chan–Vese, Wilcoxon rank-sum statistics, discrete wavelet transform, asymmetry and compactness index, SVM | 2 | 88 | 95 | 82 | N/A | N/A |
| [150] | Y | 2 D (Y) | N | Y | Y | Segmentation using fuzzy pixel classification and histogram thresholding | Y | Fuzzy classification, histogram thresholding | 2 | 88.4 | 86.9 | 92.3 | N/A | 76 |
| [151] | N | 1 D (Y); 1 C (Y) | Y | Y | Y | Classification based on feature similarity measurement for codebook learning in the bag-of-features model | Y | Codebook learning, k-means, color histogram, scale-invariant feature transform (SIFT) | 2 | 82 | 80 | 83 | N/A | N/A |
| [152] | Y | 1 D (Y) | Y | Y | Y | Segmentation and classification using kernel sparse representation | Y | Sparse coding, kernel dictionary, K-SVD | 3 | 91.34 | 93.17 | 91.48 | N/A | 91.25 |
| [153] | N | 1 D (N); 1 C (Y) | N | Y | Y | Improve classification using borderline characteristics | Y | Gradient-based histogram thresholding, local binary patterns clustering, Euclidean distance, discrete Fourier transform (DFT) spectrum, power spectral density (PSD), SVM, feedforward neural network (FNN) | 2 | 91 | 68 | 96 | N/A | N/A |
| [154] | Y | 1 D (N) | Y | N | Y | Multi-resolution-tract CNN | N | AlexNet, GPU | 10 | 79.5 | N/A | N/A | N/A | N/A |
| [155] | Y | 1 D (Y) | Y | Y | Y | Automatic segmentation and classification of skin lesions | Y | CNN with 50 layers, residual learning, softmax, SVM, augmentation, GPU | 2 | 94 | N/A | N/A | N/A | N/A |
| [156] | N | 1 D (Y) | N | Y | Y | Classification of segmented skin lesions | Y | U-Net, sparse coding, deep residual network (DRN), augmentation | 2 | 76 | 82 | 62 | N/A | N/A |
| [157] | Y | 1 D (Y) | N | Y | Y | Segmentation of skin lesions using deep learning | Y | Fully convolutional networks (FCN), VGG, augmentation | 2 | N/A | N/A | N/A | N/A | 89.2 |
| [158] | Y | 1 D (Y) | Y | Y | Y | Automatic skin lesion segmentation using CNN | Y | FCN with 19 layers, Jaccard distance, augmentation, GPU | 2 | 95.5 | 91.8 | 96.6 | N/A | 92.2 |
| [159] | Y | 3 D (Y); 1 C (Y) | Y | Y | N | Melanoma detection using CNN and a regularized Fisher framework | Y | ResNet50, transfer learning, softmax, SVM, augmentation, GPU | 2 | 78.3 | 35 | 88.8 | N/A | N/A |
| [160] | N | 1 D (Y) | N | Y | Y | Evaluation of skin lesions using Levenberg neural networks and stacked autoencoder clustering | Y | ABCD, morphological analysis, Levenberg–Marquardt neural network | 2 | N/A | 98 | 98 | N/A | N/A |
| [161] | N | 1 D (Y) | Y | N | N | Classify limited and imbalanced skin lesion images | N | Adversarial autoencoder with 19 layers, augmentation, GPU | 2 | N/A | N/A | 83 | N/A | N/A |
| [162] | Y | 1 D (Y) | Y | N | N | Ensemble of different CNNs for skin lesion classification | Y | GoogLeNet, AlexNet, ResNet, VGGNet, sum of probabilities, product of probabilities, simple majority voting (SMV), sum of maximal probabilities (SMP), weighted CNN ensemble, augmentation, GPU | 3 | 86.6 | 55.6 | 78.5 | N/A | N/A |
| [163] | N | 1 D (Y) | Y | N | N | Skin lesion analysis using multichannel ResNet | Y | Ensemble multi-ResNet50, ANN, concatenated fully connected layer, augmentation, GPU | 3 | 82.4 | N/A | N/A | N/A | N/A |
| [164] | N | 1 A (N) | Y | Y | Y | Classify skin lesions based on border thickness | Y | GoogLeNet, Breslow index | 5 | 66.2 | 89.19 | 85 | N/A | N/A |
| [165] | N | 1 D (Y) | Y | Y | Y | Classify lesions using fused deep-learning and image-processing features | Y | ResNet50, telangiectasia vessel detection algorithm, transfer learning, GPU | 5 | N/A | N/A | N/A | N/A | N/A |
| [166] | Y | 1 D (Y) | Y | N | N | Classify skin lesions over IoT using deep learning | Y | CNN with nine layers, IoT | 7 | 81.4 | N/A | N/A | N/A | N/A |
| [168] | Y | 3 D (Y); 1 C (Y) | Y | Y | N | Skin lesion classification using a depthwise separable residual CNN | Y | Non-local means filter, contrast-limited adaptive histogram equalization, discrete wavelet transform, depthwise separable residual DCNN | 2 | 99.5 | 99.31 | 100 | N/A | N/A |
| [169] | Y | 1 D (Y) | Y | Y | Y | Classify skin lesions using attention residual learning | Y | Attention residual learning CNN, ResNet50, transfer learning, augmentation, GPU | 3 | N/A | N/A | N/A | N/A | N/A |
| [170] | Y | 1 D (Y) | Y | N | N | Classify skin lesions using CNN with a novel regularization method | Y | CNN with 7 layers, standard deviation of the weight matrix, GPU | 2 | 97.49 | 94.3 | 93.6 | N/A | N/A |
| [171] | N | 1 D (Y) | N | Y | Y | Incorporate dermatologists' knowledge into a CNN | Y | ResNet50, DermaKNet, modulation block, asymmetry block, AVG layer, polar AVG layer | 3 | 91.7 | N/A | 65.2 | N/A | N/A |
| [172] | Y | 1 - (-) | Y | Y | Y | Classify skin lesions based on the 7-point melanoma checklist using a multitask CNN | Y | Multitask CNN, 7-point melanoma checklist, augmentation, GPU | 3 | 87.1 | 77.3 | 89.4 | 63 | N/A |
| [173] | Y | 1 D (Y) | Y | N | N | Classify skin lesions using aggregated CNNs | Y | ResNet50, ResNet101, Fisher vector (FV), SVM, chi-squared kernel, transfer learning, augmentation, GPU | 2 | 86.81 | N/A | N/A | N/A | N/A |
| [174] | Y | - | Y | N | N | Use a CNN as a feature extractor for skin lesion images and classify these features | Y | AlexNet, ECOC SVM, transfer learning | 4 | 94.2 | 97.83 | 90.74 | N/A | N/A |
| [175] | Y | 1 D (Y) | Y | N | N | Classification of skin neoplasms using CNN and transfer learning, with web and mobile applications | N | Inception V3 (GoogLeNet), transfer learning, augmentation, GPU | - | 91 | N/A | N/A | N/A | N/A |
| [176] | N | 1 C (N) | Y | N | Y | Cloud-based application to classify facial skin diseases | Y | LeNet-5, AlexNet, VGG16, transfer learning, augmentation, GPU | 5 | N/A | N/A | N/A | N/A | N/A |
| [177] | Y | 2 D (Y) | Y | Y | Y | Skin lesion classification using four CNNs and ensembling of the final results | Y | AlexNet, VGG, ResNet-18, ResNet-101, SVM, MLP, random forest, transfer learning, augmentation, GPU | 3 | 87.7 | 85 | 73.29 | N/A | N/A |
| [178] | Y | 1 D (Y) | Y | N | N | Compare a deep learning model with expert dermatologists | Y | ResNet50, local outlier factor, transfer learning, GPU | 2 | N/A | 87.5 | 60 | N/A | N/A |
| [179] | N | 1 D (N); 2 D (Y) | Y | Y | Y | Evolving the deep learning model | - | PSO, hybrid learning PSO, firefly algorithm, spiral search action, probability distributions, crossover, mutation, k-means, VGG16, augmentation, GPU | 2 | 73.76 | N/A | N/A | N/A | N/A |
| [181] | Y | 3 D (Y) | Y | Y | Y | Combine and extend existing segmentation CNNs to enhance classification | Y | U-Net, ResNet34, LinkNet34, LinkNet152, fine-tuning, PyTorch, transfer learning, augmentation, GPU, Jaccard loss | 2 | N/A | N/A | N/A | N/A | N/A |
| [182] | N | 1 D (Y) | N | Y | Y | Skin lesion segmentation using geodesic morphological active contours | Y | Gaussian filter, Otsu's threshold, deformable models, partial differential equations, mathematical morphology, active geodesic contour, neural network, deep learning, statistical region merging (SRM) | 2 | 94.59 | 91.72 | 97.99 | N/A | 89 |
| [183] | Y | 1 - (-) | Y | N | N | Classify erythema migrans and other confounding skin lesions | Y | ResNet50, Keras, TensorFlow, fine-tuning, transfer learning, augmentation, GPU | 4 | 86.53 | 76.4 | 75.96 | N/A | 92.09 |
| [184] | Y | 1 D (Y) | Y | N | N | Compare a CNN with 112 dermatologists for skin lesion detection | Y | ResNet50, fine-tuning, transfer learning, augmentation, GPU | 5 | N/A | 56.5 | 98.2 | N/A | N/A |
| [185] | Y | 2 D (Y) | Y | Y | Y | Skin lesion segmentation by ensembling the outputs of two CNNs | Y | DeepLabv3+, Mask R-CNN, ABCD, fine-tuning, transfer learning, augmentation, GPU | 2 | 94.08 | 89.93 | 95 | N/A | N/A |
| [186] | Y | 1 C (Y) | N | Y | Y | Algorithm able to train a CNN with limited data | Y | Inception V3 (GoogLeNet), PECK, SCIDOG, SVM, RF, fine-tuning, transfer learning, augmentation, GPU | 2 | 91 | 92 | 93 | N/A | 90.7 |
| [187] | Y | 1 C (N) | Y | N | N | Classify facial skin diseases using the L2 distance between images in Euclidean space | Y | ResNet152, Inception-ResNet-V2, fine-tuning, Euclidean space, L2 distance, transfer learning, augmentation, GPU | 4 | 87.42 | 97.04 | 97.23 | N/A | N/A |
| [188] | Y | 1 D (Y) | Y | Y | Y | Neural architecture search to scale the network to the dataset size | Y | VGG8, VGG11, VGG16, 5-fold validation, neural architecture search (NAS), hill climbing, transfer learning, augmentation, GPU | 2 | 77 | N/A | N/A | N/A | N/A |
| [189] | Y | 2 D (Y) | Y | Y | Y | Skin lesion segmentation and pixel-wise classification using an encoder-decoder network | Y | CNN, encoder-decoder deep network with skip connections, softmax, transfer learning, augmentation, GPU | 2 | 95 | 97 | 96 | N/A | N/A |
| [190] | Y | 2 D (Y) | Y | Y | Y | Multitask DCNN for skin lesion segmentation, detection, and classification | Y | Multitask DCNN, Jaccard distance, focal loss, augmentation, GPU | 2 | 95.9 | 83.1 | 98.6 | N/A | 95 |
| [191] | Y | 1 D (Y) | Y | Y | Y | Lightweight CNN for skin lesion segmentation and classification | Y | Lightweight CNN, MobileNet, DenseNet, U-Net, focal loss, fine-tuning, transfer learning, augmentation, GPU | 2 | 96.2 | 93.4 | 97.4 | N/A | 88.9 |
| [192] | N | 1 D (Y) | Y | Y | N | Enhancement and classification of dermoscopic skin images | Y | StyleGANs, 43 CNNs (ResNet50, VGG11, VGG13, AlexNet, SENet, etc.), max voting, fine-tuning, transfer learning, augmentation, GPU | 8 | 99.5 | 98.3 | 99.6 | N/A | 92.3 |
| [193] | Y | 1 D (Y) | Y | Y | Y | Skin lesion classification based on CNN | Y | Deep-class CNN, augmentation, GPU | 2 | 75 | 73 | 78 | N/A | N/A |
| [194] | Y | 2 D (Y) | Y | Y | Y | Skin lesion segmentation using an encoder-decoder FCN | Y | FCN, GPU | 3 | 96.92 | 96.88 | 95.31 | N/A | N/A |
| [195] | Y | 3 D (Y); 1 C (Y) | Y | N | Y | Classify skin melanoma by extracting ROIs with CNNs | Y | AlexNet, ResNet101, GoogLeNet, multiclass SVM, softmax, histogram-based windowing, hierarchical clustering, fine-tuning, transfer learning, augmentation, GPU | 3 | 98.14 | 97.27 | 98.60 | N/A | 88.64 |
| [196] | Y | 3 D (Y) | N | Y | Y | Fusion of extracted deep features of a skin lesion for classification | Y | Biorthogonal 2D wavelet transform, Otsu algorithm, AlexNet and VGG-16, PCA, fusion, fine-tuning, transfer learning, augmentation, GPU | 2 | 99.9 | 99.5 | 99.6 | N/A | N/A |
| [197] | N | 3 D (Y) | Y | Y | Y | Ensemble of multiscale and multi-CNN networks | Y | EfficientNetB0, EfficientNetB1, SeResNeXt-50, fusion, fine-tuning, transfer learning, augmentation, GPU | 7 | 96.3 | N/A | N/A | 91.3 | 82 |
| [198] | Y | 2 D (Y) | Y | Y | Y | Multiclass multilevel classification of skin lesions using traditional machine learning and transfer learning | Y | k-means, Otsu's thresholding, GLCM, ANN, k-fold validation, AlexNet, fine-tuning, transfer learning, augmentation, GPU | 4 | 93.02 | 87.87 | 98.17 | 97.96 | N/A |
| [199] | N | 2 D (Y) | N | Y | Y | Optimized weight-selection algorithm to minimize the network output | Y | Bubble-net mechanism, whale optimization algorithm (WOA), Lévy flight mechanism, genetic algorithm, shark smell optimization (SSO), world cup optimization algorithm, grasshopper optimization algorithm (GOA), particle swarm optimization (PSO), LeNet, fine-tuning, transfer learning, GPU | 2 | N/A | N/A | N/A | N/A | N/A |
| [200] | Y | 3 D (Y) | N | Y | Y | Semantic skin lesion segmentation with parameter reduction | Y | U-Net, FCN8s, DSNet, augmentation, GPU | 7 | N/A | 87.5 | 95.5 | N/A | N/A |
| [201] | N | 3 D (Y) | Y | Y | Y | Integration of different CNNs for segmentation and a multiple-stage classification | Y | Inception-v3, ResNet-50, Inception-ResNet-v2, DenseNet-201 | 7 | 89.28 | 81 | 87.16 | N/A | 81.82 |
| [202] | Y | 3 D (Y) | Y | Y | Y | Skin lesion segmentation based on CNN | Y | CIELAB, FCN, U-Net, augmentation, GPU | 2 | N/A | N/A | N/A | N/A | 87.1 |
| [204] | Y | 1 D (Y) | Y | Y | N | Classify the challenging ISIC 2018 dataset | Y | AlexNet, 10-fold cross-validation, fine-tuning, transfer learning, augmentation, GPU | 7 | 92.99 | 70.44 | 96 | 62.78 | N/A |
| [205] | Y | 1 D (Y) | Y | Y | N | Classify the challenging ISIC 2019 dataset | Y | GoogLeNet, similarity score, bootstrap weighted SVM classifier, softmax, fine-tuning, transfer learning, augmentation, GPU | 9 | 98.70 | 95.6 | 99.27 | 95.06 | N/A |
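The metric columns of Table 2 (accuracy, sensitivity, specificity, precision, and Dice) all derive from the binary confusion matrix, so results reported with different subsets of metrics can still be compared when the raw counts are available. A minimal sketch of these standard definitions follows; the counts in the example are illustrative only, not taken from any reviewed paper:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from a binary confusion matrix,
    matching the columns reported in Table 2 (returned in [0, 1])."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)               # recall / true-positive rate
    specificity = tn / (tn + fp)               # true-negative rate
    precision   = tp / (tp + fp)               # positive predictive value
    dice        = 2 * tp / (2 * tp + fp + fn)  # F1; also the Dice overlap for masks
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision, "dice": dice}

# Hypothetical example: 90 melanomas found, 10 missed,
# 80 correct negatives, 20 false alarms.
m = binary_metrics(tp=90, fp=20, tn=80, fn=10)
print({k: round(v, 3) for k, v in m.items()})
```

The example also shows why accuracy alone is misleading on the imbalanced datasets of Table 1: a classifier can reach high accuracy while its sensitivity on the rare melanoma class stays low, which is visible in several rows of Table 2 (e.g., high specificity paired with much lower sensitivity).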
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
