Article

Assessment of Deep Learning Models for Cutaneous Leishmania Parasite Diagnosis Using Microscopic Images

1 Department of Microbiology and Clinical Microbiology, Faculty of Medicine, Near East University, North Cyprus, Mersin 10, Lefkoşa 99010, Turkey
2 Department of Biomedical Engineering, Faculty of Engineering, Near East University, North Cyprus, Mersin 10, Lefkoşa 99010, Turkey
3 Research Center for Science, Technology and Engineering (BILTEM), Near East University, TRNC, Mersin 10, Lefkoşa 99138, Turkey
4 Department of Molecular Biology and Genetics, Faculty of Arts and Sciences, European University of Lefke, Lefke 99010, Turkey
5 Department of Clinical Microbiology and Infectious Diseases, Faculty of Medicine, Near East University, North Cyprus, Mersin 10, Lefkoşa 99010, Turkey
* Author to whom correspondence should be addressed.
Diagnostics 2024, 14(1), 12; https://doi.org/10.3390/diagnostics14010012
Submission received: 14 November 2023 / Revised: 11 December 2023 / Accepted: 14 December 2023 / Published: 20 December 2023
(This article belongs to the Special Issue Deep Learning in Medical Image Segmentation and Diagnosis)

Abstract

Cutaneous leishmaniasis (CL) is a common illness that causes skin lesions, principally ulcerations, on exposed regions of the body. Although this neglected tropical disease (NTD) is typically found in tropical areas, it has recently become more common along Africa's northern coast, particularly in Libya. The devastation of healthcare infrastructure during the 2011 war and the conflicts that followed, as well as governmental apathy, may be causal factors associated with this resurgence. The main objective of this study is to evaluate alternative diagnostic strategies for recognizing amastigotes of cutaneous leishmaniasis parasites at various stages using convolutional neural networks (CNNs). The research additionally aims to test different classification models employing a dataset of ultra-thin skin smear images from Leishmania-infected people with cutaneous leishmaniasis. The pre-trained deep learning models EfficientNetB0, DenseNet201, ResNet101, MobileNetv2, and Xception are used for the cutaneous leishmania parasite diagnosis task. To assess the models' effectiveness, we employed a five-fold cross-validation approach to guarantee the consistency of the models' outputs when applied to different portions of the full dataset. Following a thorough assessment and comparison of the various models, DenseNet-201 proved to be the most suitable choice. It attained a mean accuracy of 0.9914 along with outstanding results for sensitivity, specificity, positive predictive value, negative predictive value, F1-score, Matthews correlation coefficient, and Cohen's kappa coefficient. The DenseNet-201 model surpassed the other models based on a comprehensive evaluation of these key classification performance metrics.

1. Introduction

The Trypanosomatidae family consists of single-celled, obligate parasites that infect invertebrates, vertebrates, and plants. These eukaryotic flagellates possess a distinctive mitochondrial DNA structure known as the kinetoplast, which is a defining characteristic of the class Kinetoplastea. The positioning of the kinetoplast in relation to the nucleus and the flagellum's point of emergence allows for the identification of specific life cycle forms, which are unique to certain genera within the family [1,2]. Cutaneous leishmaniasis is a disease frequently found in tropical and subtropical areas and is caused by protozoa of the genus Leishmania, which are transmitted by the bite of an infected sandfly [3]. Despite its prevalence throughout the world, most cases are found in South America, the Mediterranean basin, and some parts of Asia and Africa [4]. The Caribbean, the Mediterranean basin, the Arabian Peninsula, and Central Asia account for about 95% of all CL cases [5]. The World Health Organization (WHO) estimates that between 700,000 and 1,000,000 new cases are reported each year. Malnutrition, population displacement, inadequate housing, unhygienic circumstances, a compromised immune system, and a shortage of resources are all associated factors [6]. The inoculation site is usually an exposed part of the body, such as the scalp or extremities, where the infection normally first forms tiny pustules [7]. Infection with the Leishmania parasite can result in cutaneous leishmaniasis (CL), an infection of the skin that can occur in people who have been bitten by infected sand flies. There have also been reports of transmission between humans via infected acupuncture needles, blood transfusions, or prenatal transmission [7,8]. The types and severity of skin lesions associated with this condition vary, and they can typically appear many weeks or even months after infection [9]. Papules gradually enlarge into nodules that become ulcerated [10]. The resulting ulcer, a hallmark of CL, self-heals in three to eighteen months, depending on the species. According to estimates, up to 10% of CL cases progress, become chronic, and display increasingly severe clinical symptoms [7,11,12]. Although CL rarely results in death, it can have a detrimental influence on quality of life and cause significant morbidity [13]. A recent genetic investigation in Libya revealed that parasites such as Leishmania major and Leishmania tropica are the primary cause of CL in the nation. Additionally, the illnesses identified in Libya's various regions substantially reflect those observed in CL-affected Mediterranean locales.
Artificial intelligence (AI) principles have enormous potential for a wide range of applications, including risk modeling and classification, self-detection, and diagnostics, such as the classification of small molecules into illness subgroups and the prediction of treatment response and prognosis. AI is increasingly being used in medical and biological research as well as in therapeutic treatment [14,15]. Several preclinical and clinical healthcare studies have recently been conducted using supervised machine learning (SML) and various other AI technologies. In e-health care, particularly in the accurate identification and classification of diseases, SML has changed almost every sector internationally. Multiple university and industry labs are creating AI technologies for many aspects of healthcare [16]. There are numerous applications based on medical imaging investigations, including the identification and diagnosis of squamous cell carcinoma [17] and lung diseases [18]. Medical image analysis has recently benefited greatly from deep learning applications [19].
Convolutional neural networks (CNNs) have produced exceptional classification and segmentation results for images. Clinical prediction frameworks have been developed, and significant linkages have been explained, thanks to the application of data analysis, machine learning, and deep learning algorithms in modern healthcare [20,21]. A CNN is a deep learning architecture that is primarily used for object and image classification tasks [22]. The early stages of a convolutional neural network consist of convolutional layers paired with pooling layers, while the final layer is the fully connected (FC) layer [23]. The density of these layers can be scaled up to capture finer detail, but doing so requires more computing power, depending on the complexity of the images [24,25]. The convolutional layer is the foundational component of a CNN and is where the majority of computations take place. The first convolutional layer may be followed by further convolutional layers. During the convolution process, a kernel or filter within this layer shifts across the image's receptive fields to assess whether a particular feature is present [26,27]. The pooling layer decreases the number of input parameters, although some information is also lost as a result. On the positive side, this layer streamlines operations and increases the CNN's efficiency [28]. Full connection refers to connecting all inputs or nodes from one layer to each activation unit or node of the following layer [29].
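To make these layer roles concrete, the following minimal sketch stacks convolutional, pooling, and fully connected layers in Python with TensorFlow/Keras. It is not the architecture used in this study; the filter counts and the 224 × 224 input size are illustrative assumptions.

```python
# Minimal CNN sketch illustrating the layer types described above; layer sizes are
# illustrative assumptions, not the network used in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),        # RGB microscopic image
    layers.Conv2D(32, 3, activation="relu"),  # convolution: a kernel slides over receptive fields
    layers.MaxPooling2D(2),                   # pooling: downsamples and reduces parameter count
    layers.Conv2D(64, 3, activation="relu"),  # a further convolutional layer captures finer detail
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),     # fully connected (FC) layer
    layers.Dense(1, activation="sigmoid"),    # binary output: positive vs. negative smear
])
model.summary()
```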
Microbiologists continue searching for novel microbiology diagnostic procedures that are more rapid, less expensive, more accurate, and treatment-oriented due to the shortcomings of conventional diagnostic performances. Due to variations in species and sizes, the physical detection of leishmaniasis can be laborious, slow, and inaccurate. Expertise is required for precise detection, which is more challenging in complex circumstances. This study’s primary objective is to assess alternative diagnostic approaches for the cutaneous leishmaniasis parasites using a convolutional neural network model.
  • The implementation of pre-trained models (EfficientNet-B0, DenseNet201, Mobilenet-v2, ResNet101, and Xception) for the classification of microscopic images as positive and negative is one of the study’s key achievements;
  • The performance of the models is assessed using various metrics, including accuracy, sensitivity, specificity, precision, F1-score, Matthews correlation coefficient (MCC), Negative Predictive Value (NPV), Cohen’s kappa, Area Under the Curve (AUC), and the Receiver Operating Characteristic (ROC) curve;
  • One of the purposes of this investigation is to evaluate and compare the efficiency of five different pre-trained deep-learning models within the realm of classification tasks.

2. Related Work

This study focuses on the classification of cutaneous leishmaniasis, primarily using microscopic images. In this section, related studies from the literature are presented in detail.

2.1. Cutaneous Leishmaniasis

Górriz et al. [30] used microscopic images to analyse diverse cultures of cutaneous leishmania parasites. The images, roughly 1300–1500 pixels across, captured and annotated the promastigote and amastigote stages of Leishmania infantum, Leishmania major, and Leishmania braziliensis. The work was a success in terms of creating parasite cultures, annotating images, and training a U-Net model. During training, the U-Net model was used for pixel-wise classification to address class imbalances. Performance was evaluated using criteria such as precision, recall, and F1-score. The approach used to deal with class imbalance was critical for analysing pixel percentages per class and revealed information on image region distribution and representation. The algorithm accurately detected 82.3% of amastigote-stage occurrences [30]. Using a dataset of 300 images gathered from 50 laboratory slides, another study attempted to develop an artificial intelligence-based technique for the automated diagnosis of leishmaniasis. These slides were obtained from patients at the Valfajr Clinic in Shiraz, Iran, and included 150 images from positive and 150 from negative leishmania slides. The Viola–Jones technique was used in the algorithm, which included three important steps: feature extraction, integral image creation for quicker processing, and classification using Haar-like features. The classifier was trained using the AdaBoost algorithm after discriminative characteristics were chosen. The task of recognising amastigotes outside of macrophages had a recall of 0.520 and a precision of 0.711 [31]. In a further study, images were captured with a smartphone at a microscopic magnification of 50× and exported in PNG format with an average resolution of 1320 px × 1900 px. The collection comprised 45 microscopic images in total. The resulting images were pre-processed, and pre-training was implemented for segmenting images of promastigote and amastigote forms of cutaneous leishmaniasis using the K-means algorithm, histogram thresholding, and the U-Net structure. The individual precision and recall values for amastigote stages were 61.07% and 87.90%, respectively [32].

2.2. Plasmodium Parasites (Malaria)

For the important task of identifying malaria from blood smears, 27,558 blood smear images from the National Institutes of Health (NIH) were augmented using rotation, zooming, and flipping, and a two-stage technique was created. Initially, a U-Net method was used to precisely segment red blood cell clusters. Following that, Faster R-CNN was used to recognise smaller cell objects inside these connected components. This technique was especially successful because of its versatility, with U-Net-derived cell-cluster masks guiding the detection process, resulting in higher true positive rates and fewer false alarms. A novel CNN termed Attentive Dense Circular Net (ADCN) was introduced for the successful classification of malaria-infected red blood cells, inspired by residual and dense networks and including an attention mechanism. The inclusion of attention mechanisms in ADCN allows it to focus on key features, resulting in a patient-level accuracy of 97.47% in the classification of infected RBCs [33,34]. Pattanaik and colleagues introduced a novel approach, the Multi-Magnification Deep Residual Network (MM-ResNet), tailored for the classification of malaria-infected blood smears captured using Android smartphones. Their study utilised a publicly available dataset consisting of 1182 field-stained images, encompassing three magnification levels: 200×, 400×, and 1000×. MM-ResNet is built upon convolutional layers, batch normalization, and ReLU activation functions, trained with a single pass through the data, reducing the data requirements. This model effectively mitigates the challenges posed by the low image quality, varying luminance, and noise inherent in smartphone-captured images, thanks to the utilization of residual connections and an abundance of filters. Remarkably, the proposed MM-ResNet achieved an impressive accuracy rate of 98.08% during five-fold cross-validation, demonstrating its efficacy in malaria blood smear classification across varying magnifications and challenging image conditions [35].

2.3. Trypanosoma Parasites

Traditional approaches for identifying Chagas disease’s acute phase entail detecting Trypanosoma cruzi in peripheral blood slides using microscopy. An innovative approach analyses image tiles from these samples using MobileNet V2 convolutional layers, yielding 1280-dimensional feature vectors that are input into a single-neuron classifier. Initial validation tests on a small 12-slide dataset achieved 96.4% accuracy but dipped to 72.0% on the 13th slide. Incorporating images from six additional slides, including thick blood samples, raised the accuracy to 95.4% on two further slides. Raster scans with overlapping windows efficiently reveal positive Trypanosoma cruzi occurrences in both blood smear and thick blood images, highlighting the method’s potential to boost Chagas disease detection [36]. In another study, an innovative approach was developed for the automatic detection of the Trypanosoma cruzi parasite in blood smears using machine learning techniques applied to mobile phone images. A total of 33 slides containing thin blood smears from Swiss mice infected with the T. cruzi Y strain during the acute phase were analyzed. Images were standardised to 768 × 1024 pixels, and parasites were segmented, with 100 × 100 pixel regions around each parasite cropped based on manual annotations. Object features were extracted after converting from RGB to the CIEL colour space. To enhance performance and mitigate noise, Principal Component Analysis (PCA) was applied. Supervised learning classifiers, such as Support Vector Machines (SVM), K-nearest neighbours (KNN), and Random Forest (RF), were employed due to their strong generalization with limited data. The dataset was split into training and test sets (4:1 ratio) and classified using the RF algorithm. The proposed method demonstrated significant precision (87.6%), sensitivity (90.5%), and an impressive Area Under the Receiver Operating Characteristic (ROC) Curve of 0.942. This research showcases the potential of automating image analysis through mobile devices, offering a cost-effective and efficient alternative to traditional optical microscope methods [37]. A mobile bot application was created using deep learning to identify Trypanosoma evansi infection using 125 images of T. evansi obtained in an oil immersion field. In a one-stage learning technique, the YOLOv4-tiny model was used to categorise two particular parasite stages: “Trypanosoma_evansi_slender” and “Trypanosoma_evansi_stumpy”. The addition of 20-degree rotational angles to the data resulted in remarkable performance metrics: 95% sensitivity, specificity, precision, accuracy, and F1-score, with a misclassification rate of less than 5%. The CIRA Bot simulation platform produced results equivalent to those of the computational trials, with an area under the ROC curve of 0.964 and a precision-recall curve of 0.962. This breakthrough offers considerable potential for using thin-blood-film evaluation to diagnose T. evansi infection [38].

2.4. Cryptococcus neoformans

Web scraping was used in this study to collect microscopic images of both C. neoformans and non-C. neoformans using specified keywords such as “India-ink-stained smear of CSF”, “C. neoformans”, and others. These images were classified as positive or negative, yielding a dataset of 63 high-quality microscopic images for each category, which was later expanded to 1000 images. The study aimed to recognise and classify C. neoformans images using a deep learning strategy based on convolutional neural networks (CNNs). Various frameworks and libraries were used to carry out model training, validation, testing, and evaluation. The VGG16 model, deemed cutting-edge, performed successfully, with an accuracy of 86.88% and a loss of 0.36203 [39].
Artificial intelligence is not only used for medical issues; fingerprinting is another application of biometrics used for personal identification due to its uniqueness and individualistic characteristics. However, traditional algorithms encounter difficulties when recreating fingerprint images due to poor quality and structured noise. This paper introduces a novel fingerprint system that employs a sparse autoencoder (SAE) algorithm to reconstruct fingerprint images. The SAE, an unsupervised deep learning model, replicates its input at the output. The model is trained and optimised using pre-processed datasets of fingerprint images, and its robustness is validated using three datasets, with 70% for training and 30% for testing. By fine-tuning and optimizing the SAE with L2 and sparsity regularization, the efficiency of learning representation is improved. The results demonstrate that the proposed approach significantly enhances the quality of reproduced fingerprint images by capturing distinct ridge structures and eliminating overlapping patterns [40].

3. Methodology

3.1. Study Design

This study aimed to develop a cutaneous leishmania parasite diagnosis system using images observed under a microscope. The considered pre-trained models include MobileNet-v2, Xception, DenseNet-201, ResNet-101, and EfficientNet-b0, as shown in Figure 1.

3.2. Data Preparation

A prospective cohort research project was undertaken in the Al-Murqub region (8841.08 square kilometers; 32°19′12.7″ N, 13°57′39.2″ E) in the northwestern part of Libya, which encompasses five cities and roughly nine villages where zoonotic cutaneous leishmaniasis (ZCL) is endemic. The samples were collected in August, September, and October 2022. Furthermore, the Ethical Committee at Emhammed Almgarif Health Center, situated in the Al-Murqub district, provided approval to conduct the research (EMHC. REF. 22.08.1063).
Infectious disease and clinical microbiology physicians, in conjunction with medical microbiologists, conducted the classification of all visual depictions, including both positive and negative representations. Some of these images are shown in Figure 2.

3.3. Pre-Trained Models

The MobileNet-v2, Xception, DenseNet-201, ResNet-101, and EfficientNet-b0 models, which have been trained on a vast dataset and are renowned for their excellent performance in computer vision tasks, were utilised in this study. By utilizing their pre-trained weights, training time is minimised, and model accuracy is enhanced. Furthermore, the models have undergone fine-tuning on the particular dataset employed in this experiment to optimise their performance even more.
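The paper does not publish its training code, so the following is only a hedged sketch of how such transfer learning is commonly set up in Python with TensorFlow/Keras: an ImageNet-pre-trained backbone (here DenseNet-201; the other four backbones can be swapped in by name) receives a small binary classification head and is left trainable for fine-tuning. The framework, head design, and trainability choice are assumptions, not the authors' exact pipeline.

```python
# Hedged sketch of transfer learning with the five pre-trained backbones; the framework,
# head design, and fine-tuning strategy are assumptions, not the authors' exact pipeline.
import tensorflow as tf
from tensorflow.keras import layers, models, applications

def build_transfer_model(backbone_name="DenseNet201", input_shape=(224, 224, 3)):
    """Attach a binary (positive/negative smear) head to an ImageNet-pre-trained backbone."""
    # Valid names here include DenseNet201, Xception, ResNet101, MobileNetV2, EfficientNetB0.
    backbone_cls = getattr(applications, backbone_name)
    backbone = backbone_cls(weights="imagenet", include_top=False, input_shape=input_shape)
    backbone.trainable = True                         # fine-tune all layers on the smear images

    inputs = layers.Input(shape=input_shape)
    x = backbone(inputs)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    return models.Model(inputs, outputs)

model = build_transfer_model("DenseNet201")
```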

3.3.1. DenseNet-201

DenseNet-201 is a 201-layer deep convolutional neural network. It provides a pre-trained variation that was trained on over 1 million images obtained from the ImageNet collection. This previously trained machine learning algorithm is capable of classifying images into 1000 separate object categories, which include items such as mice, keyboards, pencils, and numerous animals. As a result, the network has developed detailed feature representations that cover a wide range of image types. It uses an input picture size of 224 by 224 pixels [41].

3.3.2. Xception

Xception, an acronym for extreme inception, is a convolutional neural network (CNN) model developed by the Google team. Similar to other deep CNNs, this model utilises depth-wise separable convolution and shortcuts between convolutional blocks. However, in the case of Xception, the order of depthwise and pointwise convolutions is reversed compared to MobileNet. In other words, the Xception model applies pointwise convolution before depthwise convolution. The architecture of Xception consists of three components: entry, middle, and exit flows [42].
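As an illustration of the two orderings mentioned above, the hedged Keras sketch below builds one depthwise separable block in each order; the 3 × 3 kernel and the filter count of 64 are arbitrary example values, not Xception's actual configuration.

```python
# Sketch of the two depthwise/pointwise orderings; example values only, not Xception's layers.
import tensorflow as tf
from tensorflow.keras import layers

def pointwise_then_depthwise(x, filters):
    """Xception-style ordering: 1x1 pointwise convolution first, then depthwise convolution."""
    x = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    return layers.DepthwiseConv2D(3, padding="same", activation="relu")(x)

def depthwise_then_pointwise(x, filters):
    """MobileNet-style ordering: depthwise convolution first, then 1x1 pointwise convolution."""
    x = layers.DepthwiseConv2D(3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 1, padding="same", activation="relu")(x)

inputs = layers.Input(shape=(224, 224, 3))
outputs = pointwise_then_depthwise(inputs, 64)
block = tf.keras.Model(inputs, outputs)
```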

3.3.3. MobileNet-v2

MobileNet-v2 is designed to be as efficient as possible by employing depthwise separable convolutions, inverted residual blocks, and linear bottlenecks. It is a convolutional neural network with 53 layers of depth. This network can be launched with a pre-trained model that was trained on a huge dataset of over one million photographs from ImageNet. This pre-trained model can classify images into over 1000 different object categories, including keyboards, mice, pencils, and several animal species. Notably, the network operates with picture inputs of 224 by 224 pixels [43].

3.3.4. ResNet-101

ResNet-101 is a well-known model in the field of computer vision that was created to address challenges in image recognition tasks. This model has a complex network structure with 104 convolutional layers organised into 33 layer blocks. Twenty-nine of these blocks are directly connected to the ones before them, producing a hierarchy of interconnected levels. Extensive empirical research has shown that these residual networks are easier to optimise and can successfully exploit increased depth to produce improved accuracy. These findings have been validated by thorough experimentation on the ImageNet dataset, confirming ResNet-101's applicability and resilience in the field of computer vision [44].

3.3.5. EfficientNet-b0

The EfficientNet family is well-known for its smart, comprehensive network design based on compound scaling. Its largest member, EfficientNet-B7, obtains a phenomenal top-1 accuracy of 84.3% on the challenging ImageNet dataset while also delivering considerable efficiency benefits: despite containing approximately 66 million parameters and performing nearly 37 billion floating-point operations (FLOPs), it is around 8.4 times more compact and offers 6.1 times faster inference than the most sophisticated convolutional neural networks (CNNs) available. EfficientNet-b0 is the smallest baseline network of this family and is the variant used in this study [45].

3.4. Experimental Design

The experiments were performed on a computer with 32 GB of RAM, an NVIDIA GeForce RTX 2080 Ti graphics processor, and a 9th-generation Intel Core i9 CPU. Each model is trained using the Adam optimiser with a batch size of 10 and a learning rate of 0.001. Additionally, all models underwent training for a maximum of 30 epochs.
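A hedged sketch of this training configuration in Python/Keras is shown below, reusing the build_transfer_model helper from the earlier sketch; the random placeholder arrays stand in for the labelled smear images, and the framework itself is an assumption since the paper does not name one.

```python
# Sketch of the stated training setup (Adam, learning rate 0.001, batch size 10, 30 epochs).
# The placeholder data and the Keras framework are assumptions for illustration only.
import numpy as np
import tensorflow as tf

x_train = np.random.rand(40, 224, 224, 3).astype("float32")   # stand-in smear images
y_train = np.random.randint(0, 2, size=(40,))                 # stand-in positive/negative labels

model = build_transfer_model("DenseNet201")                   # from the earlier sketch
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
model.fit(x_train, y_train, batch_size=10, epochs=30, verbose=1)
```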

3.5. Performance Evaluation

The performances of the models are evaluated using Receiver Operating Characteristic (ROC) curve analysis. In this analysis, the area under the curve (AUC) was calculated to measure the overall model accuracy in both the training and validation sets. Additionally, we assessed the model performance using metrics such as sensitivity, specificity, precision, Matthew’s correlation coefficient (MCC), and Cohen’s kappa and F1-score to provide a comprehensive evaluation of its effectiveness.
Accuracy quantifies the proportion of correct predictions, indicating how accurately an algorithm predicts outcomes.
$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$
Sensitivity, often known as recall, assesses an algorithm’s ability to recognise genuine positives within all positive cases.
$\mathrm{Sensitivity} = \frac{TP}{TP + FN}$
Precision calculates the ratio of true positives to all predicted positives, determining the accuracy of an algorithm's positive predictions.
$\mathrm{Precision} = \frac{TP}{TP + FP}$
Specificity, also known as the true negative rate, is the proportion of actual negative observations correctly identified by an algorithm out of all negative occurrences.
$\mathrm{Specificity} = \frac{TN}{TN + FP}$
The F1 score is an indicator that combines precision and recall to provide a balanced evaluation of a model's performance on the positive class. It is calculated as the harmonic mean of these two metrics.
$\mathrm{F1\ score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$
The Matthews correlation coefficient (MCC) evaluates the effectiveness of a binary classification model on a scale of −1 to +1. A score of −1 indicates complete disagreement between predictions and observations, while a score of +1 indicates a perfect classifier. MCC is regarded as a well-balanced metric because it considers both positive and negative results.
$\mathrm{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}$
Cohen's kappa is a statistic used to quantify the level of agreement between a model's overall accuracy and the accuracy expected by chance. The kappa value, determined using the following equation, falls between 0 and 1. A kappa value of 0 implies no agreement beyond chance, whereas a kappa value of 1 shows complete agreement.
$K = \frac{P_o - P_e}{1 - P_e}$
where $P_o$ is the observed accuracy of the model (as defined above), and $P_e$ is the hypothetical probability of chance agreement, computed as:
$P_e = \frac{(TP + FN)(TP + FP) + (FP + TN)(FN + TN)}{(TP + TN + FP + FN)^2}$
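The sketch below computes all of these metrics from a single binary confusion matrix in Python; the TP/TN/FP/FN counts are invented placeholders, not figures from this study.

```python
# Metric computation from a binary confusion matrix, following the equations above.
# The counts are hypothetical placeholders, not results reported in the paper.
import math

TP, TN, FP, FN = 250, 240, 5, 4
n = TP + TN + FP + FN

accuracy    = (TP + TN) / n
sensitivity = TP / (TP + FN)            # recall / true positive rate
specificity = TN / (TN + FP)            # true negative rate
precision   = TP / (TP + FP)            # positive predictive value (PPV)
npv         = TN / (TN + FN)            # negative predictive value
f1          = 2 * precision * sensitivity / (precision + sensitivity)
mcc = (TP * TN - FP * FN) / math.sqrt(
    (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN)
)
p_o = accuracy                                                   # observed agreement
p_e = ((TP + FN) * (TP + FP) + (FP + TN) * (FN + TN)) / n ** 2   # chance agreement
kappa = (p_o - p_e) / (1 - p_e)

print(f"acc={accuracy:.4f}  f1={f1:.4f}  mcc={mcc:.4f}  kappa={kappa:.4f}")
```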

4. Results and Discussion

The healthcare system is being transformed toward improved accuracy, speed, and reliability through the integration of AI-based techniques. In diagnosis, supervised and unsupervised machine learning models are being utilised to enable automated, intelligent diagnosis by healthcare professionals. Over the past decade, numerous deep learning models, including CNNs and ANNs, have been developed and implemented in healthcare to assist in diagnosing various diseases.
In recent years, deep learning models have been created and applied for the detection of several diseases. This study explored five pre-trained deep learning models for binary classification of microscopic images into positive and negative cutaneous leishmaniasis.

4.1. Results

The results presented are based on the mean performance across five-fold cross-validation. The classification performance measures for cutaneous leishmaniasis using the five different pre-trained deep learning models with five-fold cross-validation are shown in Table 1, Table 2, Table 3, Table 4 and Table 5.
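A hedged sketch of this five-fold protocol is given below, using scikit-learn's StratifiedKFold and the build_transfer_model helper from the earlier sketch; the placeholder data, optimizer settings, and fold seeding are assumptions rather than the authors' exact procedure.

```python
# Five-fold cross-validation sketch; placeholder data and settings are assumptions only.
import numpy as np
from sklearn.model_selection import StratifiedKFold

images = np.random.rand(100, 224, 224, 3).astype("float32")   # stand-in smear images
labels = np.random.randint(0, 2, size=(100,))                 # stand-in positive/negative labels

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
fold_acc = []
for fold, (train_idx, val_idx) in enumerate(skf.split(images, labels), start=1):
    model = build_transfer_model("DenseNet201")               # from the earlier sketch
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(images[train_idx], labels[train_idx], batch_size=10, epochs=30, verbose=0)
    _, acc = model.evaluate(images[val_idx], labels[val_idx], verbose=0)
    fold_acc.append(acc)
    print(f"Fold {fold}: accuracy = {acc:.4f}")

print("Mean accuracy over the five folds:", float(np.mean(fold_acc)))
```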
The confusion matrices and AUC scores for the classification of microscopic images using pre-trained deep-learning models are presented in Figure 3 and Figure 4, respectively.
Figure 5 demonstrates the Class Activation Map (CAM) examples for both classes generated by the considered models.
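The paper does not state which CAM variant was used to produce Figure 5, so the following is only a Grad-CAM-style sketch for a Keras model; the final convolutional layer name and the use of the sigmoid output as the class score are assumptions.

```python
# Hedged Grad-CAM-style sketch (the paper's exact CAM method is not specified).
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name):
    """Return a coarse heatmap of regions driving the positive-class prediction."""
    grad_model = tf.keras.models.Model(
        model.inputs, [model.get_layer(last_conv_layer_name).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        score = preds[:, 0]                                   # sigmoid output for "positive"
    grads = tape.gradient(score, conv_out)                    # gradient of score w.r.t. feature maps
    weights = tf.reduce_mean(grads, axis=(1, 2))              # global-average-pool the gradients
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)[0]
    cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8)       # keep positive evidence, normalise
    return cam.numpy()                                        # upsample and overlay for display
```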
Table 1, Table 2, Table 3, Table 4 and Table 5 demonstrate the results of the considered deep learning models when applied to microscopic images of the cutaneous leishmanial amastigote stage. The proportion of properly identified cases is measured by accuracy, whereas the F1-score combines precision and recall to provide a balanced assessment of model performance. The Matthews correlation coefficient (MCC) and the Cohen’s kappa coefficient were employed as assessment measures.
The DenseNet-201 model exhibited notable performance by achieving an accuracy of approximately 0.99146911 and an F1 score of about 0.99102. These metrics suggest that the model accurately classified 99.14% of the samples in the dataset while maintaining a commendable balance between precision and recall, as evidenced by the high F1 score. EfficientNet-b0, while slightly trailing behind DenseNet-201 in terms of accuracy with approximately 0.990727, demonstrated a strong F1 score of about 0.990129. This signifies that the model maintained an excellent trade-off between precision and recall, highlighting its robust classification capabilities. MobileNet-v2 showcased admirable performance, with an accuracy of about 0.98738748 and an F1 score of approximately 0.986638. Despite slightly lower accuracy than the aforementioned models, MobileNet-v2 still managed to classify 98.74% of the samples accurately and upheld a high F1-score, signifying its effectiveness in classification tasks. ResNet-101 delivered robust results, with an accuracy of around 0.9851632 and an F1 score of about 0.984165. This model's high accuracy and balanced F1 score underscore its proficiency in accurate sample classification. Xception, akin to MobileNet-v2, achieved commendable accuracy, approximately 0.987757163, and a well-balanced F1-score of about 0.986926. This outcome reflects Xception's effectiveness in the classification task, as it maintained a high level of accuracy and a harmonious balance between precision and recall.
These tables present the results of the various deep learning models when applied to microscopic images of the cutaneous leishmanial amastigote stage. The assessment metrics utilised, the Matthews correlation coefficient (MCC) and Cohen's kappa coefficient, provide information about the agreement between predicted and actual classifications. DenseNet-201 outperformed the other architectures, with an MCC of 0.98302 and a Cohen's kappa coefficient of 0.98289. With an MCC of 0.98148 and a Cohen's kappa coefficient of 0.98136, EfficientNet-b0 came in second. These results reveal that both architectures had high agreement in identifying the images, demonstrating their effectiveness in this task.

4.2. Comparison of the Model Performances

The evaluation and comparison of model performance using a five-fold cross-validation approach has revealed significant findings: DenseNet-201 achieved superior results in terms of accuracy, sensitivity, NPV, F1-score, MCC, and Cohen's kappa, with values of 0.9914691, 0.9952845, 0.9958081, 0.99102, 0.98302, and 0.98289, respectively. In terms of specificity and AUC, Xception achieved the best results with 0.990890819 and 0.999481, respectively. EfficientNet-b0 achieved the best result in terms of PPV with 0.9901014. This shows that DenseNet-201 achieved the best overall result, followed by Xception, as shown in Table 6.

4.3. Discussion

We compared our findings to those of prior research on the amastigote stage of CL. Górriz et al.'s [30] research focused on the promastigote and amastigote stages of Leishmania infantum, Leishmania major, and Leishmania braziliensis and used a U-Net model for training. Their model had a precision of 0.757, which means that 75.7% of the predicted amastigote occurrences of cutaneous leishmania were correct. With a recall of 0.823, the model correctly detected 82.3% of the actual amastigote occurrences. Zare et al. [31] created an algorithm based on the Viola–Jones approach with an AdaBoost classifier using a dataset of 300 images of positive and negative cutaneous leishmaniasis. The results showed that detecting macrophages infected with leishmania parasites had a 65% recall and 50% precision, while identifying amastigotes outside of macrophages had a 52% recall and 71% precision. In the work of Limon Jacques [32], microscopic images captured with a smartphone at a magnification of 50× were pre-processed and subjected to pre-training using the K-means algorithm, histogram thresholding, and the U-Net structure for segmenting promastigote and amastigote forms in cutaneous leishmaniasis. The precision and recall values for amastigote stages were 61.07% and 87.90%, respectively, based on the segmentation data. In contrast, the precision and recall scores for promastigotes were 91.0% and 47.14%, respectively.
In the study conducted by Maqsood et al. [46], experimental evaluations were performed on the benchmark NIH Malaria Dataset, and the results reveal that the proposed Xception model achieved an accuracy of 0.9494 and an F1-score of 0.9494 in detecting malaria from microscopic blood smears, while DenseNet-201 achieved an accuracy of 0.9054 and an F1-score of 0.9052. Biswal et al. [47] conducted an experiment with the MobileNet-v2 neural network model on a Kaggle dataset of 12,444 augmented images exhibiting diverse blood cell types classified into four separate classes. Pre-processing steps included data refining and image resizing to a consistent size of 128 × 128 pixels. In the study, two adaptive optimization methods, Adam and stochastic gradient descent (SGD), were used. The Adam optimiser produced an accuracy of 0.920, while the SGD optimiser provided an accuracy of 0.90, reflecting how well the model performed in categorizing the blood cell images. In research carried out by Hiremath [48], an investigation was conducted to assess the efficacy of various models for the classification of histopathological breast cancer images into benign and malignant categories at different magnification levels, specifically at 40×, 100×, 200×, and 400×. The models employed for this task included EfficientNet-b0 and EfficientNet with HSV colour transformation. The research utilised an openly accessible dataset sourced from Kaggle, comprising a total of 7909 images, with 2480 representing benign cases and 5429 representing malignant cases. The performance evaluation of EfficientNet-b0 was based on the accuracy metric, resulting in classification accuracy rates of 86%, 88%, 88%, and 83% for the respective magnification levels. Xu and colleagues [49] analysed a dataset consisting of 4011 IVCM images captured from a total of 48 eyes. These eyes were categorised into different groups, including 35 eyes with keratitis, 7 eyes with dry eye, and 6 eyes with pterygium. The original IVCM images were standardised to a resolution of 224 × 224 pixels. Deep transfer learning was conducted using various neural network models, one of them being Residual Network-101 (ResNet-101). The findings revealed that ResNet-101 exhibited a notable level of accuracy, achieving a score of 0.9283.

5. Conclusions

To summarise, the top-performing deep learning models for identifying microscopic images of the cutaneous leishmanial amastigote stage were DenseNet-201 and EfficientNet-b0. DenseNet-201 displayed a well-balanced trade-off between precision and recall, demonstrated by its high F1 score and an accuracy of roughly 99.14%. It also had a high level of agreement with actual classifications, as evidenced by its MCC and Cohen’s kappa coefficient. EfficientNet-b0 followed closely behind with an accuracy of approximately 99.07% and comparable performance in terms of F1 score, MCC, and Cohen’s kappa coefficient. All models demonstrated their ability to effectively categorise the data while maintaining a harmonious mix of precision and recall.
Finally, when the deep learning models were tested using microscopic images of the cutaneous leishmanial amastigote stage, DenseNet-201 and EfficientNet-b0 excelled at accurately identifying the samples, displaying good agreement with the real classifications. These models demonstrated strong classification abilities while maintaining a commendable combination of precision and recall. However, when choosing the best model for practical applications, it is critical to examine the specific requirements and limits of the given task.

Author Contributions

O.M., methodology; O.M., software; A.M.A. and E.G., validation; A.M.A. and E.G., formal analysis; K.S., resources; A.M.A. and E.G., data curation; O.M., A.M.A. and E.G., writing—original draft preparation; K.S., writing—review and editing; K.S., supervision; K.S., project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The Ethical Committee at Emhammed Almgarif Health Center, situated in the Al-Murqub district, provided approval to conduct the research (EMHC. REF. 22.08.1063 Date: 5 August 2022).

Informed Consent Statement

The Ethics Appraisal Committee of Al-Ethics Marqab’s Health Service has approved the study’s ethical conduct (EMHC 1063/8/22). All study participants have provided informed consent. Parental or guardian consent was acquired for participants under the age of 18.

Data Availability Statement

The data are available upon request.

Acknowledgments

Name: Abu Bakr Muhammad Hussain Cambo. Specialty: Dermatology. Email: abubaker8@gmail.com. Affiliation: Muhammad Al-Maqrif polyclinic. Name: Ahmet Ilhan. Specialty: Computer Engineering. Email: ahmet.ilhan@neu.edu.tr. Affiliation: Near East University, Faculty of Engineering, Department of Computer Engineering. I would like to express my sincere gratitude to him for his invaluable contributions and unwavering dedication. His remarkable efforts have left a lasting impact and are deeply appreciated.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Boucinha, C.; Andrade-Neto, V.V.; Ennes-Vidal, V.; Branquinha, M.H.; Santos, A.L.S.D.; Torres-Santos, E.C.; d’Avila-Levy, C.M. A Stroll through the History of Monoxenous Trypanosomatids Infection in Vertebrate Hosts. Front. Cell. Infect. 2022, 12, 68. [Google Scholar] [CrossRef] [PubMed]
  2. Kostygov, A.Y.; Frolov, A.O.; Malysheva, M.N.; Ganyukova, A.I.; Chistyakova, L.V.; Tashyreva, D.; Tesařová, M.; Spodareva, V.V.; Režnarová, J.; Macedo, D.H.; et al. Vickermania gen. nov., trypanosomatids that use two joined flagella to resist midgut peristaltic flow within the fly host. BMC Biol. 2020, 18, 187. [Google Scholar] [CrossRef] [PubMed]
  3. Arkant, C.; Karabulut, A.İ.; Koçer, Y.; Yeşilnacar, M.İ. Evaluation of Cutaneous Leishmaniasis cases in Şanlıurfa in 2019–2022 using geographic information systems. Intercont. Geoinf. Days 2022, 5, 124–127. [Google Scholar]
  4. Abadías-Granado, I.; Diago, A.; Cerro, P.A.; Palma-Ruiz, A.M.; Gilaberte, Y. Cutaneous and Mucocutaneous Leishmaniasis. Leishmaniasis cutánea y mucocutánea. Actas Dermo-Sifiliogr. 2021, 112, 601–618. [Google Scholar] [CrossRef] [PubMed]
  5. Padhi, T.R.; Das, S.; Sharma, S.; Rath, S.; Tripathy, D.; Panda, K.G.; Basu, S.; Besirli, C.G. Ocular parasitoses: A comprehensive review. Surv. Ophthalmol. 2017, 62, 161–189. [Google Scholar] [CrossRef] [PubMed]
  6. Perales-González, A.; Pérez-Garza, D.M.; Garza-Dávila, V.F.; Ocampo-Candiani, J. Cutaneous leishmaniasis by a needlestick injury, an occupational infection. PLoS Negl. Trop. Dis. 2023, 17, e0011150. [Google Scholar] [CrossRef] [PubMed]
  7. Gurel, M.S.; Tekin, B.; Uzun, S. Cutaneous leishmaniasis: A great imitator. Clin. Dermatol. 2020, 38, 140–151. [Google Scholar] [CrossRef] [PubMed]
  8. Grinnage-Pulley, T.; Scott, B.; Petersen, C.A. A Mother’s Gift: Congenital Transmission of Trypanosoma and Leishmania Species. PLoS Pathog. 2016, 12, e1005302. [Google Scholar] [CrossRef]
  9. Berriatua, E.; Maia, C.; Conceição, C.; Özbel, Y.; Töz, S.; Baneth, G.; Pérez-Cutillas, P.; Ortuño, M.; Muñoz, C.; Jumakanova, Z.; et al. Leishmaniases in the European Union and Neighboring Countries. Infect. Dis. 2021, 27, 1723–1727. [Google Scholar] [CrossRef]
  10. Nafari, A.; Cheraghipour, K.; Sepahvand, M.; Shahrokhi, G.; Gabal, E.; Mahmoudvand, H. Nanoparticles: New agents toward treatment of leishmaniasis. Parasite Epidemiol. Control. 2020, 10, e00156. [Google Scholar] [CrossRef]
  11. Uzun, S.; Gürel, M.S.; Durdu, M.; Akyol, M.; Fettahlıoğlu Karaman, B.; Aksoy, M.; Aytekin, S.; Borlu, M.; İnan Doğan, E.; Doğramacı, Ç.A.; et al. Clinical practice guidelines for the diagnosis and treatment of cutaneous leishmaniasis in Turkey. Int. J. Dermatol. 2018, 57, 973–982. [Google Scholar] [CrossRef] [PubMed]
  12. Reithinger, R.; Dujardin, J.C.; Louzir, H.; Pirmez, C.; Alexander, B.; Brooker, S. Cutaneous leishmaniasis. Lancet Infect. Dis. 2007, 7, 581–596. [Google Scholar] [CrossRef] [PubMed]
  13. Ramot, Y.; Zlotogorski, A. Multilesional cutaneous leishmaniasis. CMAJ Can. Med. Assoc. J. 2016, 188, 1034. [Google Scholar] [CrossRef] [PubMed]
  14. Rajkomar, A.; Dean, J.; Kohane, I. Machine Learning in Medicine. N. Engl. J. Med. 2019, 380, 1347–1358. [Google Scholar] [CrossRef]
  15. Ong, W.; Zhu, L.; Tan, Y.L.; Teo, E.C.; Tan, J.H.; Kumar, N.; Hallinan, J.T.P.D. Application of Machine Learning for Differentiating Bone Malignancy on Imaging: A Systematic Review. Cancers 2023, 15, 1837. [Google Scholar] [CrossRef] [PubMed]
  16. Kumar, A.; Gadag, S.; Nayak, U.Y. The Beginning of a New Era: Artificial Intelligence in Healthcare. Adv. Pharm. Bull. 2021, 1, 414–425. [Google Scholar] [CrossRef] [PubMed]
  17. Cao, G.; Song, W.; Zhao, Z. Gastric cancer diagnosis with mask R-CNN. In 2019 11th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC). Hum.-Mach. Syst. 2019, 1, 60–63. [Google Scholar] [CrossRef]
  18. Gunjan, V.K.; Singh, N.; Shaik, F.; Roy, S. Detection of lung cancer in CT scans using grey wolf optimization algorithm and recurrent neural network. Health Technol. 2022, 12, 1197–1210. [Google Scholar] [CrossRef]
  19. Niu, Y.; Gu, L.; Zhao, Y.; Lu, F. Explainable Diabetic Retinopathy Detection and Retinal Image Generation. IEEE J. Biomed. Health Inform. 2022, 26, 44–55. [Google Scholar] [CrossRef]
  20. Pal, J.; Das, S. A Convolutional Neural Network (CNN)-Based Pneumonia Detection Using Chest X-ray Images. In Using Multimedia Systems, Tools, and Technologies for Smart Healthcare Services; IGI Global: Hershey, PA, USA, 2023; pp. 63–82. [Google Scholar]
  21. Ozturk, S. Convolutional Neural Networks for Medical Image Processing Applications, 1st ed.; CRC Press: Boca Raton, FL, USA, 2022. [Google Scholar] [CrossRef]
  22. Uppamma, P.; Bhattacharya, S. Deep Learning and Medical Image Processing Techniques for Diabetic Retinopathy: A Survey of Applications, Challenges, and Future Trends. J. Healthc. Eng. 2023, 2023, 2728719. [Google Scholar] [CrossRef]
  23. Vaz, J.M.; Balaji, S. Convolutional neural networks (CNNs): Concepts and applications in pharmacogenomics. Mol. Divers. 2021, 25, 1569–1584. [Google Scholar] [CrossRef] [PubMed]
  24. Li, S.; Liu, Z.Q.; Chan, A.B. Heterogeneous multi-task learning for human pose estimation with deep convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Columbus, OH, USA, 23–28 June 2014; Volume 113, pp. 482–489. [Google Scholar] [CrossRef]
  25. Guo, J.; Li, C.; Sun, Z.; Li, J.; Wang, P.A. Deep Attention Model for Environmental Sound Classification from Multi-Feature Data. Appl. Sci. 2022, 12, 5988. [Google Scholar] [CrossRef]
  26. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Chen, T. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377. [Google Scholar] [CrossRef]
  27. Salehi, A.W.; Khan, S.; Gupta, G.; Alabduallah, B.I.; Almjally, A.; Alsolai, H.; Mellit, A. A Study of CNN and Transfer Learning in Medical Imaging: Advantages, Challenges, Future Scope. Sustainability 2023, 15, 5930. [Google Scholar] [CrossRef]
  28. Modarres, C.; Astorga, N.; Droguett, E.L.; Meruane, V. Convolutional neural networks for automated damage recognition and damage type identification. Control Health Monit. 2018, 25, e2230. [Google Scholar] [CrossRef]
  29. Melville, J.; Alguri, K.S.; Deemer, C.; Harley, J.B. Structural damage detection using deep learning of ultrasonic guided waves. AIP Conf. Proc. 2018, 1949, 230004. [Google Scholar] [CrossRef]
  30. Górriz, M.; Aparicio, A.; Raventós, B.; Vilaplana, V.; Sayrol, E.; López-Codina, D. Leishmaniasis parasite segmentation and classification using deep learning. In Proceedings of the Articulated Motion and Deformable Objects: 10th International Conference, AMDO 2018, Palma de Mallorca, Spain, 12–13 July 2018; Springer International Publishing: Cham, Switzerland, 2018; Volume 10, pp. 53–62. [Google Scholar] [CrossRef]
  31. Zare, M.; Akbarialiabad, H.; Parsaei, H.; Asgari, Q.; Alinejad, A.; Bahreini, M.S.; Hosseini, S.H.; Ghofrani-Jahromi, M.; Shahriarirad, R.; Amirmoezzi, Y.; et al. A machine learning-based system for detecting leishmaniasis in microscopic images. BMC Infect. Dis. 2022, 22, 48. [Google Scholar] [CrossRef]
  32. Limon Jacques, S.M. Image Analysis and Classification Techniques for Leishmaniosis Detection. Bachelor’s Thesis, Universitat Politècnica de Catalunya, Barcelona, Spain, 2017. [Google Scholar]
  33. Quan, Q.; Wang, J.; Liu, L. An effective convolutional neural network for classifying red blood cells in malaria diseases. Interdiscip. Sci. Comput. Life Sci. 2020, 12, 217–225. [Google Scholar] [CrossRef]
  34. Kassim, Y.M.; Palaniappan, K.; Yang, F.; Poostchi, M.; Palaniappan, N.; Maude, R.J.; Jaeger, S. Clustering-based dual deep learning model for detecting red blood cells in malaria diagnostic smears. IEEE J. Biomed. Health Inform. 2020, 25, 1735–1746. [Google Scholar] [CrossRef]
  35. Pattanaik, P.A.; Mittal, M.; Khan, M.Z.; Panda, S.N. Malaria detection using deep residual networks with mobile microscopy. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 1700–1705. [Google Scholar] [CrossRef]
  36. Pereira, A.S.; Mazza, L.O.; Pinto, P.C.; Gomes, J.G.R.; Nedjah, N.; Vanzan, D.F.; Pyrrho, A.S.; Soares, J.G. Deep convolutional neural network applied to Trypanosoma cruzi detection in blood samples. Int. J. Bio-Inspired Comput. 2022, 19, 1–17. [Google Scholar] [CrossRef]
  37. Morais, M.C.C.; Silva, D.; Milagre, M.M.; de Oliveira, M.T.; Pereira, T.; Silva, J.S.; Nakaya, H.I. Automatic detection of the parasite Trypanosoma cruzi in blood smears using a machine learning approach applied to mobile phone images. PeerJ 2022, 10, e13470. [Google Scholar] [CrossRef]
  38. Jomtarak, R.; Kittichai, V.; Kaewthamasorn, M.; Thanee, S.; Arnuphapprasert, A.; Naing, K.M.; Chuwongin, S. Mobile Bot Application for Identification of Trypanosoma evansi Infection through Thin-Blood Film Examination Based on Deep Learning Approach. In Proceedings of the 2023 IEEE International Conference on Cybernetics and Innovations (ICCI), Phetchaburi, Thailand, 30–31 March 2023; pp. 1–7. [Google Scholar]
  39. Seyer Cagatan, A.; Taiwo Mustapha, M.; Bagkur, C.; Sanlidag, T.; Ozsahin, D.U. An Alternative Diagnostic Method for C. neoformans: Preliminary Results of Deep-Learning Based Detection Model. Diagnostics 2022, 13, 81. [Google Scholar] [CrossRef]
  40. Saponara, S.; Elhanashi, A.; Gagliardi, A. Reconstruct fingerprint images using deep learning and sparse autoencoder algorithms. In Real-Time Image Processing and Deep Learning; SPIE: Bellingham, WA, USA, 2021; Volume 11736, pp. 9–18. [Google Scholar]
  41. Salim, F.; Saeed, F.; Basurra, S.; Qasem, S.N.; Al-Hadhrami, T. DenseNet-201 and Xception Pre-Trained Deep Learning Models for Fruit Recognition. Electronics 2023, 12, 3132. [Google Scholar] [CrossRef]
  42. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
  43. Qayyum, W.; Ehtisham, R.; Bahrami, A.; Camp, C.; Mir, J.; Ahmad, A. Assessment of Convolutional Neural Network Pre-Trained Models for Detection and Orientation of Cracks. Materials 2023, 16, 826. [Google Scholar] [CrossRef]
  44. Khan, A.; Khan, M.A.; Javed, M.Y.; Alhaisoni, M.; Tariq, U.; Kadry, S.; Nam, Y. Human Gait Recognition Using Deep Learning and Improved Ant Colony Optimization. Comput. Mater. Contin. 2022, 70. [Google Scholar] [CrossRef]
  45. Nugroho, E.S.; Ardiyanto, I.; Nugroho, H.A. Systematic literature review of dermoscopic pigmented skin lesions classification using convolutional neural network (CNN). Int. J. Adv. Intell. Inform. 2023, 9, 363–382. [Google Scholar] [CrossRef]
  46. Maqsood, A.; Farid, M.S.; Khan, M.H.; Grzegorzek, M. Deep malaria parasite detection in thin blood smear microscopic images. Appl. Sci. 2021, 11, 2284. [Google Scholar] [CrossRef]
  47. Biswal, R.; Mallick, P.K.; Panda, A.R.; Chae, G.S.; Mishra, A. White Blood Cell Classification Using Pre-Trained Deep Neural Networks and Transfer Learning. In Proceedings of the 2023 1st International Conference on Circuits, Power and Intelligent Systems (CCPIS), Bhubaneswar, India, 1–3 September 2023; pp. 1–6. [Google Scholar]
  48. Hiremath, N.V. Breast Cancer Detection and Classification using EfficientNet B0 and EfficientNet B0-HSV. Ph.D. Thesis, National College of Ireland, Dublin, Ireland, 2022. [Google Scholar]
  49. Xu, F.; Qin, Y.; He, W.; Huang, G.; Lv, J.; Xie, X.; Diao, C.; Tang, F.; Jiang, L.; Lan, R.; et al. A deep transfer learning framework for the automated assessment of corneal inflammation on in vivo confocal microscopy images. PLoS ONE 2021, 16, e0252653. [Google Scholar] [CrossRef]
Figure 1. The block diagram of the proposed system.
Figure 2. (a): cutaneous Leishmania positive. (b): cutaneous Leishmania negative.
Figure 3. Confusion matrices for the classification of microscopic images using pre-trained models.
Figure 4. AUC scores of the pre-trained models.
Figure 5. Original image, CAM, and overlaid image for each class.
Table 1. Performance of the DenseNet-201 model across the five cross-validation folds (ACC: accuracy; SV: sensitivity; SP: specificity; PPV: positive predictive value; NPV: negative predictive value; MCC: Matthews correlation coefficient; CK: Cohen's kappa; AUC: area under the ROC curve).

Folds     ACC        SV         SP         PPV        NPV        F1-Score   MCC        CK         AUC
Fold 1    0.9888683  1          0.979021   0.976834   1          0.98828    0.977927   0.977683   0.999820
Fold 2    0.9907407  1          0.9823322  0.980916   1          0.99037    0.981624   0.981455   0.997786
Fold 3    0.9907236  1          0.9825175  0.9806202  1          0.99022    0.981568   0.981399   0.999931
Fold 4    0.9907236  0.9847909  0.9963768  0.9961538  0.9856631  0.99044    0.981492   0.981431   0.999766
Fold 5    0.9962894  0.9916318  1          1          0.9933775  0.9958     0.992504   0.992476   0.999944
Average   0.9914691  0.9952845  0.9880495  0.9869048  0.9958081  0.99102    0.98302    0.98289    0.999450
Table 2. Performance of the EfficientNet-b0 model across the five cross-validation folds.

Folds     ACC        SV         SP         PPV        NPV        F1-Score   MCC        CK         AUC
Fold 1    0.9981447  1          0.9965035  0.9960631  1          0.99803    0.99628    0.99628    1.000000
Fold 2    0.9907407  1          0.9823322  0.980916   1          0.99037    0.98162    0.98146    0.996322
Fold 3    0.9907236  0.9841897  0.9965035  0.996      0.9861592  0.99006    0.98143    0.98136    0.997609
Fold 4    0.9851577  0.9923954  0.9782609  0.9775281  0.9926471  0.98491    0.97042    0.97031    0.999656
Fold 5    0.9888683  0.9748954  1          1          0.9803922  0.98729    0.97764    0.97739    1.000000
Average   0.990727   0.9902961  0.99072    0.9901014  0.9918397  0.99013    0.98148    0.98136    0.998717
Table 3. Performance of the MobileNet-v2 model across the five cross-validation folds.

Folds     ACC        SV         SP         PPV        NPV        F1-Score   MCC        CK         AUC
Fold 1    0.9814471  0.9841897  0.979021   0.9764706  0.9859155  0.98031    0.9628     0.96277    0.998231
Fold 2    0.9907407  0.9844358  0.9964664  0.996063   0.986014   0.99022    0.98149    0.98143    0.999237
Fold 3    0.9925788  0.9960474  0.9895105  0.9882353  0.9964789  0.99213    0.98514    0.98511    0.999572
Fold 4    0.9814471  0.9847909  0.9782609  0.9773585  0.9854015  0.98106    0.96291    0.96288    0.999187
Fold 5    0.9907236  0.9832636  0.9966667  0.9957627  0.9867987  0.98947    0.98124    0.98118    0.998075
Average   0.9873875  0.9865455  0.9879851  0.986778   0.9881217  0.98664    0.97471    0.97467    0.998860
Table 4. Performance of the ResNet-101 model across the five cross-validation folds.

Folds     ACC        SV         SP         PPV        NPV        F1-Score   MCC        CK         AUC
Fold 1    0.9795918  0.9644269  0.993007   0.9918699  0.9692833  0.97796    0.95929    0.95896    0.998784
Fold 2    0.9851852  1          0.9717314  0.9698113  1          0.98467    0.97077    0.97034    0.998790
Fold 3    0.9814471  0.9802372  0.9825175  0.9802372  0.9825175  0.98024    0.96275    0.96275    0.998452
Fold 4    0.9907236  0.9923954  0.9891304  0.9886364  0.9927273  0.99051    0.98144    0.98144    0.999160
Fold 5    0.9888683  0.9874477  0.99       0.9874477  0.99       0.98745    0.97745    0.97745    0.995328
Average   0.9851632  0.9849014  0.9852773  0.9836005  0.9869056  0.98417    0.97034    0.97019    0.998103
Table 5. Performance of the Xception model across the five cross-validation folds.

Folds     ACC          SV           SP           PPV          NPV          F1-Score   MCC        CK         AUC
Fold 1    0.987012987  0.976284585  0.996503497  0.995967742  0.979381443  0.986028   0.974068   0.973898   0.999820
Fold 2    0.994444444  0.996108949  0.992932862  0.992248062  0.996453901  0.994175   0.988872   0.988865   0.999698
Fold 3    0.975881262  0.960474308  0.98951049   0.987804878  0.965870307  0.973948   0.951828   0.951504   0.998480
Fold 4    0.987012987  0.988593156  0.985507246  0.984848485  0.989090909  0.986717   0.97402    0.974013   0.999476
Fold 5    0.994434137  1            0.99         0.987603306  1            0.993763   0.988801   0.988738   0.999930
Average   0.987757163  0.9842922    0.990890819  0.989694495  0.986159312  0.986926   0.975518   0.975404   0.999481
Table 6. Comparison of pre-trained model performances (mean values over the five cross-validation folds).

Models           ACC          SV         SP           PPV          NPV          F1-Score   MCC        CK         AUC
DenseNet-201     0.9914691    0.9952845  0.9880495    0.9869048    0.9958081    0.99102    0.98302    0.98289    0.999450
EfficientNet-b0  0.990727     0.9902961  0.99072      0.9901014    0.9918397    0.99013    0.98148    0.98136    0.998717
MobileNet-v2     0.9873875    0.9865455  0.9879851    0.986778     0.9881217    0.98664    0.97471    0.97467    0.998860
ResNet-101       0.9851632    0.9849014  0.9852773    0.9836005    0.9869056    0.98417    0.97034    0.97019    0.998103
Xception         0.987757163  0.9842922  0.990890819  0.989694495  0.986159312  0.986926   0.975518   0.975404   0.999481
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
