Article

A Workflow for Computer-Aided Evaluation of Keloid Based on Laser Speckle Contrast Imaging and Deep Learning

Shuo Li 1,†, He Wang 2,†, Yiding Xiao 1, Mingzi Zhang 1, Nanze Yu 1, Ang Zeng 1 and Xiaojun Wang 1,*

1 Department of Plastic Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
2 Department of Neurological Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
J. Pers. Med. 2022, 12(6), 981; https://doi.org/10.3390/jpm12060981
Submission received: 24 April 2022 / Revised: 5 June 2022 / Accepted: 7 June 2022 / Published: 16 June 2022
(This article belongs to the Topic Complex Systems and Artificial Intelligence)

Abstract

A keloid results from abnormal wound healing, and blood perfusion and growth state differ among patients. Active monitoring and treatment of actively growing keloids at the initial stage can effectively inhibit keloid enlargement and has important medical and aesthetic implications. Laser speckle contrast imaging (LSCI) has been developed to measure the blood perfusion of keloids and correlates strongly with severity and prognosis. However, the LSCI-based method requires manual annotation and evaluation of the keloid, which is time consuming. Although many studies have designed deep-learning networks for the detection and classification of skin lesions, assessing keloid growth state remains challenging, especially with small samples. This retrospective study included 150 untreated keloid patients, with intensity images and blood-perfusion images obtained from LSCI. A workflow based on a cascaded vision transformer architecture is proposed, reaching a Dice coefficient of 0.895 for keloid segmentation (a 2% improvement over baseline), an error of 8.6 ± 5.4 perfusion units (PU) and a relative error of 7.8% ± 6.6% for blood-perfusion calculation, and an accuracy of 0.927 for growth-state prediction (a 1.4% improvement over baseline).

1. Introduction

Keloids are benign dermal fibroproliferative lesions that grow beyond the wound margin. Reddish, protuberant, solid, pruritic, and painful nodules or plaques on the skin are the common symptoms [1]. Despite advances in our understanding of wound healing and collagen metabolism, managing keloids remains a clinical challenge [2]. Timely discovery and treatment of progressive keloids can effectively inhibit the continued growth of keloids and improve both clinical prognosis and cosmetic effect.
The technique of laser speckle contrast imaging (LSCI) is based on an analysis of the reduction in speckle contrast and provides a real-time, noncontact measurement [3]. Many studies have used LSCI to evaluate the blood perfusion of keloids [4,5]. These studies have shown that keloids have significantly higher blood-perfusion levels than adjacent and nonadjacent normal skin, indicating their potential growth state.
However, LSCI-based blood-flow monitoring, a semi-automatic system, still has some drawbacks in clinical work [6]. Calculating keloid blood perfusion relies on manual delineation of the keloid boundary, which is time consuming and prone to mislabeling. In particular, when measuring the blood flow of multiple keloids with irregular boundaries and uneven skin pigmentation, missed labels and mislabels become more frequent.
Nowadays, with the rapid development of deep learning, computer-vision models show strong recognition ability in medical image analysis [7]. Notably, in dermatology and plastic surgery, deep learning is widely used to study skin with excellent performance, such as evaluating burn areas [8], differentiating multiple skin diseases [9], and diagnosing benign and malignant melanoma [10]. However, our literature review found no relevant research on keloid evaluation and prognosis prediction.
This paper aims to construct a deep-learning-based workflow, integrate multiple deep-learning and machine-learning models, eliminate gaps between upstream and downstream tasks, and provide more application scenarios. The major research contributions are described as follows:
(1) A cascaded vision transformer architecture is newly designed, which encodes and concatenates upstream and downstream features.
(2) An automatic workflow for clinical keloid evaluation is proposed, including keloid segmentation, blood-perfusion calculation, and prognosis prediction.
(3) Improved results are achieved compared with traditional convolutional neural networks.

2. Related Work

Few studies address keloid evaluation, especially using LSCI-based blood perfusion. This section summarizes recent and earlier work on skin lesions, along with related deep-learning methods.

2.1. Skin Lesion Segmentation

Recently, convolutional neural networks (CNNs) have been extensively explored for skin-lesion segmentation, and most works follow a U-shaped structure [11]. For example, Yang et al. proposed a deep hybrid convolutional network derived from UNet++ and EfficientNet [12].
Instead of modifying the U-shaped structure, many works have tried to add attention blocks in the network. Dong et al. proposed a feedback attention network based on a context encoder network for skin lesion segmentation [13]. Tao et al. proposed another attention block named channel spatial fast attention-guided filter for the densely connected convolution network [14]. Wu et al. proposed an adaptive dual attention module to refine the extracted features for upsampling operations [15].

2.2. Skin Lesion Classification

Classification is the most common task in computer vision, and several techniques have been developed since ImageNet was introduced [16]. Afza et al. used hybrid deep-feature selection and an extreme learning machine for multiclass skin lesion classification [17]. Arshad et al. used ResNet50 and ResNet101 to extract features and a skewness-controlled SVR approach to select the best features for multiclass skin lesion classification [18]. Moldovanu et al. used an ensemble of machine-learning techniques instead of deep-learning methods [19]. Yao et al. proposed a multi-weight loss function to classify skin lesions on an imbalanced small dataset [20].

2.3. Coupling Segmentation and Classification

Instead of segmenting and classifying independently, many methods couple segmentation and classification in one framework. Manzoor et al. proposed a lightweight approach using CNNs and feature fusion [21]. Amin et al. used a threshold method to segment the lesion and AlexNet and VGG16 to extract features for classifying benign and malignant cancer [22]. Khan et al. proposed an integrated framework combining machine-learning and CNN models for skin lesion detection and recognition [23].
By coupling segmentation and classification into the same framework, the classification accuracies were improved in the above studies.

2.4. Vision Transformer

The transformer was first used in natural language processing (NLP), where it has become one of the dominant models [24]. Recently, studies have shown that by cutting images into patches, the transformer network, namely the vision transformer (VIT), can outperform traditional CNNs in most computer-vision areas [25]. The vision transformer has become a state-of-the-art method with promising performance. Cao et al. adopted a pyramid transformer layer and constructed a global and local inter-pixel correlation learning network for skin lesion segmentation [26]. Wu et al. proposed FAT-Net, which integrates an extra transformer branch to capture long-range dependencies and global context information [27]. However, the VIT requires a large training dataset, which greatly limits its application in medical image analysis, where datasets are relatively small.

3. Materials and Methods

3.1. Participants

To develop the deep-learning-based workflow, we retrospectively enrolled patients diagnosed with keloids at Peking Union Medical College Hospital from 2019 to 2021. The inclusion criteria were (1) patients with at least one keloid; (2) patients with saved black-and-white intensity images and color blood-perfusion images from LSCI; (3) patients without prior treatment; (4) patients without systemic disease; and (5) patients with follow-up data. After screening, 150 patients were included in this study, and the collected keloids were from 7 regions: the back, chest, ear, face, hip, limb, and abdomen. This study was approved by the institutional review board (No. S-K196).

3.2. Device and Manual Annotation

Keloid perfusion was assessed by LSCI (PeriCam PSI System®; Perimed, Järfälla, Sweden), and image analysis was undertaken using the manufacturer’s software (PimSoft 1.2.2.0®; Perimed, Järfälla, Sweden). The software produced both a blood-perfusion image and an intensity image of the scanned area. We collected the blood-perfusion images and intensity images of the included patients. Blood perfusion was expressed in perfusion units (PU, mL/100 g/min). Two plastic surgeons manually annotated the intensity images using the LabelMe software [28]. Each intensity image was segmented into two classes: background (non-keloid) and keloid. The segmented images were used to supervise the training of an automated keloid segmentation model.

3.3. Establishment of the Automatic Segmentation Module

Studies have shown that the vision transformer is a strong inference network but, lacking the dense inductive biases of CNNs, it requires a large amount of training data. However, the training data in this study are limited, so well-pretrained weights are necessary before training. Instead of directly using weights pretrained on ImageNet, we adopted a pretraining method, the Masked AutoEncoder (MAE) [29]. For each image, 75% of the patches are masked as input, and the training target is to reconstruct the original image. With this pretraining method, the model learns a rich hidden representation and infers complex, holistic reconstructions. We followed the official implementation with 1600 pretraining epochs.
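As an illustration, the random patch masking at the heart of MAE can be written in a few lines of PyTorch. This is a minimal sketch of the shuffle-and-keep strategy used by the official implementation; the tensor shapes are assumptions for this example:

```python
import torch

def random_mask_patches(patches: torch.Tensor, mask_ratio: float = 0.75):
    """Keep a random 25% of patches; the encoder sees only these,
    and the decoder is trained to reconstruct the full image.

    patches: (batch, num_patches, patch_dim), e.g., (B, 1024, 768)
    """
    B, N, D = patches.shape
    num_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                 # one random score per patch
    ids_shuffle = noise.argsort(dim=1)       # random permutation of patches
    ids_keep = ids_shuffle[:, :num_keep]     # indices of visible patches
    visible = torch.gather(
        patches, dim=1,
        index=ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return visible, ids_keep
```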
The images are resized to 512 × 512 and then cut into patches of size 16 × 16, yielding (512/16)² = 1024 patches per image. A VIT with 12 layers was used as the encoder, and UPerNet was used as the segmentation decoder [30]. Manual annotations were used to supervise the neural network to learn to act as an expert and segment the keloid automatically. The model was implemented in PyTorch, with random vertical and horizontal flips and random rigid rotations of up to 15 degrees as data augmentation. The SGD optimizer was used for backpropagation, and the learning rate was exponentially attenuated from 0.02. With a total of 100 training epochs, the training process took two hours on one Nvidia 3090 GPU.
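The training setup above can be summarized in a short PyTorch sketch. The `KeloidSegmenter` class is a hypothetical stand-in for the ViT-base encoder with UPerNet decoder, `train_loader` is an assumed DataLoader yielding already-augmented image/mask pairs, and the decay factor `gamma` is an assumption; the remaining hyperparameters follow the text:

```python
import torch
import torch.nn as nn

class KeloidSegmenter(nn.Module):
    """Hypothetical stand-in for the 12-layer ViT encoder + UPerNet decoder."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.head = nn.Conv2d(3, num_classes, kernel_size=1)  # placeholder only
    def forward(self, x):
        return self.head(x)

model = KeloidSegmenter()
criterion = nn.CrossEntropyLoss()  # supervised by the manual annotations
optimizer = torch.optim.SGD(model.parameters(), lr=0.02, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.96)

def train(train_loader, epochs: int = 100):
    for _ in range(epochs):
        for images, masks in train_loader:  # (B, 3, 512, 512), (B, 512, 512)
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()
        scheduler.step()  # exponential decay of the learning rate from 0.02
```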

3.4. Establishment of the Blood-Perfusion Analysis Module

Although we could not embed our program inside the LSCI device to obtain the original blood-flow data, the LSCI software generates a heat map (from blue to red) of blood perfusion, so we can indirectly calculate the blood-perfusion value at each pixel from its color. After aligning the color bar with the blood-perfusion values, we established a mapping function between the RGB values of the image and the blood-perfusion value [31]. Then, based on the automatic segmentation results, we cropped the blood-flow image to the keloid area and calculated its average blood-perfusion value. The automatically computed blood-perfusion value was compared with the original blood-perfusion value obtained from the LSCI to compute the relative perfusion error.
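A minimal sketch of this color-to-perfusion inversion follows, assuming a jet-like blue-to-red palette and a known color-bar range (both are assumptions; the actual PimSoft palette and PU range must be read off the exported image):

```python
import numpy as np
import matplotlib.pyplot as plt

def mean_keloid_perfusion(rgb_img, keloid_mask, pu_min=0.0, pu_max=250.0, n=256):
    """Recover per-pixel perfusion (PU) from the heat map by nearest-color
    lookup against a sampled colormap, then average inside the keloid mask.

    rgb_img: (H, W, 3) uint8 blood-perfusion image
    keloid_mask: (H, W) boolean mask from the segmentation module
    """
    cmap = plt.get_cmap("jet")  # assumed palette for the PSI heat map
    lut = np.array([cmap(i / (n - 1))[:3] for i in range(n)]) * 255  # (n, 3)
    pixels = rgb_img[keloid_mask].astype(float)      # (m, 3) keloid pixels
    dist = ((pixels[:, None, :] - lut[None, :, :]) ** 2).sum(axis=-1)
    idx = dist.argmin(axis=1)                        # nearest colormap entry
    pu = pu_min + idx / (n - 1) * (pu_max - pu_min)  # map index back to PU
    return pu.mean()
```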

3.5. Establishment of the Evaluation Module

Considering that the blood perfusion inside the keloid reflects its intrinsic growth state and that keloids usually present an uneven perfusion distribution, we did not directly use the mean blood-perfusion value to evaluate the growth state. The blood-perfusion image was first masked by the segmentation result to remove irrelevant blood perfusion and then resized to 512 × 512. Similarly, the blood-perfusion image was cut into 1024 patches of size 16 × 16. A vision transformer was used to encode the blood-perfusion images.
To remove unnecessary regions and save memory, only patches containing blood perfusion were fed into the transformer. After perfusion encoding, the perfusion features were concatenated with the intensity features from the intensity encoder. Finally, a prediction decoder with 4 transformer layers decoded the composite feature and output a three-class prediction: regressive, stable, or progressive. Patients reported the keloid growth stage based on whether the keloid grew smaller, stayed the same size, or grew larger over the previous year [5]. Training used automatic augmentation with random erasing of 25% of pixels [32,33]. The initial learning rate was set to 0.001 and halved at epochs 50, 100, and 200. This training process took about two hours on one Nvidia 3090 GPU with a total of 400 training epochs.
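The patch-selection step can be sketched as follows; shapes follow the text (a masked 512 × 512 perfusion image cut into 1024 patches of 16 × 16), while the encoder and decoder calls in the trailing comment are hypothetical interfaces:

```python
import torch

def select_perfused_patches(perfusion_img: torch.Tensor, patch: int = 16):
    """Cut a masked (3, 512, 512) perfusion image into 16x16 patches and keep
    only those overlapping the keloid, i.e., with any nonzero perfusion."""
    C, H, W = perfusion_img.shape
    patches = perfusion_img.unfold(1, patch, patch).unfold(2, patch, patch)
    patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, C * patch * patch)
    keep = patches.abs().sum(dim=1) > 0  # masked-out patches are all zeros
    return patches[keep]                 # (num_kept, 768) instead of (1024, 768)

# Cascade sketch (hypothetical module names):
#   perf_tokens = perfusion_encoder(select_perfused_patches(masked_perfusion))
#   tokens      = torch.cat([perf_tokens, intensity_tokens], dim=1)
#   logits      = prediction_decoder(tokens)  # 4 transformer layers -> 3 classes
```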

3.6. Establishment of the Workflow and Evaluation Matrix

As shown in Figure 1, the purpose of this study was to improve automation and diagnostic ability in the clinical diagnosis and treatment of keloids. Three diagnosis modules were designed: an automatic segmentation module, a blood-perfusion analysis module, and an evaluation module. The whole workflow is a semi-automatic diagnosis procedure with a cascaded vision transformer structure. First, the unlabeled intensity images and blood-perfusion images of the patient are obtained manually and fed into the automatic segmentation module to segment the keloid. The segmentation result and the original blood-perfusion image are then passed to the blood-perfusion analysis module to obtain the average blood perfusion and the cropped blood-perfusion image. Finally, the cropped blood-perfusion image, together with the previously encoded intensity features, is fed into the evaluation module to evaluate the keloid growth state.
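Put together, the workflow reduces to a short driver. This is a sketch with hypothetical module interfaces tying together the functions sketched in Sections 3.3, 3.4 and 3.5, not the authors' code:

```python
def evaluate_keloid(intensity_img, perfusion_img, segmenter, evaluator):
    """End-to-end sketch: segment -> compute perfusion -> predict growth state.
    `segmenter` and `evaluator` stand in for the trained modules."""
    keloid_mask = segmenter(intensity_img)                       # Section 3.3
    mean_pu = mean_keloid_perfusion(perfusion_img, keloid_mask)  # Section 3.4
    masked = perfusion_img * keloid_mask[..., None]              # crop to keloid
    stage = evaluator(masked, intensity_img)  # regressive / stable / progressive
    return keloid_mask, mean_pu, stage
```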
The modules designed in this paper were trained and validated independently using 5-fold cross-validation. For each module, we selected an independent evaluation function.
For the automatic segmentation module, we used the DICE value as the evaluation criterion; the higher the DICE value, the more accurate the automatic segmentation:

$$\mathrm{DICE} = \frac{2\,|A \cap B|}{|A| + |B|}$$

where A and B represent the manual segmentation and the deep-learning segmentation, respectively.
For the difference in blood perfusion, we used the perfusion error and the relative perfusion error:

$$\text{perfusion error} = |A - B|$$

$$\text{relative perfusion error} = \frac{|A - B|}{\max(|A|, |B|)} \times 100\%$$

where A and B represent the blood-perfusion values obtained from the original LSCI software and from the blood-perfusion module, respectively.
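For concreteness, these evaluation functions translate directly into NumPy; the following is a straightforward transcription of the formulas above, not the authors' code:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DICE between binary masks: 2|A intersect B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def perfusion_errors(pu_lsci: float, pu_auto: float):
    """Absolute perfusion error (PU) and relative perfusion error (%)."""
    err = abs(pu_lsci - pu_auto)
    rel = err / max(abs(pu_lsci), abs(pu_auto)) * 100.0
    return err, rel
```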
For the final evaluation analysis, we reported sensitivity, specificity, Youden index, and accuracy for each category.

4. Results

4.1. Study Participants

In all, 150 keloids were included in this study. As shown in Table 1, there were 75 men and 75 women with a mean age of 30.6 ± 11.1 years, a mean keloid duration of 7.1 ± 3.9 years, and a mean blood perfusion of 129.9 ± 41.0 PU. There were 49 (32.7%) keloids in the regressive stage, 37 (24.7%) in the stable stage, and 64 (42.7%) in the progressive stage. Blood perfusion varied greatly across locations: keloids on the face had the maximum mean blood perfusion of 182.8 ± 23.0 PU, while keloids on the hip had the minimum of 103.0 ± 40.2 PU.

4.2. Segmentation Module

After five-fold cross-validation training, the mean DICE value was calculated over the five folds. The proposed method achieved a mean DICE value of 0.895. Examples of segmentation are shown in Figure 2: the first column shows the original intensity image, the second the manual annotation, and the third the automatic segmentation. The automatic segmentation has a smoother and more regular boundary than the manual segmentation.
Ablation studies (Table 2, segmentation) were performed across architectures and pretraining methods, with ResNet and HRNet as the baselines [34,35]. Without pretraining weights, the traditional CNNs outperformed the VIT by about 11% in DICE (HRnet-c1 0.671 vs. VIT 0.562). With ImageNet pretraining weights, CNN and VIT performance was close (HRnet-c1 0.875 vs. VIT 0.870). When using MAE pretraining, the VIT improved further to a DICE value of 0.895, outperforming the CNNs by 2%.

4.3. Blood-Perfusion Module

Blood perfusions were calculated based on the automatic segmentation results and manually acquired blood-perfusion images. The cropped image is shown in the last column in Figure 2, eliminating the effect of skin pigmentation. The proposed blood-perfusion module showed high accuracy, with a small mean perfusion error of 8.6 ± 5.4 PU and a relative perfusion error of 7.8% ± 6.6%.

4.4. Evaluation Module

After masking the blood-perfusion image with the segmentation result and cutting it into patches, the perfusion features were encoded; both perfusion features and intensity features were used for the final prediction. The automatic evaluation module achieved high prediction accuracies. Table 3 reports the sensitivities, specificities, Youden indices, and accuracies. The sensitivities of the three stages were 0.936, 0.892, and 0.939, and the specificities were 0.961, 0.965, and 0.964, respectively. The evaluation module showed a mean accuracy of 0.927.
Ablation studies (Table 2, prediction) showed that deep-learning methods could reach an accuracy of about 0.89 even without pretraining, indicating that the growth state is a relatively easy task to predict. As in segmentation, CNNs outperformed the VIT without pretraining (Resnet101 0.893 vs. VIT 0.887). After adding patch selection and feature concatenation, the VIT outperformed the CNNs by 1.4% (Resnet101 0.913 vs. VIT 0.927).

5. Discussion

Many studies have shown that LSCI can more accurately assess the growth state of keloids, and high blood perfusion often indicates a more active keloid [4,5,36]. The characteristics of our enrolled patients (Table 1) show that keloid blood flow differs significantly across sites, consistent with previous studies [4]. These studies established keloid evaluation methods using blood perfusion to avoid the subjective error of traditional VSS (Vancouver Scar Scale) or VAS (Visual Analogue Scale) scores. Our research establishes a structured keloid evaluation method based on LSCI.
High blood perfusion has been linked to microangiogenesis in damaged skin in previous investigations [37]. However, this conclusion appears to contradict other studies, which claimed that blood flow in keloids is impaired [38,39,40]. The perfusion level in keloids was previously estimated from single factors such as vascular density, transverse area, or shape, whereas our work used LSCI to measure perfusion directly. Blood flow within keloids measured by laser Doppler flowmetry (LDF) has been reported to correlate well with the overall scores of a validated grading system encompassing redness, elevation, hardness, itch, and pain [41]. Compared with LDF, LSCI offers higher image quality, repeatability, and scanning efficiency with lower variability, because it does not need to contact the target skin [42].
Artificial intelligence has been widely used in dermatology, such as psoriasis area evaluation, skin disease diagnosis, and melanoma diagnosis [9,10,43]. However, there is still no study on the AI-assisted diagnosis and treatment process for keloids. This study aims to simplify the clinical diagnosis and treatment process, assist clinical decision-making, and propose a highly automated workflow. Some patients have multiple and irregular keloids, which significantly prolongs manual delineation time and leads to mislabeling and missed labeling. The use of an automatic keloid segmentation module can considerably shorten the process and avoid human error. Through the workflow designed in this study, the computation time of one patient can be reduced to less than one second.
Previous research has shown that intensity images and blood-perfusion images from LSCI may not be congruent, and keloids with regular borders can exhibit heterogeneous blood perfusion [44,45]. Studies have also shown that LSCI performance in normal skin can be affected by skin pigmentation, which interferes with the direct use of LSCI images for predicting scar growth state [46]. In this study, based on accurate keloid segmentation, the LSCI image was cropped to the keloid region, eliminating the effect of skin pigmentation on the blood-perfusion images. Furthermore, accurate keloid segmentation lets the evaluation module focus on the internal inhomogeneity of the keloid and make a precise evaluation of the current growth state.
Ablation studies: The proposed cascaded transformer architecture was compared with traditional CNNs. Our results showed that CNNs outperformed the VIT in both segmentation and classification when pretraining weights were abandoned, while the VIT performed similarly to CNNs when the networks were well initialized. We believe this is because training a VIT requires a large amount of data, whereas this study included only 150 samples. With the MAE pretraining method, the VIT could be well initialized with a dataset-specific hidden representation, which significantly improved the results on the small dataset. The proposed patch selection and feature concatenation method integrates upstream and downstream features and saves more than 90% of the memory cost.
Strengths and limitations: This study used unlabeled intensity images and blood-perfusion images from LSCI as inputs, which are easy to obtain without additional manual intervention. Once trained with limited samples and manual annotations, the deep-learning model does not require excessive human intervention during testing. The workflow constructed in this paper therefore has low hardware, labor, and time costs, which favors research and adoption in more clinical scenarios. At the same time, manual delineation often has rough boundaries and cannot avoid subjective bias, whereas the proposed workflow offers robust reproducibility, avoids human error, and segments keloids with regular boundaries. Moreover, the deep-learning model can be updated as labeling quality and training samples increase, whereas human performance improves little once the learning curve saturates.
This study has some limitations, the most significant being its retrospective design. Because of the lack of prospective design, none of the enrolled patients underwent keloid excision surgery; postoperative keloids are more irregular, and our model may not segment them accurately, which would prevent correct evaluation of the growth state. In future studies, we will collect postoperative patient data to improve the model’s generalization. Additionally, the workflow designed in this study covers only primary feature extraction, such as blood-perfusion value and intrinsic growth state. Predicting other indicators, including pain and pruritus, will require more data to expand the application scenarios.

6. Conclusions

This paper proposed a workflow with a cascaded transformer architecture for evaluating keloid states based on LSCI, containing three modules: an automatic segmentation module, a blood-perfusion analysis module, and an evaluation module. The automatic segmentation module segments and locates the keloid, reaching a DICE value of 0.895. Based on the automatic segmentation results and the blood-perfusion images, the blood-perfusion analysis module crops the image to the keloid blood-perfusion area and calculates its average blood-perfusion value, achieving an error of 8.6 ± 5.4 PU and a relative error of 7.8% ± 6.6%. Finally, the blood-perfusion images are cut into patches and fed into the evaluation module to predict the growth state, achieving an average accuracy of 0.927. The workflow integrates segmentation, analysis, and evaluation, and can assist and simplify keloid assessment in future clinical work.

Author Contributions

S.L. was responsible for the measurement, experiments, and writing the manuscript. H.W. was mainly responsible for statistics. M.Z. and Y.X. contributed the methodology and technical help. N.Y. contributed to the collection of data. A.Z. helped revise the manuscript. X.W. contributed to the hardware support, systematic literature review, and manuscript editing. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by grants from Xiaojun Wang’s scientific research funds transferred from educational funds of Peking Union Medical College Hospital (No.OJ0301000006169).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (human research review committee) of Peking Union Medical College Hospital (PUMCH) (No. S-K196).

Informed Consent Statement

The requirement for informed consent was waived because of the retrospective nature of the study.

Data Availability Statement

The data and code used to support the findings of this study are available from the corresponding author on request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

LSCI—laser speckle contrast imaging; VIT—vision transformer; CNN—convolution neural networks; MAE—masked autoencoder.

References

1. Al-Attar, A.; Mess, S.; Thomassen, J.M.; Kauffman, C.L.; Davison, S.P. Keloid Pathogenesis and Treatment. Plast. Reconstr. Surg. 2006, 117, 286–300.
2. Trace, A.P.; Enos, C.W.; Mantel, A.; Harvey, V.M. Keloids and Hypertrophic Scars: A Spectrum of Clinical Challenges. Am. J. Clin. Dermatol. 2016, 17, 201–223.
3. Roustit, M.; Cracowski, J.-L. Non-invasive Assessment of Skin Microvascular Function in Humans: An Insight Into Methods. Microcirculation 2011, 19, 47–64.
4. Liu, Q.; Wang, X.; Jia, Y.; Long, X.; Yu, N.; Wang, Y.; Chen, B. Increased blood flow in keloids and adjacent skin revealed by laser speckle contrast imaging. Lasers Surg. Med. 2016, 48, 360–364.
5. Chen, C.; Zhang, M.; Yu, N.; Zhang, W.; Long, X.; Wang, Y.; Wang, X. Heterogeneous Features of Keloids Assessed by Laser Speckle Contrast Imaging: A Cross-Sectional Study. Lasers Surg. Med. 2020, 53, 865–871.
6. Katsui, S.; Inoue, Y.; Igari, K.; Toyofuku, T.; Kudo, T.; Uetake, H. Novel assessment tool based on laser speckle contrast imaging to diagnose severe ischemia in the lower limb for patients with peripheral arterial disease. Lasers Surg. Med. 2017, 49, 645–651.
7. Young, A.T.; Xiong, M.; Pfau, J.; Keiser, M.J.; Wei, M.L. Artificial Intelligence in Dermatology: A Primer. J. Investig. Dermatol. 2020, 140, 1504–1512.
8. Huang, S.; Dang, J.; Sheckter, C.C.; Yenikomshian, H.A.; Gillenwater, J. A systematic review of machine learning and automation in burn wound evaluation: A promising but developing frontier. Burns 2021, 47, 1691–1704.
9. Zhu, C.-Y.; Wang, Y.-K.; Chen, H.-P.; Gao, K.-L.; Shu, C.; Wang, J.-C.; Yan, L.-F.; Yang, Y.-G.; Xie, F.-Y.; Liu, J. A Deep Learning Based Framework for Diagnosing Multiple Skin Diseases in a Clinical Environment. Front. Med. 2021, 8, 626369.
10. Dick, V.; Sinz, C.; Mittlböck, M.; Kittler, H.; Tschandl, P. Accuracy of Computer-Aided Diagnosis of Melanoma. JAMA Dermatol. 2019, 155, 1291–1299.
11. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241.
12. Yang, C.-H.; Ren, J.-H.; Huang, H.-C.; Chuang, L.-Y.; Chang, P.-Y. Deep Hybrid Convolutional Neural Network for Segmentation of Melanoma Skin Lesion. Comput. Intell. Neurosci. 2021, 2021, 9409508.
13. Dong, Y.; Wang, L.; Cheng, S.; Li, Y. FAC-Net: Feedback Attention Network Based on Context Encoder Network for Skin Lesion Segmentation. Sensors 2021, 21, 5172.
14. Tao, S.; Jiang, Y.; Cao, S.; Wu, C.; Ma, Z. Attention-Guided Network with Densely Connected Convolution for Skin Lesion Segmentation. Sensors 2021, 21, 3462.
15. Wu, H.; Pan, J.; Li, Z.; Wen, Z.; Qin, J. Automated Skin Lesion Segmentation Via an Adaptive Dual Attention Module. IEEE Trans. Med. Imaging 2020, 40, 357–370.
16. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
17. Afza, F.; Sharif, M.; Khan, M.A.; Tariq, U.; Yong, H.-S.; Cha, J. Multiclass Skin Lesion Classification Using Hybrid Deep Features Selection and Extreme Learning Machine. Sensors 2022, 22, 799.
18. Arshad, M.; Khan, M.A.; Tariq, U.; Armghan, A.; Alenezi, F.; Javed, M.Y.; Aslam, S.M.; Kadry, S. A Computer-Aided Diagnosis System Using Deep Learning for Multiclass Skin Lesion Classification. Comput. Intell. Neurosci. 2021, 2021, 9619079.
19. Moldovanu, S.; Michis, F.A.D.; Biswas, K.C.; Culea-Florescu, A.; Moraru, L. Skin Lesion Classification Based on Surface Fractal Dimensions and Statistical Color Cluster Features Using an Ensemble of Machine Learning Techniques. Cancers 2021, 13, 5256.
20. Yao, P.; Shen, S.; Xu, M.; Liu, P.; Zhang, F.; Xing, J.; Shao, P.; Kaffenberger, B.; Xu, R.X. Single Model Deep Learning on Imbalanced Small Datasets for Skin Lesion Classification. IEEE Trans. Med. Imaging 2021, 41, 1242–1254.
21. Manzoor, K.; Majeed, F.; Siddique, A.; Meraj, T.; Rauf, H.T.; El-Meligy, M.A.; Sharaf, M.; Elgawad, A.E.E.A. A Lightweight Approach for Skin Lesion Detection Through Optimal Features Fusion. Comput. Mater. Contin. 2022, 70, 1617–1630.
22. Amin, J.; Sharif, A.; Gul, N.; Anjum, M.A.; Nisar, M.W.; Azam, F.; Bukhari, S.A.C. Integrated design of deep features fusion for localization and classification of skin cancer. Pattern Recognit. Lett. 2019, 131, 63–70.
23. Khan, M.A.; Sharif, M.; Akram, T.; Bukhari, S.A.C.; Nayak, R.S. Developed Newton-Raphson based deep features selection framework for skin lesion recognition. Pattern Recognit. Lett. 2019, 129, 293–303.
24. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017.
25. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
26. Cao, W.; Yuan, G.; Liu, Q.; Peng, C.; Xie, J.; Yang, X.; Ni, X.; Zheng, J. ICL-Net: Global and Local Inter-pixel Correlations Learning Network for Skin Lesion Segmentation. IEEE J. Biomed. Health Inform. 2022.
27. Wu, H.; Chen, S.; Chen, G.; Wang, W.; Lei, B.; Wen, Z. FAT-Net: Feature adaptive transformers for automated skin lesion segmentation. Med. Image Anal. 2021, 76, 102327.
28. Russell, B.C.; Torralba, A.; Murphy, K.P.; Freeman, W.T. LabelMe: A Database and Web-Based Tool for Image Annotation. Int. J. Comput. Vis. 2007, 77, 157–173.
29. He, K.; Chen, X.; Xie, S.; Li, Y.; Dollár, P.; Girshick, R. Masked autoencoders are scalable vision learners. arXiv 2021, arXiv:2111.06377.
30. Xiao, T.; Liu, Y.; Zhou, B.; Jiang, Y.; Sun, J. Unified perceptual parsing for scene understanding. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018.
31. Hunter, J.D. Matplotlib: A 2D graphics environment. Comput. Sci. Eng. 2007, 9, 90–95.
32. Cubuk, E.D.; Zoph, B.; Mane, D.; Vasudevan, V.; Le, Q.V. AutoAugment: Learning Augmentation Strategies from Data. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019.
33. Zhong, Z.; Zheng, L.; Kang, G.; Li, S.; Yang, Y. Random Erasing Data Augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020.
34. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
35. Sun, K.; Xiao, B.; Liu, D.; Wang, J. Deep high-resolution representation learning for human pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5693–5703.
36. Chen, J.; Zhuo, S.; Jiang, X.; Zhu, X.; Zheng, L.; Xie, S.; Lin, B.; Zeng, H. Multiphoton microscopy study of the morphological and quantity changes of collagen and elastic fiber components in keloid disease. J. Biomed. Opt. 2011, 16, 051305.
37. Shweiki, D.; Itin, A.; Soffer, D.; Keshet, E. Vascular endothelial growth factor induced by hypoxia may mediate hypoxia-initiated angiogenesis. Nature 1992, 359, 843–845.
38. Kischer, C.W.; Thies, A.C.; Chvapil, M. Perivascular myofibroblasts and microvascular occlusion in hypertrophic scars and keloids. Hum. Pathol. 1982, 13, 819–824.
39. Kurokawa, N.; Ueda, K.; Tsuji, M. Study of microvascular structure in keloid and hypertrophic scars: Density of microvessels and the efficacy of three-dimensional vascular imaging. J. Plast. Surg. Hand Surg. 2010, 44, 272–277.
40. Ueda, K.; Yasuda, Y.; Furuya, E.; Oba, S. Inadequate blood supply persists in keloids. Scand. J. Plast. Reconstr. Surg. Hand Surg. 2004, 38, 267–271.
41. Perry, D.M.; McGrouther, D.A.; Bayat, A. Current Tools for Noninvasive Objective Assessment of Skin Scars. Plast. Reconstr. Surg. 2010, 126, 912–923.
42. Roustit, M.; Millet, C.; Blaise, S.; Dufournet, B.; Cracowski, J. Excellent reproducibility of laser speckle contrast imaging to assess skin microvascular reactivity. Microvasc. Res. 2010, 80, 505–511.
43. Lin, Y.-L.; Huang, A.; Yang, C.-Y.; Chang, W.-Y. Measurement of Body Surface Area for Psoriasis Using U-net Models. Comput. Math. Methods Med. 2022, 2022, 7960151.
44. de Oliveira, G.V.; Chinkes, D.; Mitchell, C.; Oliveras, G.; Hawkins, H.K.; Herndon, D.N. Objective Assessment of Burn Scar Vascularity, Erythema, Pliability, Thickness, and Planimetry. Dermatol. Surg. 2006, 31, 48–58.
45. Lock-Andersen, J.; Wulf, H.C. Threshold level for measurement of UV sensitivity: Reproducibility of phototest. Photodermatol. Photoimmunol. Photomed. 1996, 12, 154–161.
46. Shih, B.B.; Allan, D.; de Gruijl, F.R.; Rhodes, L.E. Robust Detection of Minimal Sunburn in Pigmented Skin by 785 nm Laser Speckle Contrast Imaging of Blood Flux. J. Investig. Dermatol. 2015, 135, 1197–1199.
Figure 1. Proposed workflow for AI-assisted keloid segmentation and evaluation. We proposed a workflow with a cascaded vision transformer architecture for evaluating keloid states based on LSCI (laser speckle contrast imaging), comprising three modules: an automatic segmentation module, a blood-perfusion analysis module, and an evaluation module. The automatic segmentation module segments and locates the keloid, the blood-perfusion analysis module crops the blood-perfusion image to the keloid area, and the evaluation module evaluates the keloid growth state (regressive, stable, or progressive).
Figure 2. Segmentation and cropping results of the proposed modules. Three examples are shown. First column: original intensity images; second column: manual annotations; third column: automatic segmentations; fourth column: original blood-perfusion images; final column: cropped blood-perfusion images. The first row shows a keloid in the progressive stage; the middle row, the stable stage; the last row, the regressive stage.
Table 1. Demographic characteristics.

| Location | N | Male | Female | Age (years) | Duration (years) | Perfusion (PU) | Regressive | Stable | Progressive |
| Back | 34 | 18 | 16 | 33.6 ± 11.4 | 7.6 ± 3.9 | 127.6 ± 43.7 | 10 | 9 | 15 |
| Chest | 63 | 29 | 34 | 29.5 ± 12.3 | 6.8 ± 4.1 | 135.8 ± 35.6 | 14 | 16 | 33 |
| Ear | 8 | 4 | 4 | 26.9 ± 10.3 | 6.1 ± 5.0 | 157.8 ± 41.7 | 1 | 3 | 4 |
| Face | 6 | 4 | 2 | 27.8 ± 3.7 | 9.7 ± 4.1 | 182.8 ± 23.0 | 0 | 1 | 5 |
| Hip | 9 | 5 | 4 | 34.2 ± 8.8 | 6.0 ± 3.3 | 103.0 ± 40.2 | 7 | 1 | 1 |
| Limb | 18 | 8 | 10 | 30.3 ± 10.4 | 6.7 ± 4.0 | 105.2 ± 38.4 | 12 | 3 | 3 |
| Abdomen | 12 | 7 | 5 | 29.3 ± 8.2 | 8.0 ± 4.2 | 118.5 ± 32.7 | 5 | 4 | 3 |
| All | 150 | 75 | 75 | 30.6 ± 11.1 | 7.1 ± 3.9 | 129.9 ± 41.0 | 49 | 37 | 64 |
Table 2. Ablation study.

Segmentation:
| Method | Pretrain | DICE |
| Resnet50-upernet | None | 0.651 |
| HRnet-c1 | None | 0.671 |
| Resnet50-upernet | ImageNet | 0.861 |
| HRnet-c1 | ImageNet | 0.875 |
| VIT-base-upernet | None | 0.562 |
| VIT-base-upernet | ImageNet | 0.870 (−0.005) |
| VIT-base-upernet | MAE | 0.895 (+0.020) |

Prediction:
| Method | Pretrain | Accuracy |
| Resnet50 | None | 0.893 |
| Resnet101 | None | 0.893 |
| Resnet50 | ImageNet | 0.907 |
| Resnet101 | ImageNet | 0.913 |
| cascade-VIT | None | 0.887 |
| cascade-VIT | ImageNet | 0.913 (+0) |
| +patch selection | ImageNet | 0.927 (+0.014) |

Note: the last row of each task shows the best result.
Table 3. Results of the evaluation module.

| Metric | Regressive | Stable | Progressive | All |
| Sensitivity | 0.936 | 0.892 | 0.939 | – |
| Specificity | 0.961 | 0.965 | 0.964 | – |
| Youden | 0.897 | 0.856 | 0.904 | – |
| Accuracy | 0.953 | 0.947 | 0.953 | 0.927 |