Article

Fabric Defect Detection Method Using SA-Pix2pix Network and Transfer Learning

1 School of Mechanical Engineering and Automation, Wuhan Textile University, Wuhan 430200, China
2 Hubei Digital Textile Equipment Key Laboratory, Wuhan 430200, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(1), 41; https://doi.org/10.3390/app14010041
Submission received: 24 October 2023 / Revised: 9 December 2023 / Accepted: 13 December 2023 / Published: 20 December 2023

Abstract
This paper proposes a fabric defect detection algorithm based on the SA-Pix2pix network and transfer learning to address the insufficient accuracy of detecting defects in complex-pattern fabrics when sample data are limited. Its primary contribution lies in treating defects as disruptions to the fabric’s texture: a generative adversarial network reconstructs defective images, restoring them to normal fabric texture, after which dissimilarity between the reconstructed and defective images is computed and the result is segmented to localize defects. The limited ability of the generator’s convolutional layers to model long-range dependencies degrades reconstruction accuracy, and the loss function of the generative adversarial network handles image details poorly. To address both problems, the network structure and loss function are improved by introducing a self-attention mechanism, an L1 loss, and an improved structural loss. To counteract the decline in training accuracy when complex fabric defect samples are sparse, a channel-wise domain transfer learning approach is introduced that constrains the training of the target network through feature distributions, overcoming the overfitting caused by limited sample data. Three methods are compared experimentally on five distinct complex-pattern fabric defects. The results demonstrate that, compared with the two other defect detection methods, the approach advocated in this paper achieves superior detection accuracy in scenarios with limited sample data.

1. Introduction

Fabric defect detection stands as a pivotal component within textile quality control processes. Manual inspection methods fall short of meeting the demands of actual production. The integration of machine vision technology has emerged as a promising avenue, offering high-precision automatic inspection capabilities to supplant manual scrutiny. This transition holds substantial significance for enhancing textile production quality and efficiency [1].
In the realm of machine vision technology, the efficiency of fabric defect detection algorithms plays a decisive role in enabling automated detection. Current fabric defect detection methodologies can be broadly categorized into three types [2]. Firstly, frequency domain algorithms, such as wavelet transform [3,4,5] and Gabor filtering [6,7], leverage Fourier transformations [8] of images to extract spectral features for defect identification, and are primarily effective for simple background features, such as solid-color fabrics. However, their efficacy diminishes when detecting defects in fabrics with intricate textures. Secondly, model-based algorithms, including Gaussian mixture modeling [9,10], utilize autoregressive modeling to extract error regions in fabric images. While effective for simpler textures, these methods often falter in detecting defects within more complex fabric textures. Lastly, spatial domain-based methods, such as gray-level co-occurrence matrix algorithms [11,12], attempt to distinguish normal texture from defects by learning texture features. However, their extensive computational demands and lack of robustness hinder practical application. Despite significant progress in traditional fabric defect detection algorithms, limitations persist in terms of detection accuracy, generalization, and real-time performance. These shortcomings impede their practical deployment in production settings.
In recent years, the evolution of deep learning has propelled convolutional neural networks (CNNs) to remarkable heights in the domain of image recognition, fostering their swift integration across diverse industries. Within the sphere of defect detection, CNNs manifest through two predominant avenues: target detection algorithms and semantic segmentation.
Defect detection based on target detection algorithms unfolds along two principal trajectories, characterized by Regional Convolutional Neural Networks [13] (R-CNN) and models rooted in You Only Look Once [14] (YOLO) architectures. R-CNN, initially designed for specific target determination, has found application in fabric defect detection, notably through improved iterations like Faster R-CNN [15]. Notably, Chen et al. [16] introduced the Faster GG R-CNN model, employing a two-stage training approach that integrates Gabor filters into the Faster R-CNN framework via genetic algorithms and backpropagation. This innovation effectively addresses the challenge posed by complex background textures in fabric defect detection. In a parallel development, Jin et al. [17] enhanced YOLOv5 using attention mechanisms to augment defect-related features. They proposed a lightweight fabric defect detection method exhibiting promising results when tested on fabric blemish images. Despite the expediency and high accuracy of target detection-based defect detection methods, their efficacy diminishes when faced with fabrics adorned with intricate patterns and when training samples are scarce. The reliance of these methods on the volume of training samples poses a significant limitation, especially given the challenges associated with collecting defect samples. In scenarios where the number of training samples is insufficient, these methods may encounter limitations and potential failures.
The defect detection approach based on image segmentation involves dividing an image into distinct regions and extracting areas of interest. In this method, the fabric image to be inspected is reconstructed using a pre-trained reconstruction model, resulting in a reconstructed image. Subsequently, image segmentation is performed on the reconstructed image in comparison with the original fabric image to obtain a defect map containing defect information. This image reconstruction algorithm only requires training with normal and defect samples, eliminating the need to label defect samples and thus reducing the manual effort involved in annotating defects. Within image segmentation methods, autoencoder networks [18] or generative adversarial networks [19] are commonly employed for the pixel-level segmentation of fabric defect images. Autoencoders are typically trained in an unsupervised manner, using defect-free samples, avoiding the necessity to collect a large number of defect samples for network training. Liu et al. [20] introduced the CU-Net model, which incorporates attention mechanisms into the classical U-Net architecture and employs a novel composite loss function for training. They evaluated the model using the publicly available AITEX fabric defect dataset, demonstrating an enhancement in fabric defect detection accuracy. Meanwhile, Yu et al. [21] proposed an unsupervised defect detection algorithm based on a reconstruction network (ReNet-D). This approach involves dividing images into equally sized patches for training, enabling defect detection in texture images of varying complexities. Although autoencoder-based defect detection methods require only normal samples during network training, they can only learn the features of normal samples. During defect reconstruction, since defects may resemble the texture background of the original image, it can lead to the retention of excessive defect features in the reconstructed image, thereby reducing detection accuracy. Furthermore, autoencoders lose a significant amount of image information during training and cannot reconstruct clear images, making it challenging to apply this method to the detection of complex pattern fabric defects.
Since the inception of Generative Adversarial Networks (GANs), their application has been extensive across diverse domains including steel [22], photovoltaic panels [23], and medicine [24] for image defect detection. Liu et al. [25] introduced a GAN-based framework specifically tailored for fabric defect detection. This framework commences by training a GAN to explore the conditional distribution of defects across varied texture backgrounds, utilizing a multilevel GAN structure to synthesize imperceptible defects within defect-free fabric samples. Subsequently, the synthesized defects are integrated into specific locations within new fabric instances. The final stage involves training a deep semantic segmentation network tasked with detecting defects within the generated defect images. Zhang et al. [26] proposed a framework for generating adversarial networks based on attentional feature fusion for unsupervised detection of defects in dyed fabrics, which enhances the model’s attention to texture details through multilevel information fusion and attentional mechanisms to improve the detection of defects in yarn-dyed fabrics.
The utilization of GAN networks presents augmented image feature extraction capabilities due to their inherent architecture. Moreover, their competitive adversarial-based training mechanism proves particularly adept in processing intricate patterned images [27], effectively diminishing the requisite number of training samples [28]. In contrast to autoencoders, GANs exhibit greater suitability in processing fabric images characterized by complex textures.
The intricate nature of fabric production, characterized by a diversity of species, small batch sizes, and high product yields, poses a significant challenge in acquiring ample sample data. This scarcity severely constrains the accuracy of detection model training. Introducing transfer learning [29,30], a technique that transfers feature extraction capabilities to a designated network through weight sharing within the network model, stands as a promising solution. This approach demonstrates potential in amplifying model training efficiency and mitigating the reliance on extensive data during training. Presently, transfer learning methods for blemish detection encompass two primary categories: pre-trained model methods and domain adaptive methods. Pre-trained model approaches [31,32] often overlook the intrinsic similarity between source and target domain data, leading the target network to acquire task-irrelevant features, thereby impeding the effectiveness of the transfer. Conversely, domain adaptive methods facilitate the transfer of pertinent features learned by the source domain model to the target task by identifying shared features or similarities between the domains [33]. Early domain adaptive techniques predominantly leveraged shallow machine learning methodologies, such as Support Vector Machines (SVM) [34] and Maximum Mean Discrepancy (MMD) [35], aiming to mitigate distributional disparities between source and target domains by altering feature spaces or re-weighting source domain data. Nevertheless, these methods exhibited limitations in handling high-dimensional data and complex tasks. As research evolved, methodologies centered on feature selection and transformation emerged to better accommodate diverse domain data. These encompassed techniques like feature selection [36] and principal component analysis (PCA) [37] aimed at reducing feature space dimensionality or extracting discriminative features. Advancements in the field witnessed a fusion of deep neural networks with domain-adaptive tasks. Li et al. [38] proposed a domain-adaptive DAYOLOv5 model specifically tailored for surface defect detection in tiles, exhibiting commendable performance even with limited datasets. Similarly, Zhao et al. [39] introduced a fabric defect detection algorithm founded on two-stage deep transfer learning, successfully transferring defect features from solid-color cotton linen to a checkered fabric detection model. This transfer encapsulated defect size features and notably reduced the requisite training samples. Despite the strides made by these transfer learning methods in enhancing detection performance and model training efficiency, challenges persist, particularly in achieving high detection accuracy when confronted with intricately patterned fabric defects.
This research considers blemishes as distortions to the original fabric: regardless of their specific type, the texture within the blemished region diverges from the original fabric texture. To address the challenges posed by limited sample sizes and intricate patterns affecting blemish reconstruction accuracy, a novel fabric blemish detection algorithm combining the SA-Pix2pix network and transfer learning is proposed. The primary objective is the restoration of defective regions to their pristine state. Subsequently, dissimilarity computations between the reconstructed and defective images are performed, followed by image segmentation to facilitate defect detection. To mitigate the poor reconstruction accuracy attributed to the Pix2pix generator’s limited receptive field within convolutional layers and its inadequacy in establishing long-range dependencies within images, the concept of image self-attention is integrated into the network architecture. This innovation leads to the formulation of the SA-Pix2pix network, effectively remedying the network’s inability to extract comprehensive global features from images and thus enhancing reconstruction accuracy. Moreover, recognizing the inefficacy of the adversarial loss function within generative adversarial networks for processing fabric images with complex pattern textures, we introduce an improved structural loss function. This, combined with the original loss function, forms the target loss function, aimed at alleviating the network’s struggles with intricate fabric textures. A systematic investigation into the impact of each loss function on defect detection accuracy reveals that the fusion of the L1 loss and the enhanced structural loss significantly improves the network’s capacity to process image intricacies, thereby improving overall image reconstruction accuracy. Building upon this foundation, transfer learning is introduced to constrain the training of the target network by aligning feature distributions, effectively addressing potential overfitting issues stemming from small sample sizes during training. This holistic approach aims to enhance the accuracy and robustness of fabric blemish detection by synergizing the SA-Pix2pix network, an adapted loss function, and transfer learning to surmount the challenges posed by small sample sizes and complex fabric textures in blemish reconstruction.

2. Materials and Methods

2.1. The SA-Pix2pix Model

2.1.1. Network Structure

The Pix2pix network [40], comprising a generator and discriminator, features a generator employing a U-Net model [41] that prioritizes direct feature fusion over pooled indexing methods. This design choice avoids memory-intensive full connectivity between downsampling and upsampling, thereby preserving crucial image structural information. Particularly beneficial for non-periodic and complex patterned texture fabrics, this framework efficiently shares essential underlying information between inputs and outputs, expediting image reconstruction.
To augment the network’s ability to extract the global features crucial for defect detection within intricately patterned fabrics, this study integrates an image self-attention mechanism [42] into the generator network structure. Because reconstructing the blemished region requires relatively little additional information, the self-attention mechanism is incorporated only within the input and intermediate layers of the generator’s upsampling and downsampling paths. This strategic placement enables the network to associate global features of the patterned fabric image with local features surrounding the blemish, enhancing its ability to comprehend both the broader fabric context and localized defect characteristics, while curbing computational complexity and training time.
The generator and discriminator components of the architecture employ “convolution–batch normalization–activation” modules for both upsampling and downsampling operations. Figure 1 illustrates the specific structure of the generator network. The discriminator, derived from the Pix2pix network, is structured as a PatchGAN and is depicted in Figure 2.
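A minimal PyTorch sketch of the kind of self-attention block assumed here (a SAGAN-style spatial attention layer inserted after selected convolutional stages of the generator) is given below; the class name, channel-reduction factor, and placement are illustrative assumptions rather than the exact implementation used in this work.

import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over the spatial positions of a feature map."""
    def __init__(self, in_channels):
        super().__init__()
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned weight of the attention branch

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)   # B x HW x C'
        k = self.key(x).view(b, -1, h * w)                       # B x C' x HW
        attn = torch.softmax(torch.bmm(q, k), dim=-1)            # B x HW x HW attention map
        v = self.value(x).view(b, -1, h * w)                     # B x C x HW
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                              # residual connection

In the generator, such a block would sit only after the input and intermediate stages of the downsampling and upsampling paths, for example nn.Sequential(down1, SelfAttention(64), down2, ...), matching the placement described above and limiting the number of attention blocks added.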

2.1.2. Loss Function

In addressing the intricacies of image reconstruction, this section adopts a composite approach involving multiple loss functions to train the model effectively. Initially, the traditional pixel-by-pixel loss, the mean absolute error (L1 loss function) [43], is employed. This loss constrains the images produced by the generator, ensuring minimal deviation from the ground truth. Compared with the mean squared error (L2 loss function), the L1 loss offers rapid convergence and stable gradients, making it the preferred pixel-by-pixel loss for the network.
Simultaneously, to enable the generator to process images at a higher level of detail and to ensure proficient reconstruction of defective regions within fabric images, the structural similarity (SSIM) loss function is introduced. Because the L1 loss fails to capture regional structural characteristics in images, especially in scenarios involving irregular textures, the SSIM loss function is a critical addition. Leveraging luminance, contrast, and structure components, the SSIM loss operates in tandem with the L1 loss to optimize the network model, accentuating image details more effectively than the L1 loss alone, as shown in Equation (1):
G^* = \arg\min_G \max_D L_{CGAN}(D, G) + \lambda (L_1 + L_{SSIM})    (1)
In the equation, λ is a weight parameter regulating the balance between the Conditional Generative Adversarial Network (CGAN) loss and the L1 and SSIM losses, and L_{CGAN}(D, G) denotes the CGAN target loss [44], as expressed in Equation (2):
\arg\min_G \max_D L_{CGAN}(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x|y)] + \mathbb{E}_{z \sim p_{data}(z)}[\log(1 - D(G(z|y)))]    (2)
In the equation, x represents the label, which corresponds to normal sample images, y stands for the additional input information (in this context, the input condition is defect images), and z denotes the defect images. D represents the discriminator, while G represents the generator. p_{data}(x) signifies the dataset of normal samples, and p_{data}(z) represents the dataset of defect samples.
Typically, the SSIM loss function [45] is formulated as follows:
L_{SSIM}(x, y) = l(x, y)^{\alpha} \, c(x, y)^{\beta} \, s(x, y)^{\gamma}    (3)
In the equation, x and y represent normal samples and images generated by the generator, respectively. α, β, and γ are constants. Here, l(x, y) denotes the comparison of the brightness of the two images, c(x, y) signifies the contrast comparison between the two images, and s(x, y) represents the structural comparison of the two images.
The reconstruction process yields images with a subtle difference in brightness and contrast compared to the originals, leading to a diminished value of the Structural Similarity Index (SSIM) loss function. This diminutive value, as encountered in Equation (1), curtails the efficacy of the SSIM loss function and renders it susceptible to noise. To address this limitation, an enhancement is introduced to the SSIM loss function, as delineated in Equation (4). This modification serves to amplify the weight of the SSIM loss function within Equation (1), thereby augmenting the sensitivity of the Conditional Generative Adversarial Network (CGAN) to image disparities. The refined SSIM loss function, as per Equation (4), is instrumental in elevating the network’s discernment of subtle variations, promoting improved performance in capturing nuanced image differences during the reconstruction process.
L_{SSIM} = 1 - \min\bigl(l(x, y),\, c(x, y),\, s(x, y)\bigr)    (4)
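As a concrete illustration, the following PyTorch sketch assembles the generator objective of Equation (1) from an L1 term, the improved SSIM term of Equation (4), and a conventional non-saturating adversarial term. The global whole-image SSIM statistics, the constants c1 and c2 (which assume images scaled to [0, 1]), and the helper names are simplifying assumptions, not the exact formulation used in this work.

import torch
import torch.nn.functional as F

def ssim_components(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Luminance l, contrast c and structure s terms of SSIM, computed from
    # whole-image statistics (a simplification of the windowed SSIM of Equation (3)).
    mu_x, mu_y = x.mean(dim=(1, 2, 3)), y.mean(dim=(1, 2, 3))
    var_x = x.var(dim=(1, 2, 3), unbiased=False)
    var_y = y.var(dim=(1, 2, 3), unbiased=False)
    cov = ((x - mu_x.view(-1, 1, 1, 1)) * (y - mu_y.view(-1, 1, 1, 1))).mean(dim=(1, 2, 3))
    l = (2 * mu_x * mu_y + c1) / (mu_x ** 2 + mu_y ** 2 + c1)
    c = (2 * var_x.sqrt() * var_y.sqrt() + c2) / (var_x + var_y + c2)
    s = (cov + c2 / 2) / (var_x.sqrt() * var_y.sqrt() + c2 / 2)
    return l, c, s

def generator_loss(d_fake, fake, real, lam=100.0):
    # Equation (1): adversarial term + lambda * (L1 + improved SSIM of Equation (4)).
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    l1 = F.l1_loss(fake, real)
    l, c, s = ssim_components(fake, real)
    ssim_improved = (1.0 - torch.min(torch.min(l, c), s)).mean()
    return adv + lam * (l1 + ssim_improved)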

2.2. Channel Domain Transfer Learning

The technical roadmap for achieving complex pattern defect reconstruction using the SA-Pix2pix network and transfer learning is depicted in Figure 3. It primarily comprises two stages:
Channel Weight Estimation: Artificial defect patterns, referred to as “artificial defects”, are created to resemble real defects closely. These artificial defects are utilized to train the SA-Pix2pix model in the source domain. The channel weights are estimated by calculating the variation in the image reconstruction loss function. This process helps identify the channels and features that have the most significant impact on defect image reconstruction accuracy.
Construction of Metric Criteria: Real defect images are treated as target domain data and are input into both the source domain network (the pre-trained SA-Pix2pix model) and the target domain network (the SA-Pix2pix model under training). These networks map the source domain data to the same feature space and measure the distance between the corresponding features of the source and target domain networks. The feature distance is then multiplied by the channel weights. A smaller sum of the feature distance and channel weight product indicates greater consistency in feature distribution between the source and target domains, facilitating the transfer of similar features between the two domains. However, relying solely on “consistency in feature distribution” cannot completely address the issue of inadequate defect reconstruction accuracy in the target domain network. To improve the reconstruction accuracy of complex pattern defect images, both the feature distance and the channel weight product sum must be incorporated into the loss function of the target domain network along with the image reconstruction loss function.

2.2.1. Channel Weight Estimation

Each convolutional layer comprises multiple kernels, with each kernel corresponding to a channel. To assess the impact of each channel on defect image reconstruction accuracy, we set all the elements of each kernel to zero separately. The change in the loss function (Equation (1)) is computed to gauge the significance of each channel. This loss can be interpreted as the error between the generator’s output and the real labels, indicating the extent to which the images generated by the generator are mistakenly recognized as real images by the discriminator.
Using the source domain data, the loss of the Pix2pix generator is computed twice: once with the j-th channel present, denoted l_j, and once with the j-th channel removed (its kernel zeroed), denoted l'_j. To normalize the change in the loss function, the softmax function is applied, ensuring that the weight for the j-th channel is non-negative, as presented in Equation (5). From Equation (5), it is evident that if the removal of a specific channel leads to a larger change in the loss function, the channel weight becomes greater, signifying a more substantial impact of that channel on the defect image reconstruction accuracy.
w_j = \mathrm{softmax}(l'_j - l_j)    (5)
z_j = l'_j - l_j    (6)
\mathrm{softmax}(z_j) = \frac{\exp(z_j)}{\sum_{t=1}^{T} \exp(z_t)}    (7)
In Equation (7), T represents the total number of convolution kernels in the network.
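A sketch of how Equations (5)–(7) could be computed in PyTorch is shown below: each convolution kernel is zeroed in turn, the change in the reconstruction loss on the source-domain data is recorded, and the changes are normalized with softmax. The eval_loss helper, assumed to return the Equation (1) loss averaged over a data loader, is a placeholder introduced for illustration and not part of the published implementation.

import torch

@torch.no_grad()
def channel_weights(generator, eval_loss, source_loader):
    # Equations (5)-(7): weight each convolution channel by the loss change
    # caused by zeroing its kernel, normalized with softmax.
    base_loss = eval_loss(generator, source_loader)      # l_j with all channels present
    deltas = []
    convs = [m for m in generator.modules() if isinstance(m, torch.nn.Conv2d)]
    for conv in convs:
        for j in range(conv.out_channels):
            saved = conv.weight[j].clone()
            conv.weight[j].zero_()                       # remove the j-th channel
            deltas.append(eval_loss(generator, source_loader) - base_loss)  # l'_j - l_j
            conv.weight[j].copy_(saved)                  # restore the kernel
    z = torch.tensor(deltas)
    return torch.softmax(z, dim=0)                       # w_j, Equation (7)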

2.2.2. Construction of Metric Criteria

For any defect image x_i in the target domain (1 ≤ i ≤ n), it is input into the generator of the source domain network; after computation by the j-th convolution kernel of the k-th convolutional layer, it produces a feature map F_j^k. Subsequently, x_i is input into the generator of the target domain network and, after computation by the j-th convolution kernel of the k-th convolutional layer, it yields a feature map F'_j^k. The consistency in feature distribution between the source and target domains is quantified using Equation (8).
L_C = \sum_{k=1}^{M} \sum_{j=1}^{N} w_j \, \bigl\| F_j^k - F'^{\,k}_j \bigr\|_2^2    (8)
In this context, w_j represents the weight allocated to the j-th convolution kernel, N denotes the total number of convolution kernels, and M represents the total number of convolutional layers. \| F_j^k - F'^{\,k}_j \|_2^2 signifies the squared Euclidean distance between corresponding features of the source and target networks.
From Equation (8), it can be observed that when the features in the source and target domains are dissimilar, despite having a large Euclidean distance between features, the channel weights corresponding to dissimilar features are small. Channel weights effectively mitigate the influence of dissimilar features on the measure of feature distribution consistency. When the features in the source and target domains are similar, although the channel weights corresponding to similar features are relatively large, the Euclidean distance between similar features is very small, resulting in a minimal value for the measure of feature distribution consistency. Therefore, Equation (8) is applicable for assessing feature distribution consistency, ensuring convergence, and facilitating the transfer of similar features between the source and target domains, thereby reducing the impact of a small sample size on defect image reconstruction accuracy.
However, the measure of feature distribution consistency, Lc, is still influenced by dissimilar features. To further enhance defect image reconstruction accuracy, Equations (1) and (8) are both incorporated as the loss function of the target domain network, as illustrated in Equation (9). Equation (9) not only accounts for the transfer learning of similar features but also considers the impact of the generator’s output error on reconstruction accuracy, ultimately improving defect reconstruction precision.
\min\bigl(G^* + \alpha L_C\bigr)    (9)
In the equation, the recommended value for α is 0.1.
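The following sketch expresses Equations (8) and (9) in PyTorch: per-layer feature maps from the frozen source-domain generator and the trainable target-domain generator are compared channel by channel, weighted by the channel weights, and added to the reconstruction objective. The tensor shapes and the g_star argument (the Equation (1) loss of the target network) are assumptions used only for illustration.

import torch

def consistency_loss(src_feats, tgt_feats, weights):
    # Equation (8): channel-weighted squared Euclidean distance between corresponding
    # feature maps. src_feats/tgt_feats are lists of (B, N_k, H, W) tensors, one per
    # layer; weights is a list of per-channel weight vectors of length N_k.
    lc = 0.0
    for f_src, f_tgt, w in zip(src_feats, tgt_feats, weights):
        dist = ((f_src - f_tgt) ** 2).sum(dim=(2, 3)).mean(dim=0)  # per-channel ||.||_2^2
        lc = lc + (w * dist).sum()
    return lc

def target_domain_loss(g_star, src_feats, tgt_feats, weights, alpha=0.1):
    # Equation (9): reconstruction objective G* plus alpha * L_C, with alpha = 0.1.
    return g_star + alpha * consistency_loss(src_feats, tgt_feats, weights)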

2.3. Defect Detection

The dissimilarity computation of reconstructed images and the defect localization method are as follows.

2.3.1. Image Dissimilarity Calculation

Defect images, as depicted in Figure 4a, are fed into the defect reconstruction network, yielding images as shown in Figure 4b. The subtraction of these two images on a per-pixel basis results in defective images with the texture background removed, as illustrated in Figure 4c. These images encapsulate information about the location of defect regions.

2.3.2. Denoising and Defect Localization

In Figure 4c, a significant amount of noise is present, giving rise to spurious defects, necessitating image denoising and grayscale conversion. This paper employs the enhanced FT algorithm [46] for denoising and grayscale conversion, replacing the FT algorithm’s filter with a mean filter. The processing outcome is depicted in Figure 4d. Finally, by binarizing the image, the defect detection results are obtained, as illustrated in Figure 4e.
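A simplified OpenCV sketch of this post-processing chain is given below; Otsu thresholding is used here as a stand-in for the full improved FT saliency step of [46], and the mean-filter kernel size is an illustrative choice.

import cv2

def locate_defects(defect_img, reconstructed_img, ksize=5):
    # Per-pixel dissimilarity (Figure 4c), mean-filter denoising and grayscale
    # conversion (Figure 4d), and binarization (Figure 4e). Inputs are assumed
    # to be aligned 8-bit BGR images of the same size.
    diff = cv2.absdiff(defect_img, reconstructed_img)        # remove the texture background
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    smoothed = cv2.blur(gray, (ksize, ksize))                # mean filter in place of the FT filter
    _, mask = cv2.threshold(smoothed, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask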

2.4. Experimental Environment and Dataset

The experimental data were acquired using a CCD camera, and a dataset was generated by shifting a cropping box of size 256 × 256 on the captured images. To augment the dataset, the cropping box had a stride of less than 256. We collected five different types of fabric defect patterns, as depicted in Figure 5. Samples 1 and 2 are of cotton and linen fabric, while samples 3, 4, and 5 are made of cotton silk fabric. All five samples are woven fabrics, and the defect types primarily include oil stains, abrasions, foreign objects, tears, and missing warp threads. Both the test and training set images are obtained from different positions on the same fabric roll. Within the training set, each sample category was obtained through cropping and contains a small number of positive samples. According to statistics, the approximate ratio of positive to negative samples is 1:4.
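A sketch of the sliding-window cropping used to build such a dataset is shown below; the stride of 128 pixels is an illustrative value satisfying the “stride of less than 256” condition, not the exact stride used in this work.

import cv2

def crop_patches(image_path, patch=256, stride=128):
    # Slide a 256 x 256 crop box across the captured image with a stride
    # smaller than the patch size, so that neighbouring patches overlap.
    img = cv2.imread(image_path)
    h, w = img.shape[:2]
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(img[y:y + patch, x:x + patch])
    return patches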
A significant number of artificial defect images were created individually on the flawless images of five distinct patterns, as illustrated in Figure 6. In the transfer learning experiments, only 200 genuine defect images per pattern class were employed as the target domain dataset, with 1000 artificial defect images used as the source domain dataset.
The computer system configuration is detailed in Table 1. In the experiments, the Adam optimizer [47] was used with the parameters β1 = 0.5 and β2 = 0.999. During training, the initial learning rate was set to 0.0002 and linearly decayed to 0 over the last 200 epochs.
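The optimizer configuration stated above can be expressed as the following PyTorch sketch; the number of epochs before the decay begins and the placeholder generator module are assumptions, since only the final 200-epoch linear decay is specified in the text.

import torch

generator = torch.nn.Conv2d(3, 3, 3)   # placeholder for the SA-Pix2pix generator
n_keep, n_decay = 200, 200             # assumed: constant lr, then 200 epochs of linear decay

optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))

def lr_lambda(epoch):
    # Keep lr = 0.0002 for the first n_keep epochs, then decay linearly to 0.
    return 1.0 - max(0, epoch - n_keep) / float(n_decay)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)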

3. Results

3.1. Impact of Self-Attention Mechanism on Defect Detection Accuracy

Separate experimental studies were conducted to assess the impact of the self-attention mechanism and of the loss functions on defect detection accuracy.
The local receptive field of the convolutional layers within GAN networks restricts their focus to local image features, thereby diminishing image reconstruction accuracy. As depicted in Figure 7a, the defect appears as a relatively large black stain. In the absence of the self-attention mechanism, the reconstructed image (Figure 7b) retains small black areas, causing missed defect regions. Therefore, the self-attention mechanism was introduced into the generator network. To reduce the network’s complexity and training time, the self-attention mechanism was added only to the input and intermediate layers of the generator’s upsampling and downsampling layers, as depicted in Figure 1. This enables the network to capture global features of fabric patterns and local features of the defect’s surrounding area. The introduction of the self-attention mechanism empowers the generator with the ability to extract global features, effectively addressing the issue of inadequate image reconstruction accuracy. The reconstructed image with the self-attention mechanism, as shown in Figure 7c, exhibits a significant improvement in reconstruction accuracy compared with the version without it, and enables the complete detection of defect areas, as demonstrated in Figure 7e.

3.2. Impact of Loss Function on Defect Detection Accuracy

Existing GAN networks have limited capability in handling image details, resulting in the incorrect reconstruction of non-defective areas. This increases the dissimilarity errors in images and decreases defect segmentation accuracy, potentially causing missed and false defect detections, as illustrated in Figure 8b. In CGAN training, we introduced both the L1 loss function and an enhanced SSIM loss function as the target loss functions. As depicted in Figure 8d, when only the L1 loss function is used, the edge regions of defect images are not entirely detected. This is likely because the L1 loss performs only pixel-wise comparisons: it adapts to the broad requirements of reconstructing fabric patterns, but its ability to reconstruct image details is limited. When only the improved SSIM loss function is used, the results in Figure 8e show that some defect regions are missed. Combining both loss functions addresses these limitations, as illustrated in Figure 8f. The weight parameter λ = 100 balances the CGAN loss function with the L1 and SSIM loss functions effectively.

3.3. Comparison of Experimental Results for Different Defect Detection Methods

Under the same testing conditions, comparative experiments were conducted between the ReNet-D [21] method, the SDDM-PS [48] method, and the SA-Pix2pix model. Initially, these three methods were employed to reconstruct defect images, followed by pixel-wise dissimilarity calculations between the reconstructed and defect images. Finally, the dissimilarity results underwent image segmentation, as illustrated in Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13.
Figure 9 displays the experimental results for oil stain defects using the three methods. The second column reveals the ReNet-D method’s significantly poor detection accuracy for oil stain defects. In the third column, it can be observed that the SDDM-PS method provides incomplete detection of oil stain defect regions and fails to detect the defect in sample 4. The fourth column demonstrates that the method recommended in this paper effectively detects oil stain defects.
Figure 10 displays the experimental results of the various methods for detecting abrasion (scratch) defects. In the second column, it is evident that the ReNet-D method exhibits notably poor accuracy in detecting scratch defects. In the third column, the SDDM-PS method yields results with a substantial amount of noise and even misinterprets the image texture background as defects, failing to detect scratch defects effectively. In the fourth column, our proposed approach demonstrates effective scratch defect detection, although for sample 4 the detected areas tend to be smaller than the actual defects.
Figure 11 displays the experimental results for foreign object defects using various methods. From the second column, it is evident that the ReNet-D method exhibits significantly poor detection accuracy for foreign object defects. The third column reveals that the SDDM-PS method can detect most of the defect areas but performs poorly in the detection of defects in samples 1, 4, and 5. However, in the fourth column, the method recommended in this paper effectively detects foreign object defects, with only minor undetected areas in sample 1.
Figure 12 depicts the experimental results for pinhole defects using various methods. From the second column, it is evident that the ReNet-D method exhibits significantly poor detection accuracy for pinhole defects. The third column reveals that the SDDM-PS method detects pinhole defect areas but with some noise and incomplete detection areas. In the fourth column, the method recommended in this paper effectively detects pinhole defects.
Figure 13 illustrates the experimental results for missing warp defects using various methods. From the second column, it is evident that the ReNet-D method exhibits significantly poor detection accuracy for the missing warp defects. The third column reveals that the SDDM-PS method fails to detect the missing warp defects completely. For samples 3, 4, and 5, it only detects a small portion of the defects, and for sample 1, it does not detect any defects. Sample 2 also experiences a few cases of false detection. In the fourth column, it is evident that the method recommended in this paper can detect missing warp defects almost entirely, although discontinuous results are observed for sample 3 and sample 4.
In summary, the ReNet-D method exhibits poor accuracy in blemish detection, occasionally even incorporating the original image background. This may be attributed to the autoencoder’s loss of critical image information during training, which hinders the ReNet-D method’s ability to effectively reconstruct images and subsequently perform precise blemish image segmentation.
While the SDDM-PS method demonstrates a degree of proficiency in detecting oil stains, foreign objects, and broken holes, limitations persist. This may be attributed to its reliance solely on a GAN network for sample training, as GANs exhibit limited capacity in processing image details. Their focus tends to be confined to local areas within images, lacking the capability to detect defects closely resembling the original fabric texture background.
In contrast, the method advocated in this paper showcases notable completeness and accuracy in detecting various defect types compared to the aforementioned methodologies. Notably, it achieves high accuracy in detecting oil stains, foreign objects, and holes. However, it does encounter errors when identifying scuffs and warp defects. This is primarily due to the minor damage degree within the edge regions of scuff defects, which bears a slight difference from the fabric background. Consequently, the subtle dissimilarity between the reconstructed and original images in these areas impedes effective segmentation, leading to smaller detected regions. Similarly, warp defects present challenges akin to scuffing defects. The low contrast between defective areas and the fabric background further diminishes differences between the reconstructed and original images. Consequently, this results in intermittent segmentation outcomes.
In addition, we conducted an efficiency comparison of the algorithms using an in-house dataset. Table 2 illustrates the processing elapsed time of the three methods under uniform computational performance. The method advocated in this paper demonstrates an average detection time of 46.15 ms, slightly exceeding the processing times of the other two methods. However, it leads to higher detection accuracy.

4. Discussion

This paper proposes a novel blemish detection methodology leveraging an SA-Pix2pix network combined with transfer learning. This approach synthesizes the strengths of GAN networks, self-attention mechanisms, and transfer learning to notably enhance the reconstruction accuracy of blemish images, especially when working with limited samples. Addressing the limitations of generative adversarial networks in processing image details, we introduce the L1 loss function and an enhanced structural loss function to form a comprehensive target loss function, augmenting the CGAN network’s capacity to handle image intricacies.
Through comparative analysis between networks employing and omitting the self-attention mechanism and distinct loss functions, our findings verify that the self-attention mechanism significantly amplifies the network’s global feature extraction capabilities. Its introduction enables the network to reconstruct defects based on global image features, markedly improving reconstruction accuracy. Conversely, networks lacking the self-attention mechanism rely solely on local features for defect reconstruction. Consequently, they may fail to reconstruct intermediate regions, leading to errors in the final detection results.
Moreover, our experiments confirm that integrating both the L1 loss function and the improved SSIM loss function notably enhances the network’s ability to process intricate fabric images with complex pattern textures. However, employing either loss function in isolation or without introducing any additional loss function diminishes the network’s defect detection accuracy.
Comparative experimental studies highlight the superior performance of our method over ReNet-D and SDDM-PS methodologies in enhancing both the completeness of defective regions and the precision of their localization, particularly in scenarios with limited sample sizes. While the incorporation of the self-attention mechanism and improved loss functions into the Pix2pix network elevates its image reconstruction capabilities, further enhancements are needed to improve detection accuracy for abrasions and missing warp defects.
While this paper successfully achieves defect detection through image reconstruction, there exist certain limitations in the obtained results, leading to errors in defect detection in some images. To further enhance defect detection accuracy, additional research avenues warrant exploration in the following areas:
Firstly, investigating the implementation of deformable convolutions instead of conventional convolutions for image sampling presents a promising direction. Deformable convolutions introduce positional offset parameters to each sampling point, allowing the convolution to autonomously learn sampling point selection within the network. This transformation alters the receptive field from rectangular to polygonal, enabling better fitting to object contours with multi-layer polygonal receptive fields. This enhancement could significantly aid in learning fabric pattern features, potentially playing a pivotal role in image reconstruction. However, leveraging deformable convolutions might present challenges in model training and convergence. Subsequent research could focus on optimizing the integration of deformable convolutions within the model to improve image reconstruction accuracy and consequently enhance defect detection performance.
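As a sketch of this direction, a standard 3 × 3 convolution in the sampling layers could be replaced by a deformable convolution from torchvision, with a small auxiliary convolution predicting the per-point offsets; this is an illustrative construction for the future work described above, not part of the reported model.

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    # A 3x3 deformable convolution: an auxiliary conv predicts a 2-D offset for
    # each of the 9 sampling points, letting the receptive field bend toward
    # the contours of the fabric pattern instead of staying rectangular.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.offset = nn.Conv2d(in_ch, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.deform(x, self.offset(x))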
Secondly, augmenting the network by integrating Transformer models, renowned for their capacity to directly focus on global image features, holds promise. Properly integrating Transformer models with CNNs could further elevate image reconstruction accuracy, especially in scenarios with limited sample sizes. This integration might bolster the network’s ability to capture comprehensive global features crucial for defect detection. Investigating the synergy between Transformer models and CNNs could be instrumental in advancing defect detection methodologies.

5. Conclusions

This paper presents a novel defect detection approach leveraging the SA-Pix2pix network and transfer learning to address the issue of insufficient accuracy in detecting blemishes within complex patterned fabrics, especially when dealing with limited sample sizes. The central concept involves treating blemishes as damage to fabric texture, utilizing a conditional generative adversarial network to reconstruct the blemished regions within images, aiming to restore them to their normal fabric texture. Subsequently, dissimilarity computation between the reconstructed and defective images is conducted, followed by image segmentation based on the dissimilarity result, ultimately facilitating accurate blemish detection.
Our contributions revolve around three key points: Firstly, we address the challenge of establishing remote dependencies in images due to the limited receptive field of the convolutional neural network within the generator. We introduce the image self-attention mechanism into the Pix2pix neural network, exploring its impact on the network’s image reconstruction accuracy. This leads to the proposal of the SA-Pix2pix neural network model, significantly enhancing the accuracy of defective image reconstruction.
Secondly, to combat the Pix2pix neural network’s limited capacity in processing image details, we introduce the L1 loss function and an improved structural loss function. This exploration assesses the impact of each loss function on image reconstruction accuracy, culminating in the construction of a comprehensive loss function tailored specifically for defective image reconstruction, overcoming the network’s limitations in processing image details.
Thirdly, we introduce transfer learning to regulate the training of the target network through feature distribution, mitigating potential overfitting issues arising from small sample sizes.
Furthermore, a comparative experimental study is conducted, pitting the ReNet-D model, SDDM-PS model, and our proposed model against each other. The study involves the detection of five different complex pattern fabric blemishes, demonstrating that our method outperforms both the ReNet-D and SDDM-PS models in terms of blemish detection accuracy.

Author Contributions

F.H.: Project administration, Funding acquisition, and Writing—Review and editing. J.G.: Conceptualization, Methodology, Formal analysis, and Writing—Original draft. H.F.: Methodology. W.L.: Data curation. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge the financial support from the Digital Textile Equipment Key Laboratory Open Fund (DTL2023006).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kumar, A. Computer-vision-based fabric defect detection: A survey. IEEE Trans. Ind. Electron. 2008, 55, 348–363. [Google Scholar] [CrossRef]
  2. Chen, J.; Jain, A.K. A structural approach to identify defects in textured images. In Proceedings of the 1988 IEEE International Conference on Systems, Man, and Cybernetics, Beijing, China, 8–12 August 1988; pp. 29–32. [Google Scholar]
  3. Ngan, H.Y.T.; Pang, G.K.H.; Yung, S.P.; Ng, M.K. Wavelet based methods on patterned fabric defect detection. Pattern Recognit. 2005, 38, 559–576. [Google Scholar] [CrossRef]
  4. Li, P.; Zhang, H.; Jing, J.; Li, R.; Zhao, J. Fabric defect detection based on multi-scale wavelet transform and Gaussian mixture model method. J. Text. Inst. 2015, 106, 587–592. [Google Scholar] [CrossRef]
  5. Yang, C.; Liu, P.; Yin, G.; Jiang, H.; Li, X. Defect detection in magnetic tile images based on stationary wavelet transform. Ndt E Int. 2016, 83, 78–87. [Google Scholar] [CrossRef]
  6. Kumar, A.; Pang, G.K.H. Defect detection in textured materials using Gabor filters. IEEE Trans. Ind. Appl. 2002, 38, 425–440. [Google Scholar] [CrossRef]
  7. Raheja, J.L.; Kumar, S.; Chaudhary, A. Fabric defect detection based on GLCM and Gabor filter: A comparison. Opt. Int. J. Light Electron Opt. 2013, 124, 6469–6474. [Google Scholar] [CrossRef]
  8. Chan, C.; Pang, G.K.H. Fabric defect detection by Fourier analysis. IEEE Trans. Ind. Appl. 2000, 36, 1267–1276. [Google Scholar] [CrossRef]
  9. Li, M.; Cui, S.; Xie, Z. Application of Gaussian mixture model on defect detection of print fabric. J. Text. Res. 2015, 36, 94–98. [Google Scholar]
  10. Zong, B.; Song, Q.; Min, M.R.; Cheng, W.; Lumezanu, C.; Cho, D.; Chen, H. Deep autoencoding gaussian mixture model for unsupervised anomaly detection. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
  11. Latif-Amet, A. An efficient method for texture defect detection: Sub-band domain co-occurrence matrices. Image Vis. Comput. 2000, 18, 543–553. [Google Scholar] [CrossRef]
  12. Li, F.; Yuan, L.; Zhang, K.; Li, W. A defect detection method for unpatterned fabric based on multidirectional binary patterns and the gray-level co-occurrence matrix. Text. Res. J. 2020, 90, 776–796. [Google Scholar] [CrossRef]
  13. Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
  14. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  15. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
  16. Chen, M.; Yu, L.; Zhi, C.; Sun, R.; Zhu, S.; Gao, Z.; Ke, Z.; Zhu, M.; Zhang, Y. Improved faster R-CNN for fabric defect detection based on Gabor filter with Genetic Algorithm optimization. Comput. Ind. 2022, 134, 103551. [Google Scholar] [CrossRef]
  17. Jin, R.; Niu, Q. Automatic Fabric Defect Detection Based on an Improved YOLOv5. Math. Probl. Eng. 2021, 2021, 7321394. [Google Scholar] [CrossRef]
  18. Hinton, G.E.; Salakhutdinov, R.R. Reducing the Dimensionality of Data with Neural Networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef]
  19. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. Adv. Neural. Inf. Process. Sys. 2014, 27, 10. [Google Scholar]
  20. Liu, R.Q.; Li, M.H.; Shi, J.C.; Liang, Y.B. Fabric Defect Detection Method Based on Improved U-Net. In Proceedings of the 2021 2nd International Conference on Internet of Things, Hangzhou, China, 14–16 May 2021; p. 12160. [Google Scholar]
  21. Yu, W.Y.; Zhang, Y.; Yao, H.M.; Shi, H. Visual Inspection of Surface Defects Based on Lightweight Reconstruction Network. Zidonghua Xuebao 2022, 48, 2175–2186. [Google Scholar]
  22. Zhang, Y.; Han, J.; Jing, L.; Wang, C.; Zhao, L. Intelligent Fault Diagnosis of Broken Wires for Steel Wire Ropes Based on Generative Adversarial Nets. Appl. Sci. 2022, 12, 11552. [Google Scholar] [CrossRef]
  23. Lu, F.; Niu, R.; Zhang, Z.; Guo, L.; Chen, J. A Generative Adversarial Network-Based Fault Detection Approach for Photovoltaic Panel. Appl. Sci. 2022, 12, 1789. [Google Scholar] [CrossRef]
  24. Vaccari, I.; Orani, V.; Paglialonga, A.; Cambiaso, E.; Mongelli, M. A Generative Adversarial Network (GAN) Technique for Internet of Medical Things Data. Sensors. 2021, 21, 3726. [Google Scholar] [CrossRef]
  25. Liu, J.; Wang, C.; Su, H.; Du, B.; Tao, D. Multistage GAN for Fabric Defect Detection. IEEE Trans. Image. Process. 2020, 29, 3388–3400. [Google Scholar] [CrossRef]
  26. Zhang, H.; Qiao, G.; Lu, S.; Yao, L.; Chen, X. Attention-based Feature Fusion Generative Adversarial Network for yarn-dyed fabric defect detection. Text. Res. J. 2023, 93, 1178–1195. [Google Scholar] [CrossRef]
  27. Rui, J.; Qiang, N. Research on textile defects detection based on improved generative adversarial network. J. Eng. Fibers Fabr. 2022, 17, 15589250221101382. [Google Scholar] [CrossRef]
  28. Zhang, G.; Pan, Y.; Zhang, L. Semi-supervised learning with GAN for automatic defect detection from images. Autom. Constr. 2021, 128, 103764. [Google Scholar] [CrossRef]
  29. Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A comprehensive survey on transfer learning. Proc. IEEE Inst. Electr. Electron. Eng. 2020, 109, 43–76. [Google Scholar] [CrossRef]
  30. Pan, S.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data. Eng. 2009, 22, 1345–1359. [Google Scholar] [CrossRef]
  31. Li, H.; Fu, X.; Huang, T. Research on surface defect detection of solar pv panels based on pre-training network and feature fusion. In Proceedings of the 3rd International Conference on Green Energy and Sustainable Development, Shenyang, China, 14–15 November 2020; p. 22071. [Google Scholar]
  32. Şeker, A. Evaluation of fabric defect detection based on transfer learning with pre-trained AlexNet. In Proceedings of the 2018 International Conference on Artificial Intelligence and Data Processing (IDAP), Malatya, Turkey, 28–30 September 2018; pp. 1–4. [Google Scholar]
  33. Wang, Q.; Michau, G.; Fink, O. Domain adaptive transfer learning for fault diagnosis. In Proceedings of the 2019 Prognostics and System Health Management Conference (PHM-Paris), Paris, France, 2–5 May 2019; pp. 279–285. [Google Scholar]
  34. Hearst, M.A.; Dumais, S.T.; Osuna, E.; Platt, J.; Scholkopf, B. Support vector machines. IEEE Intell. Syst. 1998, 13, 18–28. [Google Scholar] [CrossRef]
  35. Wang, W.; Li, H.; Ding, Z.; Nie, F.; Chen, J.; Dong, X.; Wang, Z. Rethinking Maximum Mean Discrepancy for Visual Domain Adaptation. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 264–277. [Google Scholar] [CrossRef]
  36. Kumar, V.; Minz, S. Feature selection: A literature review. Smart Comput. Rev. 2014, 4, 211–229. [Google Scholar] [CrossRef]
  37. Abdi, H.; Williams, L.J. Principal component analysis. Wiley. Interdiscip. Rev. Comput. Stat. 2010, 2, 433–459. [Google Scholar] [CrossRef]
  38. Li, C.; Yan, H.; Qian, X.; Zhu, S.; Zhu, P.; Liao, C.; Tian, H.; Li, X.; Wang, X.; Li, X. A domain adaptation YOLOv5 model for industrial defect inspection. Measurement 2023, 213, 112725. [Google Scholar] [CrossRef]
  39. Zhao, J.; Zhou, S.; Zheng, Q.; Mei, S. Fabric defect detection based on transfer learning and improved Faster R-CNN. J. Eng. Fiber. Fabr. 2022, 17, 15589250221086647. [Google Scholar]
  40. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii, 21–26 July 2017; pp. 1125–1134. [Google Scholar]
  41. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  42. Liu, W.; Cao, J.; You, J.; Wang, H. Vector Decomposition of Elastic Seismic Wavefields Using Self-Attention Deep Convolutional Generative Adversarial Networks. Appl. Sci. 2023, 13, 9440. [Google Scholar] [CrossRef]
  43. Zhao, H.; Gallo, O.; Frosio, I.; Kautz, J. Loss Functions for Image Restoration With Neural Networks. IEEE Trans. Comput. Imaging. 2017, 3, 47–57. [Google Scholar] [CrossRef]
  44. Liu, X.; Qiao, Y.; Xiong, Y.; Cai, Z.; Liu, P. Cascade conditional generative adversarial nets for spatial-spectral hyperspectral sample generation. Sci. Chi. Inf. Sci. 2020, 63, 77–92. [Google Scholar] [CrossRef]
  45. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image. Process. 2004, 13, 600–612. [Google Scholar]
  46. Xu, Q.; Hu, F.; Wang, C.; Wu, Y. Segmentation of fabric defect images based on improved frequency-tuned salient algorithm. J. Text. Res. 2018, 39, 125–131. [Google Scholar]
  47. Makhzani, A.; Shlens, J.; Jaitly, N.; Goodfellow, I. Adversarial Autoencoders. arXiv 2015, arXiv:1511.05644. [Google Scholar]
  48. Zhao, Z.; Li, B.; Dong, R.; Zhao, P. A surface defect detection method based on positive samples. In Proceedings of the PRICAI 2018: Trends in Artificial Intelligence: 15th Pacific Rim International Conference on Artificial Intelligence, Nanjing, China, 28–31 August 2018; pp. 473–481. [Google Scholar]
Figure 1. Network structure of generators.
Figure 2. Network structure of the discriminator.
Figure 3. Technical roadmap.
Figure 4. Flow of defect detection process. (a) Original input image; (b) reconstruction map; (c) defect map; (d) denoising map using FT algorithm; (e) defect location map using the binary method.
Figure 5. The real data set of defects used in the experiment. (a) Sample 1; (b) sample 2; (c) sample 3; (d) sample 4; (e) sample 5.
Figure 6. Artificial dataset of defects used in the experiment. (a) Sample 1; (b) sample 2; (c) sample 3; (d) sample 4; (e) sample 5.
Figure 7. Comparison chart with and without self-attention mechanism. (a) Original defect image; (b) Reconstructed image without self-attention mechanism; (c) Reconstructed image with self-attention mechanism; (d) Detection result without self-attention mechanism; (e) Detection result with self-attention mechanism.
Figure 8. Comparison of detection results of different loss functions. (a) Image of the original defect; (b) detection results of CGAN loss; (c) detection results of original SSIM loss; (d) detection results of L1 loss; (e) detection results of SSIM loss; (f) L1 + SSIM loss detection results.
Figure 9. Comparison of experimental results of oil defects by each method. (a) Defect sample; (b) detection result of ReNet-D; (c) detection result of SDDM-PS; (d) detection result of SA-Pix2pix.
Figure 10. Comparison of experimental results of bruise defects by each method. (a) Defect sample; (b) detection result of ReNet-D; (c) detection result of SDDM-PS; (d) detection result of SA-Pix2pix.
Figure 11. Comparison of experimental results of foreign matter defects by each method. (a) Defect sample; (b) detection result of ReNet-D; (c) detection result of SDDM-PS; (d) detection result of SA-Pix2pix.
Figure 12. Comparison of experimental results of the hole defect by each method. (a) Defect sample; (b) detection result of ReNet-D; (c) detection result of SDDM-PS; (d) detection result of SA-Pix2pix.
Figure 13. Comparison of experimental results of missing warp points by each method. (a) Defect sample; (b) detection result of ReNet-D; (c) detection result of SDDM-PS; (d) detection result of SA-Pix2pix.
Table 1. Computer system configuration.
System: Windows 10
Memory: 64 G
GPU: NVIDIA GTX-1080Ti
CPU: Intel Core i7-8700K @ 3.70 GHz
Deep Learning Framework: PyTorch, CUDA 10.1, CUDNN 7.6
Table 2. Comparison of processing time.
Method       Time (ms)
ReNet-D      38.25
SDDM-PS      43.82
SA-Pix2pix   46.15