Article

Lightweight Reconstruction Network for Surface Defect Detection Based on Texture Complexity Analysis

School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430074, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(17), 3617; https://doi.org/10.3390/electronics12173617
Submission received: 1 August 2023 / Revised: 20 August 2023 / Accepted: 23 August 2023 / Published: 27 August 2023

Abstract

Deep learning networks have shown excellent performance in the recognition and classification of surface defects on certain industrial products. However, for most industrial products, defect samples are scarce and defect types vary widely, making methods that require a large number of defect samples for training unsuitable. In this paper, a lightweight surface defect detection network based on texture complexity analysis (LRN-L) is proposed. Only defect-free samples, which are easy to obtain in large numbers, are needed to detect defects. LRN-L comprises two stages: a texture reconstruction stage and a defect localization stage. In the texture reconstruction stage, a lightweight reconstruction network (LRN) based on a convolutional autoencoder is designed to reconstruct defect-free texture images, and a loss function combining structural loss and L1 loss is proposed to improve the detection effect. We also built a calculation model for image complexity, computed the texture complexity of the texture samples, and divided the textures into three levels by complexity. In the defect localization stage, the residual between the reconstructed image and the original image is taken as the candidate defect region, and defect localization is realized via a segmentation algorithm. The network structure, loss function, texture complexity and other factors of LRN-L are analyzed in detail and compared with similar algorithms on multiple texture datasets. The results show that LRN-L has strong robustness, accuracy and generalization ability and is well suited to industrial online detection.

1. Introduction

Traditional machine learning methods can effectively solve defect detection problems for a variety of industrial products, such as bearings [1], mobile phone screens [2], coiled materials [3], rails [4] and steel beams [5]. These methods rely on manually designed feature extractors adapted to a specific product image dataset and feed the extracted features into classifiers such as SVMs (support vector machines) [6] and NNs (neural networks) [7] to determine whether the product has defects. However, when the surface defects involve a complex background texture, large variation in defect feature scale, or similarity between defect and background features (as shown in Figure 1), traditional machine learning methods cannot meet the detection requirements.
Since AlexNet [8] was proposed, deep learning methods based on convolutional neural networks (CNNs) have become mainstream in the field of surface defect detection [9,10,11,12]. A CNN can not only learn image features automatically but also extract more abstract features by stacking multiple convolution layers, giving it better feature representation ability than manually designed feature extraction algorithms. According to the network output, defect detection algorithms based on deep learning can be divided into defect classification, defect recognition and defect segmentation methods.
Algorithms based on defect classification usually train classical classification networks so that the trained model can distinguish defective from defect-free samples. Tian [13] used two CNNs to detect defects in six types of images; Xu [14] proposed a CNN classification network integrating VGG (Visual Geometry Group) and ResNet to detect and classify surface defects of rollers; Weimer [15] also used CNNs to identify defect categories. Such methods usually do not locate the defect area.
To locate the defect area accurately, some researchers have adapted networks that perform well in object detection and applied them to surface defect detection. Such algorithms are mostly based on R-CNN [16], SSD (single-shot multibox detector) [17], YOLO (You Only Look Once) [18] and similar networks. Chen [19] applied a deep CNN (DCNN) to accelerate defect detection.
To achieve pixel-level detection accuracy, some researchers have used segmentation networks. For example, Huang [20] built a detection network based on U-Net that recasts defect detection as semantic segmentation, improving the accuracy of magnetic tile surface detection, and Long [21] used a fully convolutional network (FCN) to segment the defect area. These methods all rely on a certain number of defect samples.
In many cases, the type of product defect is unpredictable, and it is difficult to collect a large number of defect samples. To address this, researchers have turned to small-sample or unsupervised learning methods. For example, Yu [22] trained the YOLO v3 network on a small number of defective samples to achieve high-accuracy detection. Autoencoder (AE)-based methods have also been applied to surface defect detection, such as the convolutional autoencoder (CAE) [23], the Fisher-criterion-based stacked denoising autoencoder (FCSDA) [24], the robust autoencoder (RCAE) [25] and a sparse denoising autoencoder network fused with gradient difference information [26]. Mei [27] proposed the multi-scale convolutional denoising autoencoder network (MSCDAE), which reconstructs the image and generates the detection result from the reconstruction residual. Compared with traditional unsupervised algorithms such as PHOT (phase-only transform) [28] and DCT (discrete cosine transform) [29], MSCDAE greatly improves the model evaluation indices. Yang [30] applied feature clustering on top of MSCDAE to improve the reconstruction accuracy of the texture background. However, the data samples used in the above reconstruction networks are mostly regular textures, without considering differences in image texture, so the reported detection accuracy neither fully reflects the performance of these methods nor measures their generalization ability.
In addition to autoencoders, the generative adversarial network (GAN) [31] has also been applied to unsupervised defect detection. By learning from a large number of normal samples, the generator in a GAN learns the texture features of normal samples. Zhao [32] combined a GAN with an autoencoder to insert defects into defect-free samples and trained the GAN to restore the images. He [33] used an SGAN and an autoencoder to train on unlabeled steel surface defect samples, extracting fine-grained image features for classification. Schlegl [34] proposed the AnoGAN network for unsupervised anomaly detection in lesion images, although GANs suffer from unstable performance in applications [35].
Considering the scarcity of defect samples in practice, this paper proposes a method based on a lightweight reconstruction network for low-complexity textures (LRN-L). The method uses only a small number of defect-free samples to train the reconstruction network so that it learns to reconstruct normal samples; when an abnormal sample is input, the trained model can detect its abnormal region. In addition to experimental analyses of the network structure, loss function and algorithm efficiency, this paper introduces a texture complexity index and uses a texture complexity calculation model to grade texture samples, in order to evaluate the detection ability and applicability of LRN-L.

2. LRN-L

LRN-L is divided into two stages: a texture reconstruction stage and a defect location stage. In the texture reconstruction stage, the reconstruction network (LRN) is designed based on a CAE and trained with only a small number of defect-free samples, so that it can generate defect-free images. In the defect location stage, the residual image between the reconstructed image and the original image is computed, and the defect is located by a segmentation algorithm. The LRN-L model is shown in Figure 2.

2.1. Texture Complexity

Texture complexity reflects the difficulty of operations such as image enhancement and defect detection. It serves two purposes: first, to measure the performance of an algorithm; second, to classify textures or measure the similarity between textures. The structure of the reconstruction network is closely related to texture complexity, so the network structure should differ for textures of different complexity levels. Texture complexity can be measured in various ways [36,37,38,39,40]; here, the GLCM (gray-level co-occurrence matrix) [41] is used to statistically analyze texture features and reflect complexity.
If the image has N gray levels, the gray-level co-occurrence matrix P is an N × N matrix whose element in the i-th row and j-th column is the probability that two pixels with gray levels i and j, respectively, separated by a displacement δ = (Δx, Δy), occur simultaneously in the image. δ determines the distance and direction between the two pixels. Four directions θ are commonly used: 0°, δ = (Δx, 0); 45°, δ = (Δx, Δy); 90°, δ = (0, Δy); and 135°, δ = (−Δx, Δy).
Generally, the five most commonly used parameters are extracted from the GLCM to describe texture features: Energy J, Entropy H, Contrast G, Homogeneity (inverse difference moment) Q and Correlation COV, defined as follows:
$J=\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}P_{ij}^{2}$, (1)
$H=-\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}P_{ij}\log_{2}P_{ij}$, (2)
$G=\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}(i-j)^{2}P_{ij}$, (3)
$Q=\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}\frac{P_{ij}}{1+(i-j)^{2}}$, (4)
$COV=\frac{\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}ij\,P_{ij}-\mu_{1}\mu_{2}}{\sqrt{\sigma_{1}^{2}\sigma_{2}^{2}}}$, (5)
where $\mu_{1}=\sum_{i=0}^{N-1}i\sum_{j=0}^{N-1}P_{ij}$, $\mu_{2}=\sum_{j=0}^{N-1}j\sum_{i=0}^{N-1}P_{ij}$, $\sigma_{1}^{2}=\sum_{i=0}^{N-1}(i-\mu_{1})^{2}\sum_{j=0}^{N-1}P_{ij}$ and $\sigma_{2}^{2}=\sum_{j=0}^{N-1}(j-\mu_{2})^{2}\sum_{i=0}^{N-1}P_{ij}$.
GLCMs for the four directions are extracted from the texture image, and J, H, G, Q and COV are calculated in each direction, denoted Ji, Hi, Gi, Qi and COVi, where i = 1, 2, 3, 4. To make the texture features independent of direction, the harmonic average of each feature parameter is computed with Formula (6). Taking parameter J as an example, the energy values in the four directions are J1, J2, J3 and J4, and the energy J of the texture image is obtained from Formula (6):
$J=\dfrac{4}{\sum_{i=1}^{4}1/J_{i}}$, (6)
Among the five parameters, J, H and G are positively correlated with texture complexity, while Q and COV are negatively correlated. Inspired by SSIM [42], G, Q and COV are selected as indicators of texture complexity based on the texture characteristics of industrial product surface images. The squared deviation of each parameter from the mean (denoted MSEi) is used to assign weights to G, Q and COV, and the texture complexity f is constructed as shown in Formula (7), where PCi denotes G, Q and COV for i = 1, 2, 3, and ā, MSEi and ωi denote the average, the squared deviation and the weight assigned to each parameter, respectively:
$\bar{a}=(G+Q+COV)/3$, $MSE_{i}=(PC_{i}-\bar{a})^{2}$, $\omega_{i}=\dfrac{MSE_{i}}{\sum_{i=1}^{3}MSE_{i}}$, $i=1,2,3$,
$f=\omega_{1}PC_{1}+\omega_{2}(1-PC_{2})+\omega_{3}(1-PC_{3})$. (7)
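As a concrete illustration, the following sketch computes f from a grayscale image using scikit-image's GLCM utilities (whose contrast, homogeneity and correlation properties match Formulas (3)–(5)). The 64-level quantization, the unit pixel distance and the library choice are implementation assumptions, not details taken from the paper.

```python
# A minimal sketch of the texture-complexity score of Formulas (6) and (7),
# assuming an 8-bit grayscale image and scikit-image >= 0.19.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_complexity(gray_img, levels=64):
    # Quantize to `levels` gray levels to keep the GLCM small (assumption).
    img = (gray_img.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # 0, 45, 90, 135 degrees
    P = graycomatrix(img, distances=[1], angles=angles,
                     levels=levels, symmetric=True, normed=True)

    # Per-direction features; graycoprops returns shape (n_dists, n_angles).
    G = graycoprops(P, 'contrast')[0]       # Contrast, Formula (3)
    Q = graycoprops(P, 'homogeneity')[0]    # Homogeneity, Formula (4)
    COV = graycoprops(P, 'correlation')[0]  # Correlation, Formula (5)

    # Harmonic mean over the four directions, Formula (6);
    # the epsilon guards against division by zero.
    hmean = lambda v: len(v) / np.sum(1.0 / np.maximum(v, 1e-12))
    G, Q, COV = hmean(G), hmean(Q), hmean(COV)

    # Variance-based weights and complexity score f, Formula (7).
    pc = np.array([G, Q, COV])
    mse = (pc - pc.mean()) ** 2
    w = mse / mse.sum()
    return w[0] * G + w[1] * (1 - Q) + w[2] * (1 - COV)
```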
Mario [44] divided image textures into three levels according to complexity: low-complexity, medium-complexity and high-complexity textures, denoted L, M and H, respectively, as shown in Figure 3.

2.2. Lightweight Reconstruction Network Model (LRN)

The core of the lightweight reconstruction network model is to design the network with both structure and detection speed in mind while maintaining accuracy. Based on the characteristics of industrial product texture samples, several improvements are made to the basic CAE. The structure of LRN is shown in Figure 4.
First, the original image is input into the network, and three convolution kernels of size 1 × 1, 3 × 3 and 5 × 5 are used to obtain multi-scale features, which are then fed into the CAE module. The output of the decoding module is deconvolved with kernels of different sizes to obtain reconstructed images at three scales, and the final reconstructed image is obtained via feature fusion. Multi-scale features are thus obtained as in MSCDAE [27], but at a reduced computational cost.
The CAE module of LRN includes four convolution sub-modules and four deconvolution sub-modules. Each convolution sub-module includes a convolution layer, a BN layer [43] and a nonlinear activation layer; the first three also include a pooling layer that reduces the spatial scale. The activation function is ReLU6. The first three convolution layers use 5 × 5 kernels, and the last uses a 3 × 3 kernel.
The depth of the reconstruction network determines the reconstruction ability of the autoencoder. A model with a complex network structure improves texture feature extraction, but it also improves the extraction of defect-region features; the residual between the reconstructed image and the original image then becomes too small, and detection fails. LRN instead uses a lightweight structure with deliberately limited reconstruction capacity. Through the multi-scale feature design and the loss function, the network can fully learn the characteristics of normal texture while reconstructing defective areas as normal texture.
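A minimal PyTorch sketch of this structure is given below. The channel widths, the pooling type and the 1 × 1 fusion convolution are assumptions; the paper specifies only the kernel sizes, the four conv/deconv sub-modules, BN with ReLU6, and pooling in the first three encoder stages.

```python
# A sketch of the LRN, assuming single-channel 32x32 input patches.
import torch
import torch.nn as nn

def conv_block(cin, cout, k, pool):
    layers = [nn.Conv2d(cin, cout, k, padding=k // 2),
              nn.BatchNorm2d(cout), nn.ReLU6(inplace=True)]
    if pool:
        layers.append(nn.MaxPool2d(2))  # halves the spatial scale
    return nn.Sequential(*layers)

class LRN(nn.Module):
    def __init__(self):
        super().__init__()
        # Multi-scale front end: 1x1, 3x3 and 5x5 kernels on the input.
        self.scales = nn.ModuleList(
            [nn.Conv2d(1, 8, k, padding=k // 2) for k in (1, 3, 5)])
        # Encoder: four conv sub-modules, three 5x5 kernels then a 3x3
        # kernel, with pooling only in the first three (Section 2.2).
        self.enc = nn.Sequential(
            conv_block(24, 32, 5, pool=True),
            conv_block(32, 32, 5, pool=True),
            conv_block(32, 64, 5, pool=True),
            conv_block(64, 64, 3, pool=False))
        # Decoder: four deconvolution sub-modules mirroring the encoder.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 64, 3, padding=1), nn.ReLU6(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU6(inplace=True),
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU6(inplace=True),
            nn.ConvTranspose2d(32, 24, 4, stride=2, padding=1), nn.ReLU6(inplace=True))
        # Multi-scale reconstruction heads and a 1x1 fusion conv (assumed).
        self.heads = nn.ModuleList(
            [nn.Conv2d(24, 1, k, padding=k // 2) for k in (1, 3, 5)])
        self.fuse = nn.Conv2d(3, 1, 1)

    def forward(self, x):
        feats = torch.cat([s(x) for s in self.scales], dim=1)
        z = self.dec(self.enc(feats))
        return self.fuse(torch.cat([h(z) for h in self.heads], dim=1))
```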

2.3. Loss Function

The LRN takes the reconstruction error between the original image and the reconstructed image as the loss function to drive network convergence. Let the input image be x and the reconstructed image be y.
1. L1 Loss
L1 loss is also known as MAE (mean absolute error) loss, which is defined as:
$L_{1}=\|x-y\|_{1}+\lambda\|\omega\|_{F}$, (8)
where ω represents the set of weight matrices of the reconstruction network, λ is the penalty factor of the regularization term, and 0 < λ < 1.
2. L2 Loss
L2 loss, also called MSE (mean squared error) loss, is a common loss function for evaluating the difference between the reconstructed image and the original image. It is defined as follows:
$L_{2}=\|x-y\|_{2}^{2}+\lambda\|\omega\|_{F}$, (9)
Compared with L1, L2 is more sensitive to abnormal areas and over-penalizes large errors, as in MSCDAE [27]; LRN therefore introduces the L1 loss.
3. Structural Loss
Neither L1 nor L2 considers the structural characteristics of the texture, so LRN introduces SSIM (structural similarity index) [44] into the loss function. SSIM evaluates similarity in terms of brightness, contrast and structure [45], as shown in Formulas (10) and (11). The larger the SSIM, the more similar the images; when two images are identical, SSIM = 1. To use it as a loss, it is therefore subtracted from 1.
$SSIM(x,y)=\dfrac{2\mu_{x}\mu_{y}+C_{1}}{\mu_{x}^{2}+\mu_{y}^{2}+C_{1}}\times\dfrac{2\sigma_{x}\sigma_{y}+C_{2}}{\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2}}\times\dfrac{\sigma_{xy}+C_{3}}{\sigma_{x}\sigma_{y}+C_{3}}$, (10)
$L_{SSIM}(x,y)=1-SSIM(x,y)$, (11)
where μx and μy are the average brightness of x and y, σx and σy are the standard deviations of the pixel values, σxy is the covariance of x and y, and C1, C2 and C3 are constants added to avoid a zero denominator.
4. Loss Function of LRN
The loss function designed in this paper, LLRN, combines the advantages of L1 and LSSIM in the form shown in Formula (12), where α is a weight factor in the range (0, 1) that balances the proportions of L1 and LSSIM:
$L_{LRN}=\alpha L_{1}+(1-\alpha)L_{SSIM}$, (12)
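A compact PyTorch sketch of this combined loss follows. The uniform 11 × 11 window, the C1/C2 constants and the reduction of the three-term SSIM in Formula (10) to the common two-term form (valid when C3 = C2/2) are assumptions; the weight-decay term λ‖ω‖F of Formulas (8) and (9) would typically be handled by the optimizer's weight_decay argument rather than inside the loss itself.

```python
# A sketch of L_LRN (Formula (12)), assuming images scaled to [0, 1].
import torch
import torch.nn.functional as F

def ssim(x, y, win=11, C1=0.01 ** 2, C2=0.03 ** 2):
    # Local means, variances and covariance over a sliding window.
    pad = win // 2
    mu_x = F.avg_pool2d(x, win, 1, pad)
    mu_y = F.avg_pool2d(y, win, 1, pad)
    var_x = F.avg_pool2d(x * x, win, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, pad) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * cov + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return (num / den).mean()

def lrn_loss(x, y, alpha=0.15):
    # L_LRN = alpha * L1 + (1 - alpha) * (1 - SSIM), Formula (12);
    # alpha = 0.15 follows the finding in Section 3.6.
    return alpha * F.l1_loss(x, y) + (1 - alpha) * (1 - ssim(x, y))
```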

2.4. Defect Location

1. Residual Image
The residual between the original image (Figure 5a, where the red circle marks the defect area) and the image reconstructed by LRN (Figure 5b, where the red circle marks the reconstructed defect area) is computed with Formula (13). The residual image, shown in Figure 5c, contains the location information of the abnormal area.
$r=(x-y)^{2}$, (13)
2. Noise Removal
The residual image contains considerable noise, forming pseudo defects that interfere with locating the real defect area. A mean filter is used for denoising; the result is shown in Figure 5d.
3. Threshold Segmentation and Defect Location
The adaptive threshold method is used to locate the defect, and the final result is obtained, as shown in Figure 5e. The overall localization procedure is sketched in the code below.
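The following OpenCV sketch strings the three steps together. The mean-filter kernel size, the adaptive-threshold block size and the offset constant are illustrative assumptions, not values from the paper.

```python
# A sketch of the defect-localization stage: squared residual (Formula (13)),
# mean filtering, and adaptive thresholding.
import cv2
import numpy as np

def locate_defects(original, reconstructed):
    x = original.astype(np.float32)
    y = reconstructed.astype(np.float32)
    r = (x - y) ** 2                       # residual image, Formula (13)
    r = cv2.normalize(r, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    r = cv2.blur(r, (5, 5))                # mean filter suppresses pseudo defects
    # Adaptive threshold separates defect pixels from the local background;
    # the negative offset keeps only pixels clearly above the local mean.
    mask = cv2.adaptiveThreshold(r, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 31, -5)
    return mask
```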

3. Experiment

In this paper, LRN-L is tested on surface texture datasets of industrial products. The factors influencing LRN-L, including the loss function, network structure and texture complexity, are analyzed in detail. Finally, LRN-L is compared with similar unsupervised algorithms. The method was implemented in Python 3.6 with the PyTorch framework, and performance testing used CUDA 9.0 and cuDNN 5.1. The workstation has an Intel Xeon X5 CPU @ 2.9 GHz, 128 GB of DDR4 memory and Ubuntu 16.04; the GPU is an NVIDIA GTX 1080 Ti with 11 GB of video memory.

3.1. Dataset Introduction

The texture samples are shown in Figure 6. Figure 6a–j are from the DAGM2007 dataset [46], which contains 10 kinds of texture samples; Figure 6k–n are from the MVTec dataset [35]; and Figure 6o is from AITEX [47]. For each of the 15 texture types, 100 defect-free samples are used for training and 10 defective samples for testing. The image size is 512 × 512 pixels.

3.2. Evaluation Index

This paper uses Recall, Precision and F1 Measure to evaluate the performance of LRN-L, defined as follows:
$Recall=\dfrac{TP}{TP+FN}\times 100\%$, (14)
$Precision=\dfrac{TP}{TP+FP}\times 100\%$, (15)
$F1\ Measure=\dfrac{2\times Precision\times Recall}{Precision+Recall}$, (16)
where TP is the number of defective samples whose defects are correctly segmented, FN is the number of defective samples in which no defect area is detected, and FP is the number of defect-free samples in which a defect area is detected.
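A short sketch of these sample-level metrics, using the counting convention above:

```python
# Formulas (14)-(16): sample-level Recall, Precision and F1 Measure.
def evaluate(tp: int, fp: int, fn: int):
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, f1

# Example: 9 of 10 defective samples detected, 2 false alarms on
# defect-free samples (hypothetical counts for illustration).
print(evaluate(tp=9, fp=2, fn=1))  # (0.9, 0.818..., 0.857...)
```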

3.3. Network Structure Comparison Experiment

The network structure affects the training results of the reconstruction network. In this experiment, the structure of LRN is compared with classic networks such as FCN [21] and U-Net [48]. The experimental results are shown in Figure 7.
The results show that a reconstruction network used for texture surface defect detection cannot have too many layers. Although a deep network structure has strong feature extraction ability, it easily reconstructs the defect area as well, so the residual between the reconstructed image and the original image is almost zero and the defect cannot be located. A lightweight reconstruction network, in contrast, can fully learn the texture features of positive samples while reconstructing the defect area as normal texture, producing an obvious reconstruction error. Therefore, LRN needs neither many layers nor complex structures such as GRL (global residual learning) [49], sub-pixel layers [50] or residual connections [51].

3.4. Loss Function Comparison Experiment

For LLRN, L1, L2, LSSIM and their combinations are selected for comparative experiments. During training, the image block (patch) size is 32 × 32 pixels and the batch size is 256; after 1000 iterations, the model outputs are passed to the defect location module.
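A training-loop sketch under these settings is given below. It reuses the LRN and lrn_loss sketches from Sections 2.2 and 2.3; the Adam optimizer, learning rate and weight-decay value are assumptions, with weight_decay playing the role of the λ‖ω‖F term in Formulas (8) and (9).

```python
# A training sketch: random 32x32 patches, batch size 256, 1000 iterations.
# Assumes the LRN module and lrn_loss function sketched earlier.
import torch

def random_patches(images, patch=32, batch=256):
    # images: (N, 1, H, W) tensor of defect-free training samples.
    n, _, h, w = images.shape
    idx = torch.randint(n, (batch,))
    ys = torch.randint(h - patch + 1, (batch,))
    xs = torch.randint(w - patch + 1, (batch,))
    return torch.stack([images[i, :, y:y + patch, x:x + patch]
                        for i, y, x in zip(idx, ys, xs)])

def train(model, images, iters=1000, alpha=0.15):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
    for _ in range(iters):
        x = random_patches(images)
        loss = lrn_loss(model(x), x, alpha)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```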
Figure 8a,b show the experimental results for two types of surface defect samples under the various loss functions; the red circled areas are the defect areas. Figure 8a shows defect samples with an irregular surface texture. Comparing the residual results, using L2 as the loss function leaves more noise points outside the real defect area, forming pseudo defects; using LSSIM alone, the detected defect area is slightly smaller than the real one; compared with the other loss functions, LLRN achieves the best result. Figure 8b shows defect samples with a regular surface texture. Here, the integrity of the defect area obtained with L2 is poor, similar to the result obtained with L2 + LSSIM. LLRN achieves a good result, close to that obtained with L1 alone.
Table 1 shows the Precision, Recall and F1 Measure of LRN with the different loss functions. For the defect samples with an irregular surface texture in Figure 8a, LLRN achieved the maximum Recall and F1 Measure of 0.75 and 0.82, respectively, and is slightly inferior to LSSIM in Precision. For the defect samples with a regular surface texture in Figure 8b, L1 alone achieves the highest Recall of 0.76, followed by LLRN at 0.71; LSSIM alone achieves the highest Precision of 0.96, with LLRN second at 0.87; and for F1 Measure, L1 performs best.
The results show the following: (1) For a regular texture, L1 alone, LSSIM alone, or their combination (LLRN) achieve comparably good results with only slight differences. (2) For an irregular texture, LLRN is recommended and obtains better results. (3) LLRN handles the widest range of texture surface anomalies and is the best loss function overall.

3.5. Experiment of Texture Complexity

The applicability of LRN-L to defect detection tasks with different texture complexities must be evaluated. The characteristic parameters of the texture samples shown in Figure 6 are calculated with Formulas (6) and (7) and listed in Table 2. The experimental results are shown in Table 3.
As Table 3 shows, LRN-L reconstructs images well for low-complexity and medium-complexity textures, yielding a higher defect detection rate, but its efficacy diminishes for high-complexity textures. Notably, for low-complexity and medium-complexity textures there is no direct linear relationship between the evaluation indices and texture complexity. For instance, samples (d) and (j), despite their low texture complexity, could not be detected reliably because their texture structures are irregular and inhomogeneous, and they exhibit low values across all three indices.
Overall, LRN-L yields superior results on samples with low-complexity and medium-complexity textures, particularly those with relatively uniform texture structures. Samples with low-complexity or medium-complexity textures but non-uniform texture structures have low detection indices, and LRN-L is unsuited to high-complexity textures.

3.6. Experiment of Loss Function under Different Weight Factors

LLRN is a combination of L1 and LSSIM, as shown in Formula (12), with the weight factor α balancing the two components. Using sample (g) in Figure 6, comparative experiments were conducted with α from 0.15 to 0.85 in increments of 0.1, together with the endpoints α = 0 and α = 1. The results are shown in Figure 9 and Table 4.
As illustrated in Figure 9, the results vary significantly with α. As α increases, the proportion of LSSIM decreases, reducing the structural influence. The result at α = 0.15 exhibits the least noise and yields the most accurate defect localization. Table 4 shows that α = 0.15 produces the highest Recall and F1 Measure, 0.79 and 0.73, respectively, together with a competitive Precision.

3.7. Comparison Experiments with Related Algorithms

In this experiment, LRN-L is compared with traditional unsupervised methods (LCA [52], PHOT [28]) and an autoencoder-based unsupervised method (MSCDAE [27]); the literature has shown that MSCDAE outperforms other autoencoding methods such as ACAE [9] and RCAE [25]. The experiment uses texture samples (b), (e), (j), (n) and (o) in Figure 6, which are low-complexity and medium-complexity textures. For each of the five texture types, 100 defect-free samples are used for training and 10 defective samples for testing. The default network parameters are as follows: block size 32 × 32, batch size 256, 1000 epochs and weight α = 0.15. The results are shown in Figure 10.
LCA eliminates the high-frequency components representing the background while retaining the low-frequency components representing the defect, which makes it unsuitable for irregular textures, as shown by No. 3 in Figure 10. For PHOT, only the detection of No. 3 is effective. MSCDAE detects the defect areas of all samples but also flags some defect-free areas as suspected defects, as in No. 1, No. 3 and No. 5. LRN-L achieves good detection results on all defect and texture types.
In addition, Recall, Precision and F1 Measure are used to quantitatively analyze the experimental results of the above four methods, as shown in Table 5 (the optimal result is highlighted in bold font).
As Table 5 shows, the three metrics of LRN-L are superior to those of the other algorithms on almost all sample types. Its Recall on sample No. 3 is slightly lower than that of MSCDAE, but MSCDAE simultaneously flags defect-free areas, generating pseudo defects.
The efficiency of the algorithms is also compared, using sample images of 1024 × 1024 pixels. Under the same computational conditions, the processing times of the four methods are compared in Table 6. The average detection time of LRN-L is 2.82 ms, which meets the requirements of industrial real-time detection; the other methods are time-consuming, which limits their practical application.

4. Conclusions

In this paper, a texture defect detection method based on a lightweight reconstruction network (LRN-L) is proposed. LRN uses a CAE with a lightweight structure as the reconstruction network. In the texture reconstruction stage, only defect-free samples are used for training, which addresses the shortage of defective samples in industry. In the defect location stage, accurate localization of the defect region is achieved by a segmentation algorithm. The LLRN loss function is designed for defect detection and improves detection performance. An evaluation index of image complexity is established, the texture complexity of the texture samples is calculated, and complexity levels are assigned. This paper discusses the influence of network structure, loss function, texture complexity and other factors on unsupervised defect detection, and compares the proposed LRN-L with other unsupervised algorithms on multiple types of texture samples. The results show that LRN-L has strong robustness, accuracy and generalization ability and is well suited to deployment in industrial detection. Because of its lightweight structure, LRN-L is most suitable for detecting surface defects of industrial products with low-complexity and medium-complexity textures.

Author Contributions

Conceptualization, G.L. and H.B.; methodology, H.S.; software, H.S.; validation, H.S. and H.B.; formal analysis, H.S.; investigation, H.S.; resources, H.S.; data curation, H.S.; writing—original draft preparation, H.S.; writing—review and editing, H.S. and G.L.; visualization, H.S.; supervision, G.L.; project administration, G.L.; funding acquisition, H.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank Gangyan Li for his helpful suggestions with regard to this paper. We also thank Wenyong Yu for his helpful analysis of the methodology. We also thank Haiming Yao for his helpful collaboration on and corrections to this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Deng, S.; Cai, W.; Xu, Q.; Liang, B. Defect detection of bearing surfaces based on machine vision technique. In Proceedings of the International Conference on Computer Application and System Modeling (ICCASM 2010), Taiyuan, China, 22 October 2010. [Google Scholar]
  2. Jian, C.; Gao, J.; Ao, Y. Automatic surface defect detection for mobile phone screen glass based on machine vision. Appl. Soft Comput. 2017, 52, 348–358. [Google Scholar] [CrossRef]
  3. Bulnes, F.G.; Usamentiaga, R.; Garcia, D.F.; Molleda, J. An efficient method for defect detection during the manufacturing of web materials. J. Intell. Manuf. 2016, 27, 431–445. [Google Scholar] [CrossRef]
  4. Jin, X.T.; Wang, Y.N.; Zhang, H.; Liu, L.; Zhong, H.; Hei, Z.D. Deep Rail: Automatic visual detection system for railway surface defect using Bayesian CNN and attention network. Acta Autom. Sin. 2019, 45, 2312–2327. [Google Scholar]
  5. Li, L.F.; Ma, W.F.; Li, L.; Lu, C.J. Research on detection algorithm for bridge cracks based on deep learning. Acta Autom. Sin. 2019, 45, 1727–1742. [Google Scholar]
  6. Chen, S.; Hu, T.; Liu, G.; Pu, Z.; Li, M.; Du, L. Defect classification algorithm for IC photomask based on PCA and SVM. In Proceedings of the Congress on Image and Signal Processing, Sanya, China, 27 May 2008. [Google Scholar]
  7. Huang, J.X.; Li, D.; Ye, F.; Zhang, W.J. Detection of surface defection of solder on flexible printed circuit. Opt. Precis. Eng. 2010, 18, 2443–2453. [Google Scholar]
  8. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  9. Napoletano, P.; Piccoli, F.; Schettini, R. Anomaly detection in nanofibrous materials by CNN-based self-similarity. Sensors 2018, 18, 209. [Google Scholar] [CrossRef]
  10. Cha, Y.J.; Choi, W.; Suh, G.; Mahmoudkhani, S.; Büyüköztürk, O. Autonomous structural visual inspection using region-based deep learning for detecting multiple damage types. Comput.-Aided Civ. Infrastruct. Eng. 2018, 33, 731–747. [Google Scholar] [CrossRef]
  11. Gao, Y.; Gao, L.; Li, X.; Yan, X. A semi-supervised convolutional neural network-based method for steel surface defect recognition. Robot. Comput.-Integr. Manuf. 2020, 61, 1018–1025. [Google Scholar] [CrossRef]
  12. Zhao, Z.; Xu, G.; Qi, Y.; Liu, N.; Zhang, T. Multi-patch deep features for power line insulator status classification from aerial images. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24 July 2016. [Google Scholar]
  13. Wang, T.; Chen, Y.; Qiao, M.; Snoussi, H. A fast and robust convolutional neural network-based defect detection model in product quality control. Int. J. Adv. Manuf. Technol. 2018, 94, 3465–3471. [Google Scholar] [CrossRef]
  14. Xu, X.; Zheng, H.; Guo, Z.; Wu, X.; Zheng, Z. SDD-CNN: Small Data-Driven Convolution Neural Networks for Subtle Roller Defect Inspection. Appl. Sci. 2019, 9, 1364. [Google Scholar] [CrossRef]
  15. Weimer, D.; Scholz, R.B.; Shpitalni, M. Design of deep convolutional neural network architectures for automated feature extraction in industrial inspection. Manuf. Technol. 2016, 65, 417–420. [Google Scholar] [CrossRef]
  16. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  17. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016. [Google Scholar] [CrossRef]
  18. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. Comput. Vis. Pattern Recognit. 2016, 6, 779–788. [Google Scholar]
  19. Chen, J.; Liu, Z.; Wang, H.; Núñez, A.; Han, Z. Automatic defect detection of fasteners on the catenary support device using deep convolutional neural network. IEEE Trans. Instrum. Meas. 2017, 67, 257–269. [Google Scholar] [CrossRef]
  20. Huang, Y.; Qiu, C.; Guo, Y.; Wang, X.; Yuan, K. Surface defect saliency of magnetic tile. In Proceedings of the IEEE 14th International Conference on Automation Science and Engineering, Munich, Germany, 20 August 2018. [Google Scholar]
  21. Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 39, 640–651. [Google Scholar]
  22. Yu, W.; Zhang, Y.; Shi, H. Surface Defect Inspection Under a Small Training Set Condition. In Proceedings of the International Conference on Intelligent Robotics and Applications, Shenyang, China, 8 August 2019. [Google Scholar]
  23. Masci, J.; Meier, U.; Cireşan, D.; Schmidhuber, J. Stacked convolutional auto-encoders for hierarchical feature extraction. In Proceedings of the International Conference on Artificial Neural Networks, Torremolinos, Spain, 8 June 2011. [Google Scholar]
  24. Li, Y.; Zhao, W.; Pan, J. Deformable patterned fabric defect detection with fisher criterion-based deep learning. IEEE Trans. Autom. Sci. Eng. 2016, 14, 1256–1264. [Google Scholar] [CrossRef]
  25. Chalapathy, R.; Menon, A.K.M.; Chawla, S. Robust, Deep and Inductive Anomaly Detection. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases; Springer: Berlin/Heidelberg, Germany, 2017; pp. 36–51. [Google Scholar]
  26. Yuan, J.; Zhang, Y.J. Application of sparse denoising autoencoder network with gradient difference information for abnormal action detection. Acta Autom. Sin. 2017, 43, 604–610. [Google Scholar]
  27. Mei, S.; Yang, H.; Yin, Z. An Unsupervised-Learning-Based Approach for Automated Defect Inspection on Textured Surfaces. IEEE Trans. Instrum. Meas. 2018, 67, 1266–1277. [Google Scholar] [CrossRef]
  28. Aiger, D.; Talbot, H. The phase only transform for unsupervised surface defect detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 295–302. [Google Scholar]
  29. Lin, H.D. Tiny surface defect inspection of electronic passive components using discrete cosine transform decomposition and cumulative sum techniques. Image Vis. Comput 2008, 26, 603–621. [Google Scholar] [CrossRef]
  30. Yang, H.; Chen, Y.; Song, K.; Yin, Z. Multiscale Feature-Clustering-Based Fully Convolutional Autoencoder for Fast Accurate Visual Inspection of Texture Surface Defects. IEEE Trans. Autom. Sci. Eng. 2019, 16, 1450–1467. [Google Scholar] [CrossRef]
  31. Makhzani, A.; Shlens, J.; Jaitly, N.; Goodfellow, I.; Frey, B. Adversarial autoencoders. arXiv 2015, arXiv:1511.05644. [Google Scholar]
  32. Zhao, Z.; Li, B.; Dong, R.; Zhao, P. A Surface Defect Detection Method Based on Positive Samples. In Proceedings of the International Conference on Artificial Intelligence, Nanjing, China, 28–31 August 2018; Pacific Rim. Springer: Cham, Switzerland, 2018; pp. 473–481. [Google Scholar]
  33. Di, H.; Ke, X.; Peng, Z.; Dongdong, Z. Surface defect classification of steels with a new semi-supervised learning method. Opt. Lasers Eng. 2019, 117, 40–48. [Google Scholar] [CrossRef]
  34. Schlegl, T.; Seeböck, P.; Waldstein, S.M.; Schmidt-Erfurth, U.; Langs, G. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In Proceedings of the International Conference on Information Processing in Medical Imaging, Boone, NC, USA, 25–30 June 2017; Springer: Cham, Switzerland, 2017; Volume 6, pp. 146–157. [Google Scholar]
  35. Bergmann, P.; Fauser, M.; Sattlegger, D.; Steger, C. A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Los Angeles, CA, USA, 15 June 2019; pp. 9592–9600. [Google Scholar]
  36. Chen, Y.Q.; Duan, J.; Zhu, Y.; Qian, X. Research on the Image Complexity Based on Texture Features. Chin. Opt. 2015, 8, 407–413. [Google Scholar] [CrossRef]
  37. Zou, J.; Liu, C.C. Texture classification by matching co-occurrence matrices on statistical manifolds. In Proceedings of the 10th IEEE International Conference on Computer and Information Technology (CIT 2010), Bradford, UK, 29 June 2010; pp. 1–7. [Google Scholar]
  38. Gao, Z.Y.; Yang, X.M.; Gong, J.M.; Jin, H. Research on Image Complexity Description Methods. J. Image Graph. 2010, 15, 129–135. [Google Scholar]
  39. Guo, X.Y.; Li, W.S.; Qian, Y.H.; Bai, R.Y.; Jia, C.H. Computational Evaluation Methods of Visual Complexity Perception for Images. Acta Electron. Sin. 2020, 48, 819–826. [Google Scholar]
  40. Yang, L.; Zhou, Y.; Yang, J.; Chen, L. Variance WIE based infrared images processing. Electron. Lett. 2006, 42, 857–859. [Google Scholar] [CrossRef]
  41. Haralick, R.M.; Shanmugam, K. Texture features for image classification. IEEE Trans. Syst. Man Cybern. 1973, 3, 610–621. [Google Scholar]
  42. Bergmann, P.; Löwe, S.; Fauser, M.; Sattlegger, D.; Steger, C. Improving unsupervised defect segmentation by applying structural similarity to autoencoders. arXiv 2018, arXiv:1807.02011. [Google Scholar]
  43. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv 2015, arXiv:1502.03167. [Google Scholar]
  44. Zhao, H.; Gallo, O.; Frosio, I.; Kautz, J. Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 2016, 3, 47–57. [Google Scholar] [CrossRef]
  45. Lv, C.; Zhang, Z.; Shen, F.; Zhang, F.; Su, H. A Fast Surface Defect Detection Method Based on Background Reconstruction. Int. J. Precis. Eng. Manuf. 2019, 21, 363–375. [Google Scholar]
  46. Jager, M.; Knoll, C.; Hamprecht, F.A. Weakly supervised learning of a classifier for unusual event detection. IEEE Trans. Image Process. 2019, 17, 1700–1708. [Google Scholar]
  47. Silvestre, B.J.; Albero, A.T.; Miralles, I.; Pérez-Llorens, R.; Moreno, J. A Public Fabric Database for Defect Detection Methods and Results. Autex Res. J. 2019, 19, 363–374. [Google Scholar]
  48. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Istanbul, Turkey, 17 October 2016; pp. 234–241. [Google Scholar]
  49. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June 2016; pp. 770–778. [Google Scholar]
  50. Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-time single image and video super-resolution using an efficient sub-pixel Convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June 2016; pp. 1874–1883. [Google Scholar]
  51. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June 2016; pp. 4700–4708. [Google Scholar]
  52. Tsai, D.M.; Huang, T.Y. Automated surface inspection for statistical textures. Image Vis. Comput. 2003, 21, 307–323. [Google Scholar]
Figure 1. Various surface defects. (a) Dark defects. (b) Bright defects. (c) Large-scale defects covering the image. (d) Minor defects. (e) Defects with small color difference. (f,g) Defects similar to texture. (h) Fuzzy defects.
Figure 2. LRN-L model.
Figure 3. Classification of texture complexity.
Figure 4. The structure of LRN.
Figure 5. Defect location operation process. (a) The original image. (b) Reconstruction image via LRN. (c) The residual image obtained via Formula (13). (d) Filtered residual map. (e) Defect location.
Figure 6. Texture samples used in the experiments. (a–j) DAGM2007 dataset. (k–n) MVTec dataset. (o) AITEX dataset.
Figure 7. Residual images of the network structure comparison experiment.
Figure 8. Results under different loss functions. (a) Irregular texture samples. (b) Regular texture samples.
Figure 9. Comparison under different weight factors.
Figure 10. Comparison results of multiple methods.
Table 1. Results under different loss functions (A: irregular texture sample; B: regular texture sample).

| Index | Sample | L1 | L2 | LSSIM | L2 + LSSIM | LLRN |
|---|---|---|---|---|---|---|
| Precision | A | 0.93 | 0.35 | 0.93 | 0.52 | 0.89 |
| | B | 0.84 | 0.65 | 0.96 | 0.70 | 0.87 |
| Recall | A | 0.51 | 0.38 | 0.59 | 0.50 | 0.75 |
| | B | 0.76 | 0.70 | 0.59 | 0.67 | 0.71 |
| F1 Measure | A | 0.66 | 0.36 | 0.72 | 0.51 | 0.82 |
| | B | 0.80 | 0.67 | 0.73 | 0.69 | 0.78 |
Table 2. Characteristic parameters of texture samples.

| Samples | J | H | G | Q | COV | f |
|---|---|---|---|---|---|---|
| a | 0.025 | 3.983 | 5.720 | 0.391 | 0.032 | 3.344 |
| b | 0.009 | 4.653 | 2.173 | 0.273 | 0.001 | 1.586 |
| c | 0.043 | 3.439 | 0.819 | 0.692 | 0.212 | 0.8035 |
| d | 0.148 | 2.343 | 0.738 | 0.765 | 0.600 | 0.569 |
| e | 0.035 | 3.649 | 1.408 | 0.601 | 0.207 | 1.1005 |
| f | 0.013 | 4.755 | 6.285 | 0.415 | 0.048 | 3.6185 |
| g | 0.100 | 2.682 | 0.558 | 0.781 | 0.474 | 0.542 |
| h | 0.042 | 3.451 | 0.648 | 0.731 | 0.172 | 0.738 |
| i | 0.045 | 3.383 | 0.702 | 0.716 | 0.209 | 0.7465 |
| j | 0.063 | 3.227 | 1.131 | 0.675 | 0.295 | 0.918 |
| k | 0.035 | 5.273 | 1.160 | 0.664 | 0.166 | 0.997 |
| l | 0.121 | 3.555 | 0.290 | 0.845 | 0.513 | 0.3885 |
| m | 0.188 | 2.969 | 0.298 | 0.854 | 1.007 | 0.1455 |
| n | 0.021 | 5.808 | 2.215 | 0.525 | 0.123 | 1.546 |
| o | 0.074 | 4.203 | 1.386 | 0.703 | 0.254 | 1.066 |
Table 3. Results under different texture complexity.

| Samples | f | Level | Precision | Recall | F1 Measure |
|---|---|---|---|---|---|
| a | 3.344 | H | 0.001 | 0.001 | 0.001 |
| b | 1.586 | M | 0.855 | 0.799 | 0.822 |
| c | 0.8035 | L | 0.68 | 0.908 | 0.777 |
| d | 0.569 | L | 0.034 | 0.337 | 0.062 |
| e | 1.1005 | M | 0.925 | 0.883 | 0.904 |
| f | 3.6185 | H | 0.001 | 0.001 | 0.001 |
| g | 0.542 | L | 0.937 | 0.742 | 0.828 |
| h | 0.738 | L | 0.739 | 0.854 | 0.792 |
| i | 0.7465 | L | 0.824 | 0.946 | 0.881 |
| j | 0.918 | L | 0.291 | 0.431 | 0.348 |
| k | 0.997 | L | 0.596 | 0.064 | 0.116 |
| l | 0.3885 | L | 0.754 | 0.823 | 0.787 |
| m | 0.1455 | L | 0.807 | 0.492 | 0.612 |
| n | 1.546 | M | 0.935 | 0.948 | 0.941 |
| o | 1.066 | M | 0.884 | 0.772 | 0.824 |
Table 4. Comparison of test results under different weight factors.

| Index | α = 0 | 0.15 | 0.25 | 0.35 | 0.45 | 0.55 | 0.65 | 0.75 | 0.85 | 1 |
|---|---|---|---|---|---|---|---|---|---|---|
| Precision | 0.71 | 0.69 | 0.58 | 0.28 | 0.46 | 0.53 | 0.23 | 0.89 | 0.54 | 0.62 |
| Recall | 0.72 | 0.79 | 0.62 | 0.73 | 0.65 | 0.67 | 0.52 | 0.55 | 0.72 | 0.45 |
| F1 Measure | 0.71 | 0.73 | 0.60 | 0.41 | 0.54 | 0.60 | 0.32 | 0.68 | 0.62 | 0.52 |
Table 5. Comparison of detection effects of different algorithms (the optimal result is highlighted in bold font).

| Index | Sample No. | LCA | PHOT | MSCDAE | LRN-L |
|---|---|---|---|---|---|
| Recall | 1 | 0.478 | 0.133 | 0.203 | **0.799** |
| | 2 | 0.612 | 0.318 | 0.359 | **0.946** |
| | 3 | 0.117 | 0.341 | **0.966** | 0.707 |
| | 4 | 0.641 | 0.414 | 0.881 | **0.948** |
| | 5 | 0.663 | 0.155 | 0.562 | **0.772** |
| Precision | 1 | 0.024 | 0.112 | 0.143 | **0.855** |
| | 2 | 0.412 | 0.367 | 0.696 | **0.824** |
| | 3 | 0.002 | 0.478 | 0.444 | **0.793** |
| | 4 | 0.899 | 0.006 | 0.920 | **0.935** |
| | 5 | 0.436 | 0.324 | 0.463 | **0.884** |
| F1 Measure | 1 | 0.045 | 0.122 | 0.168 | **0.822** |
| | 2 | 0.492 | 0.341 | 0.662 | **0.881** |
| | 3 | 0.004 | 0.398 | 0.608 | **0.732** |
| | 4 | 0.748 | 0.012 | 0.900 | **0.941** |
| | 5 | 0.526 | 0.210 | 0.508 | **0.824** |
Table 6. Comparison of processing time.

| Algorithms | PHOT | LCA | MSCDAE | LRN-L |
|---|---|---|---|---|
| Time (ms) | 450 | 430 | 9746.59 | 2.82 |