Article

An LCD Detection Method Based on the Simultaneous Automatic Generation of Samples and Masks Using Generative Adversarial Networks

School of Mechanical Engineering, Anhui University of Technology, Ma’anshan 243002, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(24), 5037; https://doi.org/10.3390/electronics12245037
Submission received: 22 November 2023 / Revised: 15 December 2023 / Accepted: 15 December 2023 / Published: 18 December 2023
(This article belongs to the Special Issue Neural Networks and Deep Learning in Computer Vision)

Abstract

When applying deep learning methods to detect micro defects on low-contrast LCD surfaces, two challenges arise: imbalanced sample datasets, and the complex and laborious annotation and acquisition of target image masks. To solve these problems, a method based on the simultaneous automatic generation of samples and masks using deep generative network models is proposed. We first generate an augmented dataset of negative samples using a generative adversarial network (GAN) and then highlight the defect regions in these samples through a GAN-based training procedure that automatically generates masks for the defect images. Experimental results demonstrate the effectiveness of the proposed method, which generates liquid crystal image samples and their corresponding image masks simultaneously. A comparative experiment with the deep learning method Mask R-CNN shows that the automatically obtained image masks yield high detection accuracy.

1. Introduction

With the widespread adoption of information technology, digital devices, such as portable laptops, smartphones, and tablets, have experienced significant growth. Liquid Crystal Display (LCD) monitors, known for their low power consumption and lack of radiation pollution, have found extensive applications in these domains. However, in the LCD industry, manual inspection is predominantly employed to comprehensively detect defects in finished LCD products, resulting in time wastage, missed detections, and reduced production efficiency [1,2].
This research focuses on the automatic visual inspection of micro defects on low-contrast LCD surfaces [3,4,5]. Figure 1 illustrates the brightness non-uniformity defects on the low-contrast LCD surface. The left section (a1–a3) depicts the defects, while the right section (b1–b3) shows these defects after enhancement. It is evident that the areas surrounding these defects have low-contrast characteristics, making the brightness non-uniformity defects difficult to identify.
In recent years, deep learning methods have gained prominence and have gradually been applied to LCD defect detection [6,7,8]. However, current deep learning methods rely on a large number of positive and negative samples for model training, and labeling defective samples (i.e., creating masks) is laborious and time-consuming. Furthermore, samples in production are imbalanced: it is difficult to gather a sufficient quantity of defective samples. A new method based on generative adversarial networks is therefore needed to solve these problems.
We propose a method to automatically generate samples and masks simultaneously using deep generative network models. The method can accomplish the complex and laborious task of labeling and obtaining image masks while solving the difficult problem of positive and negative sample imbalance.

2. Related Work

Common machine-vision-based methods for surface defect detection can be categorized into the following classes [9]: statistical methods; feature-based methods; spectral-based methods; subspace-based methods [10,11]; and the emerging deep learning-based methods.
Statistical methods require the collection of a certain number of qualified samples and use statistical models to perform calculations to establish a fixed template of qualified samples. During inspection, the sample to be inspected is matched with a fixed template, and the differences between the sample and the template are highlighted and defined as defects.
Zhong [12] analyzed a number of defect samples and calculated a grayscale threshold separating defects from the background image; this threshold was then applied to subsequent inspection targets to enhance image contrast. Calculating the probability relation between defect and background edge pixels enables the detection of impurity defects in flexible integrated circuit packaging substrates. Since the threshold relies on manual calculation, it must be recalculated repeatedly when facing multiple categories of inspection objects or multiple defect types.
The feature-based method processes image pixels to extract defect information and is relatively simple to apply when the detection objects have obvious, easily recognized defect characteristics. For low-contrast surface defects, however, the high randomness of defect appearance and the complex background image usually make it impossible to calculate an effective threshold.
Tu [13] proposed a printed circuit board (PCB) inspection and sorting method. The PCB images collected by the camera are processed at the sub-pixel level and then registered against the corresponding template based on grayscale information to detect mis-soldered and missing surface components; the PCB samples are then automatically positioned and sorted. Since this type of method only requires qualified samples to construct a fixed template, it avoids the difficulties of defect feature extraction and sample imbalance that affect feature-based detection algorithms, and it can effectively handle targets with many defect types but insufficient defect samples.
However, the defects encountered in LCD manufacturing exhibit localized brightness non-uniformity and smooth brightness variations. Traditional methods are inadequate for detecting low-contrast surface brightness non-uniform defects, as investigated in this study. Consequently, in addition to employing approaches such as adaptive thresholding [14], sophisticated machine learning algorithms [15,16,17], including deep convolutional neural networks (CNN) [18,19], have been introduced. In recent years, deep learning methods have made great advancements in classification, detection [20,21], and instance segmentation [22,23]. Consequently, deep learning methods are increasingly used in LCD defect detection.
Shuang Mei et al. [18] proposed a Mura defect identification method based on the feature-level fusion of unsupervised learning. The method builds a joint feature representation by fusing hand-crafted features with features learned in an unsupervised manner. Experimental results show that it identifies Mura defects in thin-film-transistor LCD panels using visual inspection equipment, with strong robustness and accuracy.
In the latest deep learning methods, such as Faster R-CNN [20,21] and the instance segmentation method Mask R-CNN [22], a pivotal element is the region proposal network (RPN). The fundamental concept underlying RPN is the dense sampling of the entire input image using a multitude of overlapping bounding boxes of various shapes and sizes. The network is then trained to generate multiple object proposals, also referred to as regions of interest (RoIs). This architectural choice enables RPN to effectively explore features across diverse scales. RPN comprises a convolutional neural network that takes feature maps as input and outputs bounding boxes along with the probabilities that they contain objects.
Ramya et al. [24] applied the state-of-the-art Single Shot Multibox Detector (SSD) network to classify and localize Mura defects simultaneously. In comparison, the Mask R-CNN method [25,26,27] offers higher accuracy than the aforementioned deep learning-based object detection methods and has the additional advantage of performing classification, localization, and instance segmentation simultaneously. Therefore, improving and applying the Mask R-CNN method to the detection of micro defects in LCDs makes it possible to identify defect categories and segment defect shapes. However, these deep learning methods face two obstacles: collecting a substantial number of defect samples for training and testing, and the complex, time-consuming task of annotating defect samples with masks. To address these issues, a deep network model capable of automatically generating samples and masks is proposed below.
In practical LCD manufacturing processes, a large quantity of qualified samples can be easily obtained, while gathering a lot of defect samples within a short time is challenging. Data augmentation techniques are used to augment the defect dataset [28,29]. This involves synthesizing defect regions onto normal images through operations, such as rotation, cropping, and duplication, to generate defect samples.
As unsupervised network models, generative adversarial networks (GANs) can adaptively generate samples similar to an unlabeled input dataset through the interplay of a generator and a discriminator. A carefully designed GAN can be trained on a small number of defect samples and then automatically generate a large number of similar samples for subsequent training of the recognition network, solving the problem of insufficient defect samples in defect detection.
Yin [30] used a variational autoencoder to improve the generative adversarial network, successfully expanded the MNIST dataset, and verified that there was no significant difference between the generated samples and the original samples. Liu [31] built a 3D model of a belt conveyor and fed it into the CycleGAN [32] network to expand the fault samples; the expanded samples were used to fully train the YOLOv5 object detection network, improving its detection accuracy. Liu [33] used CycleGAN to expand the defect samples of LCD screens to construct a balanced sample dataset; Mask R-CNN, fully trained on this dataset, achieved improved detection accuracy. These studies amply demonstrate the effectiveness of deep generative networks for sample dataset expansion. However, the visual quality and authenticity of the samples that the original CycleGAN network generates for images with complex backgrounds have not yet been verified. We therefore employ a Cycle-Consistent Generative Adversarial Network (CycleGAN) to address the issue of sample imbalance.
After a significant number of samples has been generated, the time-consuming and labor-intensive process of annotating and acquiring defect image masks persists when training deep learning-based surface defect detection methods such as Mask R-CNN. Existing annotation tools like LabelMe [34] add to the complexity and effort required. To overcome this challenge, a method based on generative adversarial networks (GANs) [35,36,37] is proposed to automatically annotate and acquire image masks. Specifically, each generated defect sample image and its corresponding defect-free input image are fed into a CycleGAN model, with the defect sample image serving as the target image and the defect-free image as the input image during training. Through iterative steps, defect information progressively accumulates in the defect-free input image until the defect sample is reproduced; this iterative process constitutes the generation of the sample mask.

3. Simultaneous Generation of Training Samples and Masks Based on the GAN Model

We propose a technique that leverages a generative adversarial network (GAN) to autonomously generate defect samples. This method requires only a limited quantity of real samples to enhance and expand the LCD sample dataset. Subsequently, the generated dataset is employed to enhance the detection results.
Once a sufficient defect sample dataset has been generated, labeling and acquiring the masks of the defect images remains time-consuming and laborious. A new method for automatically acquiring image masks is therefore proposed: the generated defect sample image and the corresponding defect-free input image are fed into a new CycleGAN model as the target image and input image, respectively. During training, defect information is accumulated and superimposed on the defect-free input image in a step-by-step iterative manner until the defect sample is reproduced, and this superposition process is the generation process of the sample mask.

3.1. CycleGAN Model

The CycleGAN model is an image style transfer technique whose ultimate goal is to translate images between two domains without one-to-one paired training data. Image style transfer refers to converting a picture from one style to another.
As shown in Figure 2, the CycleGAN model maps from domain X to domain Y via the mapping G. D_Y is the discriminator paired with this generator; it distinguishes real data from generated samples G(x), forming a single generator–discriminator adversarial pair. To avoid invalid conversions, the authors of the CycleGAN model propose a cycle-consistency loss. A second mapping F maps from domain Y to domain X, with discriminator D_X distinguishing real data from generated samples F(y). The CycleGAN model learns the G and F mappings jointly while satisfying the cycle-consistency requirement F(G(x)) ≈ x: after the two opposite mappings, a sample returns from domain X back to domain X.
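To make the two mappings concrete, here is a minimal PyTorch sketch of the structure just described: two generators G and F, two discriminators D_X and D_Y, and one forward cycle. The tiny network bodies are illustrative placeholders (the original CycleGAN uses ResNet-based generators and PatchGAN discriminators), not the exact architecture used in this paper.

```python
import torch
import torch.nn as nn

def tiny_generator():
    # Placeholder image-to-image network (CycleGAN proper uses a ResNet generator).
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
    )

def tiny_discriminator():
    # Placeholder discriminator producing per-patch real/fake scores (logits).
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 1, 4, stride=2, padding=1),
    )

G = tiny_generator()        # G: X -> Y (e.g., defect-free -> defective domain)
F = tiny_generator()        # F: Y -> X (defective -> defect-free domain)
D_X = tiny_discriminator()  # judges real x vs. generated F(y)
D_Y = tiny_discriminator()  # judges real y vs. generated G(x)

x = torch.randn(1, 3, 128, 128)  # a sample from domain X
fake_y = G(x)                    # G(x), to be judged by D_Y
recon_x = F(fake_y)              # F(G(x)) should approximate x (cycle consistency)
```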

3.2. CycleGAN Loss Function

CycleGAN combines an adversarial loss with a cycle-consistency loss. The adversarial loss penalizes mismatches between the generated distribution and the target distribution, while the cycle-consistency loss prevents the two mappings from contradicting each other. Because CycleGAN is trained with unpaired samples, it is well suited to defect detection. Training involves two types of loss: adversarial losses and cycle-consistency losses.
1. Adversarial loss
To bring the generated data distribution closer to the real data distribution:
$$\mathcal{L}_{GAN}(G, D_Y, X, Y) = \mathbb{E}_{y \sim p_{data}(y)}[\log D_Y(y)] + \mathbb{E}_{x \sim p_{data}(x)}[\log(1 - D_Y(G(x)))] \tag{1}$$
As in a standard GAN, G implements the mapping X → Y; during training, G(x) should be as close to domain Y as possible, and the discriminator D_Y judges whether a sample is real or fake. The same minimax formulation as in GAN is used:
$$\min_G \max_{D_Y} \mathcal{L}_{GAN}(G, D_Y, X, Y) \tag{2}$$
Similarly, F implements the mapping Y → X:
$$\min_F \max_{D_X} \mathcal{L}_{GAN}(F, D_X, Y, X) \tag{3}$$
2. Cycle consistency loss
The adversarial loss only ensures that generated samples follow the same distribution as the real samples; on its own, it does not establish a one-to-one correspondence between an image and its translation in the corresponding domain, which the cycle constraint provides.
We want $\hat{x} = F(G(x)) \approx x$, called forward cycle consistency, and $\hat{y} = G(F(y)) \approx y$, called backward cycle consistency.
To ensure consistency as much as possible, set the corresponding loss as:
$$\mathcal{L}_{cyc}(G, F) = \mathbb{E}_{x \sim p_{data}(x)}\left[\|\hat{x} - x\|_1\right] + \mathbb{E}_{y \sim p_{data}(y)}\left[\|\hat{y} - y\|_1\right] \tag{4}$$
3. Overall loss
Generator G performs the migration from X to Y and generator F the migration from Y to X; at the same time, the two generators should act as mutual inverses, so that a sample mapped through both returns to itself:
$$\mathcal{L}(G, F, D_X, D_Y) = \mathcal{L}_{GAN}(G, D_Y, X, Y) + \mathcal{L}_{GAN}(F, D_X, Y, X) + \lambda \mathcal{L}_{cyc}(G, F) \tag{5}$$
where λ is the weight that controls the balance between the adversarial loss and the cycle-consistency loss.
The background of a defective image generated by CycleGAN is similar to that of a real defective image, so CycleGAN can synthesize defective samples simply by being fed new defect-free samples.
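As a sketch of how the loss terms in Formulas (1)–(5) combine in code, the snippet below writes the adversarial terms as binary cross-entropy on the discriminator scores (the negative of the log objective above, as is standard when minimizing) and the cycle term as L1 reconstruction error. G, F, D_X, D_Y, and x are assumed to be the networks and sample from the sketch in Section 3.1.

```python
import torch
import torch.nn.functional as nnf

def adversarial_loss(D, real, fake):
    # Negative of E[log D(real)] + E[log(1 - D(fake))]; minimized by the
    # discriminator, while the generator pushes D(fake) toward 1.
    real_s = torch.sigmoid(D(real))
    fake_s = torch.sigmoid(D(fake))
    return (nnf.binary_cross_entropy(real_s, torch.ones_like(real_s))
            + nnf.binary_cross_entropy(fake_s, torch.zeros_like(fake_s)))

def cycle_loss(G, F, x, y):
    # Formula (4): L1 distance of each sample to its round-trip reconstruction.
    return nnf.l1_loss(F(G(x)), x) + nnf.l1_loss(G(F(y)), y)

# Formula (5): full objective, with lambda weighting cycle consistency.
y = torch.randn(1, 3, 128, 128)  # a sample from domain Y (defective)
lam = 10.0                       # this paper uses 45 (CycleGAN1) and 100 (CycleGAN2)
total_loss = (adversarial_loss(D_Y, y, G(x))
              + adversarial_loss(D_X, x, F(y))
              + lam * cycle_loss(G, F, x, y))
```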

3.3. The Proposed Automatic Sample and Mask Generation Method

The proposed method’s workflow is depicted in Figure 3: the input, a defect-free LCD sample image $x^+$, is used to generate a large number of defective LCD sample images $\tilde{x}$ through CycleGAN1.
The defect-free sample $x^+$ and the corresponding generated defective sample $\tilde{x}$ are used as the input and target of CycleGAN2. During training, the defect component of the intermediate output $\bar{\bar{x}}$ is superimposed gradually; once enough defect information has accumulated, simple image processing and a binarization operation yield the image mask $x_{mask}$.
In this investigation, a learning rate of 0.0002 was used for CycleGAN1; this low value was chosen so that the synthetic defects closely resemble real defects. CycleGAN2 takes a large λ value (e.g., 100) so that the texture backgrounds of the defect-free input and the output remain as close as possible.
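Putting the two stages together, the following pseudocode-style sketch summarizes the workflow of Figure 3 with the hyperparameters given in the text (learning rate 0.0002; λ = 100 for CycleGAN2; λ = 45 for CycleGAN1, as chosen in Section 4.1). Note that train_cyclegan, keep_epoch_outputs, and extract_mask are hypothetical helper names standing in for full training loops, not library functions.

```python
# Stage 1 (CycleGAN1): learn defect-free -> defective and augment the dataset.
cyclegan1 = train_cyclegan(defect_free_images, real_defect_images,
                           lr=2e-4, lam=45)
generated_defects = [cyclegan1.G(x_plus) for x_plus in defect_free_images]

# Stage 2 (CycleGAN2): for each (defect-free, generated-defective) pair,
# retrain with a large cycle weight so the background is preserved and the
# defect is superimposed gradually over the epochs.
for x_plus, x_tilde in zip(defect_free_images, generated_defects):
    cyclegan2 = train_cyclegan([x_plus], [x_tilde], lr=2e-4, lam=100,
                               epochs=10,              # T = 10, see Formula (6)
                               keep_epoch_outputs=True)
    mask = extract_mask(cyclegan2.epoch_outputs)       # sketched after Formula (6)
```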
To identify defect regions in the image mask generated by CycleGAN2, the differences between the intermediate images generated at epochs t and the final image generated at the last epoch T are accumulated, namely:
$$\Delta E(x, y) = \sum_{t=3}^{T-1} \left| \bar{\bar{x}}_T(x, y) - \bar{\bar{x}}_t(x, y) \right| \tag{6}$$
where $\bar{\bar{x}}_t(x, y)$ is the output generated by CycleGAN2 at epoch t, for t = 3, 4, …, T − 1. Since the background texture is not reconstructed well during the first two cycles, t = 1, 2 are discarded. Empirical studies have shown that ten iterations (T = 10) are usually sufficient to segment the defect regions in the generated image. Background pixels, which the images share, produce small differences $\Delta E(x, y)$, while defective pixels produce large ones. A simple image processing operation and binary thresholding are then applied to segment defects in the difference image $\Delta E(x, y)$.
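A direct implementation of Formula (6) and the subsequent binarization might look as follows. The Gaussian smoothing and Otsu threshold stand in for the "simple image processing operation and binary thresholding" (the text does not name the exact operations), and epoch_outputs is assumed to hold one grayscale array per training epoch, epoch 1 first.

```python
import numpy as np
import cv2

def extract_mask(epoch_outputs, t_start=3):
    """epoch_outputs: list of H x W grayscale arrays, one per epoch (1..T)."""
    T = len(epoch_outputs)                            # T = 10 suffices per the text
    final = epoch_outputs[T - 1].astype(np.float32)   # the epoch-T output
    delta_e = np.zeros_like(final)
    for t in range(t_start, T):                       # t = 3, ..., T-1 (t = 1, 2 discarded)
        delta_e += np.abs(final - epoch_outputs[t - 1].astype(np.float32))
    # Shared background pixels give small accumulated differences; defects, large ones.
    delta_e = cv2.normalize(delta_e, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    delta_e = cv2.GaussianBlur(delta_e, (5, 5), 0)    # assumed denoising step
    _, mask = cv2.threshold(delta_e, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # assumed threshold
    return mask
```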

4. Experimental Results and Analysis

The experiments evaluate the performance of the GAN-based sample and mask generation scheme using LCD images with and without defects. The first experiment validates sample generation for dataset augmentation using the CycleGAN1 model; the second generates masks using the CycleGAN2 model; and finally, performance is evaluated using Mask R-CNN. The hardware and software configuration for the experiments includes an Nvidia RTX 4000 GPU and Python 3.

4.1. Dataset Augmentation Using CycleGAN to Generate Image Samples

To address the limited number of images in the original dataset, which is insufficient for effective training, data augmentation is necessary. Initially, common techniques frequently used in deep learning, such as rotation and mirroring, are applied to the original images (a minimal sketch of this step follows below). However, even with these techniques, the dataset remains limited in size. Therefore, a GAN-based sample augmentation method is employed.
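As a sketch of this conventional step, the function below produces eight geometric variants of an image (four rotations, each optionally mirrored) with OpenCV; the function name is ours.

```python
import cv2

def rotate_flip_augment(image):
    """Return 8 variants: 4 rotations x {original, horizontal mirror}."""
    variants = []
    rotations = (None, cv2.ROTATE_90_CLOCKWISE, cv2.ROTATE_180,
                 cv2.ROTATE_90_COUNTERCLOCKWISE)
    for rot in rotations:
        rotated = image if rot is None else cv2.rotate(image, rot)
        variants.append(rotated)
        variants.append(cv2.flip(rotated, 1))  # flipCode=1 mirrors horizontally
    return variants
```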
Since the number of existing defective LCD sample images is limited, conducting effective experiments poses a challenge. CycleGAN is utilized to expand the sample dataset by leveraging the capabilities of generative adversarial networks (GANs). CycleGAN can generate additional datasets based on the features extracted from a small amount of existing data, thus compensating for the scarcity of the original dataset.
Here, λ is the weight in the loss function that balances the adversarial loss and the consistency loss. The results in Figure 4 show that a smaller λ tends to generate images that highlight local defects while keeping the background similar. An excessively large λ fails to generate defects, while an excessively small λ hinders accurate reconstruction of the background texture. To strike a balance between defect synthesis and background preservation that aligns with practical requirements, this study uses λ = 45 for CycleGAN1.

4.2. Image Mask Generation Using the CycleGAN2 Model

Each existing defect-free image and the corresponding defect image generated from it by the CycleGAN1 model form an input pair for the CycleGAN2 model, and CycleGAN2 is trained on each pair separately. As shown in Figure 5, (a1–d1) are non-defective sample images, while (a2–d2) are the corresponding defective sample images generated by CycleGAN1. The generated defect sample image and the defect-free image are input into the CycleGAN2 model as the target image and input image, respectively. During training, the defect-free input image iterates gradually, accumulating and superimposing defect information until the defect sample is reproduced; this superposition process constitutes the generation of the sample mask.
In terms of CycleGAN2’s loss-function parameters, a larger λ emphasizes the consistency loss and thus tends to preserve the global background texture of the object surface. Figure 6 compares the results for different λ values. Our aim is to make the background of the generated defect image as similar as possible to the defect-free input, so CycleGAN2 requires a larger regularization value. T is the number of epochs over which CycleGAN2 generates intermediate comparison images for a sample pair during training, corresponding to Formula (6); the later the epoch, the closer the intermediate image is to the image produced by the finally trained model. The experimental results show that with a smaller λ, the background is not reconstructed in the early stage and defects appear only later, whereas a larger λ reconstructs the background well and synthesizes defects early. Therefore, CycleGAN2 uses λ = 100 in this paper.
To highlight defective pixels in the images synthesized by CycleGAN2, we integrated the intermediate process images, accumulating the differences between the generated images. Since the background texture is not well reconstructed in the early stage, the lower bound of the summation was varied to compare its effect on segmenting defect regions in the synthesized image.

4.3. Segmentation Results

Simple image processing and binarization were applied to the superimposed defect process image to obtain the segmentation results. As shown in Figure 7, the position of the defects is clearly visible in the segmented image, which can therefore be used to generate a mask.

4.4. Employing the Mask R-CNN Model for Recognition

The Mask R-CNN model was employed for training and testing on the LCD dataset. First, the dataset was prepared, comprising the defect sample images obtained previously and their corresponding segmented mask images. The Mask R-CNN model was then trained and tested. For a single detection target, we compared the surface area of the detected defect with an empirically determined surface area to calculate the recognition rate for a single image (an illustrative sketch of this scoring follows below). Figure 8 illustrates the test results using 30 samples, with recognition rates ranging from 0.730 to 0.999; the majority exceed 0.95. We tested two datasets: the sample set before expansion and the dataset expanded with GAN-generated samples. The detailed results are shown in Table 1.
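The text scores each detection by comparing the detected defect’s surface area with an empirically determined reference area but does not give the exact formula; the symmetric area ratio below is therefore only an illustrative assumption.

```python
import numpy as np

def recognition_rate(detected_mask, reference_area):
    """Illustrative score in [0, 1]: ratio of the smaller area to the larger.

    detected_mask: boolean H x W array of pixels Mask R-CNN labeled as defect.
    reference_area: empirically determined defect area, in pixels (assumption).
    """
    detected_area = float(np.count_nonzero(detected_mask))
    larger = max(detected_area, reference_area)
    return min(detected_area, reference_area) / larger if larger else 0.0
```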
As shown in Table 1, the Group1 experiments were performed on the unexpanded set of defective image samples. This group was trained with 30 random samples and tested with 14 samples. The recognition rate mostly exceeded 0.99, with one instance falling below 0.9; the average recognition rate across all test samples was 0.988. The model performed excellently on real samples, but the limitation imposed by the small sample size must be recognized.
The Group2 experiments were performed on LCD samples from the expanded defect image dataset. To train the model to the same extent, we again used 30 random samples for training and then tested on 50 samples. Most of the test results are above 0.95; however, 11 images scored below 0.9, which is related to the larger test set. The average recognition rate of all the test samples was 0.9463. Overall, detection performance on the generated sample dataset decreased; however, a sufficient data sample set is easy to obtain.
To quantitatively evaluate the performance of our proposed unsupervised automatic mask generation method, we compared it with a manually labeled mask generation method (LabelMe). We input the masks generated by these two methods into Mask R-CNN separately. As shown in Table 2, the final test results compare the impact of the two mask generation methods on the algorithm’s recognition performance.

4.5. Comparison of Segmentation Results

Figure 9 presents experimental results comparing our proposed method with the Gabor filter method, the wavelet method, and the U-Net segmentation method. The experiments reveal that the Gabor and wavelet methods struggle to segment images of defective LCD surfaces effectively, whereas our proposed method successfully detects these defects. The U-Net method can segment defect areas, but its segmentation results are not very accurate; moreover, U-Net performs only semantic segmentation and does not assign instance information to pixels. Our proposed method effectively solves this problem.

5. Conclusions

In the application of deep learning techniques to detect micro defects on low-contrast LCD surfaces, the challenges of imbalanced positive and negative samples and of the complex, laborious annotation and acquisition of image masks can be addressed by our proposed method of simultaneously auto-generating samples and masks using a deep generative network model. The approach greatly simplifies the acquisition of samples and their masks and is applicable to all supervised target detection networks that require mask labeling. The experimental findings on the detection of micro defects on low-contrast LCD surfaces substantiate the high detection accuracy achieved with the obtained image samples and image masks, highlighting the applicability of our proposed method to other domains requiring image sample augmentation and annotation.
However, the method may suffer from unsatisfactory generation quality when faced with targets with complex backgrounds for automatic generation. We will subsequently explore more powerful methods to handle the automatic generation and automatic detection of diverse targets.

Author Contributions

Conceptualization, H.W. and Y.L.; methodology, H.W.; software, Y.L. and Y.X.; validation, Y.L. and Y.X.; formal analysis, H.W.; investigation, Y.L.; resources, H.W.; data curation, Y.X.; writing—original draft preparation, H.W.; writing—review and editing, Y.X. and Y.L.; visualization, H.W.; supervision, H.W.; project administration, H.W.; funding acquisition, H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the China National Key Research and Development Project (2017YFE0113200) and the Anhui Provincial Natural Science Foundation (2108085ME166).

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy issues.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ren, Z.; Fang, F.; Yan, N.; Wu, Y. State of the art in defect detection based on machine vision. Int. J. Precis. Eng. Manuf.-Green Technol. 2022, 9, 661–691. [Google Scholar] [CrossRef]
  2. Singh, S.A.; Desai, K.A. Automated surface defect detection framework using machine vision and convolutional neural networks. J. Intell. Manuf. 2023, 34, 1995–2011. [Google Scholar] [CrossRef]
  3. Tao, J.; Zhu, Y.; Jiang, F.; Liu, H.; Liu, H. Rolling surface defect inspection for drum-shaped rollers based on deep learning. IEEE Sens. J. 2022, 22, 8693–8700. [Google Scholar] [CrossRef]
  4. Dong, H.; Song, K.; He, Y.; Xu, J.; Yan, Y.; Meng, Q. PGA-Net: Pyramid feature fusion and global context attention network for automated surface defect detection. IEEE Trans. Ind. Inform. 2019, 16, 7448–7458. [Google Scholar] [CrossRef]
  5. Lee, M.; Jeon, J.; Lee, H. Explainable AI for domain experts: A post Hoc analysis of deep learning for defect classification of TFT–LCD panels. J. Intell. Manuf. 2021, 33, 1747–1759. [Google Scholar] [CrossRef]
  6. Pratt, W.K.; Sawkar, S.S.; O’Reilly, K. Automatic blemish detection in liquid crystal flat panel displays. In Machine Vision Applications in Industrial Inspection VI; International Society for Optics and Photonics: Bellingham, WA, USA, 1998; Volume 3306, pp. 2–13. [Google Scholar]
  7. Lu, H.P.; Su, C.T. CNNs Combined with a Conditional GAN for Mura Defect Classification in TFT-LCDs. IEEE Trans. Semicond. Manuf. 2021, 34, 25–33. [Google Scholar] [CrossRef]
  8. Kim, M.; Lee, M.; An, M.; Lee, H. Effective automatic defect classification process based on CNN with stacking ensemble model for TFT-LCD panel. J. Intell. Manuf. 2020, 31, 1165–1174. [Google Scholar] [CrossRef]
  9. Xie, X. A review of recent advances in surface defect detection using texture analysis techniques. ELCVIA Electron. Lett. Comput. Vis. Image Anal. 2008, 7, 1–22. [Google Scholar] [CrossRef]
  10. Shu, Y.; Zuo, D.; Zhang, J.; Li, J.; Gan, H.; Chen, T.; Luo, L. Analysis of textile defects based on PCA-NLM. J. Intell. Fuzzy Syst. 2020, 38, 1463–1470. [Google Scholar]
  11. Ahmad, J.; Akula, A.; Mulaveesala, R.; Sardana, H.K. An independent component analysis based approach for frequency modulated thermal wave imaging for subsurface defect detection in steel sample. Infrared Phys. Technol. 2019, 98, 45–54. [Google Scholar] [CrossRef]
  12. Zhong, Z.; Ma, Z. A Novel Defect Detection Algorithm for Flexible Integrated Circuit Package Substrates. IEEE Trans. Ind. Electron. 2022, 69, 2117–2126. [Google Scholar] [CrossRef]
  13. Tu, Z.; Wang, S.; Shen, Z. Printed circuit board inspection and sorting system based on machine vision. Mod. Electron. Technol. 2022, 45, 5–9. [Google Scholar]
  14. Kim, S.Y.; Song, Y.C.; Jung, C.D.; Park, K.H. Effective defect detection in thin film transistor liquid crystal display images using adaptive multi-level defect detection and probability density function. Opt. Rev. 2011, 18, 191–196. [Google Scholar] [CrossRef]
  15. Zhang, Y.; Zhang, Y.; Gong, J. A LCD Screen Mura Defect Detection Method Based on Machine Vision. In Proceedings of the 2020 Chinese Control and Decision Conference (CCDC), Hefei, China, 22–24 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 4618–4623. [Google Scholar]
  16. Chen, C.S.; Weng, C.M.; Tseng, C.C. An efficient detection algorithm based on anisotropic diffusion for low-contrast defect. Int. J. Adv. Manuf. Technol. 2018, 94, 4427–4449. [Google Scholar] [CrossRef]
  17. Yang, H.; Song, K.; Mei, S.; Yin, Z. An accurate mura defect vision inspection method using outlier-prejudging-based image background construction and region-gradient-based level set. IEEE Trans. Autom. Sci. Eng. 2018, 15, 1704–1721. [Google Scholar] [CrossRef]
  18. Mei, S.; Yang, H.; Yin, Z. Unsupervised-Learning-Based Feature-Level Fusion Method for Mura Defect Recognition. IEEE Trans. Semicond. Manuf. 2017, 30, 105–113. [Google Scholar] [CrossRef]
  19. Yang, H.; Mei, S.; Song, K.; Tao, B.; Yin, Z. Transfer-Learning-Based Online Mura Defect Classification. IEEE Trans. Semicond. Manuf. 2018, 31, 116–123. [Google Scholar] [CrossRef]
  20. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef]
  21. Fang, F.; Li, L.; Zhu, H.; Lim, H. Combining Faster R-CNN and Model-Driven Clustering for Elongated Object Detection. IEEE Trans. Image Process. 2020, 29, 2052–2065. [Google Scholar] [CrossRef]
  22. Huang, Z.; Huang, L.; Gong, Y.; Huang, C.; Wang, X. Mask scoring r-cnn. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 6409–6418. [Google Scholar]
  23. Bolya, D.; Zhou, C.; Xiao, F.; Lee, Y.J. Yolact: Real-time instance segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27–28 October 2019; pp. 9157–9166. [Google Scholar]
  24. Singh, R.B.; Kumar, G.; Sultania, G.; Agashe, S.S.; Sinha, P.R.; Kang, C. Deep Learning based Mura Defect Detection. EAI Endorsed Trans. Cloud Syst. 2019, 5, e6. [Google Scholar] [CrossRef]
  25. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
  26. Chen, X.; Girshick, R.; He, K.; Dollár, P. Tensormask: A foundation for dense object segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27–28 October 2019; pp. 2061–2069. [Google Scholar]
  27. Cheng, B.; Misra, I.; Schwing, A.G.; Kirillov, A.; Girdhar, R. Masked-attention mask transformer for universal image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–22 June 2022; pp. 1290–1299. [Google Scholar]
  28. Çelik, A.; Küçükmanisa, A.; Sümer, A.; Çelebi, A.T.; Urhan, O. A real-time defective pixel detection system for LCDs using deep learning based object detectors. J. Intell. Manuf. 2020, 2022, 985–994. [Google Scholar]
  29. Lin, H.; Li, B.; Wang, X.; Shu, Y.; Niu, S. Automated defect inspection of LED chip using deep convolutional neural network. J. Intell. Manuf. 2019, 30, 2525–2534. [Google Scholar] [CrossRef]
  30. Yin, Y.; Xiao, Q. Image generation based on deep convolutional generative adversarial network. Comput. Technol. Dev. 2021, 31, 86–92. [Google Scholar]
  31. Liu, Z.; Xie, Q.; Wang, C.; Zhang, Y.; Li, J.; Xie, J.; Dai, Z.; Hou, J. Partial discharge data enhancement and pattern recognition method based on Cycle GAN and deep residual network. High Voltage Electr. Appl. 2022, 58, 106–113. [Google Scholar]
  32. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017. arXiv 2017, arXiv:1703.10593. Available online: https://arxiv.org/abs/1703.10593 (accessed on 26 October 2017).
  33. Liu, J.; Wu, H.; Liu, Y.; Wang, J. Automatic Generation and Detection Method of LCD Samples Based on Deep Learning. In Proceedings of the 2022 5th World Conference on Mechanical Engineering and Intelligent Manufacturing (WCMEIM), Ma’anshan, China, 18–20 November 2022. [Google Scholar]
  34. Torralba, A.; Russell, B.C.; Yuen, J. Labelme: Online image annotation and applications. Proc. IEEE 2010, 98, 1467–1484. [Google Scholar] [CrossRef]
  35. Schlegl, T.; Seeböck, P.; Waldstein, S.M.; Schmidt-Erfurth, U.; Langs, G. Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery. In Proceedings of the International Conference on Information Processing in Medical Imaging, Boone, NC, USA, 25–30 June 2017; Springer: Cham, Switzerland, 2017; pp. 146–157. Available online: https://arxiv.org/abs/1703.05921 (accessed on 23 May 2017).
  36. Kwon, D.; Kim, H.; Kim, J.; Suh, S.C.; Kim, I.; Kim, K.J. A survey of deep learning-based network anomaly detection. Clust. Comput. 2019, 22, 949–961. [Google Scholar] [CrossRef]
  37. Kalantar, R.; Messiou, C.; Winfield, J.M.; Renn, A.; Latifoltojar, A.; Downey, K.; Sohaib, A.; Lalondrelle, S.; Koh, D.M.; Blackledge, M.D. CT-Based Pelvic T1-Weighted MR Image Synthesis Using UNet, UNet++ and Cycle-Consistent Generative Adversarial Network (Cycle-GAN). Front. Oncol. 2021, 11, 665807. [Google Scholar] [CrossRef]
Figure 1. Defects on the low-contrast LCD surface: (a1–a3) the defects; (b1–b3) the enhanced effects of these defects; the dotted line outlines the location of the defect in (b3).
Figure 2. The CycleGAN model diagram.
Figure 3. Automatic sample and mask generation method ($x^+$: defect-free images; $\tilde{x}$: generated defect image).
Figure 4. The comparison involves various λ values for CycleGAN1: (a1a4) exhibit generated defect images corresponding to λ values of 1, 10, 45, and 200, respectively; and (b1b4) display defect-free images generated for the same λ values of 1, 10, 45, and 200, respectively.
Figure 5. Paired image samples for CycleGAN2 training: (a1–d1) non-defective sample images; (a2–d2) corresponding defective sample images generated by CycleGAN1.
Figure 6. Samples generated by CycleGAN2 from T = 1 to 10 with different λ values.
Figure 7. Segmentation results corresponding to defect images (a1–d1) and defective samples (a2–d2). The segmentation results were obtained by simple image processing and binarization of the defective images.
Figure 8. The detection results using the Mask R-CNN method.
Figure 9. (a1,b1) Visualization of the defect surface image alongside its corresponding colored gray-value display; (a2,b2) three-dimensional representation of the grayscale values within the defect image; (a3) detection results obtained using our proposed Mask R-CNN method; (b3) detection outcomes achieved through the Gabor method; (a4) detection results generated by the wavelet method; (b4) detection outcomes obtained using the U-Net method.
Table 1. Recognition rate distribution in each group of test results.
Recognition Rate    <0.90    0.90–0.95    0.95–0.99    >0.99
Group1               1        0            2            11
Group2              11        0           11            28
Table 2. Comparison of recognition accuracy and object detection outcomes across diverse mask generation techniques (the best results are in bold).
Method                                           mAP (bbox)
Mask R-CNN with LabelMe (Group 1)                99.99%
Mask R-CNN with LabelMe (Group 2)                96.8%
Mask R-CNN with our proposed method (Group 1)    98.8%
Mask R-CNN with our proposed method (Group 2)    94.63%
