Article

Crosstalk Defect Detection Method Based on Salient Color Channel Frequency Domain Filtering

by Wenqiang Xie, Huaixin Chen, Zhixi Wang, Xing Liu, Biyuan Liu and Lingyu Shuai

1 Department of Resources and Environment, University of Electronic Science and Technology of China, Chengdu 611731, China
2 Novel Product R&D Department, Truly Opto-Electronics Co., Ltd., Shanwei 516600, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(14), 5426; https://doi.org/10.3390/s22145426
Submission received: 14 June 2022 / Revised: 13 July 2022 / Accepted: 19 July 2022 / Published: 20 July 2022
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)

Abstract

Display crosstalk defect detection is an important step in the display quality inspection process. We propose a crosstalk defect detection method based on salient color channel frequency domain filtering. First, the salient color channel in RGBY is selected by the maximum relative entropy criterion and combined with the Lab color space to form the color quaternion matrix of the displayed image. Second, the image color quaternion matrix is converted into a logarithmic spectrum in the frequency domain through the hyper-complex Fourier transform. Finally, Gaussian threshold band-pass filtering and the hyper-complex inverse Fourier transform are used to separate low-contrast defects from the background of the display image. The experimental results show that the proposed algorithm reaches 96% accuracy across a variety of crosstalk defects. Comparisons with current state-of-the-art defect detection algorithms confirm the effectiveness of the proposed method for low-contrast crosstalk defect detection.

1. Introduction

Display quality inspection plays a key role in the display production process. The existing inspection process still relies on manual detection, which is affected by the inspectors' subjective judgment and suffers from low efficiency and unstable accuracy [1]. On small wearable devices in particular, a defect whose gray level is close to the background is imperceptible to the human eye. Therefore, applying machine vision and digital image processing technology to display defect detection has become an urgent need.
The existing display defect detection methods are mainly divided into three types: methods based on image registration [2,3,4], background reconstruction [5,6,7,8,9,10], and deep learning [11,12,13,14,15,16,17]. Shuai et al. [2] proposed a histogram equalization method to adjust the brightness of the registered image, which effectively suppresses the edge afterimages caused by unaligned edges and extracts multiscale defects; however, this method cannot perform image registration on a solid-color background. Yang et al. [6] proposed a method based on abnormal region detection and level set segmentation for Mura defects: they analyzed the shortcomings of polynomial fitting, used it to obtain candidate abnormal regions, and then applied the level set method for accurate defect segmentation. This method can detect weak-contrast defects, but when the defect area is large it cannot obtain defect-free regions, and it then fails to detect abnormal areas or to overcome edge problems. Zhu et al. [11] proposed a hierarchical multi-frequency-based channel attention network, which uses the attention mechanism to weight scratch defects with different aspect ratios, effectively detecting defects of different shapes. Zhu et al. [12] proposed using Yolov3 [18] to detect point-like and abnormally displayed defects, which can effectively detect defects in multiple backgrounds simultaneously. Chang et al. [13] proposed a method combining image preprocessing with a convolutional neural network (CNN) and designed a strategy to address the sample imbalance caused by small defect areas against large backgrounds. Deep learning methods require large datasets to guarantee detection capability and therefore cannot handle small-sample data effectively. Lo Sciuto et al. [17] proposed using Zernike moments to extract image features and elliptic basis function neural networks for classification, which can effectively detect multiple defects.
The crosstalk picture is a specific display pattern used during display quality inspection, designed to test the display's crosstalk behavior. A crosstalk picture that cannot be displayed properly by a defective display screen is called a crosstalk defect. Two situations make crosstalk defect detection especially difficult: (1) the defect can cover a large area, nearly half of the screen, appearing as continuous color lines with variable colors; (2) the contrast between the defect area and the background is not fixed, and low contrast makes the defect area difficult to detect.
The existing methods present some difficulties in the detection of crosstalk defects, so we propose a crosstalk defect detection method based on salient color channel and frequency domain filtering. For the problem of a large crosstalk defect area, we adopted the salient color feature extraction method [19], used relative entropy to adaptively select the salient color channel, and combined it with the Lab color space [20] to form the image color quaternion matrix [21]. For the problem of low contrast in defect areas, we adopted the frequency-domain Gaussian threshold screening and band-pass filtering (GTB) method, which significantly enhances the saliency of low-contrast defects.
Our contributions are as follows:
(1) A new crosstalk defect detection method is proposed, which combines color feature extraction with frequency-domain GTB filtering to achieve efficient and accurate detection of crosstalk defects under low contrast and strong background noise.
(2) An adaptive salient color channel selection method is proposed, which retains salient color features for large defects and solves the problem of difficult feature extraction.
(3) The GTB frequency-domain filtering method is proposed, which enhances the salient regions of defects, suppresses interference from background noise, and realizes effective separation of low-contrast crosstalk defects from background noise.
This article is organized as follows: Section 2 presents related work, Section 3 describes the proposed color saliency channel selection method and frequency-domain GTB filtering method, Section 4 discusses the experimental results, and Section 5 summarizes the content of this paper.

2. Related Works

In the past ten years, with the rapid development of display technology, a large number of methods for detecting weak-contrast defects in display screens have appeared. Ngo et al. [5] applied low-pass filtering, polynomial fitting, and the discrete cosine transform to the input image to reconstruct the background, obtained multiple defect shadow maps, and used threshold segmentation for defect detection. Jin et al. [22] proposed a Mura defect detection method using discrete cosine transform (DCT) background reconstruction and a bi-segment exponential transform: the bi-segment exponential transform effectively enhances the contrast of low-contrast defects, and Otsu's method is used to segment the defects accurately. Fan et al. [23] used polynomial fitting for background reconstruction and threshold segmentation to obtain defect candidate regions, which can efficiently detect low-contrast defects. Cui et al. [24] adopted Otsu's method to select defect candidate regions, then used variance and meshing to detect Mura and edge defects. In addition, there are defect detection methods based on defect features, such as color features [25], histogram similarity [26], and dictionary learning [27].
Saliency-based target detection mainly aims at the regions of natural scenes that most attract human visual attention. Itti et al. [19] proposed fusing color, brightness, and scale features to obtain salient object regions. Guo et al. [28] proposed combining color and motion features to highlight salient regions using the phase spectrum in the frequency domain. Li et al. [29] proposed convolving the frequency-domain amplitude spectrum with a Gaussian function, which effectively highlights the salient target area; the parameters of the Gaussian function are determined by the scale of the salient target. Saliency methods can effectively capture the human eye's perceptual area, and some saliency-based methods have been applied to defect detection [30,31,32]. Liu et al. [33] improved the hyper-complex Fourier transform (HFT) method by adding two-dimensional entropy features to the input features, achieving effective extraction of fabric defects.

3. Methodology

3.1. Algorithm Architecture

Figure 1 shows the framework of the crosstalk defect detection method based on salient color channel frequency domain filtering.
First, the original image is converted into the RGBY color space and the Lab color space, and the maximum relative entropy criterion is applied to the opposite color channels in the RGBY space to evaluate the defect saliency of each channel. The salient color channels and the Lab color space are then selected to form the image color quaternion matrix. Second, the quaternion color matrix is transformed into the hyper-complex frequency domain using the HFT, and the magnitude spectrum is processed with GTB filtering. Finally, the inverse hyper-complex Fourier transform (IHFT) is applied to obtain the frequency-domain saliency map, which then undergoes region segmentation to produce the defect detection result.

3.2. Salient Color Channel Selection

The RGBY color space [19] adopts the human visual competition mechanism, which hinders the effective selection of the color channel that best highlights defects and suppresses the background when detecting crosstalk defects. Different channels contain different amounts of background and target information, so we design a competition mechanism between defective targets and backgrounds for feature extraction, which retains as much of the target as possible while suppressing the background.
The human visual competition color space RGBY proposed in [19] is generally used in saliency detection. It is calculated as:

$$\begin{cases} R = r - \dfrac{g+b}{2}, \\ G = g - \dfrac{r+b}{2}, \\ B = b - \dfrac{r+g}{2}, \\ Y = \dfrac{r+g}{2} - \dfrac{|r-g|}{2} - b, \end{cases}$$

$$\begin{cases} RG = R - G, \\ BY = B - Y, \\ I = \dfrac{r+g+b}{3}, \end{cases}$$
where r, g, and b are the three channels of the original image. The RGBY color space decomposes the RGB input image into two parts: the RGBY color features and the luminance I. As pointed out in [34], human vision uses a color competition mechanism, from which the RGBY competition color space is derived. In this paper, the RGBY color features implement the color-difference operation, which roughly suppresses the background and highlights the abnormal part.
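As a concrete reference, the decomposition above can be implemented in a few lines. The following NumPy sketch is an illustrative implementation (not the authors' MATLAB code) and assumes a float RGB image with values in [0, 1]:

```python
import numpy as np

def rgby_decompose(img):
    """Itti-style RGBY opponent channels and luminance.

    img: H x W x 3 float array with channels r, g, b in [0, 1].
    Returns the opponent channels RG, BY and the luminance I.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    R = r - (g + b) / 2
    G = g - (r + b) / 2
    B = b - (r + g) / 2
    Y = (r + g) / 2 - np.abs(r - g) / 2 - b
    RG = R - G            # red-green opponent channel
    BY = B - Y            # blue-yellow opponent channel
    I = (r + g + b) / 3   # luminance
    return RG, BY, I
```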
Based on the original RG and BY channels, the opposite GR and YB channels are added, expressed as:

$$\begin{cases} GR = G - R, \\ YB = Y - B, \end{cases}$$
The opposite color channel is selected adaptively by comparing the area difference between defect and background among the RG, GR, BY, and YB channels. Since the area of the defect is difficult to calculate directly, we instead calculate the area of the background: when the background area contained in an opposite channel is large, that channel is discarded. The area comparison uses the relative entropy, whose calculation formula is:
$$H_{kl}(P \parallel Q) = \sum_x P(x) \log \frac{P(x)}{Q(x)},$$
The relative entropy represents the difference between the gray-level distribution P(x) of input feature P and the gray-level distribution Q(x) of input feature Q; when the two are identical, $H_{kl} = 0$, so it effectively represents the distance between feature distributions. The brightness channel contains stable background information, so we calculate the relative entropy between each RGBY color channel and the brightness I, and retain the opposite feature space with the larger value.
The maximum entropy criterion is described as:
$$f_1 = \begin{cases} RG, & H_{kl}(RG \parallel I) > H_{kl}(GR \parallel I), \\ GR, & H_{kl}(RG \parallel I) \le H_{kl}(GR \parallel I), \end{cases}$$

$$f_2 = \begin{cases} BY, & H_{kl}(BY \parallel I) > H_{kl}(YB \parallel I), \\ YB, & H_{kl}(BY \parallel I) \le H_{kl}(YB \parallel I). \end{cases}$$
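A sketch of this adaptive selection follows. It assumes (our choice, not stated explicitly in the paper) that the distributions P(x) and Q(x) are 256-bin gray-level histograms, with a small epsilon to avoid a zero denominator; note that GR = −RG and YB = −BY, so only the sign needs flipping:

```python
import numpy as np

def relative_entropy(p_img, q_img, bins=256, eps=1e-12):
    """KL divergence H_kl(P || Q) between gray-level histograms of two maps."""
    lo = min(p_img.min(), q_img.min())
    hi = max(p_img.max(), q_img.max())
    p, _ = np.histogram(p_img, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_img, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))

def select_opponent_channels(RG, BY, I):
    """Maximum relative entropy criterion of Section 3.2."""
    f1 = RG if relative_entropy(RG, I) > relative_entropy(-RG, I) else -RG  # GR = -RG
    f2 = BY if relative_entropy(BY, I) > relative_entropy(-BY, I) else -BY  # YB = -BY
    return f1, f2
```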

3.3. Frequency Domain GTB Filtering

3.3.1. Quaternion Representation and Hypercomplex Fourier Transform

The Lab color space conforms to the perceptual properties of the human eye [20] and preserves sufficient color within its color channels. The color description of the ab channels is broadly consistent with that of the RGBY opposite space, so the ab channels and the RGBY color features are used to form the quaternion color feature matrix.
The image quaternion color matrix is represented as follows:
$$f(n,m) = f_1 + f_2 i + f_3 j + f_4 k,$$
where $i$, $j$, and $k$ are the imaginary units satisfying $i^2 = j^2 = k^2 = ijk = -1$; $f_1$ and $f_2$ are the opposite color channels selected in Section 3.2; $f_3$ is the a channel of the Lab color space; and $f_4$ is the b channel.
The image quaternion matrix is transformed to the frequency domain space using the hyper-complex Fourier transform, calculated as follows:
$$F_H[u,v] = \frac{1}{\sqrt{MN}} \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} e^{-\mu 2\pi \left( \frac{mv}{M} + \frac{nu}{N} \right)} f(n,m),$$
where $\mu$ is a unit pure quaternion with $\mu^2 = -1$.
The inverse hyper-complex Fourier transform is calculated as follows:

$$f_h(n,m) = \frac{1}{\sqrt{MN}} \sum_{v=0}^{M-1} \sum_{u=0}^{N-1} e^{\mu 2\pi \left( \frac{mv}{M} + \frac{nu}{N} \right)} F_H[u,v].$$
The amplitude spectrum, phase spectrum, and eigen-axis spectrum are calculated as follows:

$$A_m(u,v) = \left| F_H[u,v] \right|,$$

$$P(u,v) = \tan^{-1} \frac{\left\| \mathrm{Img}(F_H[u,v]) \right\|}{\mathrm{Real}(F_H[u,v])},$$

$$\chi(u,v) = \frac{\mathrm{Img}(F_H[u,v])}{\left\| \mathrm{Img}(F_H[u,v]) \right\|},$$
where $|\cdot|$ is the modulus operation, $\mathrm{Img}(\cdot)$ extracts the imaginary (vector) part, and $\mathrm{Real}(\cdot)$ extracts the real (scalar) part; $A_m(u,v)$ is the magnitude spectrum, $P(u,v)$ is the phase spectrum, and $\chi(u,v)$ is a pure quaternion matrix.
For further processing, the magnitude spectrum of the image quaternion matrix is converted to a log spectrum as follows:

$$A(u,v) = \log \left( A_m(u,v) + 1 \right),$$
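In practice, the HFT of a quaternion matrix is commonly computed through the symplectic decomposition f = (f1 + f2·i) + (f3 + f4·i)·j, which reduces it to two ordinary complex 2-D FFTs; the amplitude is then the root of the summed squared moduli. The sketch below follows this standard trick (an implementation assumption on our part, with the axis μ taken as i):

```python
import numpy as np

def hft_spectra(f1, f2, f3, f4):
    """Hyper-complex FFT of f = f1 + f2*i + f3*j + f4*k via two complex FFTs.

    Returns the simplex/perplex spectra F1, F2, the amplitude spectrum Am,
    and the log spectrum A = log(Am + 1).
    """
    F1 = np.fft.fft2(f1 + 1j * f2)   # simplex part (1, i components)
    F2 = np.fft.fft2(f3 + 1j * f4)   # perplex part (j, k components)
    Am = np.sqrt(np.abs(F1) ** 2 + np.abs(F2) ** 2)   # |F_H[u, v]|
    A = np.log1p(Am)                 # log spectrum A(u, v)
    return F1, F2, Am, A
```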

3.3.2. Gaussian Filter Parameter Optimization

To enhance the saliency of the defect region, the magnitude spectrum is convolved with a Gaussian template. The existing way to obtain the saliency result is to perform an information entropy calculation on each obtained saliency map and select the required saliency map by the entropy criterion. Through experimental analysis, we find that the best saliency map of crosstalk defects can be obtained by seeking the minimum entropy of the magnitude spectrum.
Define the size of the input image as $(M, N)$, and set the ranges of the template size $k$ and standard deviation $\sigma$ to:

$$k_n = \min\{M, N\} \times 0.01 \times n, \quad n = 1, 2, 3, \ldots, 8,$$

$$\sigma_n = \frac{3n}{2}, \quad n = 1, 2, 3, \ldots, 8.$$
A high-contrast crosstalk defect image is selected for the optimal Gaussian template parameter selection, in order to show that extracting the information entropy from the frequency-domain amplitude spectrum can replace extracting it from the saliency map. The saliency maps obtained with different Gaussian templates are evaluated using the saliency indicator NSS (Normalized Scanpath Saliency) [35] to verify the effectiveness of the optimal parameters.
$$\mathrm{NSS}(P, Q^B) = \frac{1}{N} \sum_i \bar{P}_i \times Q_i^B,$$

$$N = \sum_i Q_i^B,$$

$$\bar{P} = \frac{P - \mu(P)}{\sigma(P)},$$
where $P$ denotes the input saliency map and $Q_i^B$ is the binary map of the target area of the input saliency map. The one-dimensional entropy is calculated as follows:
$$H = -\sum p \log p,$$
where H represents the information entropy of the amplitude spectrum and p represents the statistics of the gray histogram of the amplitude spectrum.
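A minimal sketch of the resulting scale search, assuming `scipy.ndimage.gaussian_filter` for the Gaussian convolution and the reconstructed schedule $\sigma_n = 3n/2$; the histogram entropy implements $H = -\sum p \log p$:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hist_entropy(x, bins=256):
    """One-dimensional entropy H = -sum(p log p) of the gray histogram of x."""
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return float(-np.sum(p * np.log(p)))

def best_gaussian_sigma(log_amplitude):
    """Select the sigma whose smoothed amplitude spectrum has minimum entropy."""
    sigmas = [3 * n / 2 for n in range(1, 9)]   # sigma_n = 3n/2, n = 1..8
    smoothed = [gaussian_filter(log_amplitude, s) for s in sigmas]
    k = int(np.argmin([hist_entropy(a) for a in smoothed]))
    return sigmas[k], smoothed[k]
```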
We apply Gaussian convolutions with different Gaussian functions to the original amplitude spectrum, use the one-dimensional entropy to calculate the entropy value of each convolved spectrum, and use the NSS to evaluate all the saliency maps obtained after convolution, as shown in Figure 2.
The horizontal axes in Figure 2 represent the standard deviations of the different Gaussian templates. The vertical axis in Figure 2a is the one-dimensional entropy of the amplitude spectrum, and the vertical axis in Figure 2b is the NSS value of the saliency map. The data show that the minimum entropy value yields the best saliency map of crosstalk defects. Figure 2 also shows that the size of the Gaussian window has little effect on the saliency map, influencing the result only when the standard deviation is large.
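The NSS evaluation in Figure 2b can be reproduced directly from its definition; a small sketch, where `saliency` is the map P and `target_mask` a binary mask of the defect region Q^B:

```python
import numpy as np

def nss(saliency, target_mask):
    """Normalized Scanpath Saliency: mean z-scored saliency over target pixels."""
    p_bar = (saliency - saliency.mean()) / saliency.std()
    return float(p_bar[target_mask.astype(bool)].mean())
```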

3.3.3. Frequency Domain Threshold Screening and Bandpass Filtering

To enhance the saliency of the defect area and suppress the background noise, threshold screening and band-pass filtering are performed on the magnitude spectrum after Gaussian convolution.
Threshold screening requires the mean and standard deviation of the original amplitude spectrum, calculated as follows:
$$th = \mu_s + K \delta_s,$$

$$\mu_s = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} A(u,v)_{i,j},$$

$$\delta_s = \sqrt{ \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( A(u,v) - \mu_s \right)^2 }.$$
where $th$ is the amplitude spectrum segmentation threshold, $A(u,v)$ is the original logarithmic amplitude spectrum, $\mu_s$ is its mean, $\delta_s$ is its standard deviation, and $K$ is selected according to the actual image. Following Gaussian convolution, threshold filtering is performed on the amplitude spectrum: amplitude values greater than the threshold are retained, yielding the threshold-filtered amplitude spectrum.
The filtering conditions are as follows:

$$\begin{cases} F(\omega), & F(\omega) \ge th, \\ 0, & \text{else}, \end{cases}$$
where $F(\omega)$ is the amplitude value after Gaussian convolution.
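A minimal sketch of the screening step; per the text, the mean and standard deviation come from the original log-amplitude spectrum, while the screening is applied to the Gaussian-convolved spectrum (the default K below is only a placeholder, since K is image-dependent):

```python
import numpy as np

def threshold_screen(A_orig, A_smoothed, K=1.0):
    """Keep smoothed amplitudes above th = mu_s + K * delta_s.

    A_orig: original log-amplitude spectrum (source of mu_s, delta_s).
    A_smoothed: log-amplitude spectrum after Gaussian convolution.
    """
    th = A_orig.mean() + K * A_orig.std()
    return np.where(A_smoothed >= th, A_smoothed, 0.0)
```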
The pass-band is selected by comparing the amplitude spectrum of a defect-free area with the threshold-screened amplitude spectrum of the defect. A suitable band-pass filter is then designed to filter the threshold-screened amplitude spectrum. The filter is described as:
$$H(u,v) = \begin{cases} 1, & D_0 - \frac{W}{2} \le D(u,v) \le D_0 + \frac{W}{2}, \\ 0, & \text{else}, \end{cases}$$
where $H(u,v)$ is the band-pass filter; its pass-band range $\left( D_0 - \frac{W}{2},\ D_0 + \frac{W}{2} \right)$ is determined by the actual situation. The defect amplitude spectrum is obtained after band-pass filtering, and the inverse hyper-complex Fourier transform is then performed on it:
$$S = F_H^{-1} \left\{ \left( \exp \left( F(\omega) \right) - 1 \right) e^{\chi(u,v) P(u,v)} \right\},$$
where $F(\omega)$ represents the amplitude spectrum retained after band-pass filtering, $S$ represents the obtained defect saliency map, $P(u,v)$ is the original phase spectrum, and $\chi(u,v)$ is the eigen-axis spectrum.
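A sketch of the band-pass mask and the IHFT reconstruction, continuing the two-complex-FFT representation used earlier. Rescaling F1 and F2 by (new amplitude / old amplitude) implicitly keeps the original phase and eigen-axis; D(u,v) is taken as the distance from the spectrum center, so the mask assumes an fftshift-ed layout (both are implementation assumptions on our part):

```python
import numpy as np

def bandpass_mask(shape, D0, W):
    """Ideal band-pass H(u,v): 1 where D0 - W/2 <= D(u,v) <= D0 + W/2.

    Assumes the spectrum is fftshift-ed so the DC term sits at the center.
    """
    M, N = shape
    u = np.arange(M).reshape(-1, 1) - M / 2
    v = np.arange(N).reshape(1, -1) - N / 2
    D = np.sqrt(u ** 2 + v ** 2)
    return ((D >= D0 - W / 2) & (D <= D0 + W / 2)).astype(float)

def reconstruct_saliency(F1, F2, A_filtered):
    """IHFT with the filtered amplitude and the original phase/eigen-axis."""
    Am = np.sqrt(np.abs(F1) ** 2 + np.abs(F2) ** 2) + 1e-12   # original amplitude
    scale = np.expm1(A_filtered) / Am    # exp(.) - 1 undoes the log spectrum
    s1 = np.fft.ifft2(F1 * scale)
    s2 = np.fft.ifft2(F2 * scale)
    return np.abs(s1) ** 2 + np.abs(s2) ** 2                  # saliency map S
```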

4. Experimental Results

4.1. Crosstalk Defect Data and Image Quality Evaluation

The three main types of crosstalk defects are shown in Figure 3. In Type 1, the defective part has high contrast and there is little speckle noise in the background; in Type 2, the contrast between the gray levels of the defect and the background is low, and there is little speckle noise in the background; in Type 3, the defective part has high contrast, and the background contains considerable noise. In the following discussion, Type 1 denotes high-contrast, low-noise defect images; Type 2 low-contrast, low-noise defect images; and Type 3 high-contrast, high-noise defect images. All compared display defect detection methods were implemented in MATLAB R2018b, and all experiments were performed on the same computer with an Intel Core i7-7700 CPU @ 3.60 GHz, 16 GB RAM, and a Windows 7 64-bit operating system.
To quantitatively evaluate the relationship between defects and background in the original image, the PSNR (Peak Signal-to-Noise Ratio) [36] and MSE (Mean Square Error) [37] metrics are used.
$$\mathrm{PSNR} = 10 \log_{10} \frac{\left( 2^n - 1 \right)^2}{\mathrm{MSE}},$$

$$\mathrm{MSE} = \left| std_b - std_t \right|,$$

where $std_b$ is the standard deviation of the background, $std_t$ is the standard deviation of the defect, and $n$ is the bit depth of the image.
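Note that MSE here is not the usual pixel-wise mean squared error but the absolute difference of two standard deviations. A small sketch under exactly that definition (the defect mask is assumed to be given):

```python
import numpy as np

def contrast_metrics(img, defect_mask, n_bits=8):
    """PSNR and MSE as defined above, with MSE = |std_b - std_t|."""
    defect_mask = defect_mask.astype(bool)
    std_t = img[defect_mask].std()    # defect region
    std_b = img[~defect_mask].std()   # background region
    mse = abs(std_b - std_t)
    psnr = 10 * np.log10((2 ** n_bits - 1) ** 2 / mse)  # assumes mse > 0
    return psnr, mse
```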
The image quality evaluation results are shown in Table 1:
Across the three image types in Table 1, the MSE varies considerably while the PSNR varies little. Type 3 has the most background noise, so its MSE is the largest. In Type 2, both the contrast between background and defect and the noise are low, so its MSE is small and its PSNR is the largest. The MSE and PSNR of Type 1 lie between those of the other two types.

4.2. Color Channel Significance Analysis

In HFT [29] and PQFT [28], the input image is transformed into a quaternion space composed of various features, and frequency-domain saliency analysis is then performed using the hyper-complex Fourier transform. We analyzed the effect of several commonly used features on crosstalk defects and concluded that only color features can effectively represent crosstalk defects; with other types of feature information, the defects remain unextractable. The features analyzed in this paper are color feature spaces, the two-dimensional information entropy feature, and the brightness feature. The color feature spaces include the RGBY, Lab, and HSV color spaces.
As shown in Figure 4, after feature decomposition of the original image, the average brightness feature in RGB space and the V channel in HSV space are consistent with the original image, neither suppressing the background nor enhancing the defects. The two-dimensional information entropy feature also fails to describe the defect well: it over-enhances edge information and drowns out the defect information. The more effective defect feature descriptions are mainly the RGBY space, the ab channels of the Lab color space, and the H and S channels of the HSV space.
We use the SCRG (Signal-to-Clutter Ratio Gain) and BSF (Background Suppression Factor) [38] to evaluate the performance of each feature space.
$$\mathrm{SCRG} = \frac{\mathrm{SCR}_{out}}{\mathrm{SCR}_{in}},$$

$$\mathrm{SCR} = \frac{\left| \mu_T - \mu_B \right|}{\sigma_B},$$

$$\mathrm{BSF} = \frac{\sigma_{B_{in}}}{\sigma_{B_{out}}},$$
where $\mathrm{SCR}_{in}$ and $\mathrm{SCR}_{out}$ are the signal-to-clutter ratios (SCRs) of the input image and the modulus image, $\mu_T$ is the gray-level mean of the defect area, and $\mu_B$ and $\sigma_B$ are the mean and standard deviation of the target neighborhood. The signal-to-clutter ratio gain represents the signal-to-noise ratio of the output feature map in the feature space, and the background suppression factor represents the degree of difference between the defect and the background.
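A sketch of both metrics, assuming the defect region and its surrounding target neighborhood are supplied as boolean masks:

```python
import numpy as np

def scr(img, target_mask, bg_mask):
    """Signal-to-clutter ratio SCR = |mu_T - mu_B| / sigma_B (boolean masks)."""
    mu_t = img[target_mask].mean()
    mu_b = img[bg_mask].mean()
    return abs(mu_t - mu_b) / img[bg_mask].std()

def scrg_bsf(img_in, img_out, target_mask, bg_mask):
    """SCRG = SCR_out / SCR_in and BSF = sigma_B_in / sigma_B_out."""
    scrg = scr(img_out, target_mask, bg_mask) / scr(img_in, target_mask, bg_mask)
    bsf = img_in[bg_mask].std() / img_out[bg_mask].std()
    return scrg, bsf
```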
We use the SCRG and BSF of the RGB space as benchmarks. When a feature's two parameters are close to those of the RGB space, that feature cannot effectively separate background and defect information.
As shown in Table 2, the BSF of the HSV space is the largest, but its SCRG is close to that of the RGB space, so it cannot effectively separate defect information. The Lab color space and the RGBY color features deviate effectively from the RGB space in both SCRG and BSF, so we choose these two color spaces as the input features for constructing the color quaternion matrix.

4.3. GTB Experiment Comparison and Result Analysis

To illustrate the effectiveness of the GTB approach, we compare the saliency maps obtained using GTB with those obtained using Gaussian template convolution alone.
As shown in Figure 5 and Figure 6, Gaussian convolution alone can significantly enhance high-contrast crosstalk defects, but it cannot suppress point-like noise and therefore cannot effectively separate the defect from the background. With the GTB method, the defect information is further enhanced and weak-contrast defects can be effectively detected. The NSS indicator of the saliency maps shows that saliency is improved by 45% for Type 1, 162% for Type 2, and 327% for Type 3, demonstrating that our method detects defects more effectively.
We evaluated the detection performance of our algorithm using two metrics, TDR and FDR. TDR is defined as the number of correctly detected pixels in the test image divided by the number of true crosstalk defect pixels, and FDR as the ratio of falsely detected pixels to the total detected pixels [6]; the results are shown in Table 3.
The TDR of our method exceeds 90% for all three defect types, and the FDR is kept within an acceptable range. This shows that our method detects crosstalk defects effectively and stably, and achieves accurate detection of low-contrast defects.
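For completeness, the two rates under the definitions above, computed from boolean masks (an illustrative sketch):

```python
import numpy as np

def tdr_fdr(detected, ground_truth):
    """TDR = correctly detected pixels / true defect pixels (percent);
    FDR = falsely detected pixels / total detected pixels (percent)."""
    detected = detected.astype(bool)
    ground_truth = ground_truth.astype(bool)
    true_pos = np.sum(detected & ground_truth)
    tdr = 100.0 * true_pos / max(ground_truth.sum(), 1)
    fdr = 100.0 * np.sum(detected & ~ground_truth) / max(detected.sum(), 1)
    return tdr, fdr
```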

4.4. Comparison of Different Methods

4.4.1. Channel Selection Comparison

We compare the detection performance of our proposed input feature combination with the commonly used combinations Lab [20], RGBYI [29], and HRGBYI [33], each used to constitute the image quaternion matrix, as shown in Figure 7.
It can be seen that the combinations based on Lab and RGBY alone cannot detect crosstalk defects; their saliency maps focus on the edge parts. The detection results of HRGBYI, which was proposed for fabric defect detection, are also concentrated on the edges, retaining only a small amount of actual defect information. In contrast to these input feature combinations, which cannot effectively obtain a defect saliency map, our proposed combination effectively separates defects from the background.

4.4.2. Algorithm Detection Effect Comparison

Our proposed method is compared with current state-of-the-art defect detection methods, including the polynomial fitting [23] and discrete cosine transform [22] based methods, to analyze crosstalk defect detection capability.
As shown in Figure 8, the polynomial fitting method fits the crosstalk test picture poorly: its detection results concentrate on the edges, seriously inconsistent with the actual defect positions. The DCT method also performs poorly at the edges and fails to cope with the special pattern of crosstalk pictures. Evidently, background reconstruction methods place high demands on the image and cannot handle edge information. Our method detects the defect areas more effectively regardless of contrast strength and overcomes the contamination caused by background noise.

5. Conclusions

We propose a crosstalk defect detection method based on salient color channel frequency-domain filtering. First, image feature extraction and combination are analyzed and verified, realizing an effective feature extraction method for crosstalk defects. For frequency-domain filtering, we propose the GTB filtering method, which realizes the detection of low-contrast defects. Detailed experiments and comparisons demonstrate the effectiveness of our method and show that it can detect display crosstalk defects more accurately than mainstream detection methods.

Author Contributions

Formal analysis, W.X., H.C., Z.W., B.L. and L.S.; investigation, W.X. and H.C.; methodology, W.X. and H.C.; software, W.X.; validation, W.X., H.C. and X.L.; resources, Z.W.; writing—original draft preparation, W.X.; writing—review and editing, W.X., H.C., X.L. and B.L.; visualization, W.X.; supervision, H.C.; project administration, H.C. and Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the "Yang Fan" major project (No. [2020]05) of Guangdong Province, China.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ming, W.; Shen, F.; Li, X.; Zhang, Z.; Du, J.; Chen, Z.; Cao, Y. A comprehensive review of defect detection in 3C glass components. Measurement 2020, 158, 107722.
2. Shuai, L.; Chen, H.; Wang, Z. Defect Detection Method of LCD Complex Display Screen Combining Feature Matching and Color Correction. In Proceedings of the 2021 18th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, China, 17–19 December 2021.
3. Li, C.; Zhang, X.; Huang, Y.; Tang, C.; Fatikow, S. A novel algorithm for defect extraction and classification of mobile phone screen based on machine vision. Comput. Ind. Eng. 2020, 146, 106530.
4. Zhang, J.; Li, Y.; Zuo, C.; Xing, M. Defect detection of mobile phone screen based on improved difference image method. In Proceedings of the 2019 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), Shanghai, China, 21–24 November 2019.
5. Ngo, C.; Park, Y.J.; Jung, J.; Hassan, R.U.; Seok, J. A new algorithm on the automatic TFT-LCD mura defects inspection based on an effective background reconstruction. J. Soc. Inf. Disp. 2017, 25, 737–752.
6. Yang, H.; Song, K.; Mei, S.; Yin, Z. An accurate mura defect vision inspection method using outlier-prejudging-based image background construction and region-gradient-based level set. IEEE Trans. Autom. Sci. Eng. 2018, 15, 1704–1721.
7. Ma, Z.; Gong, J. An automatic detection method of Mura defects for liquid crystal display. In Proceedings of the 2019 Chinese Control Conference (CCC), Guangzhou, China, 27–30 July 2019.
8. Chen, L.-C.; Kuo, C.-C. Automatic TFT-LCD mura defect inspection using discrete cosine transform-based background filtering and 'just noticeable difference' quantification strategies. Meas. Sci. Technol. 2007, 19, 015507.
9. Sun, Y.; Li, X.; Xiao, J. A cascaded Mura defect detection method based on mean shift and level set algorithm for active-matrix OLED display panel. J. Soc. Inf. Disp. 2019, 27, 13–20.
10. Sun, Y.; Xiao, J. A Region-Scalable Fitting Model Algorithm Combining Gray Level Difference of Sub-image for AMOLED Defect Detection. In Proceedings of the 2018 IEEE International Conference on Computer and Communication Engineering Technology (CCET), Beijing, China, 18–20 August 2018.
11. Zhu, Y.; Ding, R.; Huang, W.; Wei, P.; Yang, G.; Wang, Y. HMFCA-Net: Hierarchical multi-frequency based Channel attention net for mobile phone surface defect detection. Pattern Recognit. Lett. 2022, 153, 118–125.
12. Zhu, H.; Huang, J.; Liu, H.; Zhou, Q.; Zhu, J.; Li, B. Deep-Learning-Enabled Automatic Optical Inspection for Module-Level Defects in LCD. IEEE Internet Things J. 2021, 9, 1122–1135.
13. Chang, Y.-C.; Chang, K.-H.; Meng, H.-M.; Chiu, H.-C. A Novel Multicategory Defect Detection Method Based on the Convolutional Neural Network Method for TFT-LCD Panels. Math. Probl. Eng. 2022, 2022, 6505372.
14. Ming, W.; Cao, C.; Zhang, G.; Zhang, H.; Zhang, F.; Jiang, Z.; Yuan, J. Application of Convolutional Neural Network in Defect Detection of 3C Products. IEEE Access 2021, 9, 135657–135674.
15. Li, Z.; Li, J.; Dai, W. A two-stage multiscale residual attention network for light guide plate defect detection. IEEE Access 2020, 9, 2780–2792.
16. Pan, J.; Zeng, D.; Tan, Q.; Wu, Z.; Ren, Z. EU-Net: A novel semantic segmentation architecture for surface defect detection of mobile phone screens. IET Image Process. 2022, 16, 2568–2576.
17. Lo Sciuto, G.; Capizzi, G.; Shikler, R.; Napoli, C. Organic solar cells defects classification by using a new feature extraction algorithm and an EBNN with an innovative pruning algorithm. Int. J. Intell. Syst. 2021, 36, 2443–2464.
18. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
19. Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259.
20. Shangwang, L.; Ming, L.; Wentao, M.; Guoqi, L. Improved HFT model for saliency detection. Comput. Eng. Des. 2015, 36, 2167–2173.
21. Ell, T.A.; Sangwine, S.J. Hypercomplex Fourier transforms of color images. IEEE Trans. Image Process. 2006, 16, 22–35.
22. Jin, S.; Ji, C.; Yan, C.; Xing, J. TFT-LCD mura defect detection using DCT and the dual-γ piecewise exponential transform. Precis. Eng. 2018, 54, 371–378.
23. Fan, S.-K.S.; Chuang, Y.-C. Automatic detection of Mura defect in TFT-LCD based on regression diagnostics. Pattern Recognit. Lett. 2010, 31, 2397–2404.
24. Cui, Y.; Wang, S.; Wu, H.; Xiong, B.; Pan, Y. Liquid crystal display defects in multiple backgrounds with visual real-time detection. J. Soc. Inf. Disp. 2021, 29, 547–560.
25. Xie, W.; Chen, H.; Wang, Z. Method for Detecting Gypsophila Defect of Display Screen Based on Human Visual Perception. In Proceedings of the 2021 18th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, China, 17–19 December 2021.
26. Torres, G.M.; Souza, A.S.; Ferreira, D.A.; Júnior, L.C.; Ouchi, K.Y.; Valadão, M.D.; Silva, M.O.; Cavalcante, V.L.; Mattos, E.V.U.; Pereira, A.M. Automated Mura Defect Detection System on LCD Displays using Random Forest Classifier. In Proceedings of the 2021 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 10–12 January 2021.
27. Liang, L.-Q.; Li, D.; Fu, X.; Zhang, W.-J. Touch screen defect inspection based on sparse representation in low resolution images. Multimed. Tools Appl. 2016, 75, 2655–2666.
28. Guo, C.; Ma, Q.; Zhang, L. Spatio-temporal saliency detection using phase spectrum of quaternion fourier transform. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008.
29. Li, J.; Levine, M.D.; An, X.; Xu, X.; He, H. Visual saliency based on scale-space analysis in the frequency domain. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 996–1010.
30. Guan, S.; Shi, H. Fabric defect detection based on the saliency map construction of target-driven feature. J. Text. Inst. 2018, 109, 1133–1142.
31. Zhou, X.; Wang, Y.; Xiao, C.; Zhu, Q.; Lu, X.; Zhang, H.; Ge, J.; Zhao, H. Automated visual inspection of glass bottle bottom with saliency detection and template matching. IEEE Trans. Instrum. Meas. 2019, 68, 4253–4267.
32. Guan, S. Fabric defect delaminating detection based on visual saliency in HSV color space. J. Text. Inst. 2018, 109, 1560–1573.
33. Liu, G.; Zheng, X. Fabric defect detection based on information entropy and frequency domain saliency. Vis. Comput. 2021, 37, 515–528.
34. Engel, S.; Zhang, X.; Wandell, B. Colour tuning in human visual cortex measured with functional magnetic resonance imaging. Nature 1997, 388, 68–71.
35. Bylinskii, Z.; Judd, T.; Oliva, A.; Torralba, A.; Durand, F. What do different evaluation metrics tell us about saliency models? IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 740–757.
36. Turaga, D.S.; Chen, Y.; Caviedes, J. No reference PSNR estimation for compressed pictures. Signal Process. Image Commun. 2004, 19, 173–184.
37. Marmolin, H. Subjective MSE measures. IEEE Trans. Syst. Man Cybern. 1986, 16, 486–489.
38. Zhang, H.; Zhang, L.; Yuan, D.; Chen, H. Infrared small target detection based on local intensity and gradient properties. Infrared Phys. Technol. 2018, 89, 88–96.
Figure 1. Algorithm architecture.
Figure 2. (a) Entropy value obtained by Gaussian parameter template; (b) saliency evaluation result.
Figure 3. Different types of input images.
Figure 4. Decomposed feature maps of various features of the original image. (a) Original image; (b) RGB space R channel; (c) RGBY space R channel; (d) Lab space a channel; (e) two-dimensional information entropy H feature; (f) average luminance feature I; (g) HSV space H channel; (h) HSV space S channel; (i) HSV space V channel.
Figure 5. Comparison of GTB methods.
Figure 6. NSS value results for the GTB method and the Gaussian method.
Figure 7. Combined feature saliency results: the first row is the quaternion modulus image, and the second row is the resulting saliency map.
Figure 8. Crosstalk defect detection results from different methods.
Table 1. Signal-to-noise ratio and contrast of input images.

            Type 1   Type 2   Type 3
MSE         37.40    32.56    50.73
PSNR (dB)   32.40    33.00    31.07
Table 2. Background suppression and noise immunity comparison.

            Type 1           Type 2           Type 3
            SCRG    BSF      SCRG    BSF      SCRG    BSF
RGB         2.84    3.02     2.81    3.01     2.90    3.01
Lab         0.73    224.49   0.50    221.27   0.739   200.90
Entropy H   1.45    40.86    1.58    35.83    1.55    20.00
HSV         2.89    548.11   3.10    568.41   1.89    639.49
RGBY        0.99    70.57    0.62    36.71    1.02    25.43
Table 3. Defect detection capabilities of our method.

          Type 1   Type 2   Type 3
TDR (%)   96.7     100      92.3
FDR (%)   7.6      11.8     4.5

