Article

Lightweight Network DCR-YOLO for Surface Defect Detection on Printed Circuit Boards

Yuanyuan Jiang, Mengnan Cai and Dong Zhang
1 School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan 232000, China
2 Institute of Environment-Friendly Materials and Occupational Health, Anhui University of Science and Technology, Wuhu 241003, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(17), 7310; https://doi.org/10.3390/s23177310
Submission received: 10 July 2023 / Revised: 9 August 2023 / Accepted: 13 August 2023 / Published: 22 August 2023
(This article belongs to the Section Electronic Sensors)

Abstract

To address the small size of printed circuit board surface defects and the low accuracy with which they are detected, the printed circuit board surface-defect detection network DCR-YOLO is designed to effectively improve detection accuracy while maintaining real-time detection speed. Firstly, the backbone feature extraction network DCR-backbone, which consists of two CR residual blocks and one common residual block, is used to extract small-target defects on printed circuit boards. Secondly, the SDDT-FPN feature fusion module fuses high-level features down to low-level features while strengthening fusion in the feature layer where the small-target prediction head YOLO Head-P3 is located, further enhancing the low-level feature representation. The PCR module enhances the feature fusion mechanism between the backbone feature extraction network and the SDDT-FPN feature fusion module across feature layers of different scales. The C5ECA module adaptively adjusts feature weights and attends to the small-target defect information that is needed, further enhancing the adaptive feature extraction capability of the feature fusion module. Finally, three YOLO Heads predict small-target defects at different scales. Experiments show that the DCR-YOLO network model reaches a detection mAP of 98.58%; the model size is 7.73 MB, which meets the lightweight requirement; and the detection speed reaches 103.15 fps, which meets the application requirements for real-time detection of small-target defects.

1. Introduction

With the accelerating industrialization and intelligent development of electronic devices, the range of printed circuit board applications is expanding. Printed circuit boards are subject to many kinds of defects [1]. Warpage and shrinkage [2] are common defects that occur during the production process, and both generally alter the length, width, and thickness of the board, making affected boards difficult to use in electronic devices. Common surface defects include missing_hole, mouse_bite, open_circuit, short, spur, and spurious_copper, which may damage electronic devices containing printed circuit boards and may even lead to major safety incidents. The safety and stability of printed circuit boards have therefore become very important, and timely defect inspection is essential to eliminate potential safety hazards and reduce the number of safety accidents.
Printed circuit board surface defects are small and varied. For manual visual inspection, detection efficiency is low and the missed-detection rate is high; for automatic optical inspection [3], accuracy is low, the process is vulnerable to interference, and detection is slow. With rapid advances in computer technology and in deep learning for image processing [4,5,6,7,8], researchers have proposed a variety of deep-learning-based printed circuit board defect detection methods [9].
One class comprises defect detection methods based on traditional convolutional neural networks [10]: multiscale feature fusion detection implemented with upsampling and skip connections [11]; detection based on basic convolutional neural networks with multiple segmentation of defect regions; detection based on the Faster R-CNN multi-attention fusion mechanism; and detection built on anti-disturbance encoder–decoder structures [12,13,14,15]. Such methods have complex network structures, large numbers of parameters, and slow detection speeds.
The other category is the YOLO [16] family of defect detection methods. Some use ResNet as the backbone feature extraction network of YOLOv3 [17]; others build on the YOLOv4 backbone [18,19] and incorporate long-range attention mechanisms. One defect detection method based on the YOLOv5 [20,21,22] network enlarges the receptive field by introducing a coordinate attention mechanism and enhances multiscale feature fusion. Another, using an improved MobileNetv3 as the backbone feature extraction network, introduces ECAnet [23] to adjust feature weights adaptively and strengthen feature extraction. Such methods have few parameters and relatively fast detection speeds, but for small-target [24,25] defects their feature extraction is insufficient and their detection accuracy is low.
Currently, printed circuit board defect detection faces two sets of difficulties: (1) Defect detection network models have many layers, complex structures, and large numbers of parameters; their degree of lightweighting [26,27] is low and their detection speed is slow. (2) Printed circuit board defect targets are small and important defect features are difficult to extract, resulting in low small-target defect detection accuracy.
In response to the above problems, this paper proposes a double-cross-residual [28] YOLO (DCR-YOLO) defect detection model. It meets the requirements of industrial real-time detection of small-target defects, and it addresses the low detection accuracy associated with deep network stacks, complex model structures, and a low degree of network lightweighting.

2. Approach to the Overall Design of the DCR-YOLO Network Model

Because surface defects on printed circuit boards are small, important defect features are difficult to extract. To meet industrial requirements for both the accuracy and the speed of small-target defect detection, the detection network must improve small-target detection accuracy while remaining highly lightweight. The single-stage YOLO series of target-detection algorithms performs well in terms of both detection accuracy and detection speed. The overall structure of the DCR-YOLO network is shown in Figure 1.
The DCR-YOLO network mainly consists of the double-cross-residual backbone (DCR-backbone) module, the pooling-convolution-residual (PCR) module, the same-direction double-top feature pyramid [29] network (SDDT-FPN) module, the C5ECA module, and three prediction heads (YOLO Heads).

2.1. Design of the DCR-Backbone Structure

Printed circuit board surface defects are small. To fully extract the features of small-target defects, strengthen the feature extraction ability for such defects, effectively alleviate gradient vanishing and explosion, and improve the learning ability of the network, a backbone feature extraction network based on the Cross-Residual blockbody (CR-blockbody) is designed. The backbone mainly consists of three convolutional structures, two CR-blockbodies, and one ordinary Residual-blockbody (R-blockbody). The deeper the backbone feature extraction network, the more small-target feature information is lost, so the first two backbone feature extraction blocks use CR-blockbodies and the last uses an R-blockbody; small-target feature information is thus fully extracted and retained while keeping the model lightweight.
The convolution structure, shown in Figure 2a, consists of Conv2D, BN [30] (Batch Normalization), and the LeakyReLU activation function. The R-blockbody structure, shown in Figure 2b, consists of four convolution structures, two jump connections, two splicing operations, and one pooling structure. The CR-blockbody structure, shown in Figure 2c, consists of six convolutional structures, three jump connections, three splicing operations, and one pooling structure. The jump connections cross shared convolutional layers within the residual block, which effectively reduces the loss and depletion of feature information when extracting features from the input feature layer, especially for small-target defects.
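As a concrete reference, the basic convolution structure of Figure 2a can be sketched in PyTorch as below; the kernel size, stride, and LeakyReLU negative slope are assumptions, since the text specifies only the Conv2D + BN + LeakyReLU composition.

```python
import torch
import torch.nn as nn

class ConvBNLeaky(nn.Module):
    """Convolution structure of Figure 2a: Conv2D + Batch Normalization + LeakyReLU."""
    def __init__(self, in_ch, out_ch, k=3, s=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, stride=s, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.LeakyReLU(0.1)  # negative slope 0.1 is an assumption

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))
```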
For the DCR-backbone network module, the input is a 416 × 416 image feature layer with three channels. After the first convolutional structure, the input image features are transformed from 416 × 416 × 3 to 208 × 208 × 32; after the second, from 208 × 208 × 32 to 104 × 104 × 64. Next, the feature layer is processed by the two CR-blockbody structures. The first CR-blockbody transforms the features from 104 × 104 × 64 to 52 × 52 × 128, preparing this feature layer for YOLO Head-P3 prediction. The second CR-blockbody transforms the features from 52 × 52 × 128 to 26 × 26 × 256, preparing this feature layer for YOLO Head-P4 prediction. Finally, the R-blockbody structure and the third convolutional structure transform the features from 26 × 26 × 256 to 13 × 13 × 512, preparing this feature layer for YOLO Head-P5 prediction.
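The stage-by-stage shapes above can be checked with a minimal stand-in that reuses the ConvBNLeaky sketch: each CR-/R-blockbody is replaced here by a single stride-2 convolution, which is an assumption; the real blocks add residual branches and pooling but produce the same output shapes.

```python
x = torch.randn(1, 3, 416, 416)        # input image, 416 x 416 x 3
x = ConvBNLeaky(3, 32, s=2)(x)         # first conv structure  -> 208 x 208 x 32
x = ConvBNLeaky(32, 64, s=2)(x)        # second conv structure -> 104 x 104 x 64
p3 = ConvBNLeaky(64, 128, s=2)(x)      # CR-blockbody 1 -> 52 x 52 x 128 (YOLO Head-P3)
p4 = ConvBNLeaky(128, 256, s=2)(p3)    # CR-blockbody 2 -> 26 x 26 x 256 (YOLO Head-P4)
p5 = ConvBNLeaky(256, 512, s=2)(p4)    # R-blockbody + conv -> 13 x 13 x 512 (YOLO Head-P5)
print(p3.shape, p4.shape, p5.shape)
```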

2.2. Design of the PCR Structure

To enhance the feature fusion mechanism at different scales between the DCR-backbone feature extraction network and the SDDT-FPN feature fusion module, and to effectively prevent feature information loss when fusing features at different scales between modules, the pooling-convolution-residual structure PCR was designed. The PCR module mainly consists of a residual convolutional structure and two pooling channel structures. The CBS convolutional structure, shown in Figure 3a, mainly consists of Conv2D, BN, and the SiLU activation function. Both pooling channel structures are composed of two pooling layers of the same size, with a pooling kernel of 5 × 5, a stride of 1, and a padding P of 2, so each pooling layer preserves the spatial size of its input ((52 − 5 + 2 × 2)/1 + 1 = 52). The input feature layer is processed by the convolutional structure and the two pooling channels, and the fused feature layer is then passed backwards through a splicing operation. The PCR structure is shown in Figure 3b.
After the first CR-blockbody in the DCR-backbone module, the output feature layer is 52 × 52 × 128. The first CBS structure of the PCR module transforms this 52 × 52 × 128 input into 52 × 52 × 64; the first and second pooling channels each map 52 × 52 × 64 to 52 × 52 × 64, as does the second CBS structure. Finally, the splicing operation stitches the processed feature layers into 52 × 52 × 192. Thus, the PCR module's input feature layer is 52 × 52 × 128 and its output feature layer is 52 × 52 × 192.
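A minimal PyTorch sketch of the PCR module follows (reusing the imports above). The exact wiring between the CBS branches and the pooling channels is not fully specified in the text, so the arrangement below is an assumption, chosen so that three 64-channel branches are spliced into the stated 52 × 52 × 192 output.

```python
class CBS(nn.Module):
    """CBS convolution structure of Figure 3a: Conv2D + BN + SiLU."""
    def __init__(self, in_ch, out_ch, k=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class PCR(nn.Module):
    """Pooling-convolution-residual module (sketch). Each pooling channel is
    two 5x5 max-pool layers with stride 1 and padding 2, preserving 52 x 52."""
    def __init__(self, in_ch=128, mid_ch=64):
        super().__init__()
        self.cbs1 = CBS(in_ch, mid_ch)
        self.cbs2 = CBS(mid_ch, mid_ch)
        self.pool = nn.MaxPool2d(kernel_size=5, stride=1, padding=2)

    def forward(self, x):
        x = self.cbs1(x)                    # 52 x 52 x 128 -> 52 x 52 x 64
        p1 = self.pool(self.pool(x))        # pooling channel 1 -> 52 x 52 x 64
        p2 = self.pool(self.pool(p1))       # pooling channel 2 -> 52 x 52 x 64
        return torch.cat([self.cbs2(x), p1, p2], dim=1)  # splice -> 52 x 52 x 192

# PCR()(torch.randn(1, 128, 52, 52)).shape -> torch.Size([1, 192, 52, 52])
```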

2.3. Design of the SDDT-FPN Structure

The feature layers feeding YOLO Head-P5 and YOLO Head-P4 come from deeper in the backbone feature extraction network, which suits them to predicting relatively large defects; however, the deeper layers lose some features of small-target defects during extraction, so their small-target prediction accuracy is relatively low. YOLO Head-P3 draws on shallower features and loses fewer small-target features in the process, making it more suitable for small-target defect prediction. It is therefore necessary to enhance feature fusion for YOLO Head-P3 to improve the accuracy of small-target defect prediction.
To address the above issues, the same-direction double-top feature pyramid SDDT-FPN structure is designed, as shown in Figure 4a. This structure not only facilitates the feature fusion mechanism between the bottom-up feature layers, but also reintroduces a same-direction pyramid top for the feature fusion layer where the small-target defect prediction head YOLO Head-P3 is located, further enhancing feature information transfer in the layer used for small-target defect prediction. The overall model after the introduction of the SDDT-FPN structure is shown in Figure 4b.
The DCR-backbone module outputs a 13 × 13 × 512 feature layer. The convolution structure on the YOLO Head-P5 branch transforms these features from 13 × 13 × 512 to 13 × 13 × 256. From YOLO Head-P5 to YOLO Head-P4, a convolution and upsampling operation transforms the feature layer from 13 × 13 × 256 to 26 × 26 × 128, and the splicing operation on YOLO Head-P4 expands it from 26 × 26 × 128 to 26 × 26 × 384. The first convolutional structure on YOLO Head-P4 then transforms the feature layer from 26 × 26 × 384 to 26 × 26 × 512. From YOLO Head-P4 to YOLO Head-P3, the first convolution and upsampling operation transforms the feature layer from 26 × 26 × 384 to 52 × 52 × 256, and the second transforms it from 26 × 26 × 512 to 52 × 52 × 256. After the first splicing operation on YOLO Head-P3, the feature layer is 52 × 52 × 384; the first convolution structure on YOLO Head-P3 reduces it from 52 × 52 × 384 to 52 × 52 × 256; and after the second splicing operation on YOLO Head-P3, the feature layer is 52 × 52 × 512.
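One such fusion step, from YOLO Head-P4 down to YOLO Head-P3, can be sketched as below using the channel counts stated above and the CBS block from the previous section; the 1 × 1 convolution before upsampling is an assumption.

```python
import torch.nn.functional as F

p4 = torch.randn(1, 384, 26, 26)   # spliced feature layer on YOLO Head-P4
p3 = torch.randn(1, 128, 52, 52)   # backbone feature layer for YOLO Head-P3

up = F.interpolate(CBS(384, 256)(p4), scale_factor=2.0)  # 26 x 26 x 384 -> 52 x 52 x 256
fused = torch.cat([up, p3], dim=1)                        # first splice -> 52 x 52 x 384
print(fused.shape)  # torch.Size([1, 384, 52, 52])
```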

2.4. Design of the C5ECA Structure

The overall model has a large amount of feature information to extract and attend to, so attention to minor feature information needs to be reduced to some extent while attention to major information is enhanced. The C5ECA structure is designed to strengthen feature extraction and fusion between the layer structures of the SDDT-FPN network and to increase the attention paid to small-target defect feature information. The C5ECA module structure is shown in Figure 5a; it mainly consists of two convolutional structures, a residual convolution (itself consisting of three convolutional structures), an ECAnet [31] structure, and splicing operations.
The first two convolutional structures are mainly used for upsampling operations, and the residual convolutional structure is used for feature extraction and transfer between prediction layer structures, enhancing sensitivity to small-target information. The specific structure of the Efficient Channel Attention network (ECAnet) module is shown in Figure 5b. First, global average pooling reduces the h and w dimensions of the input feature map to one, retaining only the channel dimension. Second, a 1D convolution is performed so that the channels in each layer interact with the channels in neighboring layers and share weights. Finally, a sigmoid function produces per-channel weights, and the input feature map is multiplied by these weights so that the combined weights are assigned to the feature map. Processed by the ECAnet module, the model adaptively focuses on the more important small-target defect feature information, which further improves the adaptive [32] feature extraction capability of the network model and thus the prediction accuracy for small-target defects.
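The ECAnet steps described above correspond to the following PyTorch sketch; the adaptive kernel-size rule follows the common ECA-Net implementation and is an assumption here, since the text does not state the kernel size.

```python
import math

class ECA(nn.Module):
    """Efficient Channel Attention (sketch): global average pooling, a 1D
    convolution with shared weights across neighboring channels, and a
    sigmoid gate multiplied back onto the input feature map."""
    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1          # odd kernel size derived from channel count
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        y = x.mean(dim=(2, 3), keepdim=True)          # (B, C, H, W) -> (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(1, 2))  # 1D conv over the channel axis
        y = torch.sigmoid(y.transpose(1, 2).unsqueeze(-1))
        return x * y                                   # reweight channels of the input
```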
The first two convolutional structures in the C5ECA module are responsible for upsampling. Between YOLO Head-P4 and YOLO Head-P3, the C5ECA module is applied twice: in the first application, the input feature layer is 26 × 26 × 384 and the output is 52 × 52 × 256; in the second, the input is 26 × 26 × 512 and the output is 52 × 52 × 256.

3. Experimental Basis and Procedure

To verify the detection and prediction performance of the DCR-YOLO model, comparison and ablation experiments were conducted. The experimental data were obtained from the open dataset [33] of the Human–Computer Interaction Open Laboratory of Peking University, Beijing, China, which contains the six types of printed circuit board defects required for the experiments. The experiments ran on the Windows 11 operating system, and the programming language was Python.

3.1. Data Set for the Experiment

The experimental dataset, drawn from this open printed circuit board defect dataset, contains a total of 10,668 images. First, 2667 images were randomly selected, comprising an equal number of images of each of the six types of printed circuit board surface defects shown in Figure 6: missing_hole, mouse_bite, open_circuit, short, spur, and spurious_copper. The training set consisted of 2160 images, the validation set of 240 images, and the test set of 267 images.

3.2. Evaluation Criteria

The commonly used and representative evaluation metrics in this paper are precision (P), average precision (AP), mean average precision (mAP), recall (R) together with its per-category curves, and detection speed in frames per second (FPS).
Precision, P, is the proportion of the objects predicted by the model that are predicted correctly, as shown in Equation (1). Recall, R, is the proportion of all real objects that are correctly predicted by the model, as shown in Equation (2). Average precision (AP), given by Equation (3), is the area under the precision–recall curve P(R) on the interval (0, 1). Mean average precision (mAP), the average of the AP values over all categories, reflects the overall effectiveness and accuracy of the model and is an important overall measure of its performance.
$$\mathrm{Precision} = \frac{TP}{TP + FP} \quad (1)$$
$$\mathrm{Recall} = \frac{TP}{TP + FN} \quad (2)$$
$$AP = \int_{0}^{1} P(R)\,\mathrm{d}R \quad (3)$$
In these formulas, TP is the number of positive samples correctly predicted as positive; FP is the number of negative samples incorrectly predicted as positive; and FN is the number of positive samples incorrectly predicted as negative.
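For reference, Equations (1)–(3) can be computed as in the sketch below; the VOC-style monotone interpolation of the precision–recall curve is an assumption, since the text does not specify how the integral is discretized.

```python
import numpy as np

def precision_recall(tp, fp, fn):
    """Precision and recall from TP/FP/FN counts (Equations (1) and (2))."""
    return tp / (tp + fp), tp / (tp + fn)

def average_precision(recall, precision):
    """AP as the area under the P(R) curve (Equation (3)), from sampled points."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([1.0], precision, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]  # make precision non-increasing
    return float(np.sum(np.diff(r) * p[1:]))

# mAP is then the mean of the per-category AP values.
```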

3.3. Experimental Platform and Parameters

The configuration and parameters required for the experiments are as follows. The deep learning framework is PyTorch 1.12.1 with CUDA 11.6; PyTorch is a deep learning framework developed by Facebook's Artificial Intelligence Research lab (Menlo Park, CA, USA), and CUDA is a parallel computing platform and programming model from NVIDIA (Santa Clara, CA, USA). This study used Python version 3.8, and the operating system is Windows 11 (Microsoft Corporation, Redmond, WA, USA). The graphics processor is an NVIDIA GeForce RTX 3050 Ti GPU with 4 GB of video memory, and the relevant training parameters are shown in Table 1.

3.4. Model Training Process and Results

The loss and mAP curves for the DCR-YOLO model training process are shown in Figure 7a. Over 350 epochs of training, the loss curve shows that the loss function has essentially converged after 200 epochs, while the mAP curve shows the model's mean average precision reaching about 90% at 175 epochs and levelling off after 260 epochs. The average precision curves for each category after training are shown in Figure 7b; the average precision of each category is essentially above 95%.

3.5. Ablation Experiments with Different Modules

Ablation experiments were designed to evaluate how different combinations of the designed modules optimize the algorithm's performance. The results are shown in Table 2. DCR-3Head denotes a base model consisting of the DCR-backbone network and three prediction heads. Px-PCR denotes the introduction of a PCR module after the backbone feature layer in which prediction head x is located. SDDT-FPN denotes the introduction of the SDDT-FPN module between the backbone feature extraction network and the prediction heads. Meanwhile, 1-C5ECA denotes the introduction of a C5ECA module between P5 and P4; 2-C5ECA denotes the first (left) introduction of the C5ECA module between P4 and P3; and 3-C5ECA denotes the second (right) introduction of the C5ECA module between P4 and P3.
Experiments 1–9 combine different modules: Experiment 1: DCR-backbone + 3-YOLO Head; Experiment 2: DCR-backbone + SDDT-FPN + 3-YOLO Head; Experiment 3: DCR-backbone + SDDT-FPN + P3-PCR + 3-YOLO Head; Experiment 4: DCR-backbone + SDDT-FPN + P4-PCR + 3-YOLO Head; Experiment 5: DCR-backbone + SDDT-FPN + P5-PCR + 3-YOLO Head; Experiment 6: DCR-backbone + SDDT-FPN + P3-PCR + 1-C5ECA + 2-C5ECA + 3-YOLO Head; Experiment 7: DCR-backbone + SDDT-FPN + P3-PCR + 1-C5ECA + 3-C5ECA + 3-YOLO Head; Experiment 8: DCR-backbone + SDDT-FPN + P3-PCR + 2-C5ECA + 3-C5ECA + 3-YOLO Head; Experiment 9: DCR-backbone + SDDT-FPN + P3-PCR + 1-C5ECA + 2-C5ECA + 3-C5ECA + 3-YOLO Head. The specific results are shown in Table 2.
Experiment 1 shows that the base model built on the DCR-backbone network achieves mAP = 96.76%, proving that the designed cross-residual CR-blockbody has a strong capability for small-target defect feature extraction.
Comparing Experiments 1 and 2, the introduction of the SDDT-FPN module improved mAP and R by 0.5% and 1.37%, respectively, proving that SDDT-FPN enhances the feature fusion mechanism between layers and improves feature fusion in the layer containing the small-target prediction head YOLO Head-P3, further improving small-target defect detection accuracy.
Experiments 3, 4, and 5 show that introducing the PCR module after the backbone feature extraction layer in which the P3 prediction head is located gives the best results for fusing feature layers of different scales, with a 0.74% improvement in mAP compared to Experiment 2.
Experiments 6, 7, 8, and 9 show that introducing the two (left and right) C5ECA modules between prediction heads P4 and P3 in the SDDT-FPN structure strengthens the feature fusion mechanism between the layer structures of the SDDT-FPN network and improves attention to small-target defect feature information, with mAP and R improving by 0.58% and 0.48%, respectively, compared to Experiment 3.

3.6. Comparative Experiments with Different Models

To verify the feasibility and effectiveness of the DCR-YOLO model, six current mainstream target detection models (YOLOv3, YOLOv4, YOLOv4-tiny, YOLOv5-s, YOLOv5-m, and YOLOv7-tiny [34]) were trained and tested on the printed circuit board defect dataset in the same experimental environment; the results are shown in Table 3.
As can be seen from Table 3, the DCR-YOLO model reached an mAP of 98.58%, which is 10.14%, 1.44%, 8.95%, 4.24%, 2.69%, and 3.26% higher than YOLOv3, YOLOv4, YOLOv4-tiny, YOLOv5-s, YOLOv5-m, and YOLOv7-tiny, respectively. The DCR-YOLO model's R = 97.24%, which is 25.05% higher than YOLOv5-s and 5.37% higher than YOLOv4, a significant improvement over the other models. The model volume is 7.73 MB, which is 56.23 MB and 13.34 MB smaller than the YOLOv4 and YOLOv5-m models, respectively, meeting the lightweight requirement. The detection speed of 103.15 fps meets the demand for real-time inspection of printed circuit board defects.
Overall, the comparison with several current mainstream target detection models highlights the superiority of the DCR-YOLO model. Table 3 shows that the best of the compared models is YOLOv4, with an mAP of 97.14%, which is 1.44 percentage points lower than that of the DCR-YOLO model. In terms of R, YOLOv4 is 5.37 percentage points lower than DCR-YOLO. In terms of model size, the DCR-YOLO model is 56.23 MB smaller than YOLOv4, and in terms of detection speed, it processes 71.79 more frames per second. The DCR-YOLO model is therefore superior to the compared target detection models in mAP and R, and its detection speed is greatly improved.

4. Visualization of Results Analysis

The DCR-YOLO model was conceived based on the structure of the YOLO series of models. To verify its actual detection performance, images of the six types of printed circuit board defects were randomly selected for detection, and the results were compared with those of the similarly lightweight YOLOv4-tiny model. The YOLOv4-tiny detection results are shown in Figure 8 and the DCR-YOLO detection results in Figure 9.
The comparison shows that for (a) missing_hole and (b) mouse_bite, both models detected all defects present with no missed detections, and the DCR-YOLO and YOLOv4-tiny models were equal in detection accuracy. For (c) open_circuit, (d) short, (e) spur, and (f) spurious_copper, neither model missed any detections, but the average accuracy of the defects detected by DCR-YOLO was higher than that of YOLOv4-tiny. The DCR-YOLO model is therefore superior to the YOLOv4-tiny model.

5. Conclusions

In this paper, the DCR-YOLO model is designed. It addresses the difficulty of extracting features from small defect targets on printed circuit boards and improves the accuracy of surface-defect detection. The overall structure of the inspection model is simple and lightweight, while also meeting real-time inspection speed requirements.
The experimental results show that the most basic network structure, consisting of the DCR-backbone feature extraction network and three YOLO Heads, achieves an mAP of 96.76% and a recall R of 95.38%, indicating that the designed cross-residual CR-blockbody has strong feature extraction capability for small-target defects. The PCR module effectively bridges and fuses the feature maps of different sizes between the DCR-backbone feature extraction network and the SDDT-FPN structure. The C5ECA module focuses on the small-target defect information of the printed circuit board, further enhancing feature fusion and transfer between the SDDT-FPN structure layers, improving the adaptive feature extraction capability of the network model, enhancing the convergence capability of the network to a certain extent, and improving prediction accuracy.
The DCR-YOLO model has significant advantages over several current mainstream target detection models in detection accuracy, recall, model size, and detection speed. Compared to YOLOv4, the most effective of the compared models, the DCR-YOLO model improves mAP by 1.44%, improves recall R by 5.37%, and reduces model size by 56.23 MB. In terms of detection speed, it detects 71.79 more frames per second than YOLOv4, making real-time detection of small defects feasible.
The current model still needs to be made more lightweight so that it can be embedded more easily in mobile terminals. The DCR-YOLO model targets surface defects on printed circuit boards; the combination of internal performance analysis methods with surface-defect detection methods has yet to be investigated. For comprehensive improvement of printed circuit board performance, combining the pattern-recognition-based internal performance analysis method [35] with the surface-defect detection method of the DCR-YOLO model offers a promising outlook.

Author Contributions

Conceptualization, M.C. and D.Z.; Data curation, M.C.; Formal analysis, Y.J., M.C. and D.Z.; Funding acquisition, Y.J.; Investigation, M.C. and D.Z.; Methodology, M.C.; Project administration, Y.J.; Resources, M.C.; Software, M.C.; Supervision, Y.J., M.C. and D.Z.; Validation, M.C.; Visualization, M.C.; Writing—original draft, M.C.; Writing—review and editing, Y.J. and M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Key Research and Development Program of Anhui Province under grant 202104g01020012 and the Research and Development Special Fund for Environmentally Friendly Materials and Occupational Health Research Institute of Anhui University of Science and Technology under grant ALW2020YF18.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chang, P.C.; Chen, L.Y.; Fan, C.Y. A case-based evolutionary model for defect classification of printed circuit board images. J. Intell. Manuf. 2008, 19, 203–214. [Google Scholar] [CrossRef]
  2. Geczy, A.; Fejos, M.; Tersztyanszky, L.; Kemler, A.; Szabo, A. Investigating printed circuit board shrinkage during reflow soldering. In Proceedings of the 37th International Spring Seminar on Electronics Technology 2014, Dresden, Germany, 7–11 May 2014; pp. 219–224. [Google Scholar] [CrossRef]
  3. Abd Al Rahman, M.; Mousavi, A. A review and analysis of automatic optical inspection and quality monitoring methods in electronics industry. IEEE Access 2020, 8, 183192–183271. [Google Scholar]
  4. Chen, J.; Ran, X. Deep learning with edge computing: A review. Proc. IEEE 2019, 107, 1655–1674. [Google Scholar] [CrossRef]
  5. Ren, R.; Hung, T.; Tan, K.C. A generic deep-learning-based approach for automated surface inspection. IEEE Trans. Cybern. 2017, 48, 929–940. [Google Scholar] [CrossRef] [PubMed]
  6. Dave, N.; Tambade, V.; Pandhare, B.; Saurav, S. PCB defect detection using image processing and embedded system. Int. Res. J. Eng. Technol. 2016, 3, 1897–1901. [Google Scholar]
  7. Guo, F.; Guan, S.-A. Research of the Machine Vision Based PCB Defect Inspection System. In Proceedings of the International Conference on Intelligence Science and Information Engineering, Washington, DC, USA, 20–21 August 2011; pp. 472–475. [Google Scholar]
  8. Baldi, P. Autoencoders, Unsupervised Learning, and Deep Architectures; Guyon, I., Dror, G., Lemaire, V., Taylor, G., Silver, D., Eds.; PMLR: Bellevue, WA, USA, 2012; pp. 37–49. [Google Scholar]
  9. Chattopadhyay, A.; Sarkar, A.; Howlader, P.; Balasubramanian, V.N. Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks. arXiv 2018, arXiv:1710.11063. [Google Scholar]
  10. Scherer, D.; Müller, A.; Behnke, S. Evaluation of Pooling Operations in Convolutional Architectures for Object Recognition. In Proceedings of the International Conference on Artificial Neural Networks, Thessaloniki, Greece, 15–18 September 2010; pp. 92–101. [Google Scholar]
  11. Kim, J.; Ko, J.; Choi, H.; Kim, H. Printed Circuit Board Defect Detection Using Deep Learning via A Skip-Connected Convolutional Autoencoder. Sensors 2021, 21, 4968. [Google Scholar] [CrossRef]
  12. Gong, D.; Liu, L.; Le, V.; Saha, B.; Mansour, M.R.; Venkatesh, S.; Hengel, A.V.D. Memorizing Normality to Detect Anomaly: Memory-Augmented Deep Autoencoder for Unsupervised Anomaly Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
  13. Turchenko, V.; Chalmers, E.; Luczak, A. A Deep convolutional auto-encoder with pooling—Unpooling layers in caffe. Int. J. Comput. 2019, 18, 8–31. [Google Scholar] [CrossRef]
  14. Choi, Y.; El-Khamy, M.; Lee, J. Variable Rate Deep Image Compression with a Conditional Autoencoder. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
  15. Zong, B.; Song, Q.; Min, M.R.; Cheng, W.; Lumezanu, C.; Cho, D.; Chen, H. Deep Autoencoding Gaussian Mixture Model for Unsupervised Anomaly Detection. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
  16. Lalak, M.; Wierzbicki, D. Automated Detection of Atypical Aviation Obstacles from UAV Images Using a YOLO Algorithm. Sensors 2022, 22, 6611. [Google Scholar] [CrossRef]
  17. Huang, R.; Gu, J.; Sun, X.; Hou, Y.; Uddin, S. A rapid recognition method for electronic components based on the improved YOLO-V3 network. Electronics 2019, 8, 825. [Google Scholar] [CrossRef]
  18. Wu, Y.; Li, J. YOLOv4 with Deformable-Embedding-Transformer Feature Extractor for Exact Object Detection in Aerial Imagery. Sensors 2023, 23, 2522. [Google Scholar] [CrossRef] [PubMed]
  19. Liao, X.; Lv, S.; Li, D.; Luo, Y.; Zhu, Z.; Jiang, C. YOLOv4-MN3 for PCB Surface Defect Detection. Appl. Sci. 2021, 11, 11701. [Google Scholar] [CrossRef]
  20. Han, J.; Liu, Y.; Li, Z.; Liu, Y.; Zhan, B. Safety Helmet Detection Based on YOLOv5 Driven by Super-Resolution Reconstruction. Sensors 2023, 23, 1822. [Google Scholar] [CrossRef] [PubMed]
  21. Ahmad, T.; Cavazza, M.; Matsuo, Y.; Prendinger, H. Detecting Human Actions in Drone Images Using YoloV5 and Stochastic Gradient Boosting. Sensors 2022, 22, 7020. [Google Scholar] [CrossRef] [PubMed]
  22. Liu, H.; Sun, F.; Gu, J.; Deng, L. SF-YOLOv5: A Lightweight Small Object Detection Algorithm Based on Improved Feature Fusion Mode. Sensors 2022, 22, 5817. [Google Scholar] [CrossRef] [PubMed]
  23. Jin, J.; Feng, W.; Lei, Q.; Gui, G.; Li, X.; Deng, Z.; Wang, W. Defect Detection of Printed Circuit Boards Using EfficientDet. In Proceedings of the 2021 IEEE 6th International Conference on Signal and Image Processing (ICSIP), Nanjing, China, 9–11 July 2021; pp. 287–293. [Google Scholar]
  24. Lai, H.; Chen, L.; Liu, W.; Yan, Z.; Ye, S. STC-YOLO: Small Object Detection Network for Traffic Signs in Complex Environments. Sensors 2023, 23, 5307. [Google Scholar] [CrossRef] [PubMed]
  25. Mohamed, E.; Sirlantzis, K.; Howells, G.; Hoque, S. Optimisation of Deep Learning Small-Object Detectors with Novel Explainable Verification. Sensors 2022, 22, 5596. [Google Scholar] [CrossRef]
  26. Wang, J.; Zhang, F.; Zhang, Y.; Liu, Y.; Cheng, T. Lightweight Object Detection Algorithm for UAV Aerial Imagery. Sensors 2023, 23, 5786. [Google Scholar] [CrossRef]
  27. Betti, A.; Tucci, M. YOLO-S: A Lightweight and Accurate YOLO-like Network for Small Target Detection in Aerial Imagery. Sensors 2023, 23, 1865. [Google Scholar] [CrossRef]
  28. Zagoruyko, S.; Komodakis, N. Wide Residual Networks. arXiv 2016, arXiv:1605.07146. [Google Scholar]
  29. Luo, Q.; Jiang, W.; Su, J.; Ai, J.; Yang, C. Smoothing Complete Feature Pyramid Networks for Roll Mark Detection of Steel Strips. Sensors 2021, 21, 7264. [Google Scholar] [CrossRef] [PubMed]
  30. Lee, D.; Kim, S.; Kim, I.; Cheon, Y.; Cho, M.; Han, W.-S. Contrastive Regularization for Semi-Supervised Learning. arXiv 2022, arXiv:2201.06247. [Google Scholar]
  31. Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2019, arXiv:1905.11946. [Google Scholar]
  32. Neogi, N.; Mohanta, D.K.; Dutta, P.K. Defect detection of steel surfaces with global adaptive percentile thresholding of gradient image. J. Inst. Eng. 2017, 98, 557–565. [Google Scholar] [CrossRef]
  33. Wu, Y.; Zhao, L.; Yuan, Y.; Jie, Y. Research Status and Prospect of Machine Vision Based Current Status and Prospect of PCB Defect Detection Algorithm Based on Machine Vision. J. Instrum. 2022, 43, 1–17. [Google Scholar]
  34. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv 2022, arXiv:2207.02696. [Google Scholar]
  35. Bucolo, M.; Buscarino, A.; Famoso, C.; Fortuna, L.; Frasca, M.; Cucuccio, A.; Rasconá, G.; Vinci, G. Model Identification to validate Printed Circuit Boards for power applications: A new technique. IEEE Access 2022, 10, 31760–31774. [Google Scholar] [CrossRef]
Figure 1. DCR-YOLO network structure.
Figure 2. DCR-backbone structure.
Figure 3. PCR structure.
Figure 4. SDDT-FPN structure.
Figure 5. C5ECA structure.
Figure 6. Defect diagram for the 6 types of printed circuit board defects. (The white boxes contain defects on the surface of the printed circuit board.)
Figure 7. The process and results of the training.
Figure 8. YOLOv4-tiny model test results.
Figure 9. DCR-YOLO model test results.
Table 1. Training-related parameters.

| Parameter | Numerical Value |
| Original image size | 604 × 604 |
| Training size | 416 × 416 |
| Initial learning rate | 0.01 |
| Batch size | 4 |
| Optimizer type | SGD optimizer |
Table 2. Results of ablation experiments. (Module combinations reconstructed from the experiment descriptions in Section 3.5.)

| Number | DCR-3Head | SDDT-FPN | P3-PCR | P4-PCR | P5-PCR | 1-C5ECA | 2-C5ECA | 3-C5ECA | mAP/% | R/% | FPS |
| 1 | ✓ | | | | | | | | 96.76 | 95.38 | 123.47 |
| 2 | ✓ | ✓ | | | | | | | 97.26 | 96.75 | 117.08 |
| 3 | ✓ | ✓ | ✓ | | | | | | 98.00 | 96.76 | 112.55 |
| 4 | ✓ | ✓ | | ✓ | | | | | 97.54 | 96.80 | 112.74 |
| 5 | ✓ | ✓ | | | ✓ | | | | 97.87 | 96.67 | 113.46 |
| 6 | ✓ | ✓ | ✓ | | | ✓ | ✓ | | 98.16 | 97.35 | 105.17 |
| 7 | ✓ | ✓ | ✓ | | | ✓ | | ✓ | 98.47 | 97.13 | 106.00 |
| 8 | ✓ | ✓ | ✓ | | | | ✓ | ✓ | 98.58 | 97.24 | 103.15 |
| 9 | ✓ | ✓ | ✓ | | | ✓ | ✓ | ✓ | 97.83 | 96.30 | 102.66 |
Table 3. Comparison of experimental results.

| Model Name | mAP/% | R/% | Model Volume/MB | FPS |
| YOLOv3 | 88.44 | 66.33 | 61.55 | 39.35 |
| YOLOv4 | 97.14 | 91.87 | 63.96 | 31.36 |
| YOLOv4-tiny | 89.63 | 79.95 | 5.89 | 170.43 |
| YOLOv5-s | 94.34 | 72.19 | 7.08 | 72.69 |
| YOLOv5-m | 95.89 | 79.86 | 21.07 | 38.98 |
| DCR-YOLO | 98.58 | 97.24 | 7.73 | 103.15 |
| YOLOv7-tiny | 95.32 | 80.33 | 6.03 | 98.61 |