Article

A Robust Deep Learning Ensemble-Driven Model for Defect and Non-Defect Recognition and Classification Using a Weighted Averaging Sequence-Based Meta-Learning Ensembler

Computer Science & Software Engineering, Auckland University of Technology, Auckland 1010, New Zealand
* Author to whom correspondence should be addressed.
Sensors 2022, 22(24), 9971; https://doi.org/10.3390/s22249971
Submission received: 1 December 2022 / Revised: 11 December 2022 / Accepted: 14 December 2022 / Published: 17 December 2022
(This article belongs to the Section Intelligent Sensors)

Abstract

The need to overcome the challenges of visual inspections conducted by domain experts drives the recent surge in visual inspection research. Typical manual industrial data analysis and inspection for defects conducted by trained personnel are expensive, time-consuming, and error-prone. Thus, an efficient intelligent model is needed to minimize or eliminate the challenges of defect identification in industrial processes. This paper presents a robust method for recognizing and classifying defects in industrial products using a deep-learning ensemble architecture integrated with a weighted sequence meta-learning unification framework. In the proposed method, a unique base model is constructed and fused with other co-learning pretrained models through a sequence-driven meta-learning ensembler that aggregates the best features learned from the various contributing models for superior performance. During experimentation, different publicly available industrial product datasets consisting of defect and non-defect samples were used to train, validate, and test the introduced model; the remarkable results obtained demonstrate the viability of the proposed method in tackling the challenges of the manual visual inspection approach.

1. Introduction

Sustaining quality standards is a crucial task for every industry, and visual inspection deals with detecting defects in manufactured products for quality control. Quality inspections can be conducted at any stage of the industrial production cycle, covering product components, products within the manufacturing lines, incoming material, or finished products. Inspection examines products to determine which meet the set standards and which deviate from the quality requirements, paving the way for rejecting faulty products and allowing conforming products to progress to the next stage [1]. In situ or in-process inspections are standard practices conducted during the manufacturing of industrial parts and other products [2]. Manual inspections for defects are hampered by challenges such as operator fatigue, failure to meet production targets, bias, inadequate inspection skills, and subjective judgements.
The limitations of human-oriented industrial visual inspection for faulty product identification could be addressed through independent, intelligent models and computer vision algorithms. In recent years, intelligent machine vision models have become desirable for tackling the high costs and other shortcomings of human-driven defect recognition and analysis processes. Deep learning (DL) models, in particular convolutional neural networks (CNNs), have been increasingly used to automate inspection processes [3,4]. CNN-driven models have proven effective in performing visual inspections by recognizing, classifying, and detecting defects and non-defects in objects of interest [5]. Despite the remarkable recent performance of deep learning techniques, significant issues and challenges, such as model robustness, performance accuracy, and efficiency, still abound. Therefore, in this work, we propose a robust deep-learning method driven by the model ensemble concept with a sequence-enabled meta-learning unifier to perform the recognition and classification of industrial products for defect identification (see Figure 1).
This section has presented an overview of the current state and the limitations of human-in-the-loop industrial product inspection systems. Section 2 reviews relevant and related literature, covering complex multilayer CNN architectures, conditional random fields (CRFs) combined with CNNs, fully convolutional networks (FCNs), meta-learning CNN architectural frameworks, deep convolutional sparse-coding-based networks, and related approaches. Section 3 provides the theoretical background and the method adopted in the study, while the experimental procedure is elaborated in Section 4. Section 5 presents and concisely explains the results obtained from the study, together with comparisons against related works, and the study’s conclusion is presented in Section 6.

2. Related Works

Recently, product defect identification and classification for visual inspection have attracted considerable research interest. He et al. [6] and Borji et al. [7] deployed deep-learning models based on the LeNet network structure [8]. Their frameworks detect defects in industrial products using a complex multilayer CNN architecture to extract defect image features, followed by full end-to-end training to learn and classify the defects. In another work on defect spotting and classification of products, a CNN model and a conditional random field (CRF) algorithm were combined to train and optimize the prediction process of a built DL network [9]. Xue and Li (2018) deployed a region-based fully convolutional network (FCN) DL model to build an intelligent classification and detection model for rapid tunnel lining defect detection.
Furthermore, Bartler et al. [10] proposed a DL-based classification pipeline to identify solar cell defects automatically. A meta-learning CNN architectural framework was introduced to perform multi-target concrete defect classification in concrete bridge image frames [11,12]. In another work, a DL model supporting sustainable transportation fused the features of two models to classify defects on rail tracks with high accuracy [13]. Krummenacher et al. [14] proposed a machine learning-based wheel defect identification system for railway wagons to ease damage recognition on rolling stock and railway infrastructure. A deep convolutional sparse-coding-based network was deployed to perform tire defect classification tasks and ensure an efficient quality control process [15]. A weld defect classification framework driven by transfer learning and deep learning activation features was proposed to detect defects in industrial weld X-ray images for rapid, nondestructive testing [16].
Konovalenko et al. [17], in their work on defect classification, proposed a deep residual neural network-based model to classify defects and non-defects on steel surfaces. In a similar study, a time-efficient steel surface defect classification method built with generalized completed local binary patterns was introduced by Luo et al. [18]. Wang et al. (2021) presented a graph convolution network-based semi-supervised model to learn the inter-class similarities and intra-class variations in surfaces for fault and non-fault recognition and classification. With the aid of a hybrid chromosome genetic algorithm, Hu et al. [19] developed a large-scale strip steel surface defect classification framework. Additionally, an automatic PCB defect classification, analysis, and inspection system was introduced by Deng et al. [20], and Zhang et al. [21] proposed multi-label classification of PCB defects using a multi-task convolutional neural network framework. For micro-defect diagnosis on piston throats, Chen et al. [22] proposed SMOTE combined with a new model selection method based on active learning of the SVM algorithm (E-SVM-AL). Additionally, an image processing-based piston surface defect recognition system combined different strategies, such as edge detection, threshold segmentation, and morphological operations, to recognize defects on piston surfaces [23]. Furthermore, Nikolić et al. [24] introduced a deep learning-based classification methodology to detect porosity defects in aluminum alloys, and Habibpour et al. [25] proposed an uncertainty-aware deep learning model to detect defects in industrial casting products. Despite these studies on defect recognition and classification, little effort has been devoted to robust models for defect spotting in products; therefore, we propose a weighted sequence-based meta-learning ensemble over a collection of aggregated models that learns the intra-class and inter-class similarities and dissimilarities in objects for defect and non-defect separation.

3. Theoretical Background and Method

This section presents the underlying theoretical background and the method for the proposed defect recognition and classification framework. Let D = {(d_z, c_z) | 1 ≤ z ≤ N} represent the dataset consisting of N training samples, with c_z ∈ {1, 2, …, C} their corresponding class labels and C the total number of classes. The proposed model then combines M different deep learning models fused with convolutional LSTM layers that learn from the meta-features emanating from the various participating models for superior performance. The proposed method can ensemble any given number of CNN models; however, in this study, we used M = 5 models for the metal surface defect classification and M = 6 for the other datasets. In the deep learning ensemble process, the resultant features R from the various models are expressed as R = [r_1, r_2, …, r_M]. During training, a forward propagation pass is conducted in each epoch to generate features from each co-learning model, which are then fused together by the integrated sequence-based convolutional LSTM layers.

3.1. The Contributing CNN Models

In this investigation, we crafted a unique base model and adopted four other state-of-the-art convolutional neural network-based models: Inceptionv3 [26], DenseNet [27], Xception [28], and MobileNet [29]. The built base model contains four significant layers and sublayers, as shown in Table 1, with 223,873 total parameters used in the model training process. The feature extractors in the CNN architecture are 3 × 3 convolution layers with 32, 64, and 128 filters, respectively, with a 2 × 2 max pooling layer between the 1st and the 2nd layers. A ReLU activation was used in the first three layers (see Table 1), and a stride of 2 was used across all layers except the last layer.
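As a concrete reference, the Keras sketch below reproduces the base model as listed in Table 1. The grayscale 90 × 90 input resolution and the use of "same" padding are assumptions chosen so that the output shapes and the 223,873-parameter total match the table; they are not values stated in the paper.

```python
# A minimal sketch of the custom base model following Table 1.
# Assumptions: (90, 90, 1) grayscale inputs and "same" padding.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_base_model(input_shape=(90, 90, 1)):
    return models.Sequential([
        layers.Conv2D(32, (3, 3), strides=2, padding="same", activation="relu",
                      input_shape=input_shape),       # (None, 45, 45, 32), 320 params
        layers.MaxPooling2D((2, 2)),                  # (None, 22, 22, 32)
        layers.Conv2D(64, (3, 3), strides=2, padding="same",
                      activation="relu"),             # (None, 11, 11, 64), 18,496 params
        layers.MaxPooling2D((2, 2)),                  # (None, 5, 5, 64)
        layers.Flatten(),                             # (None, 1600)
        layers.Dense(128, activation="relu"),         # 204,928 params
        layers.Dense(1, activation="sigmoid"),        # 129 params
    ])

model = build_base_model()
model.summary()  # total parameters: 223,873
```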
Inception-v3, which belongs to the Inception model family, consists of a label smoothing mechanism, factorized 7 × 7 convolutions, and an auxiliary classifier that channels training label information from the top to the lower levels of the network [26]. DenseNet (densely connected convolutional networks) [27], on the other hand, is a variant of the deep CNN model that consists of dense blocks and uses dense connections between layers to propagate information across the network. Furthermore, the Xception model relies on depthwise separable convolution layers to compute the spatial information from the training and validation data. Finally, among the co-learning models, MobileNet was initially designed and built for mobile applications [29]; however, many applications have since adopted the framework to solve different scientific problems [30,31,32].
All the adopted models in this study were pre-trained on the ImageNet dataset, but their decision layers were removed during our experiments because the models were initially pre-trained to classify objects from 1000 classes. During the training process in the experiments conducted in this study, the meta-features that emanated from the various models were concatenated via the integrated weighted averaging sequence-based meta-learning ensembler, which then performed the final classification tasks. The convolution components of the models were useful in extracting features from the defect and non-defect data samples. The features fed to the unification framework are called meta-features and are significantly valuable for distinguishing the defective and non-defective data samples. The lower layers of the networks extracted local image features, while the higher layers extracted more semantic meta-features through convolution operations.
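A minimal sketch of this setup is given below: it loads the pre-trained co-learning backbones with their 1000-class decision layers removed so that only convolutional meta-features are produced. The 300 × 300 RGB input shape, the DenseNet121 variant, global-average pooling, and the frozen backbone weights are assumptions for illustration rather than the authors' exact configuration.

```python
# A minimal sketch: ImageNet-pre-trained backbones used as meta-feature extractors.
# Assumptions: 300x300 RGB inputs, DenseNet121 as the DenseNet variant,
# global-average pooling, and frozen backbone weights.
from tensorflow.keras.applications import InceptionV3, DenseNet121, Xception, MobileNet
from tensorflow.keras import Input, Model

def feature_extractor(backbone_cls, input_shape=(300, 300, 3)):
    backbone = backbone_cls(weights="imagenet", include_top=False,
                            input_shape=input_shape, pooling="avg")
    backbone.trainable = False        # transfer learning: keep ImageNet weights
    inputs = Input(shape=input_shape)
    meta_features = backbone(inputs)  # one meta-feature vector per image
    return Model(inputs, meta_features)

extractors = [feature_extractor(cls)
              for cls in (InceptionV3, DenseNet121, Xception, MobileNet)]
```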

3.2. The Weighted Averaging Sequence-Based Meta-Feature Learning Derivative

In the weighted averaging ensemble strategy, the final model’s classification output is acquired by obtaining the outputs of the various contributing models and averaging the results with weight inducements for better predictive performance. We adopted this approach in particular because of its robustness and its ability to handle imbalanced datasets, such as the ones used in this investigation. The weighted averaging sequence-based meta-feature learning component of the proposed method was inspired by the work of Shi et al. [33], in which the inputs I_1, …, I_t, the cell outputs O_1, …, O_t, the hidden states H_1, …, H_t, and the gates g_t, l_t, m_t of the ConvLSTM layer are 3D tensors, enabling our proposed method to learn the spatial meta-features of the defect and non-defect samples in the final aggregate layers (see Equations (1)–(5)). In other words, the ensemble layer of the proposed method consists of ConvLSTM layers whose convolution operators join the features emanating from the various co-learning models. The ConvLSTM layer component of the proposed weighted averaging sequence-based meta-feature learning is expressed as:
g_t = σ(W_ig · I_t + W_hg · H_{t−1} + W_og · O_{t−1} + k_g)   (1)
l_t = σ(W_il · I_t + W_hl · H_{t−1} + W_ol · O_{t−1} + k_l)   (2)
O_t = l_t · O_{t−1} + g_t · tanh(W_io · I_t + W_ho · H_{t−1} + k_o)   (3)
m_t = σ(W_im · I_t + W_hm · H_{t−1} + W_om · O_t + k_m)   (4)
H_t = m_t · tanh(O_t)   (5)
In training the model, padding is applied before the convolution operations to guarantee that the states have the same spatial dimensions as the inputs. In the ConvLSTM layer, all the LSTM states are initialized to zero before the arrival of the first input, and zero-padding was applied to the hidden states in this study, so that the boundary points of the training dataset were computed differently for prompt learning of the intra-class differences between the defect and non-defect samples.
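The sketch below illustrates, under several assumptions, how such a sequence-based ensemble head can be assembled in Keras: each co-learning model contributes a spatial meta-feature map (here assumed to share a common 8 × 8 × 128 shape), the M maps are stacked along a sequence axis, and a ConvLSTM2D layer implementing Equations (1)–(5) fuses them before the final classification layer. The filter counts, feature shape, and six-class output are illustrative values, not the authors' exact configuration.

```python
# A minimal sketch of the sequence-based ConvLSTM ensemble head.
# Assumptions: M = 5 contributing models, a common 8x8x128 meta-feature shape,
# 64 ConvLSTM filters, and a 6-class output (e.g. the NEU defect classes).
from tensorflow.keras import layers, Input, Model

M = 5
feature_shape = (8, 8, 128)

# One input per contributing model's meta-feature map.
inputs = [Input(shape=feature_shape, name=f"meta_features_{i}") for i in range(M)]

# Stack the M maps along a new sequence axis: (batch, M, 8, 8, 128).
expanded = [layers.Reshape((1,) + feature_shape)(x) for x in inputs]
sequence = layers.Concatenate(axis=1)(expanded)

# The ConvLSTM layer fuses the sequence of meta-features (Equations (1)-(5)).
fused = layers.ConvLSTM2D(filters=64, kernel_size=(3, 3), padding="same",
                          return_sequences=False)(sequence)

# Final decision layers on the fused representation.
pooled = layers.GlobalAveragePooling2D()(fused)
outputs = layers.Dense(6, activation="softmax")(pooled)

ensemble_head = Model(inputs, outputs)
ensemble_head.summary()
```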

4. Experiments

4.1. Dataset Preparation Process

We adopted four datasets consisting of different types of defect and non-defect samples to train and validate the proposed method for robustness and performance. The first dataset employed in this study is the NEU (Northeastern University) surface defect dataset, consisting of six distinct classes of typical surface defects [34]. The data were collected from hot-rolled steel strips and made available for research, with patches (Pa), pitted surface (PS), rolled-in scale (RS), crazing (Cr), scratches (Sc), and inclusion (In) as the defect classes. The dataset initially contained 1800 grayscale images of size 200 × 200 pixels. During the experiments in this study, the dataset was split into 1152 samples for the train set, 288 samples for validation, and 360 for testing the final trained model. The second dataset [35] used in the experiments consisted of 512 × 512 grayscale images of submersible pump impellers with defect and non-defect samples, available publicly for research. The total number of images was 7348, which were resized to 300 × 300 grayscale and split into 4644 training samples, 1989 validation samples, and 715 samples for testing the proposed model.
Furthermore, the third dataset employed in this investigation was the printed circuit board (PCB) industrial dataset [36], which comprised 1500 images of defect and non-defect samples. The data were obtained from a linear-scan CCD process at a resolution of about 48 pixels per mm. The dataset was cross-examined and certified as suitable for training and validating the proposed model and was split into 892 training samples, 223 validation samples, and 180 test samples. Finally, the fourth dataset used in the experiments was the piston image dataset of industrial mechanic components with shaped-out, greasy, broken, fallen, rust-stained, and oily class samples [37]. The dataset contained 285 samples and was collected during the AC piston production process. During the experiments in this study, the dataset was divided into 173 samples for training, 42 validation samples, and 70 test samples.
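A minimal sketch of how one of these splits can be materialized is shown below for the casting (submersible pump impeller) dataset; the directory layout, batch size, seed, and the roughly 70/30 train/validation split are assumptions used for illustration.

```python
# A minimal data-loading sketch for the casting dataset at 300x300 grayscale.
# Assumptions: a "casting_data/train" and "casting_data/test" directory layout,
# batch size 32, a fixed seed, and a ~70/30 train/validation split.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "casting_data/train", label_mode="binary", color_mode="grayscale",
    image_size=(300, 300), batch_size=32,
    validation_split=0.3, subset="training", seed=42)

val_ds = tf.keras.utils.image_dataset_from_directory(
    "casting_data/train", label_mode="binary", color_mode="grayscale",
    image_size=(300, 300), batch_size=32,
    validation_split=0.3, subset="validation", seed=42)

test_ds = tf.keras.utils.image_dataset_from_directory(
    "casting_data/test", label_mode="binary", color_mode="grayscale",
    image_size=(300, 300), batch_size=32, shuffle=False)
```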

4.2. Experimental and Evaluation Metrics

We implemented the proposed DL method using a high-end computing resource with two GPU cores, each with a 12 GB video card, and 32 GB of RAM. The Keras and TensorFlow open-source DL libraries, in conjunction with other supporting Python modules, such as NumPy, pandas, matplotlib, and sklearn, as well as a Linux operating system, were used for the implementation of the introduced DL method. Each of the individual models used in the experiments was trained and validated for 30 epochs. Furthermore, a binary cross-entropy loss function was used for the datasets, except for the multi-class NEU dataset, for which a categorical cross-entropy loss function was used. A learning rate scheduler with a 0.9 decay factor (1 × 10−3 initial rate) and an Adam optimizer with a 1 × 10−4 learning rate were employed to train and validate the models. To help boost the performance of the models’ entire training and learning process, standard augmentation techniques (horizontal flipping, random cropping, rotation, and shear) were deployed to artificially increase the training data. At the end of the training and validation process of the selected models, the meta-features obtained from the models were aggregated using the weighted averaging sequence-based meta-feature learning ensembler to form the new proposed model.
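The sketch below shows one plausible Keras realization of this training setup; the augmentation parameter values, the directory path, the placeholder network, and the per-epoch form of the 0.9 learning-rate decay are assumptions, since the paper does not list them explicitly.

```python
# A minimal training-configuration sketch: augmentation, Adam optimizer,
# a 0.9-per-epoch learning-rate decay, binary cross-entropy, and 30 epochs.
# The augmentation magnitudes, directory path, and placeholder model are assumptions.
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rescale=1.0 / 255.0,
    horizontal_flip=True,   # horizontal flipping
    rotation_range=15,      # rotation (assumed magnitude)
    shear_range=0.1,        # shear range (assumed magnitude)
    zoom_range=0.1)         # stand-in for random cropping (assumed)

# Placeholder network; in practice this is any of the Section 3.1 models.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(300, 300, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

def decay(epoch, lr):
    return lr * 0.9  # multiply the learning rate by 0.9 each epoch

model.fit(
    augmenter.flow_from_directory("casting_data/train", target_size=(300, 300),
                                  color_mode="grayscale", class_mode="binary"),
    epochs=30,
    callbacks=[tf.keras.callbacks.LearningRateScheduler(decay)])
```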
Different DL model evaluation metrics were employed to thoroughly examine the results obtained from the experiments run in this investigation. One such metric is the Cohen kappa score, which measures the inter-rater reliability of the proposed model. Additionally, Matthew’s correlation coefficient (MCC) estimates the quality of the association between the pairs of defect and non-defect samples. The mean square error measures the mean squared difference between the actual data samples and the predicted samples, while the mean square log error captures the relative error between the actual defect and non-defect data samples and the predicted samples. We further extracted the precision, recall, F1-score, and weighted average scores of the individual models and the proposed model, thus solidifying the results of the proposed method.
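These metrics are all available in scikit-learn; the short sketch below shows how they can be computed from the true and predicted test labels (the label arrays are placeholders standing in for a trained model's predictions).

```python
# A minimal sketch of the evaluation metrics used in this study, computed with
# scikit-learn. y_true and y_pred are placeholder integer labels standing in for
# the test-set ground truth and a trained model's predictions.
from sklearn.metrics import (cohen_kappa_score, matthews_corrcoef,
                             mean_squared_error, mean_squared_log_error,
                             classification_report)

y_true = [0, 1, 1, 0, 1, 0]   # placeholder ground-truth labels
y_pred = [0, 1, 0, 0, 1, 0]   # placeholder predicted labels

print("Kp:  ", cohen_kappa_score(y_true, y_pred))
print("MCC: ", matthews_corrcoef(y_true, y_pred))
print("MSE: ", mean_squared_error(y_true, y_pred))
print("MSLE:", mean_squared_log_error(y_true, y_pred))
print(classification_report(y_true, y_pred))  # precision, recall, F1, weighted avg
```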

5. Results

We first constructed a unique but efficient custom CNN model to effectively handle the problem of dataset limitations. The CNN model is simple and unique because it consists of few layers and parameters (see Table 1), yet it proactively learns the features of the defect and non-defect data samples. Additionally, we adopted the transfer learning (TL) concept on the other pre-trained models selected in this study. We first trained the custom model, Inceptionv3, Xception, DenseNet, and MobileNet with the data samples allotted for training. Then, we fused the outputs of the models using the introduced weighted averaging sequence-based meta-feature learning ensembler to form an entirely new model for superior performance. A comparative performance evaluation of each of the adopted co-learning models against the proposed model was conducted and tabulated accordingly (see Table 2, Table 3, Table 4 and Table 5). The classification performance with respect to the Cohen kappa (Kp), Matthew’s correlation coefficient (MCC), accuracy, mean square error (MSE), and mean square log error (MSLE) on the NEU dataset is shown in Table 2 below.
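For clarity, the weighted-averaging idea behind the fusion step can be illustrated with a tiny NumPy example; the per-model weights and probabilities below are illustrative values only, not the ones used in the experiments.

```python
# A minimal NumPy sketch of weighted averaging of per-model class probabilities.
# The probabilities and weights are illustrative, not values from the experiments.
import numpy as np

model_probs = np.array([
    [0.10, 0.90],   # custom model:  P(non-defect), P(defect)
    [0.20, 0.80],   # Inceptionv3
    [0.05, 0.95],   # Xception
    [0.30, 0.70],   # DenseNet
    [0.25, 0.75],   # MobileNet
])

weights = np.array([0.20, 0.20, 0.30, 0.15, 0.15])  # per-model weights, sum to 1

ensemble_probs = np.average(model_probs, axis=0, weights=weights)
predicted_class = int(np.argmax(ensemble_probs))
print(ensemble_probs, predicted_class)
```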
As tabulated in Table 2, the classification performance significantly improved after the model ensemble process, with approximately 9.99 × 10−1 Kp score, 1.00 × 10+00 MCC, 1.00 × 10+00 accuracy, and low MSE and MSLE of 3.47 × 10−4 and 3.48 × 10−6, respectively. More vividly, the performance accuracy scores rose from 9.94 × 10−1 (Inceptionv3), 9.94 × 10−1 (both the custom model and DenseNet), and 9.93 × 10−1 (MobileNet) to 1.00 × 10+00 for the proposed method. A detailed performance breakdown of the proposed method is shown in the confusion matrix table containing the experiments’ precision, recall, and F1-scores (see Figure 2).
The rows in the confusion matrix table correspond to the various classes of the NEU dataset: class 0 corresponds to the crazing samples, class 1 to the inclusion samples, and class 2 to the patches samples. The pitted_surface samples are denoted by class 3, the rolled-in_scale samples by class 4, and the scratch samples by class 5. The overall weighted averages are also reported in the table. According to the confusion matrix table, the InceptionV3 model returned close to 100% scores in precision, recall, and F1-score across the samples, with a weighted average precision of 9.95 × 10−1, weighted average recall of 9.94 × 10−1, and a weighted average F1-score of 9.94 × 10−1. The scores were similar across the other participating models, but the proposed model yielded superior performance, with a weighted average precision, recall, and F1-score of approximately 100%. Furthermore, the results obtained using the piston dataset are shown in Table 3.
From Table 3, both DenseNet and MobileNet recorded the lowest Kp, MCC, and accuracy scores. Inceptionv3 followed with 7.90 × 10−1 Kp, 7.90 × 10−1 MCC, and 9.14 × 10−1 accuracy, while the crafted custom model returned 9.43 × 10−1 accuracy, 8.51 × 10−1 Kp, and 8.61 × 10−1 MCC. The Xception model produced the second-best performance, with 9.66 × 10−1 Kp, 9.66 × 10−1 MCC, and 9.86 × 10−1 accuracy. Meanwhile, the proposed model produced the overall best performance, with 9.80 × 10−1 Kp, 9.72 × 10−1 MCC, and 9.49 × 10−1 accuracy, showing a remarkable performance gain over the various participating models.
Figure 3 shows more detailed results extracted from the various experiments conducted with the piston dataset. Given the 50 non-defect and 20 defect samples, the introduced method yielded a strong performance against the different pre-trained models and the custom model constructed during the experiments. The custom model produced weighted average precision, recall, and F1 scores of approximately 95% each across the 70 defect and non-defect test samples, the Inceptionv3 model produced weighted average precision, recall, and F1 scores of about 91% each, and the Xception model generated weighted average precision, recall, and F1 scores of approximately 98.6% each. The DenseNet and MobileNet produced weighted average precision, recall, and F1 scores of roughly 91% each across the 70 defect and non-defect test samples. In comparison, the proposed method outclassed these models by producing an improved weighted average precision of about 99.7%, recall of 99.7%, and F1 scores of approximately 97% across the 70 defect and non-defect test samples.
To continue demonstrating the robustness of the proposed method in learning and classifying the various kinds of defect and non-defect data samples in manufacturing products, another casting dataset was employed to train and test the introduced model. The results obtained from the experiments involving this dataset are shown in Table 4. According to the table, the presented method produced enhanced results against the other models, with Kp scores of 9.98 × 10−1, 1.00 × 10+00 MCC, and 1.00 × 10+00 accuracy. It also returned the least MSE of 6.70 × 10−6 and MSLE of 7.80 × 10−8, respectively.
Furthermore, the performance of the various models using the casting dataset is further showcased in the matrix table in Figure 4. Given the 453 non-defect and 262 defect samples, the proposed model returned a notable performance improvement over the various pre-trained models and the custom model constructed during the experiments. The custom model produced weighted average precision, recall, and F1 scores of approximately 9.95 × 10−1 each across the 714 faulty and non-faulty test samples, with the Inceptionv3 model returning weighted average precision, recall, and F1 scores closely similar to the custom model. The Xception model, on the other hand, produced weighted average precision, recall, and F1 scores of about 9.96 × 10−1 each, while the DenseNet returned weighted average precision, recall, and F1 scores of approximately 9.93 × 10−1 each across the 714 defect and non-defect test samples. The MobileNet returned weighted average precision, recall, and F1 scores of approximately 9.95 × 10−1 each across the 714 faulty and non-faulty test samples, and finally, the introduced approach displayed improved performance by returning better weighted average precision and F1 scores of about 1.00 × 10+00 and recall of approximately 9.99 × 10−1 across the 714 defect and non-defect test samples.
The PCB dataset was the final dataset used to train, validate, and test the proposed approach. As shown in Table 5, the proposed approach returned enhanced results against the other models, with a Kp score of 9.78 × 10−1, 9.78 × 10−1 MCC, and 9.89 × 10−1 accuracy. The introduced method also produced the lowest MSE score of 1.11 × 10−2 and MSLE of 5.34 × 10−3.
Finally, the performance of the different models using the PCB dataset was further demonstrated using the matrix table in Figure 5. With the 90 non-defect and 90 defect samples, the introduced model produced a remarkable performance improvement over the other pre-trained models and the custom model constructed during the investigations. The custom model generated the lowest weighted average precision, recall, and F1 scores of approximately 2.35 × 10−1, 2.39 × 10−1, and 2.36 × 10−1, respectively, across the 180 faulty and non-faulty PCB test samples. The Inceptionv3 model returned weighted average precision, recall, and F1 scores of about 9.73 × 10−1, 9.72 × 10−1, and 9.72 × 10−1, respectively, while the Xception model produced weighted average precision, recall, and F1 scores of approximately 5.50 × 10−1, 8.57 × 10−1, and 6.86 × 10−1, respectively. Additionally, the DenseNet returned weighted average precision, recall, and F1 scores of approximately 8.90 × 10−1 each across the 180 defect and non-defect PCB test samples, whereas the MobileNet returned a weighted average precision of 9.08 × 10−1, recall of 9.00 × 10−1, and F1 score of 9.02 × 10−1. Finally, the proposed approach demonstrated improved performance by producing superior weighted average precision, recall, and F1 scores of about 9.89 × 10−1 each across the 180 defect and non-defect PCB test samples.
We also compared our results with other similar studies in the literature. Our model outperformed ShuffleDefectNet [38], which used ShuffleNet to detect metallic surface defects on the Northeastern University (NEU) dataset. While their method achieved a mean average accuracy of 99.75%, our proposed method returned a mean weighted average of approximately 100%. The method introduced in this work also outclassed the approach that combined a modified AlexNet architecture and a support vector machine algorithm to classify the NEU steel strip defect dataset, which yielded 99.7% accuracy [39]. For the classification of faulty and non-faulty samples of the PCB dataset, our method also outperformed the 98.79% accuracy score recorded by Adibhatla et al. [40], 97.5% by Khalilian et al. [41], 98% by Kim et al. [42], and 98.1% by Bhattacharya and Cloutier [43] (see Figure 6).
Additionally, our introduced method performed better than the 98.06% accuracy score obtained by [44,45] and the 99% accuracy score recorded by Tang et al. [46] in the defect and non-defect piston data classification. Our result also surpassed the 95.5% accuracy score obtained by Lin et al. [47], with a 99.9% accuracy, a 9.98 × 10−1 Kp score, and a 1.00 × 10+00 MCC score for the identification and separation of the defect and non-defect submersible pump impeller casting data samples. Hence, it can be inferred that our proposed method can achieve high accuracy in industrial implementation, with high robustness across different defect and non-defect types.

6. Conclusions

This paper introduced an ensembled deep learning model for accurately and rapidly identifying and classifying defects and non-defects in manufactured industrial products. During the experiments in the study, different DL models were trained to individually learn the vital features necessary for distinguishing faulty industrial products from non-faulty ones, and these models were then unified using the convolutional LSTM sequence-based ensembler to obtain a high-accuracy, near-optimal model for inference and implementation. Different metrics were employed to test and validate the proposed model, with remarkable results obtained that support the usefulness of the new method.
The proposed model yielded superior performance because of the fine-tuning of the parameters of the different adopted models on the specific datasets used in the experiments and the fusion of the features from the various participating models, which overcomes the drawbacks of the individual models. The results from the multiple experiments showed that the adopted DL networks generalized well on the NEU dataset and are highly adaptable for transfer into different domains. The same characteristics exhibited on the NEU dataset were also repeated on the piston dataset by the various adopted models. However, the Xception and the proposed model returned better performance than the custom, Inceptionv3, DenseNet, and MobileNet models on the casting dataset. On the other hand, the Inceptionv3, MobileNet, and the introduced model performed better than the custom, DenseNet, and Xception DL networks on the PCB data samples.
The ensembled architecture greatly assisted the proposed model in aggregating the fine features from the different classes of the datasets used in the experiments and, in turn, returned better results. The proposed model extracted the defect and non-defect features from the data samples that are pertinent to learning to distinguish faulty and non-faulty manufactured products. Our introduced method offers a general ensembling architecture for learning deeper feature representations from different and diverse datasets for robustness and better performance.

Author Contributions

Conceptualization, O.S.; methodology, O.S.; software, O.S.; formal analysis, O.S. and M.N.; data curation, O.S.; writing—original draft preparation, O.S.; writing—review and editing, O.S., S.M., and M.N.; visualization, O.S.; supervision, S.M. and M.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding and the APC was funded by the Auckland University of Technology.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

(1) NEU (Northeastern University) surface defect dataset [34]. (2) PCB defect dataset [36]. (3) https://www.kaggle.com/datasets/ravirajsinh45/real-life-industrial-dataset-of-casting-product, accessed on 12 August 2022. (4) https://www.kaggle.com/datasets/satishpaladi11/mechanic-component-images-normal-defected, accessed on 12 August 2022.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Anand, S.; Priya, L. A Guide for Machine Vision in Quality Control; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  2. Goetsch, D.L.; Davis, S.B. Quality Management for Organizational Excellence; Pearson: Upper Saddle River, NJ, USA, 2014. [Google Scholar]
  3. Naranjo-Torres, J.; Mora, M.; Hernández-García, R.; Barrientos, R.J.; Fredes, C.; Valenzuela, A. A review of convolutional neural network applied to fruit image processing. Appl. Sci. 2020, 10, 3443. [Google Scholar] [CrossRef]
  4. Czimmermann, T.; Ciuti, G.; Milazzo, M.; Chiurazzi, M.; Roccella, S.; Oddo, C.M.; Dario, P. Visual-based defect detection and classification approaches for industrial applications—A survey. Sensors 2020, 20, 1459. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Alom, M.Z.; Taha, T.M.; Yakopcic, C.; Westberg, S.; Sidike, P.; Nasrin, M.S.; Van Esesn, B.C.; Awwal, A.A.S.; Asari, V.K. The history began from alexnet: A comprehensive survey on deep learning approaches. arXiv 2018, arXiv:1803.01164. [Google Scholar]
  6. He, Y.; Song, K.; Meng, Q.; Yan, Y. An end-to-end steel surface defect detection approach via fusing multiple hierarchical features. IEEE Trans. Instrum. Meas. 2019, 69, 1493–1504. [Google Scholar] [CrossRef]
  7. Borji, A.; Cheng, M.-M.; Jiang, H.; Li, J. Salient object detection: A benchmark. IEEE Trans. Image Process. 2015, 24, 5706–5722. [Google Scholar] [CrossRef] [Green Version]
  8. LeCun, Y. LeNet-5, Convolutional Neural Networks. 2015. Available online: http://yann.lecun.com/exdb/lenet (accessed on 12 August 2022).
  9. Tao, X.; Wang, Z.; Zhang, Z.; Zhang, D.; Xu, D.; Gong, X.; Zhang, L. Wire defect recognition of spring-wire socket using multitask convolutional neural networks. IEEE Trans. Compon. Packag. Manuf. Technol. 2018, 8, 689–698. [Google Scholar] [CrossRef]
  10. Bartler, A.; Mauch, L.; Yang, B.; Reuter, M.; Stoicescu, L. Automated detection of solar cell defects with deep learning. In Proceedings of the 2018 26th European Signal Processing Conference (EUSIPCO), Roma, Italy, 3–7 September 2018; pp. 2035–2039. [Google Scholar]
  11. Mundt, M.; Majumder, S.; Murali, S.; Panetsos, P.; Ramesh, V. Meta-learning convolutional neural architectures for multi-target concrete defect classification with the concrete defect bridge image dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–17 June 2019; pp. 11196–11205. [Google Scholar]
  12. Stephen, O.; Maduh, U.J.; Sain, M. A Machine Learning Method for Detection of Surface Defects on Ceramic Tiles Using Convolutional Neural Networks. Electronics 2021, 11, 55. [Google Scholar] [CrossRef]
  13. Aydin, I.; Akin, E.; Karakose, M. Defect classification based on deep features for railway tracks in sustainable transportation. Appl. Soft Comput. 2021, 111, 107706. [Google Scholar] [CrossRef]
  14. Krummenacher, G.; Ong, C.S.; Koller, S.; Kobayashi, S.; Buhmann, J.M. Wheel defect detection with machine learning. IEEE Trans. Intell. Transp. Syst. 2017, 19, 1176–1187. [Google Scholar] [CrossRef]
  15. Zheng, Z.; Shen, J.; Shao, Y.; Zhang, J.; Tian, C.; Yu, B.; Zhang, Y. Tire defect classification using a deep convolutional sparse-coding network. Meas. Sci. Technol. 2021, 32, 055401. [Google Scholar] [CrossRef]
  16. Ajmi, C.; Zapata, J.; Elferchichi, S.; Zaafouri, A.; Laabidi, K. Deep learning technology for weld defects classification based on transfer learning and activation features. Adv. Mater. Sci. Eng. 2020, 2020, 1574350. [Google Scholar] [CrossRef]
  17. Konovalenko, I.; Maruschak, P.; Brezinová, J.; Viňáš, J.; Brezina, J. Steel surface defect classification using deep residual neural network. Metals 2020, 10, 846. [Google Scholar] [CrossRef]
  18. Luo, Q.; Sun, Y.; Li, P.; Simpson, O.; Tian, L.; He, Y. Generalized completed local binary patterns for time-efficient steel surface defect classification. IEEE Trans. Instrum. Meas. 2018, 68, 667–679. [Google Scholar] [CrossRef] [Green Version]
  19. Hu, H.; Liu, Y.; Liu, M.; Nie, L. Surface defect classification in large-scale strip steel image collection via hybrid chromosome genetic algorithm. Neurocomputing 2016, 181, 86–95. [Google Scholar] [CrossRef]
  20. Deng, Y.-S.; Luo, A.-C.; Dai, M.-J. Building an automatic defect verification system using deep neural network for pcb defect classification. In Proceedings of the 2018 4th International Conference on Frontiers of Signal Processing (ICFSP), Poitiers, France, 24–27 September 2018; pp. 145–149. [Google Scholar]
  21. Zhang, L.; Jin, Y.; Yang, X.; Li, X.; Duan, X.; Sun, Y.; Liu, H. Convolutional neural network-based multi-label classification of PCB defects. J. Eng. 2018, 2018, 1612–1616. [Google Scholar] [CrossRef]
  22. Chen, Z.; Zhao, F.; Zhou, J.; Huang, P.; Song, W. A novel approach applied to fault diagnosis for micro-defects on piston throat. Measurement 2021, 173, 108508. [Google Scholar] [CrossRef]
  23. Zheng, B.; Wang, C.; Qing, S. Piston Surface Defect Recognition Method Based on Image Processing. In Proceedings of the 2022 IEEE 10th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), Chongqing, China, 17–19 June 2022; pp. 928–932. [Google Scholar]
  24. Nikolić, F.; Štajduhar, I.; Čanađija, M. Casting Defects Detection in Aluminum Alloys Using Deep Learning: A Classification Approach. Int. J. Met. 2022. [Google Scholar] [CrossRef]
  25. Habibpour, M.; Gharoun, H.; Tajally, A.; Shamsi, A.; Asgharnezhad, H.; Khosravi, A.; Nahavandi, S. An Uncertainty-Aware Deep Learning Framework for Defect Detection in Casting Products. arXiv 2021, arXiv:2107.11643. [Google Scholar] [CrossRef]
  26. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision And Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  27. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  28. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
  29. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  30. Li, Y.; Huang, H.; Xie, Q.; Yao, L.; Chen, Q. Research on a surface defect detection algorithm based on MobileNet-SSD. Appl. Sci. 2018, 8, 1678. [Google Scholar] [CrossRef] [Green Version]
  31. Rabano, S.L.; Cabatuan, M.K.; Sybingco, E.; Dadios, E.P.; Calilung, E.J. Common garbage classification using mobilenet. In Proceedings of the 2018 IEEE 10th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM), Baguio City, Philippines, 29 November–2 December 2018; pp. 1–4. [Google Scholar]
  32. Michele, A.; Colin, V.; Santika, D.D. Mobilenet convolutional neural networks and support vector machines for palmprint recognition. Procedia Comput. Sci. 2019, 157, 110–117. [Google Scholar] [CrossRef]
  33. Shi, X.; Chen, Z.; Wang, H.; Yeung, D.-Y.; Wong, W.-K.; Woo, W.-C. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. Adv. Neural Inf. Process. Syst. 2015, arXiv:1506.04214. [Google Scholar]
  34. Bao, Y.; Song, K.; Liu, J.; Wang, Y.; Yan, Y.; Yu, H.; Li, X. Triplet-graph reasoning network for few-shot metal generic surface defect segmentation. IEEE Trans. Instrum. Meas. 2021, 70, 1–11. [Google Scholar] [CrossRef]
  35. Dabhi, R. Casting Product Image Data for Quality Inspection. Available online: https://www.kaggle.com/datasets/ravirajsinh45/real-life-industrial-dataset-of-casting-product (accessed on 12 August 2022).
  36. Tang, S.; He, F.; Huang, X.; Yang, J. Online PCB defect detector on a new PCB defect dataset. arXiv 2019, arXiv:1902.06197. [Google Scholar]
  37. Paladi, S. Mechanic Component Images (Normal/Defected) Dataset. Available online: https://www.kaggle.com/datasets/satishpaladi11/mechanic-component-images-normal-defected (accessed on 12 August 2022).
  38. Anvar, A.; Cho, Y.I. Automatic metallic surface defect detection using shuffledefectnet. J. Korea Soc. Comput. Inf. 2020, 25, 19–26. [Google Scholar]
  39. Boudiaf, A.; Benlahmidi, S.; Harrar, K.; Zaghdoudi, R. Classification of surface defects on steel strip images using convolution neural network and support vector machine. J. Fail. Anal. Prev. 2022, 22, 531–541. [Google Scholar] [CrossRef]
  40. Adibhatla, V.A.; Chih, H.-C.; Hsu, C.-C.; Cheng, J.; Abbod, M.F.; Shieh, J.-S. Defect detection in printed circuit boards using you-only-look-once convolutional neural networks. Electronics 2020, 9, 1547. [Google Scholar] [CrossRef]
  41. Khalilian, S.; Hallaj, Y.; Balouchestani, A.; Karshenas, H.; Mohammadi, A. Pcb defect detection using denoising convolutional autoencoders. In Proceedings of the 2020 International Conference on Machine Vision and Image Processing (MVIP), Qom, Iran, 18–20 February 2020; pp. 1–5. [Google Scholar]
  42. Kim, J.; Ko, J.; Choi, H.; Kim, H. Printed circuit board defect detection using deep learning via a skip-connected convolutional autoencoder. Sensors 2021, 21, 4968. [Google Scholar] [CrossRef]
  43. Bhattacharya, A.; Cloutier, S.G. End-to-end deep learning framework for printed circuit board manufacturing defect classification. Sci. Rep. 2022, 12, 12559. [Google Scholar] [CrossRef]
  44. Zhu, Y.; Li, G.; Wang, R.; Tang, S.; Su, H.; Cao, K. Intelligent fault diagnosis of hydraulic piston pump based on wavelet analysis and improved alexnet. Sensors 2021, 21, 549. [Google Scholar] [CrossRef]
  45. Tang, S.; Zhu, Y.; Yuan, S.; Li, G. Intelligent diagnosis towards hydraulic axial piston pump using a novel integrated CNN model. Sensors 2020, 20, 7152. [Google Scholar] [CrossRef] [PubMed]
  46. Tang, S.; Zhu, Y.; Yuan, S. An adaptive deep learning model towards fault diagnosis of hydraulic piston pump using pressure signal. Eng. Fail. Anal. 2022, 138, 106300. [Google Scholar] [CrossRef]
  47. Lin, J.; Yao, Y.; Ma, L.; Wang, Y. Detection of a casting defect tracked by deep convolution neural network. Int. J. Adv. Manuf. Technol. 2018, 97, 573–581. [Google Scholar] [CrossRef]
Figure 1. The flow sketch of the proposed Weighted Averaging Sequence-based Meta-learning Ensembler.
Figure 2. The confusion matrix table using the NEU dataset.
Figure 3. The confusion matrix table using the piston dataset.
Figure 4. The confusion matrix table using the casting dataset.
Figure 5. The confusion matrix table using the PCB dataset.
Figure 6. The output summary of the model training and testing process with the datasets.
Table 1. The conventional base model.

Layer Type        | Output Shape        | Parameters
conv2d (Conv2D)   | (None, 45, 45, 32)  | 320
max_pooling2d     | (None, 22, 22, 32)  | 0
conv2d_1 (Conv2D) | (None, 11, 11, 64)  | 18,496
max_pooling2d_1   | (None, 5, 5, 64)    | 0
flatten (Flatten) | (None, 1600)        | 0
dense (Dense)     | (None, 128)         | 204,928
dense_1 (Dense)   | (None, 1)           | 129
Table 2. The classification report for each of the models and the final model using the NEU dataset.

Model       | Kp          | MCC          | Accuracy     | MSE         | MSLE
Inceptionv3 | 9.93 × 10−1 | 9.93 × 10−1  | 9.94 × 10−1  | 4.72 × 10−2 | 3.58 × 10−3
Custom      | 9.99 × 10−1 | 9.95 × 10−1  | 9.94 × 10−1  | 4.21 × 10−2 | 3.49 × 10−3
DenseNet    | 9.99 × 10−1 | 9.94 × 10−1  | 9.94 × 10−1  | 4.53 × 10−2 | 2.78 × 10−3
MobileNet   | 9.94 × 10−1 | 9.98 × 10−1  | 9.93 × 10−1  | 3.17 × 10−2 | 2.34 × 10−3
Proposed    | 9.99 × 10−1 | 1.00 × 10+00 | 1.00 × 10+00 | 3.47 × 10−4 | 3.48 × 10−6
Table 3. The classification report for each of the models and the final model using the piston dataset.

Model       | Kp          | MCC         | Accuracy    | MSE         | MSLE
Custom      | 8.51 × 10−1 | 8.61 × 10−1 | 9.43 × 10−1 | 5.71 × 10−2 | 2.75 × 10−2
Inceptionv3 | 7.90 × 10−1 | 7.90 × 10−1 | 9.14 × 10−1 | 8.57 × 10−2 | 4.12 × 10−2
Xception    | 9.66 × 10−1 | 9.66 × 10−1 | 9.86 × 10−1 | 1.43 × 10−2 | 6.87 × 10−3
DenseNet    | 7.46 × 10−1 | 7.79 × 10−1 | 9.10 × 10−1 | 1.00 × 10−1 | 4.61 × 10−2
MobileNet   | 7.66 × 10−1 | 7.69 × 10−1 | 9.00 × 10−1 | 1.00 × 10−1 | 4.81 × 10−2
Proposed    | 9.80 × 10−1 | 9.72 × 10−1 | 9.49 × 10−1 | 2.89 × 10−2 | 2.93 × 10−2
Table 4. The classification report for each of the models and the final model using the casting dataset.

Model       | Kp          | MCC          | Accuracy     | MSE         | MSLE
Custom      | 9.88 × 10−1 | 9.88 × 10−1  | 9.94 × 10−1  | 5.59 × 10−3 | 2.69 × 10−3
Inceptionv3 | 9.88 × 10−1 | 9.88 × 10−1  | 9.94 × 10−1  | 5.59 × 10−3 | 2.69 × 10−3
Xception    | 9.91 × 10−1 | 9.91 × 10−1  | 9.96 × 10−1  | 4.20 × 10−3 | 2.02 × 10−3
DenseNet    | 9.85 × 10−1 | 9.94 × 10−1  | 9.93 × 10−1  | 4.53 × 10−2 | 2.78 × 10−3
MobileNet   | 9.88 × 10−1 | 9.88 × 10−1  | 9.94 × 10−1  | 5.59 × 10−3 | 2.69 × 10−3
Proposed    | 9.98 × 10−1 | 1.00 × 10+00 | 1.00 × 10+00 | 6.70 × 10−6 | 7.80 × 10−8
Table 5. The classification report for each of the models and the final model using the PCB dataset.

Model       | Kp          | MCC         | Accuracy    | MSE         | MSLE
Custom      | 8.52 × 10−1 | 7.83 × 10−1 | 8.39 × 10−1 | 2.76 × 10−1 | 1.66 × 10−1
Inceptionv3 | 9.44 × 10−1 | 9.45 × 10−1 | 9.72 × 10−1 | 2.78 × 10−2 | 1.34 × 10−2
Xception    | 7.97 × 10−1 | 8.11 × 10−1 | 8.96 × 10−1 | 4.72 × 10−1 | 2.40 × 10−1
DenseNet    | 7.78 × 10−1 | 7.79 × 10−1 | 8.89 × 10−1 | 1.11 × 10−1 | 5.34 × 10−2
MobileNet   | 9.00 × 10−1 | 9.03 × 10−1 | 9.50 × 10−1 | 5.00 × 10−2 | 2.40 × 10−2
Proposed    | 9.78 × 10−1 | 9.78 × 10−1 | 9.89 × 10−1 | 1.11 × 10−2 | 5.34 × 10−3
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
