Article

Wood Defect Detection Based on Depth Extreme Learning Machine

1 College of Mechanical & Electronic Engineering, Nanjing Forestry University, Nanjing 210037, China
2 School of Artificial Intelligence, Hezhou University, Hezhou 542899, China
3 Nanjing Fujitsu Nanda Software Technology Co., Ltd., Nanjing 210012, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(21), 7488; https://doi.org/10.3390/app10217488
Submission received: 18 September 2020 / Revised: 20 October 2020 / Accepted: 22 October 2020 / Published: 24 October 2020
(This article belongs to the Special Issue Mathematics and Digital Signal Processing)

Abstract

The deep learning feature extraction method and the extreme learning machine (ELM) classification method are combined to establish a depth extreme learning machine model for wood image defect detection. The convolution neural network (CNN) algorithm alone tends to provide inaccurate defect locations, incomplete defect contour and boundary information, and inaccurate recognition of defect types. The nonsubsampled shearlet transform (NSST) is used here to preprocess the wood images, which reduces the complexity and computational cost of the image processing. A CNN is then applied to extract deep features from the wood images. The simple linear iterative clustering algorithm is used to improve the initial model; the obtained image features are used as the ELM classification inputs. The ELM has a faster training speed and stronger generalization ability than other similar neural networks, but the random selection of input weights and thresholds degrades its classification accuracy. A genetic algorithm is used here to optimize the initial parameters of the ELM and stabilize the network classification performance. The depth extreme learning machine can extract high-level abstract information from the data, does not require iterative adjustment of the network weights, has high computational efficiency, and allows the CNN to effectively extract the wood defect contour. The distributed input data features are automatically expressed in layered form by deep learning pre-training. The wood defect recognition accuracy reached 96.72% with a test time of only 187 ms.

1. Introduction

Today’s wood products are manufactured under increasingly stringent requirements for surface processing. In developed countries with abundant forest resources, such as Sweden and Finland, the comprehensive use rate of wood is as high as 90%. In sharp contrast, the comprehensive use rate of wood in China is less than 60%, causing a serious waste of resources. With China’s rapid economic development, people increasingly pursue a high-quality life, which will inevitably increase the demand for wood and wood products; China’s consumption of solid wood panels, wood-based panels, paper, and cardboard is already among the highest in the world. The existing wood stock and processing capacity make it difficult to meet this rapidly growing demand. The shortage of wood supply and the low use rate have limited the development of China’s wood industry. Therefore, it is necessary to comprehensively inspect the processing quality of logs and boards to improve the use rate of wood and the quality of wood products.
Nondestructive testing can accurately and quickly assess the physical properties and growth defects of wood, and it can be automated. In recent years, the combined application of computer technology with detection and control theory has made great progress in the detection of wood defects. In the nondestructive testing of wood surfaces, commonly used traditional methods include laser testing [1,2], ultrasonic testing [3,4,5], acoustic emission technology [6,7], etc. Computer-aided techniques are a common approach to surface inspection, as they are efficient and have a generally high recognition rate [8,9]. Deep learning was first proposed by Hinton in 2006; in 2012, an AlexNet network based on deep learning achieved computer vision recognition accuracy of up to 84.7%. Deep learning mitigates the curse of dimensionality through layer-wise initialization and represents a revolutionary development in the field of machine learning [10,11]. More and more scholars are applying deep learning networks to wood nondestructive testing. He et al. [12] used a linear array CCD camera to obtain wood surface images and proposed a hybrid fully convolutional neural network (Mix-FCN) for the recognition and location of wood defects; however, the network was very deep and computationally expensive. Hu et al. [13] and Shi et al. [14] used the Mask R-CNN algorithm for wood defect recognition, but they combined multiple feature extraction methods, which resulted in a very complex model. Current deep learning algorithms still suffer from inaccurate defect localization and incomplete defect contour and boundary information in the wood defect detection process. To address these problems and meet the wood-testing needs of wood processing enterprises, we carried out the research reported in this article.
The innovations of this article are: (1) simple pre-processing of wood images using the nonsubsampled shearlet transform (NSST), which reduces the complexity and computational cost of image processing before the images are input to the convolutional neural network; (2) application of a simple linear iterative clustering (SLIC) algorithm to enhance the convolutional neural network and obtain super-pixel images with more complete boundary contours; (3) use of a genetic algorithm to improve the extreme learning machine, which classifies the obtained image features. Through these methods, the accuracy of defect detection is improved and the recognition time is shortened, establishing an innovative machine-vision-based wood testing technique.

2. Materials and Methods

2.1. Wood Defect Original Image Dataset

According to their different causes and formation processes, solid wood board defects are divided into biohazard defects, growth defects, and processing defects. Growth defects and biohazard defects are natural defects with characteristic shapes and structures, and they are an important basis for wood grade classification. Generally speaking, solid wood board growth defects and biohazard defects can be divided into dead knots, live knots, worm holes, decay, etc. The original data set used in this experiment is derived from the wood sampling images of the 948 project of the State Forestry Administration (the introduction of laser profile and color integrated scanning technology for solid wood panels). When scanning to obtain wood images, the scanning speed of the scanner is 170 Hz–5000 Hz, the Z-direction resolution is 0.055 mm–0.200 mm, the X-direction resolution is 0.2755 mm–0.550 mm, and the color pixel resolution can reach 1 mm × 0.5 mm. The data set includes 5000 defect images of pine, fir, and ash. The bit depth of each image is 24, and the specified size is 100 × 100 pixels. Some of the defect images are shown in Figure 1.

2.2. Optimized Convolution Neural Network

This paper proposes an optimized algorithm which uses NSST to preprocess the images followed by the CNN to extract defect features from wood images as a preliminary CNN model. The simple linear iterative clustering (SLIC) super-pixel segmentation algorithm is used to analyze the wood images by super-pixel clustering, which allows the defects in wood images and local information regarding defects and cracks to be efficiently located. The obtained information is fed back to the initial model, which enhances the original CNN.

2.2.1. Structure and Characteristics of Convolution Neural Networks

The CNN is an artificial neural network algorithm with a multi-layer trainable architecture [15]. It generally consists of an input layer, convolution layers, excitation (activation) layers, pooling layers, and a fully connected layer. CNNs have many advantages in image processing applications. (1) Feature extraction and classification can be combined in the same network structure and trained jointly, making the algorithm fully adaptive. (2) When the image size is large, deep feature information can be extracted more effectively. (3) The unique network structure is robust to local deformation, rotation, translation, and other changes in the input image. In this study, each pixel in the wood image was convolved and the defect features were extracted by exploiting these CNN characteristics. The CNN network skeleton used in this article is shown in Figure 2.
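The exact layer configuration is not listed in this section, so the following Keras sketch is only an illustrative small CNN for 100 × 100 wood patches; the filter counts, kernel sizes, and the assumed four defect classes are placeholders rather than the authors' settings.

```python
# Illustrative sketch only: the paper does not list exact layer sizes here,
# so the filter counts, kernel sizes, and number of defect classes (4) are
# assumptions for demonstration, not the authors' configuration.
import tensorflow as tf

def build_wood_cnn(input_shape=(100, 100, 3), num_classes=4):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        # convolution + activation + pooling, repeated to deepen the features
        tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        # fully connected head; the penultimate layer can be reused as a
        # feature vector for an external classifier such as an ELM
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu", name="feature_vector"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```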

2.2.2. Non-Subsampled Shearlet Transform (NSST)

The NSST can represent signals sparsely and near-optimally, and it has strong direction sensitivity [16,17,18]. Therefore, using the NSST to preprocess wood images preserves the defect features of the images. Redundancy in the wood image information is reduced, along with the complexity and computational cost of the subsequent deep-learning-based image processing.
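Common Python packages do not ship a nonsubsampled shearlet transform, so the sketch below is only a crude stand-in that illustrates the idea of a shift-invariant, direction-sensitive multiscale decomposition using Gaussian lowpass bands and directional Sobel gradients; the scales and filters are assumptions, and this is not the NSST used by the authors.

```python
# Crude stand-in, NOT an actual NSST: illustrates a shift-invariant,
# direction-sensitive decomposition with Gaussian lowpass bands and
# directional Sobel gradients (all sizes and sigmas are assumptions).
import numpy as np
from scipy import ndimage

def directional_decomposition(gray_image, sigmas=(1.0, 2.0)):
    """Return a list of (lowpass, horizontal_detail, vertical_detail) bands."""
    gray = np.asarray(gray_image, dtype=float)
    bands = []
    for sigma in sigmas:
        low = ndimage.gaussian_filter(gray, sigma=sigma)
        detail = gray - low                  # bandpass residual at this scale
        dh = ndimage.sobel(detail, axis=1)   # horizontal direction sensitivity
        dv = ndimage.sobel(detail, axis=0)   # vertical direction sensitivity
        bands.append((low, dh, dv))
    return bands
```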

2.2.3. Simple Linear Iterative Clustering (SLIC)

The CNN uses a matrix form to represent the image to be processed, so the spatial organization relationship between pixels is not considered; this affects the image segmentation and obscures the boundary of the defective region of the wood image. The SLIC algorithm can generate relatively compact super-pixel image blocks from a gray or color image. In the generated super-pixel image, the pixels within each block are compact and the edge contours of the image are clear. To this effect, SLIC extracts a relatively accurate contour to supplement the feature contour. SLIC also requires relatively few initial parameters: only the number of super-pixels needed to segment the image must be set. The algorithm is simple in principle, has a small calculation range, and runs rapidly. By 2015, the parallel execution speed had reached 250 FPS; it is now the fastest super-pixel segmentation method available [19].
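As a brief illustration, the scikit-image implementation of SLIC can produce such super-pixels; the file name, segment count, and compactness below are example values, not the parameters used in the paper.

```python
# Minimal SLIC example with scikit-image; n_segments and compactness are
# illustrative values, not the parameters used in the paper.
from skimage import io, segmentation

image = io.imread("wood_sample.png")          # hypothetical 100x100 RGB patch
segments = segmentation.slic(image, n_segments=150, compactness=10,
                             start_label=1)
# superpixel boundaries overlaid on the original image for inspection
overlay = segmentation.mark_boundaries(image, segments)
print("number of superpixels:", segments.max())
```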

2.2.4. Feature Extraction

The optimized CNN model proposed in this paper was designed for wood surface feature extraction. Knots were used as example wood defects (Figure 3a) to test feature extraction via the following operations. The input image (Figure 3a) was processed directly by the CNN algorithm to obtain Figure 3b, which shows local irregularity and non-smooth edges in the contour after enlargement [20]. The SLIC algorithm was then used to process the input image (Figure 3a), followed by longitudinal convolution (Figure 3d). The image shown in Figure 3h was obtained after edge removal and fusion processing. The defect contour features of Figure 3h are substantially clearer than those of Figure 3b because the CNN segments the wood image pixel by pixel, as a matrix, without considering the spatial organization relationship between pixels, which degrades the final segmentation results. The SLIC algorithm instead extracts the wood defect boundary and contour information from the original image and feeds this information back to the initial segmentation results of the CNN model.
The above process reduces the redundancy of local image information as well as the complexity and computational cost of the image processing. The pixel-level CNN model does not accurately reveal the boundary of the defective region of the wood image, but instead indicates only its general position. SLIC can extract a relatively accurate contour to supplement it and optimize the initial CNN model. To this effect, the proposed SLIC-based method improves the defect feature extraction of wood images over the CNN alone.
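The paper does not spell out the exact feedback rule, but one straightforward way to combine per-pixel CNN predictions with SLIC contours is to force label consistency inside each super-pixel, for example by majority vote; the sketch below illustrates this assumed fusion rule rather than the authors' implementation.

```python
# Sketch of one plausible fusion rule (majority vote inside each superpixel);
# treat this as an illustration of the idea, not the authors' exact method.
import numpy as np

def refine_with_superpixels(cnn_labels, superpixels):
    """cnn_labels: (H, W) int array of per-pixel CNN predictions.
    superpixels: (H, W) int array of SLIC segment ids.
    Returns labels made spatially consistent within each superpixel."""
    refined = cnn_labels.copy()
    for seg_id in np.unique(superpixels):
        mask = superpixels == seg_id
        votes = np.bincount(cnn_labels[mask])
        refined[mask] = votes.argmax()       # dominant class wins the region
    return refined
```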
The input image (Figure 3a) was also processed by the NSST algorithm to obtain Figure 3e, from which Figure 3f was obtained using the SLIC algorithm. Vertical convolution was then carried out to obtain Figure 3g, and Figure 3i was obtained after edge removal and fusion processing. Comparing Figure 3i with Figure 3h, although the difference in wood defect contour extraction is not obvious, preprocessing the image with NSST reduces environmental interference and the required training depth, markedly decreasing the computation and complexity of the image processing.
Based on the above analysis, we adopted the wood image processing framework shown in Figure 4 to obtain the wood defect feature map.

2.3. Extreme Learning Machine (ELM)

Deep learning is commonly used in target recognition and defect detection applications due to its excellent feature extraction capability. However, the CNN is highly time-consuming because of the iterative pre-training and fine-tuning stages, and the hardware requirements for more complex engineering applications are also high. The deep CNN structure has a large number of adjustable free parameters, which makes its construction highly flexible; on the other hand, it lacks theoretical guidance and relies heavily on experience, so its generalization performance is uncertain. In this study, we integrated the ELM into a depth extreme learning machine (Figure 5) to improve the training efficiency of the deep convolution network. The proposed method extracts wood defects using the optimized CNN and an ELM classifier, exploiting the excellent feature extraction ability of the deep network and the fast training of the ELM simultaneously.
The ELM algorithm differs from traditional feedforward neural network training. Its hidden layer parameters do not need to be tuned iteratively: the input weights and hidden layer node biases are set randomly, and the algorithm determines the output weights of the hidden layer analytically so as to minimize the training error [21,22,23]. The ELM is based on proven universal approximation and interpolation theorems, under which, when the hidden layer activation function of a single-hidden-layer feedforward neural network is infinitely differentiable, its learning ability is independent of the hidden layer parameters and is related only to the network structure. When the input weights and hidden layer node biases are randomly assigned and an appropriate network structure is chosen, the ELM has universal approximation capability and can approximate any continuous function. Provided that the hidden layer activation function is infinitely differentiable, the output weights of the network can be calculated via the least squares method. A network model that approximates the target function can thus be established, and the corresponding neural network functions such as classification, regression, and fitting can be realized.
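A minimal NumPy sketch of this training scheme is given below, assuming a sigmoid hidden activation and one-hot targets; it follows the closed-form least-squares solution described above rather than any code released by the authors.

```python
# Basic single-hidden-layer ELM: random input weights and biases, sigmoid
# hidden activation, output weights solved in closed form with the
# Moore-Penrose pseudoinverse (no iterative training).
import numpy as np

class ELM:
    def __init__(self, n_hidden=100, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng or np.random.default_rng(0)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid

    def fit(self, X, y_onehot):
        n_features = X.shape[1]
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y_onehot   # least-squares output weights
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)
```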
This paper mainly uses the classification function of the ELM, selecting a relatively simple single-hidden-layer neural network as the classifier. Traditional neural network algorithms need many iterations and parameters, learn slowly, have poor scalability, and require intensive manual intervention. The ELM used here requires no iterations, learns relatively fast, generates the input weights and biases randomly so that no subsequent tuning is needed, and requires relatively little manual intervention. On large sample databases, the recognition rate of the ELM is better than that of the support vector machine (SVM). For these reasons, we use ELMs as classifiers to enhance recognition efficiency and performance [24,25].
The ELM algorithm introduced above is the main classification method for wood defect feature recognition in this paper. However, in the ELM network structure, the input weights and the thresholds of the hidden layer nodes are assigned randomly. Even for ELM structures with the same number of hidden layer neurons, the resulting network performance varies widely, which makes the classification performance unstable. The genetic algorithm (GA) simulates Darwinian evolution to optimize the initial weights and thresholds of the ELM by eliminating less-fit weights and thresholds.
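The following is a hedged sketch of such a GA-ELM search: each chromosome encodes one candidate set of input weights and biases, fitness is the validation error of the ELM trained with those parameters (smaller is better, as in the fitness curves discussed next), and the population size, mutation rate, and generation count are assumed values.

```python
# Hedged sketch of GA-ELM: chromosomes encode the ELM input weights and
# biases; fitness is validation error. Hyperparameters are assumptions.
import numpy as np

def ga_optimize_elm(X_tr, y_tr, X_val, y_val, n_hidden=70,
                    pop_size=20, generations=30, mutation_rate=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n_features = X_tr.shape[1]
    genome_len = n_features * n_hidden + n_hidden   # weights + biases

    def decode(genome):
        W = genome[:n_features * n_hidden].reshape(n_features, n_hidden)
        b = genome[n_features * n_hidden:]
        return W, b

    def fitness(genome):                             # y_tr / y_val are one-hot
        W, b = decode(genome)
        H = 1.0 / (1.0 + np.exp(-(X_tr @ W + b)))    # sigmoid hidden layer
        beta = np.linalg.pinv(H) @ y_tr              # closed-form output weights
        H_val = 1.0 / (1.0 + np.exp(-(X_val @ W + b)))
        pred = (H_val @ beta).argmax(axis=1)
        return np.mean(pred != y_val.argmax(axis=1)) # validation error rate

    pop = rng.standard_normal((pop_size, genome_len))
    for _ in range(generations):
        scores = np.array([fitness(g) for g in pop])
        parents = pop[scores.argsort()[:pop_size // 2]]   # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b_ = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(genome_len) < 0.5            # uniform crossover
            child = np.where(mask, a, b_)
            mut = rng.random(genome_len) < mutation_rate
            child = child + mut * rng.standard_normal(genome_len)  # mutation
            children.append(child)
        pop = np.vstack([parents, np.asarray(children)])
    best = pop[np.argmin([fitness(g) for g in pop])]
    return decode(best)   # optimized initial input weights and biases
```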
Figure 6a–c show the variation curves of the GA-ELM population fitness function under the Radbas, Hardlim, and Sigmoid excitation functions, respectively. Smaller fitness values indicate higher accuracy; the Sigmoid excitation function gives the best network performance.
The classification accuracy of GA-ELM and ELM under different excitation functions is shown in Table 1. The classification accuracies under the Sigmoid and Radbas excitations were similar, while the Hardlim excitation function performed worse. The accuracy of both ELM and GA-ELM was highest when the Sigmoid function was used as the activation function. The accuracy of GA-ELM reached 95.93%, markedly better than that of the unoptimized ELM. The GA-optimized ELM network also required fewer hidden layer nodes and showed higher test accuracy.
In summary, an improved depth extreme learning machine was constructed in this study by combining the optimized GA-ELM classifier with the optimized CNN feature extraction. It is referred to from here on as “D-ELM”.
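Putting the pieces together, the sketch below shows one way to assemble such a D-ELM pipeline from the earlier illustrative components (build_wood_cnn and ELM): the trained CNN's penultimate layer serves as the feature extractor and the ELM is trained on those features. The epoch count and layer name are assumptions, and this is an illustration of the described architecture, not the authors' implementation.

```python
# End-to-end sketch of the D-ELM idea, reusing the earlier illustrative
# pieces (build_wood_cnn, ELM); it mirrors the description in the text but
# is not the authors' released code.
import numpy as np
import tensorflow as tf

def train_d_elm(train_images, train_labels, n_classes=4, n_hidden=100):
    cnn = build_wood_cnn(num_classes=n_classes)               # feature extractor
    cnn.fit(train_images, train_labels, epochs=5, verbose=0)  # epochs assumed
    extractor = tf.keras.Model(cnn.input,
                               cnn.get_layer("feature_vector").output)
    features = extractor.predict(train_images, verbose=0)
    onehot = np.eye(n_classes)[train_labels]                  # ELM needs one-hot targets
    elm = ELM(n_hidden=n_hidden).fit(features, onehot)        # closed-form training
    return extractor, elm

# inference: defect class = elm.predict(extractor.predict(test_images))
```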

3. Experimentation

3.1. Experimental Parameters

Table 2 shows the computer-related parameters and software platform used by the experimental system, including CPU model, main frequency and memory size.

3.2. Empirical Method and Result

The specific experimental process is shown in Figure 7. First, we preprocessed the 5000 original images via NSST and randomly selected 4000 images for training. Second, for each pixel in each image, the neighborhood subgraph was taken as the input to the CNN, giving a total of 40,000,000 samples as the experimental training set to train the network model. The remaining 10,000,000 samples were used as test data to evaluate the algorithm. The features extracted from the test samples were input into the ELM network classifier, the number of hidden layer nodes of the extreme learning machine was set to 100, and the accuracy and stability of the feature extraction method were statistically analyzed. We found that when the number of iterations exceeded 3500, the loss function stabilized at around 0.2 and the convergence performance was acceptable.
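The per-pixel sampling described above can be illustrated with a simple sliding-window extraction: one patch centred on each pixel of a 100 × 100 image yields 10,000 samples per image, which matches the 40,000,000 training samples from 4000 images. The 15 × 15 patch size in this sketch is an assumption, as the exact neighborhood size is not stated here.

```python
# Sketch of per-pixel neighborhood sampling for an H x W x C image: one
# fixed-size patch centred on every pixel. The 15x15 patch size is assumed.
import numpy as np

def pixel_neighborhood_patches(image, patch_size=15):
    half = patch_size // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="reflect")
    h, w = image.shape[:2]
    patches = np.empty((h * w, patch_size, patch_size, image.shape[2]),
                       dtype=image.dtype)
    k = 0
    for i in range(h):
        for j in range(w):
            patches[k] = padded[i:i + patch_size, j:j + patch_size]
            k += 1
    return patches   # one training sample per pixel
```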
Figure 8a shows the relationship between the training loss value and the number of iterations. Although the training loss value fluctuated somewhat during the iteration process, it showed an overall downward trend. When the iterations were completed, the training loss value was around 0.2. Figure 8b shows the accuracy curve. At 1500 iterations, the accuracy of the proposed algorithm reached 90%. Accuracy continued to increase with further iterations until reaching a maximum of about 98%.
Figure 9 shows the final recognition results on the test set. The identified wood defects are surrounded by rectangular borders of different colors.

4. Discussion

This paper proposes an ELM classifier based on a deep structure. Choosing an appropriate number of hidden nodes in the D-ELM structure provides enhanced stability and generalization ability. To ensure accurate tests and prevent node redundancy, the number of hidden nodes was set to 100; under this setting, the test accuracy of D-ELM remained relatively stable over repeated tests. The accuracy over repeated tests is shown in Figure 10. D-ELM significantly outperformed ELM, with smaller fluctuations in amplitude, robustness to the number of test iterations, and higher network stability.
Table 3 shows the results of our accuracy tests, as mentioned above. D-ELM has a higher average accuracy rate and a lower standard deviation than ELM; the D-ELM network is thus both more accurate and more stable. As a result, the performance of the classifier is improved.
We added an SVM classifier to the experiment to further assess the depth extreme learning machine. Table 4 shows the accuracy and timing of D-ELM, ELM, and SVM in training and testing on all samples, where D-ELM again has the highest accuracy in both training and testing. Although D-ELM has more network layers, its training time and test time are shorter than those of the other algorithms we tested, and its accuracy is much higher. The overall performance of D-ELM is better than that of ELM and SVM.

5. System Interface

We constructed a network model and classification optimizer based on the proposed algorithm by integrating Anaconda 3.5 and TensorFlow. We then constructed a real wood plate defect identification system in the C# language on the Microsoft Visual Studio 2017 platform. The system can identify defects in solid wood plate images and provide their position and size information. We also used a Microsoft SQL Server 2012 database to store the information before and after processing.
Our experiment on deep network feature learning mainly involved the implementation of the network framework and the training model for wood recognition. The system is based on the network training model discussed in this paper; it can be used to detect defects in the wood image database on a single sheet and display the coordinates in the X and Y directions of the defects, as shown in Figure 11. On the left side, the scanned wood images are displayed with defects marked in green boxes. The top right side of the interface shows the coordinates of each defect, the cutting position of the plank, and the type of defects. Below the table are the total numbers of defects identified by the machine and the recognition rate.

6. Conclusions

The depth extreme learning machine proposed in this paper has reasonable dimensions, effectively manages heterogeneous data, and works within an acceptable run time. Our results suggest that it is a promising new solution to problems such as obtaining labeled samples, constructing features, and training; it has excellent feature extraction ability and a fast training time. The NSST is used to preprocess the original image (i.e., reduce its complexity and dimensionality while minimizing the down-sampling required in the CNN), then SLIC is applied to optimize the CNN model training process. This method effectively reduces the redundancy of local image information and extracts relatively accurate supplementary feature contours. The optimized CNN is then used to extract wood image features. These features are input to the ELM classifier and the parameters of the related neural network are optimized: the GA is used to select the initial weights and thresholds of the ELM to improve the prediction accuracy and stability of the network model. Finally, the image data to be tested are input to the well-trained network model and the final test results are obtained.
We also compared the stability of D-ELM and ELM network models. The standard deviation of D-ELM was only 0.0967 and the accuracy of D-ELM improved by about 3% compared to ELM; the stability of the D-ELM network was also found to be higher and less affected by test quantity than ELM. We also found that D-ELM has an accuracy of up to 96.72% and a shorter test time than ELM or SVM at only 187 ms. The D-ELM network model is capable of highly accurate wood defect recognition within a very short training and detection time.

Author Contributions

Conceptualization, Y.Y. and Y.L.; methodology, Y.Y.; software, X.Z.; validation, Y.Y., Z.H. and F.D.; resources, X.Z. and Z.H.; writing—original draft preparation, Y.Y.; writing—review and editing, Y.L. and X.Z.; project administration, Y.L.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the 2019 Jiangsu Province Key Research and Development Plan by the Jiangsu Province Science and Technology under grant BE2019112, funded by the Jiangsu Province International Science and Technology Cooperation Project under grant BZ2016028, and supported by the 948 Import Program on Internationally Advanced Forestry Science and Technology by the State Forestry Bureau under grant 2014-4-48.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Qiu, Q.W.; Lau, D. Grain Effect on the Accuracy of Defect Detection in Wood Structure by Using Acoustic-Laser Technique. In Proceedings of the Nondestructive Characterization and Monitoring of Advanced Materials, Aerospace, Civil Infrastructure, and Transportation XIII, Denver, CO, USA, 3–7 March 2019.
2. Siekanski, P.; Magda, K.; Malowany, K.; Rutkiewicz, J.; Styk, A.; Krzeslowski, J.; Kowaluk, T.; Zagorski, A. On-line laser triangulation scanner for wood logs surface geometry measurement. Sensors 2019, 19, 1074.
3. Espinosa, L.; Prieto, F.; Brancheriau, L.; Lasaygues, P. Effect of wood anisotropy in ultrasonic wave propagation: A ray-tracing approach. Ultrasonics 2019, 91, 242–251.
4. Taskhiri, M.S.; Hafezi, M.H.; Harle, R.; Williams, D.; Kundu, T.; Turner, P. Ultrasonic and thermal testing to non-destructively identify internal defects in plantation eucalypts. Comput. Electron. Agric. 2020, 173.
5. Yang, H.M.; Yu, L. Feature extraction of wood-hole defects using wavelet-based ultrasonic testing. J. For. Res. 2017, 28, 395–402.
6. Lukomski, M.; Strojecki, M.; Pretzel, B.; Blades, N.; Beltran, V.L.; Freeman, A. Acoustic emission monitoring of micro-damage in wooden art objects to assess climate management strategies. Insight 2017, 59, 256–264.
7. Rescalvo, F.J.; Valverde-Palacios, I.; Suarez, E.; Roldan, A.; Gallego, A. Monitoring of carbon fiber-reinforced old timber beams via strain and multiresonant acoustic emission sensors. Sensors 2018, 18, 1224.
8. Li, C.; Zhang, Y.; Tu, W.; Jun, C.; Liang, H.; Yu, H. Soft measurement of wood defects based on LDA feature fusion and compressed sensor images. J. For. Res. 2017, 28, 1285–1292.
9. Dai, J.; Li, Y.; He, K.; Sun, J. R-FCN: Object detection via region-based fully convolutional networks. arXiv 2016, arXiv:1605.06409.
10. Szegedy, C.; Liu, W.; Jia, Y.Q.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
11. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
12. He, T.; Liu, Y.; Yu, Y.B.; Zhao, Q.; Hu, Z.K. Application of deep convolutional neural network on feature extraction and detection of wood defects. Measurement 2020, 152.
13. Hu, K.; Wang, B.J.; Shen, Y.; Guan, J.R.; Cai, Y. Defect identification method for poplar veneer based on progressive growing generated adversarial network and MASK R-CNN model. Bioresources 2020, 15, 3041–3052.
14. Shi, J.H.; Li, Z.Y.; Zhu, T.T.; Wang, D.Y.; Ni, C. Defect detection of industry wood veneer based on NAS and multi-channel mask R-CNN. Sensors 2020, 20, 4398.
15. Yang, W.X.; Jin, L.W.; Tao, D.C.; Xie, Z.C.; Feng, Z.Y. DropSample: A new training method to enhance deep convolutional neural networks for large-scale unconstrained handwritten Chinese character recognition. Pattern Recogn. 2016, 58, 190–203.
16. Wan, W.; Lee, H.J. Multi-focus image fusion based on non-subsampled shearlet transform and sparse representation. Lect. Notes Electr. Eng. 2018, 449, 120–126.
17. Singh, S.; Anand, R.S.; Gupta, D. CT and MR image information fusion scheme using a cascaded framework in ripplet and NSST domain. IET Image Process 2018, 12, 696–707.
18. Wu, W.; Qiu, Z.M.; Zhao, M.; Huang, Q.H.; Lei, Y. Visible and infrared image fusion using NSST and deep Boltzmann machine. Optik 2018, 157, 334–342.
19. Boemer, F.; Ratner, E.; Lendasse, A. Parameter-free image segmentation with SLIC. Neurocomputing 2018, 277, 228–236.
20. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848.
21. Zhu, W.T.; Miao, J.; Qing, L.Y.; Huang, G.B. Hierarchical Extreme Learning Machine for Unsupervised Representation Learning. In Proceedings of the Joint Conference on Neural Networks, Killarney, Ireland, 11–16 July 2015.
22. Uzair, M.; Shafait, F.; Ghanem, B.; Mian, A. Representation learning with deep extreme learning machines for efficient image set classification. Neural Comput. Appl. 2018, 30, 1211–1223.
23. Tang, J.X.; Deng, C.W.; Huang, G.B. Extreme learning machine for multilayer perceptron. IEEE Trans. Neural Networks Learn. Syst. 2016, 27, 809–821.
24. Yu, W.C.; Zhuang, F.Z.; He, Q.; Shi, Z.Z. Learning deep representations via extreme learning machines. Neurocomputing 2015, 149, 308–315.
25. Liu, S.L.; Feng, L.; Xiao, Y.; Wang, H.B. Robust activation function and its application: Semi-supervised kernel extreme learning method. Neurocomputing 2014, 144, 318–328.
Figure 1. Common defects of solid wood such as dead-knot, live-knot and decay.
Figure 2. CNN network skeleton.
Figure 3. Contrast diagram of optimized CNN feature extraction effects.
Figure 4. Wood image processing frame.
Figure 5. Structure of the depth extreme learning machine.
Figure 6. Variation curves of population fitness under different excitation functions: (a) Radbas; (b) Hardlim; (c) Sigmoid.
Figure 7. Flowchart of wood defect feature extraction and classification process.
Figure 8. Relationship between loss function, accuracy, and iteration number: (a) relationship between loss function and iteration; (b) accuracy graph.
Figure 9. Recognition results based on deep learning.
Figure 10. Accuracy of D-ELM and ELM after multiple tests.
Figure 11. System interface diagram based on depth learning.
Table 1. Classification accuracy of GA-ELM and ELM under different excitation functions.

Excitation Function | GA-ELM Accuracy (%) | ELM Accuracy (%) | GA-ELM Hidden Nodes | ELM Hidden Nodes
Sigmoid | 95.93 | 93.85 | 70 | 90
Hardlim | 93.98 | 91.37 | 90 | 170
Radbas | 95.37 | 93.36 | 110 | 200
Table 2. Experimental parameters.

Item | Type
Server | Intel(R) Xeon(R) CPU E5-4603 v2
Quantity of physical CPUs | 2
Main frequency of CPUs | 2.20 GHz
Memory | 16 GB
Experimental platform | Anaconda 3.5, Microsoft Visual Studio 2017
Table 3. D-ELM versus ELM stability.

Algorithm | Average Accuracy Rate (%) | Highest Accuracy Rate (%) | Lowest Accuracy Rate (%) | Standard Deviation
D-ELM | 95.92 | 96.11 | 95.65 | 0.0967
ELM | 92.84 | 93.72 | 92.16 | 0.3220
Table 4. Defect recognition of D-ELM, ELM, and SVM on wood images.

Metric | D-ELM | ELM | SVM
Training time (s) | 890 | 1879 | 2356
Training accuracy rate (%) | 99.47 | 92.31 | 90.45
Test time (ms) | 187 | 532 | 631
Test accuracy rate (%) | 96.72 | 92.16 | 91.55
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

