Article

Mechanical Properties of Wood Prediction Based on the NAGGWO-BP Neural Network

College of Engineering and Technology, Northeast Forestry University, Harbin 150040, China
* Author to whom correspondence should be addressed.
Forests 2022, 13(11), 1870; https://doi.org/10.3390/f13111870
Submission received: 4 October 2022 / Revised: 4 November 2022 / Accepted: 5 November 2022 / Published: 9 November 2022
(This article belongs to the Special Issue Physical Properties of Wood)

Abstract

The existing original BP neural network models for wood performance prediction have low fitting accuracy and imprecise prediction results. We propose a nonlinear, adaptive grouping gray wolf optimization (NAGGWO)-BP neural network model for wood performance prediction. Firstly, the original gray wolf optimization (GWO) algorithm is optimized. We propose CPM mapping (the Chebyshev mapping method combined with piecewise mapping followed by a mod operation) to generate the initial populations and improve population diversity, and an ‘S’-type nonlinear control parameter is proposed to balance the exploitation and exploration capabilities of the algorithm; an adaptive grouping strategy is also proposed, based on which the wolves are divided into the predator, wanderer, and searcher groups. The improved differential evolution strategy, the stochastic opposition-based learning strategy, and the oscillation perturbation operator are used to update the positions of the wolves in the different groups to improve the convergence speed and accuracy of the GWO. Then, the BP neural network weights and thresholds are optimized using the NAGGWO algorithm. Finally, we separately predicted heat-treated wood’s five main mechanical property parameters using different models. The experimental results show that the proposed NAGGWO-BP model achieved significantly lower mean absolute error (MAE), mean square error (MSE), and mean absolute percentage error (MAPE) values for the specimens than the BP, GWO-BP, and TSSA-BP algorithms. Therefore, this model has strong generalization ability and good prediction accuracy and reliability, which can fully meet practical engineering needs.

1. Introduction

Wood, a renewable material with high strength and low weight, has been widely used in different fields. However, its scope of application is limited by disadvantages such as weak biological durability and poor dimensional stability [1]. Among the many wood modification techniques, thermal treatment has received much attention as an environmentally friendly method [2]. Heat-treated wood is produced by heating wood to 150–260 °C and maintaining this temperature for several hours. Water vapor, nitrogen, and other gases are used as protective media during this period. Heat treatment changes the wood’s chemical composition and structure, improving its dimensional stability and mechanical properties [3]. Heat-treated wood is widely preferred in the market due to its environmentally friendly properties, excellent dimensional stability, and durability [4,5,6]. Its products are widely used in furniture, exterior wall panels of buildings, dock construction materials, etc. [7]. There have been many studies related to the heat treatment of wood. Wang and Cooper [8] showed the changes in wet swelling and the dimensional stability of wood after heat treatment under different conditions. Suri et al. [2] studied the effect of different heat treatment media on the mechanical properties of wood. The results showed that heat treatment with oil as a medium is more effective in improving wood properties than heat treatment with air. Esteves et al. [1] demonstrated that wood’s dimensional stability was improved after heat treatment, but its mechanical properties were reduced. Bayani et al. [9] studied the changes in the physical and mechanical properties of wood impregnated with silver nanosuspension. The heat treatment effect was enhanced due to the excellent thermal conductivity of the silver nanosuspension. Herrera-Diaz et al. [10] evaluated the effect of different thermal modification temperatures on the wood’s physical and mechanical properties.
The results showed that a temperature below 190 °C positively affected the mechanical properties of wood. It is essential to investigate the correlation between the heat treatment process and wood mechanical properties in order to optimize the heat treatment process and predict the quality of heat-treated lumber products. If the mechanical properties of heat-treated wood can be predicted, it will be easier to control the changes in the mechanical properties of wood over time. Such prediction can also allow the advantages of heat-treated wood to be fully exploited and provide a scientific basis for its rational use.
It is difficult to build an ideal prediction model because the correlation between the heat treatment process and wood mechanical properties is nonlinear and complex. Artificial neural networks can adequately approximate arbitrarily complex nonlinear relationships; they possess self-learning capability and the ability to find optimal solutions quickly [11]. They are often used to optimize material process parameters [12,13,14,15]. In particular, the BP neural network is widely used in wood heat treatment due to its outstanding nonlinear mapping capability and flexible network structure. Zhang et al. [16] predicted the change in wood moisture content resulting from heat treatment by using the BP network model. Yang et al. [17] used the BP neural network to predict the mechanical properties of wood after heat treatment. Chai et al. [18] predicted the changes in wood moisture content during high-frequency vacuum drying based on the BP neural network. However, the BP neural network has defects such as strong dependence on training data and slow convergence, which hinder its application in practical engineering. Some researchers have used gray wolf optimization (GWO) to improve the convergence ability of the BP neural network, which speeds up its convergence and improves the prediction accuracy [19,20]. However, the GWO algorithm tends to fall into local optima, making it difficult to meet the practical application requirements for optimization accuracy.
Taking these into consideration, a nonlinear adaptive grouping gray wolf optimization algorithm (NAGGWO) is proposed to optimize the BP neural network for the mechanical property prediction of wood. The NAGGWO alleviates the GWO’s tendency to fall into local optima. It mainly includes the following three improvements: First, the population is initialized using the presented CPM mapping to increase its diversity. Secondly, an ‘S’-type nonlinear control parameter is introduced to coordinate the exploration and exploitation abilities of the algorithm. Finally, an adaptive grouping strategy is applied to classify the wolf population, and different updating methods are used for individuals in different groups to improve the convergence speed of the GWO algorithm and its global searching ability. The parameters of the BP neural network are optimized using NAGGWO to address the problems of slow convergence and low prediction accuracy. Larch wood’s mechanical property data are used to verify the model’s effectiveness. The prediction results show that the prediction accuracy of the proposed NAGGWO-BP model is much higher than that of the traditional models. It can analyze the relationship between the parameters of the thermal modification process of wood and its mechanical properties.
All the mathematical notations are listed in Table 1.

2. The NAGGWO-BP Neural Network Prediction Model

2.1. The Principle of BP Neural Network

The standard BP neural network operates in two main phases: The first is forward information transmission, in which the input information propagates forward through the input layer, the hidden layer, and the output layer. The second is backward error propagation, in which the connection weights of each layer are modified via backpropagation according to the error between the actual value and the predicted value [21,22]. The concrete implementation is shown in Figure 1.
As can be seen from Figure 1, the initial weights and thresholds of the BP neural network are randomly generated. Moreover, the parameters are generally updated by using the gradient descent method. Such a working mechanism makes the BP neural network extremely sensitive to the initial weights, which increases the algorithm’s solving difficulty and convergence time. The initial weights and thresholds of the BP neural network are optimized using the NAGGWO algorithm, which improves its stability and accuracy.

2.2. The Traditional GWO Algorithm

The GWO was first proposed as a swarm intelligence optimization algorithm by Mirjalili et al. [23] in 2014. Gray wolves rely on a clear division of labor and cooperation to hunt and have a clear social dominance hierarchy. The gray wolf population is divided into four levels. The leading gray wolf is called the α wolf, and it is the optimal individual in the group. The next best individual is the β wolf, and the third best is the δ wolf. They represent the direction of the whole wolf group moving forward. The remaining gray wolves of the lower rank are called the ω wolves. The wolves update their positions with Equations (1) and (2) to surround the prey as follows:
$$\mathbf{D} = \left|\mathbf{C}\cdot \mathbf{X}_p(t) - \mathbf{X}(t)\right| \tag{1}$$
$$\mathbf{X}(t+1) = \mathbf{X}_p(t) - \mathbf{A}\cdot \mathbf{D} \tag{2}$$
where t denotes the current iteration number; X_p(t) is the position vector of the prey; X(t) is the position vector of the gray wolf; A and C are coefficient vectors, expressed using Equations (3)–(5) as follows:
$$\mathbf{A} = 2a\cdot \mathbf{r}_1 - a \tag{3}$$
$$\mathbf{C} = 2\cdot \mathbf{r}_2 \tag{4}$$
$$a = 2 - 2\left(\frac{t}{T_{\max}}\right) \tag{5}$$
where T max denotes the maximum number of iterations; a is the control coefficient that gradually and linearly decreases from 2 to 0; r 1 and r 2 are random variables whose values are in the range of [0, 1].
In a realistic problem, it is difficult to determine the prey location, i.e., the exact location of the optimal solution. The α wolf, β wolf, and δ wolf are assumed to have excellent prey-recognition ability. The other gray wolves update their positions according to the positions of the α, β, and δ wolves, as shown in Equations (6)–(12).
$$\mathbf{D}_\alpha = \left|\mathbf{C}_1\cdot \mathbf{X}_\alpha(t) - \mathbf{X}(t)\right| \tag{6}$$
$$\mathbf{D}_\beta = \left|\mathbf{C}_2\cdot \mathbf{X}_\beta(t) - \mathbf{X}(t)\right| \tag{7}$$
$$\mathbf{D}_\delta = \left|\mathbf{C}_3\cdot \mathbf{X}_\delta(t) - \mathbf{X}(t)\right| \tag{8}$$
$$\mathbf{X}_1 = \mathbf{X}_\alpha - \mathbf{A}_1\cdot \mathbf{D}_\alpha \tag{9}$$
$$\mathbf{X}_2 = \mathbf{X}_\beta - \mathbf{A}_2\cdot \mathbf{D}_\beta \tag{10}$$
$$\mathbf{X}_3 = \mathbf{X}_\delta - \mathbf{A}_3\cdot \mathbf{D}_\delta \tag{11}$$
$$\mathbf{X}(t+1) = \frac{\mathbf{X}_1 + \mathbf{X}_2 + \mathbf{X}_3}{3} \tag{12}$$
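As an illustrative sketch (not the authors’ code), Equations (3), (4), and (6)–(12) can be implemented for a whole population with NumPy; the leader positions are the three best wolves found so far:

```python
import numpy as np

def gwo_update(X, X_alpha, X_beta, X_delta, a, rng):
    """One GWO position update (Eqs. (6)-(12)) for a population X of shape (N, dim)."""
    new_X = np.empty_like(X)
    for i in range(len(X)):
        candidates = []
        for leader in (X_alpha, X_beta, X_delta):
            r1 = rng.random(X.shape[1])
            r2 = rng.random(X.shape[1])
            A = 2 * a * r1 - a                   # Eq. (3)
            C = 2 * r2                           # Eq. (4)
            D = np.abs(C * leader - X[i])        # Eqs. (6)-(8)
            candidates.append(leader - A * D)    # Eqs. (9)-(11)
        new_X[i] = np.mean(candidates, axis=0)   # Eq. (12): average of X1, X2, X3
    return new_X
```

Note that when a = 0 the coefficient A vanishes and every wolf moves to the mean of the three leaders, which is the fully exploitative limit of Equation (12).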

2.3. The NAGGWO Algorithm

2.3.1. Initialization of CPM Mapping

The GWO algorithm usually uses randomly generated data as the initial population, which easily leads to uneven distribution of the initial population, affects the convergence speed of wolves, and reduces the diversity of the algorithm. To solve this problem, we propose CPM mapping to initialize the population.
Chaotic motions have the characteristics of randomness and ergodicity. When solving optimization problems, these characteristics can ensure population diversity and improve the global search capability of the algorithm. Chaotic mappings include logistic mapping, piecewise mapping, etc. Logistic mapping is widely used in initializing intelligent algorithms [24,25], but its frequency is high in the ranges of [0, 0.1] and [0.9, 1], so the generated initial solutions are often not completely dispersed. Piecewise mapping has a more uniform distribution, but it loses its chaotic behavior at x = 0.5. CPM mapping is proposed to remedy this. The mathematical expression is shown in Equation (13) as follows:
$$x_{n+1} = \begin{cases} \operatorname{mod}\!\left(\dfrac{x_n}{d} + \cos\!\left(n\cos^{-1}(x_n)\right),\, 1\right), & 0 \le x_n < d \\[4pt] \operatorname{mod}\!\left(\dfrac{x_n - d}{0.5 - d} + \cos\!\left(n\cos^{-1}(x_n)\right),\, 1\right), & d \le x_n < 0.5 \\[4pt] \operatorname{mod}\!\left(\dfrac{1 - d - x_n}{0.5 - d} + \cos\!\left(n\cos^{-1}(x_n)\right),\, 1\right), & 0.5 \le x_n < 1 - d \\[4pt] \operatorname{mod}\!\left(\dfrac{1 - x_n}{d} + \cos\!\left(n\cos^{-1}(x_n)\right),\, 1\right), & 1 - d \le x_n < 1 \end{cases} \tag{13}$$
where d is the control parameter taking values in the range (0,1). CPM mapping is performed by combining the piecewise mapping method [26] with the Chebyshev mapping method [27] followed by mod operation. It makes the system chaotic even at x = 0.5. The results of several experiments show that the distribution of the system is relatively uniform for any value of d. It can be used to generate the algorithm’s initial solution and enhance the population’s diversity. The initialized population (one-dimensional) distribution when d = 0.3 is shown in Figure 2.
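A minimal sketch of CPM-based population initialization, assuming the reading of Equation (13) above (piecewise map plus a Chebyshev term cos(k·cos⁻¹(x)) of assumed order k, taken mod 1); the order k = 4 and seed value x0 are illustrative choices:

```python
import numpy as np

def cpm_map(x, d=0.3, k=4):
    """One CPM iteration: piecewise map plus a Chebyshev term, taken mod 1 (Eq. (13))."""
    cheb = np.cos(k * np.arccos(np.clip(x, 0.0, 1.0)))  # Chebyshev term (assumed form)
    if x < d:
        base = x / d
    elif x < 0.5:
        base = (x - d) / (0.5 - d)
    elif x < 1 - d:
        base = (1 - d - x) / (0.5 - d)
    else:
        base = (1 - x) / d
    return float((base + cheb) % 1.0)

def cpm_init(pop_size, dim, d=0.3, x0=0.7):
    """Generate an initial population in [0, 1) by iterating the CPM map."""
    pop = np.empty((pop_size, dim))
    x = x0
    for i in range(pop_size):
        for j in range(dim):
            x = cpm_map(x, d)
            pop[i, j] = x
    return pop
```

The resulting values would then be scaled to each decision variable’s bounds before use as wolf positions.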

2.3.2. The Nonlinear Control Parameter Adjustment Strategy

The GWO is divided into two steps: prey localization and gray wolf predation. As shown by Equation (2), coefficient A plays a vital role in balancing the global search and local exploitation of the GWO algorithm. When | A | > 1 , the wolves will expand the search area. When | A | < 1 , the wolves will narrow the search area and attack the prey. The convergence parameter a influences the magnitude of coefficient A. The value of a linearly decreases with the number of iterations. The convergence parameter a of the linear update is difficult to adapt to the actual search situation due to the complexity of the GWO algorithm’s search process. It cannot achieve strong coordination between global search and local search. Therefore, an ‘S’-type nonlinear control parameter is proposed. The mathematical model is shown in Equation (14).
$$a = 2 - \frac{2}{1 + e^{-\left(\frac{10}{T_{\max}}\right)\left(t - \frac{T_{\max}}{2}\right)}} \tag{14}$$
The proposed control parameter a slowly decreases in the early iterations, which enables the wolves to search for prey at a large pace and expand the search range of the algorithm. Parameter a rapidly decreases in the middle period, improving the algorithm’s convergence speed. Moreover, parameter a decreases at a slow velocity and maintains a small value in the later stage, which enables the algorithm to fully search around the optimal solution and improves the algorithm’s local search capability. The improved control parameter a can better balance the global and local search and improve the algorithm’s performance. The comparison of the control parameter a before and after the improvement is shown in Figure 3.
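The two schedules for a can be compared directly; the sigmoid form below follows the reconstruction of Equation (14), which starts near 2, passes through 1 at t = T_max/2, and ends near 0:

```python
import math

def a_linear(t, t_max):
    """Original linearly decreasing control parameter (Eq. (5))."""
    return 2 - 2 * t / t_max

def a_nonlinear(t, t_max):
    """Proposed 'S'-type control parameter (Eq. (14) as reconstructed)."""
    return 2 - 2 / (1 + math.exp(-(10 / t_max) * (t - t_max / 2)))
```

Unlike the linear schedule, the sigmoid keeps a close to 2 during early iterations (wide search) and close to 0 in late iterations (local refinement), with a rapid transition in the middle.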

2.3.3. The Adaptive Grouping Strategy

Since different individuals of the GWO algorithm are located in different positions, the wolves are divided into three groups according to their distance from the prey (i.e., their fitness values). They are named the predator group, the wanderer group, and the searcher group. Since it is uncertain whether the found prey is the global optimal solution, the individuals closer to the prey should quickly approach it. At the same time, due to this uncertainty, the individuals farther away from the prey approach it slowly or even ignore it in order to search for other possible prey. The individuals with high fitness values are closer to the prey, which accelerates their approach toward the prey for predation. The improved differential evolution strategy is used for their position updating to enhance the convergence ability of the algorithm. The individuals with moderate fitness values are farther from the prey; they wander slowly toward the prey and search the surrounding environment while approaching it. Their searching capability is enhanced by combining the GWO position update strategy with stochastic opposition-based learning. The individuals with poor fitness values, i.e., those farthest from the prey, choose to expand their search range to find other prey. The oscillation perturbation operator is used for their position updating to enhance the ability of the algorithm to jump out of the local optimum. The grouping mathematical model is shown in Equation (15) as follows:
$$X_{i,j} = \begin{cases} X_{i,a}, & 1 \le a < n_1 \\ X_{i,b}, & n_1 \le a < n_2 \\ X_{i,c}, & n_2 \le a \le N \end{cases} \tag{15}$$
where X_{i,a}, X_{i,b}, X_{i,c} denote the individuals in the predator, wanderer, and searcher groups, respectively; N is the number of gray wolves; n_1 and n_2 are the grouping boundaries. In other words, the wolves are ranked according to their fitness values, with the best individuals in the predator group, the moderate individuals in the wanderer group, and the poor individuals in the searcher group. n_1 and n_2 are calculated as shown in Equation (16).
$$\begin{cases} n_1 = \dfrac{1}{4}N\left(1 - \dfrac{t}{T_{\max}}\right) \\[4pt] n_2 = N - \dfrac{1}{4}N\left(\dfrac{t}{T_{\max}}\right) \end{cases} \tag{16}$$
It can be seen that the grouping boundaries of wolves are adaptively updated with the iteration. At the beginning of the iteration, the convergence speed should be increased because the wolves are dispersed. The predator group can quickly converge to the current optimal solution, so there are more individuals in the predator group in the early stage. As the iteration proceeds, the wolves tend to converge, and it is more necessary to jump out of the local optimum. The searcher group can have a larger search range, so the number of individuals in the searcher group increases as the iteration progresses.
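A small sketch of the adaptive boundaries of Equation (16), rounded to integer indices (the rounding rule is an implementation assumption; wolves are assumed sorted best-first):

```python
def group_sizes(N, t, t_max):
    """Adaptive grouping boundaries n1, n2 (Eq. (16)).

    With wolves ranked best-first, indices [0, n1) form the predator group,
    [n1, n2) the wanderer group, and [n2, N) the searcher group.
    """
    n1 = round(0.25 * N * (1 - t / t_max))
    n2 = round(N - 0.25 * N * (t / t_max))
    return n1, n2
```

For N = 20, the predator group shrinks from 5 wolves at t = 0 to 0 at t = T_max, while the searcher group grows from 0 to 5, matching the behavior described above.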

2.3.4. The Position Updating Strategy for Different Groups

The Improved Differential Evolution Strategy

The improved differential evolution strategy is used to update the position of the wolves in the predator group. The differential evolution strategy is an algorithm that evolves based on individual differences in the population [28]. It is widely used in intelligence optimization algorithms [29,30]. In the process of population evolution, individuals are recombined according to their differences from each other to obtain a more competitive intermediate population. The offspring individuals in the intermediate population compete with the parent individuals to produce a more competitive next-generation population. The mutation operation is the most significant part of the differential evolution process. Individual variation is achieved through the differential evolution strategy. A common differential strategy is to randomly select two different individuals, scale their vector differences, and perform vector synthesis with the individual to be mutated. It is calculated using Equation (17) as follows:
$$X_i^{t+1} = X_i^t + W\left(X_{r1}^t - X_{r2}^t\right) \tag{17}$$
where X i , X r 1 , X r 2 denote the three different individuals in the population. W is the scaling factor used to control the scaling scale of the difference vector.
This strategy enables the gray wolves in the predator group to quickly approach the prey by integrating the idea of differential evolution into the GWO algorithm, improving the predator group’s hunting speed. Evolving in a favorable direction according to the environment is the basis for ensuring population competitiveness. Therefore, outstanding gray wolf individuals with high competitiveness are selected as the parents of the evolving population. The expression of the variation function is based on Equation (18) as follows:
$$X_i^{t+1} = X_\alpha^t + W\left(\frac{X_\beta^t}{2} + \frac{X_\delta^t}{2} - X_i^t\right) \tag{18}$$
where W is the scaling factor with the value range of [−1, 1]. It can be seen that the strategy allows the predator group to quickly approach the immediate area of the prey. The algorithm’s convergence ability is greatly enhanced.
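The predator-group mutation of Equation (18) is a one-liner; this sketch treats positions as NumPy vectors:

```python
import numpy as np

def predator_mutation(X_i, X_alpha, X_beta, X_delta, W):
    """Improved differential mutation for the predator group (Eq. (18)).

    W is the scaling factor in [-1, 1]; the difference vector pulls the wolf
    toward the mean of the beta and delta leaders, anchored at the alpha wolf.
    """
    return X_alpha + W * (X_beta / 2 + X_delta / 2 - X_i)
```

Note that when W = 0, or when the wolf already sits at the β–δ midpoint, the update collapses to the α position, which is why this group converges quickly.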

The Stochastic Opposition-Based Learning

The position updating strategy of the GWO algorithm is used to update the individuals of the wanderer group. Opposition-based learning was introduced by Tizhoosh [31]. This strategy is widely used in swarm intelligence optimization algorithms and has achieved good experimental results [32,33]. Stochastic opposition-based learning is added to enhance the global search capability of the GWO. The stochastic reverse solution is calculated using Equation (19) as follows:
$$x_{i,j}' = UB_j + LB_j - r_3 \cdot x_{i,j} \tag{19}$$
where r 3 is the random factor taking values in the range of [0, 1]. U B j and L B j are the upper and lower bounds of the j-dimensional values. Compared with general opposition-based learning, the new solutions obtained through stochastic opposition-based learning are dynamic and can increase the diversity of the algorithm. The better individual found through the GWO position updating strategy and the stochastic opposition-based learning is retained in the wanderer group’s position iteration.
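A sketch of the wanderer-group step: generate the stochastic opposite of Equation (19) and keep whichever candidate has the better (lower) fitness; the greedy selection shown here is the assumed retention rule:

```python
import numpy as np

def stochastic_opposite(x, lb, ub, rng):
    """Stochastic opposition-based solution (Eq. (19)); r3 ~ U[0, 1) per dimension."""
    r3 = rng.random(np.shape(x))
    return ub + lb - r3 * x

def wanderer_select(x_gwo, x_opp, fitness):
    """Retain the better of the GWO-updated and opposition-based candidates."""
    return x_gwo if fitness(x_gwo) <= fitness(x_opp) else x_opp
```

Because r3 is redrawn at every call, the opposite point is dynamic rather than the fixed mirror UB + LB − x of classical opposition-based learning, which is what increases diversity.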

The Oscillation Perturbation Operator

The searcher group in the adaptive grouping introduces the oscillatory perturbation operator to increase the algorithm’s diversity and improve the ability of the GWO to jump out of the optimal local solution. The mathematical model of the oscillation operator ξ is shown in Equation (20).
$$\xi = \begin{cases} \dfrac{2r_4 - 1}{r_5}, & \text{if } t < \dfrac{T_{\max}}{2} \\[4pt] 2r_4 - 1, & \text{if } t \ge \dfrac{T_{\max}}{2} \end{cases} \tag{20}$$
where r 4 and r 5 are the random numbers in the range of [0, 1]. From Equation (20), it can be seen that the oscillation operator takes a large value in the early stage of algorithm iteration, which improves the range of the algorithm’s exploration and increases the algorithm’s diversity. Later in the algorithm iteration, a small oscillation factor is beneficial to increase the algorithm’s development capability. The searcher group’s position updating formula after introducing the oscillation perturbation operator is shown in Equation (21).
$$X_i^{t+1} = \frac{t}{T_{\max}}X_i^t + \left(1 - \frac{t}{T_{\max}}\right)X_i^t \cdot \xi \tag{21}$$
Equation (21) contains the original population information and the part of the oscillatory perturbation, which adaptively varies with the number of iterations. Its position update is not affected by the prey position, which can effectively prevent the algorithm from falling into local extreme points.
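A sketch of the searcher-group update; the early-stage division by r5 follows the reconstruction of Equation (20) (it amplifies the perturbation when r5 is small), and the small eps guard against division by zero is an implementation assumption:

```python
import numpy as np

def oscillation_xi(t, t_max, rng, eps=1e-6):
    """Oscillation operator xi (Eq. (20)): large early, within [-1, 1] late."""
    r4, r5 = rng.random(), rng.random()
    if t < t_max / 2:
        return (2 * r4 - 1) / (r5 + eps)
    return 2 * r4 - 1

def searcher_update(X_i, t, t_max, xi):
    """Searcher-group position update (Eq. (21)): a blend of the original
    position and its oscillatory perturbation, weighted by iteration progress."""
    w = t / t_max
    return w * X_i + (1 - w) * X_i * xi
```

At t = T_max the perturbed term vanishes and the position is unchanged; at t = 0 the position is fully rescaled by ξ, independent of the prey location.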

2.4. The NAGGWO-BP Algorithm

The BP neural network model randomly assigns weights and thresholds, which have many variable parameters, leading to unstable model computation [34]. The model prediction performance can be improved by optimizing the BP neural network using the GWO [35]. However, the GWO algorithm has the problems of uneven initial population distribution, slow convergence speed, and easily falling into local optimization. To solve these problems, we propose the NAGGWO algorithm. We first ensure the diversity of the initial gray wolf individuals by introducing CPM mapping. Secondly, we use the ‘S’-type nonlinear control parameter to effectively balance the local and global searching ability and improve the algorithm’s operation efficiency. Finally, we use the adaptive grouping strategy to group the wolves and adopt different updating methods for the individuals in different groups to improve the algorithm’s convergence speed and global searching ability.
The core idea of the NAGGWO-optimized BP neural network is to use the weights and thresholds of the BP neural network as the gray wolf location information. Updating the location is equivalent to updating the weights and thresholds of the BP neural network until the globally best location is found, which improves the prediction ability and prediction efficiency of the BP neural network. With the introduction of the NAGGWO algorithm, the weights and thresholds of the BP neural network can be dynamically optimized to achieve better and more stable prediction results. The flowchart of the NAGGWO-BP algorithm is shown in Figure 4. First, we normalize the data using Equation (22). Then, we apply CPM mapping, as shown in Equation (13), to initialize the gray wolf population locations. Next, the algorithm control parameter a is updated according to Equation (14), the individuals are grouped according to the adaptive grouping strategy, and the corresponding positions are updated as in Equations (15)–(21). Finally, the optimal solution is output when the number of iterations reaches the maximum value, and the optimal weights and thresholds of the BP neural network are obtained.
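A minimal sketch of how a wolf position can encode the network parameters for the 3-9-1 structure described in Section 3.2; the tanh hidden activation and linear output are illustrative assumptions, and the fitness is the training-set MSE the wolves would minimize:

```python
import numpy as np

N_IN, N_HID, N_OUT = 3, 9, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # all weights + thresholds

def decode(theta):
    """Unpack a flat wolf position vector into BP weights and thresholds."""
    i = 0
    W1 = theta[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = theta[i:i + N_HID]; i += N_HID
    W2 = theta[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = theta[i:i + N_OUT]
    return W1, b1, W2, b2

def predict(theta, X):
    """Forward pass of the 3-9-1 network for inputs X of shape (m, 3)."""
    W1, b1, W2, b2 = decode(theta)
    h = np.tanh(X @ W1 + b1)   # hidden layer
    return h @ W2 + b2         # linear output layer

def fitness(theta, X, y):
    """Training-set MSE; the quantity the wolves minimize."""
    return float(np.mean((predict(theta, X).ravel() - y) ** 2))
```

The best wolf position found by NAGGWO would then seed the BP network’s weights and thresholds before conventional gradient-based fine-tuning.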

3. Experimental Analysis

3.1. Data Preprocessing

The experimental data were obtained from the literature [17]. In this analysis, we examined the mechanical properties of 88 groups of wood after heat treatment under different conditions. The experimental data on the mechanical properties of these 88 groups are shown in Table A1 in Appendix A.
The material used for this experiment was 22 mm larch from northeastern China. The wood was heat-treated with steam at a temperature of 120–210 °C and a relative humidity of 0–100% for 0.5–3 h. The treated wood was dried to a moisture content of 8–10% and conditioned at an ambient temperature of 20 °C and a relative humidity of 65%. After equilibrium, the corresponding mechanical properties were measured. The experimental results are consistent with the conclusions reached by Ding et al. [36] and Tiryaki et al. [37], which indicates that the experimental data are accurate and reliable.
We used the same training and testing specimens as those found in the literature [38] to ensure the fairness of the model comparison. The first 58 sets of experimental data in Appendix A Table A1 were used as the training set, and the last 30 sets of experimental data were classified as the testing set. Since the magnitudes of the three input dimensions are different, Equation (22) is used to normalize the input data to avoid the effect of direct input on the training speed and prediction accuracy of the prediction model.
$$y = \frac{\left(y_{\max} - y_{\min}\right)\left(x - x_{\min}\right)}{x_{\max} - x_{\min}} + y_{\min} \tag{22}$$
where y is the normalized value of x; y_min and y_max are the bounds of the normalization interval, set to −1 and 1, respectively; x_min and x_max are the minimum and maximum values of x, respectively.
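The min-max normalization of Equation (22) can be sketched as:

```python
import numpy as np

def normalize(x, y_min=-1.0, y_max=1.0):
    """Min-max normalization of x to [y_min, y_max] (Eq. (22))."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (y_max - y_min) * (x - x_min) / (x_max - x_min) + y_min
```

The same x_min and x_max computed on the training set would also be applied to the test set so that both share one scale.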

3.2. Model Parameter Setting

The BP neural network in the NAGGWO-BP model uses a three-layer structure. The number of nodes in its input layer is 3, corresponding to the input data’s temperature, humidity, and time. The number of nodes in the output layer is 1, corresponding to the mechanical property of wood being predicted. The number of hidden layer nodes is generally selected based on the empirical Equation (23), which gives a range of 3–12 hidden nodes. After several error calculations, the number of nodes in the hidden layer was set to 9.
$$h = \sqrt{u + v} + w, \quad w \in [1, 10] \tag{23}$$
where h, u, and v denote the number of nodes at the hidden, input, and output layers, respectively.
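With u = 3 inputs and v = 1 output, Equation (23) indeed yields the hidden-node range 3–12 stated above:

```python
import math

def hidden_nodes(u, v, w):
    """Empirical hidden-node count h = sqrt(u + v) + w, w in [1, 10] (Eq. (23))."""
    assert 1 <= w <= 10, "w must lie in [1, 10]"
    return round(math.sqrt(u + v)) + w
```

The final choice w = 7 (h = 9) would then be confirmed by trial-and-error on the validation error, as the text describes.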
The NAGGWO-BP model was used to predict the mechanical properties of wood. The predicted results were compared with those of the BP, GWO-BP, and TSSA-BP neural networks to verify the model’s prediction performance. The BP neural network’s maximum number of iterations was 1000, and its target error was 0.0001; for the optimization algorithms, the maximum number of iterations was 50 and the population size was 20.

3.3. NAGGWO-BP Simulation Results Analysis

To verify the effectiveness of the NAGGWO-BP, the mean absolute error (MAE), the mean square error (MSE), and the mean absolute percentage error (MAPE) of the different algorithms were compared to evaluate their prediction performance. Smaller values indicate better model prediction performance. The experimental results are shown in Table 2, and the TSSA-BP data were obtained from the literature [38].
BP denotes the original backpropagation neural network. NAGGWO-BP, GWO-BP, and TSSA-BP denote the BP neural network after its optimization using the NAGGWO, GWO [23], and TSSA [38] models, respectively. As shown in Table 2, the MAE, MSE, and MAPE values of the NAGGWO-BP neural network prediction model are much smaller than the prediction errors of the other models. The MAE of NAGGWO-BP is reduced by 74.5–90.7%, MSE by 94.4–99.1%, and MAPE by 4.0–34.1%, compared with those of the BP neural network. This indicates that combination with NAGGWO can optimize the BP neural network performance to a great extent. The other algorithms also improve the BP neural network but remain significantly inferior to the proposed NAGGWO algorithm. This indicates that, compared with the BP, GWO-BP, and TSSA-BP models, the NAGGWO-BP neural network is more reasonable and has better prediction ability.
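The three evaluation metrics compared in Table 2 can be computed as follows (a straightforward sketch; MAPE is expressed here as a percentage):

```python
import numpy as np

def mae(y, y_hat):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(y, float) - np.asarray(y_hat, float))))

def mse(y, y_hat):
    """Mean square error."""
    return float(np.mean((np.asarray(y, float) - np.asarray(y_hat, float)) ** 2))

def mape(y, y_hat):
    """Mean absolute percentage error, in percent (assumes no zero targets)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.mean(np.abs((y - y_hat) / y)) * 100)
```

For all three metrics, smaller values indicate better prediction performance, as noted above.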
Figure 5a–e show the prediction results of the NAGGWO-BP, GWO-BP, and BP neural networks for five mechanical properties of wood. The predicted values of the BP neural network optimized using GWO and NAGGWO are closer to the actual values, indicating that both GWO and NAGGWO can significantly improve the prediction accuracy of the BP neural network; it can also be seen that NAGGWO is more effective. In addition, Figure 5f shows the convergence curves of the GWO and NAGGWO models in terms of compressive strength along the grain. The NAGGWO model is superior to the GWO model in terms of convergence speed and convergence accuracy and often achieves better results within 10 iterations. The reasons for this are analyzed as follows: (1) The CPM mapping used by the NAGGWO algorithm initializes the wolf population, which improves the diversity of the algorithm and establishes a good foundation for the subsequent optimization search. (2) The NAGGWO algorithm proposes the adaptive grouping strategy, divides the population into three parts according to fitness, and uses different position updating strategies for individuals in different groups, improving the algorithm’s search accuracy. This can optimize the BP neural network’s parameters to the maximum extent and improve its prediction ability. (3) The improved differential evolution strategy is used for the predator group so that it can quickly approach the optimal solution, which effectively improves the convergence speed of the algorithm. Meanwhile, the proposed nonlinear control parameter a keeps large values in the early stage, which gives the algorithm good global searching ability early on. Hence, the algorithm tends to converge to the optimal solution more quickly. This demonstrates that the proposed NAGGWO-BP is a prediction model with excellent performance.

4. Conclusions

The thermal modification of wood is a very complex and nonlinear process. The wood structure is complex, and its chemical composition is prone to change. Therefore, establishing an ideal mathematical model in line with the actual situation is a prerequisite for the automated control of the thermal modification of wood and an effective way to reduce wood consumption and improve wood utilization. To predict the mechanical properties of thermally modified wood, the NAGGWO-BP prediction model was established, and the prediction of Chinese larch’s mechanical properties was studied. The main conclusions are as follows:
  • The NAGGWO algorithm is proposed to solve the problem that the traditional GWO algorithm tends to fall into the local optimum. Firstly, the population is initialized by using CPM mapping. Secondly, an ‘S’-type nonlinear control parameter is proposed to balance the exploration and exploitation ability of the algorithm. Finally, different search methods are used for different groups of wolves by adaptively grouping them according to their fitness size. The solving speed and accuracy of the algorithm are improved.
  • The proposed NAGGWO-BP model updates the weights and thresholds of the BP neural network using the NAGGWO algorithm to address the problem of its imprecise prediction results. It enhances the prediction ability of the BP neural network. We applied the NAGGWO-BP model to predict the five mechanical properties of wood to validate the model. The results show that the MAE, MSE, and MAPE values of the NAGGWO-BP model are greatly reduced, compared with the original BP neural network, and the prediction ability of the algorithm is substantially enhanced.

Author Contributions

Conceptualization, W.M. and W.W.; methodology, W.M.; software, W.M.; validation, W.M., W.W. and Y.C.; formal analysis, W.M.; investigation, Y.C.; resources, W.M.; data curation, W.M.; writing—original draft preparation, W.M.; writing—review and editing, W.M.; visualization, W.M.; supervision, Y.C.; project administration, W.W.; funding acquisition, W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fundamental Research Funds for the Central Universities, grant number 2572019BL04, and the Scientific Research Foundation for the Returned Overseas Chinese Scholars of Heilongjiang Province, grant number LC201407.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are openly available in BioResources at http://doi.org/10.15376/biores.10.3.5758-5776, reference number [17] (accessed on 25 October 2021). The data presented in this study are also available in the article (Appendix A).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Wood treatment conditions and corresponding mechanical properties.

Test Temperature/°C | Test Time/h | Test Humidity/% | Axial Compressive Strength/MPa | Bending Strength/MPa | Bending Modulus of Elasticity/GPa | Radial Hardness/MPa | Tangential Hardness/MPa
120 | 0.5 | 0 | 39.2 | 67.4 | 9.093 | 14.12 | 15.56
120 | 0.5 | 40 | 39.1 | 65.3 | 9.038 | 13.02 | 14.69
120 | 0.5 | 60 | 38.0 | 69.7 | 9.100 | 14.67 | 15.08
120 | 0.5 | 100 | 36.7 | 67.2 | 8.845 | 14.65 | 15.45
120 | 1.0 | 0 | 38.4 | 67.8 | 8.649 | 13.98 | 14.36
120 | 1.0 | 40 | 37.6 | 66.4 | 8.752 | 12.98 | 15.59
120 | 1.0 | 60 | 38.6 | 67.8 | 9.245 | 13.78 | 15.32
120 | 1.0 | 100 | 38.1 | 63.1 | 7.895 | 14.55 | 14.23
120 | 2.0 | 0 | 39.5 | 66.9 | 9.074 | 13.33 | 14.23
120 | 2.0 | 40 | 38.6 | 68.2 | 8.945 | 12.55 | 14.58
120 | 2.0 | 60 | 36.5 | 65.2 | 8.854 | 13.25 | 14.89
120 | 2.0 | 100 | 41.9 | 63.2 | 8.933 | 13.36 | 14.56
120 | 3.0 | 0 | 37.5 | 66.5 | 8.900 | 13.56 | 14.78
120 | 3.0 | 40 | 39.8 | 67.6 | 8.963 | 13.45 | 14.45
120 | 3.0 | 60 | 37.6 | 66.6 | 8.745 | 13.01 | 14.69
120 | 3.0 | 100 | 38.9 | 64.2 | 8.745 | 12.45 | 14.78
140 | 0.5 | 0 | 36.7 | 66.7 | 8.978 | 14.69 | 15.56
140 | 0.5 | 40 | 36.9 | 67.5 | 8.845 | 13.06 | 15.02
140 | 0.5 | 60 | 35.8 | 66.8 | 9.155 | 14.02 | 14.23
140 | 0.5 | 100 | 38.4 | 65.3 | 8.877 | 15.02 | 15.01
140 | 1.0 | 0 | 37.4 | 66.5 | 9.179 | 14.16 | 15.68
140 | 1.0 | 40 | 36.0 | 64.5 | 9.137 | 13.05 | 15.01
140 | 1.0 | 60 | 37.2 | 67.2 | 9.024 | 13.49 | 15.17
140 | 1.0 | 100 | 37.5 | 63.1 | 8.823 | 13.45 | 15.48
140 | 2.0 | 0 | 37.9 | 66.3 | 8.823 | 13.54 | 14.69
140 | 2.0 | 40 | 38.5 | 65.7 | 8.852 | 14.69 | 14.58
140 | 2.0 | 60 | 37.6 | 67.1 | 8.799 | 13.99 | 14.74
140 | 2.0 | 100 | 35.5 | 62.7 | 8.900 | 14.28 | 15.63
140 | 3.0 | 0 | 36.9 | 65.4 | 8.811 | 14.39 | 14.23
140 | 3.0 | 40 | 38.9 | 64.6 | 8.934 | 13.23 | 14.56
140 | 3.0 | 60 | 38.2 | 65.5 | 8.654 | 14.23 | 13.65
140 | 3.0 | 100 | 39.2 | 62.1 | 8.798 | 13.56 | 14.02
160 | 0.5 | 0 | 36.9 | 66.3 | 8.788 | 14.89 | 14.99
160 | 0.5 | 40 | 39.1 | 66.9 | 9.011 | 14.87 | 14.36
160 | 0.5 | 60 | 37.1 | 66.3 | 8.745 | 14.58 | 14.78
160 | 0.5 | 100 | 38.9 | 65.8 | 8.712 | 14.69 | 15.69
160 | 1.0 | 0 | 39.1 | 62.4 | 8.679 | 13.42 | 14.56
160 | 1.0 | 40 | 39.7 | 61.4 | 8.645 | 14.09 | 15.30
160 | 1.0 | 60 | 37.8 | 62.2 | 8.798 | 14.69 | 15.90
160 | 1.0 | 100 | 38.7 | 62.8 | 8.679 | 13.58 | 15.63
160 | 2.0 | 0 | 35.9 | 62.2 | 8.727 | 14.63 | 13.92
160 | 2.0 | 40 | 35.8 | 62.1 | 8.557 | 14.02 | 14.17
160 | 2.0 | 60 | 36.6 | 63.1 | 8.687 | 15.17 | 14.28
160 | 2.0 | 100 | 38.2 | 60.9 | 8.611 | 14.65 | 15.09
160 | 3.0 | 0 | 37.2 | 61.9 | 8.611 | 13.65 | 14.36
160 | 3.0 | 40 | 39.1 | 61.5 | 8.534 | 13.47 | 14.56
160 | 3.0 | 60 | 39.5 | 60.8 | 8.601 | 13.58 | 13.89
160 | 3.0 | 100 | 37.3 | 60.5 | 8.552 | 13.69 | 14.36
180 | 0.5 | 0 | 38.9 | 65.9 | 8.601 | 15.21 | 14.03
180 | 0.5 | 40 | 39.1 | 65.3 | 8.689 | 15.98 | 14.56
180 | 0.5 | 60 | 37.6 | 66.1 | 8.645 | 16.01 | 13.97
180 | 0.5 | 100 | 36.1 | 65.7 | 8.599 | 14.32 | 14.33
180 | 1.0 | 0 | 38.2 | 65.4 | 8.623 | 15.09 | 13.79
180 | 1.0 | 40 | 39.4 | 64.9 | 8.645 | 14.98 | 14.25
180 | 1.0 | 60 | 37.6 | 66.3 | 8.579 | 15.45 | 14.08
180 | 1.0 | 100 | 38.1 | 64.8 | 8.545 | 14.33 | 13.64
180 | 2.0 | 0 | 39.5 | 65.1 | 8.574 | 14.65 | 13.69
180 | 2.0 | 40 | 38.7 | 65.8 | 8.600 | 14.13 | 13.59
180 | 2.0 | 60 | 38.2 | 64.5 | 8.532 | 13.99 | 14.49
180 | 2.0 | 100 | 37.1 | 64.2 | 8.544 | 15.10 | 13.54
180 | 3.0 | 0 | 38.1 | 64.1 | 8.600 | 14.21 | 14.06
180 | 3.0 | 40 | 37.5 | 64.2 | 8.541 | 13.99 | 14.21
180 | 3.0 | 60 | 37.8 | 64.8 | 8.456 | 14.58 | 13.98
180 | 3.0 | 100 | 38.5 | 63.8 | 8.499 | 14.99 | 13.69
200 | 0.5 | 0 | 36.5 | 62.1 | 8.483 | 12.00 | 13.60
200 | 0.5 | 40 | 35.4 | 60.6 | 8.475 | 11.96 | 12.99
200 | 0.5 | 60 | 35.1 | 59.9 | 8.399 | 11.45 | 13.21
200 | 1.0 | 0 | 34.5 | 61.9 | 8.422 | 11.69 | 12.98
200 | 1.0 | 40 | 35.8 | 60.8 | 8.489 | 11.46 | 12.64
200 | 1.0 | 60 | 34.1 | 61.2 | 8.321 | 11.54 | 12.35
200 | 2.0 | 0 | 34.6 | 61.2 | 8.369 | 11.99 | 13.02
200 | 2.0 | 40 | 35.4 | 60.8 | 8.354 | 11.15 | 12.69
200 | 2.0 | 60 | 34.5 | 60.5 | 8.211 | 10.65 | 12.49
200 | 3.0 | 0 | 34.1 | 60.9 | 8.249 | 10.68 | 12.73
200 | 3.0 | 40 | 34.2 | 59.8 | 8.231 | 11.05 | 12.57
200 | 3.0 | 60 | 33.8 | 58.2 | 8.011 | 10.22 | 12.37
210 | 0.5 | 0 | 34.1 | 50.1 | 7.856 | 10.23 | 10.98
210 | 0.5 | 40 | 33.2 | 50.8 | 7.789 | 10.59 | 9.98
210 | 0.5 | 60 | 32.1 | 49.9 | 7.865 | 10.55 | 10.23
210 | 1.0 | 0 | 33.9 | 50.6 | 7.765 | 10.21 | 10.65
210 | 1.0 | 40 | 32.9 | 49.8 | 7.712 | 9.98 | 10.21
210 | 1.0 | 60 | 32.8 | 48.9 | 7.498 | 10.01 | 10.65
210 | 2.0 | 0 | 32.9 | 49.1 | 7.689 | 9.98 | 9.64
210 | 2.0 | 40 | 32.5 | 49.5 | 7.712 | 9.65 | 9.35
210 | 2.0 | 60 | 31.8 | 49.6 | 7.623 | 10.03 | 9.67
210 | 3.0 | 0 | 31.5 | 47.8 | 7.500 | 9.21 | 8.91
210 | 3.0 | 40 | 30.5 | 46.5 | 7.412 | 9.10 | 8.21
210 | 3.0 | 60 | 30.8 | 45.1 | 7.321 | 9.03 | 8.99

References

  1. Esteves, B.; Ferreira, H.; Viana, H.; Ferreira, J.; Domingos, I.; Cruz-Lopes, L.; Jones, D.; Nunes, L. Termite Resistance, Chemical and Mechanical Characterization of Paulownia tomentosa Wood before and after Heat Treatment. Forests 2021, 12, 1114.
  2. Suri, I.F.; Purusatama, B.D.; Kim, J.H.; Yang, G.U.; Prasetia, D.; Kwon, G.J.; Hidayat, W.; Lee, S.H.; Febrianto, F.; Kim, N.H. Comparison of physical and mechanical properties of Paulownia tomentosa and Pinus koraiensis wood heat-treated in oil and air. Eur. J. Wood Wood Prod. 2022, 80, 1389–1399.
  3. Wang, W.; Ma, W.; Wu, M.; Sun, L. Effect of Water Molecules at Different Temperatures on Properties of Cellulose Based on Molecular Dynamics Simulation. Bioresources 2022, 17, 269–280.
  4. Esteves, B.; Marques, A.V.; Domingos, I.; Pereira, H. Heat-induced colour changes of pine (Pinus pinaster) and eucalypt (Eucalyptus globulus) wood. Wood Sci. Technol. 2008, 42, 369–384.
  5. Huang, X.; Kocaefe, D.; Kocaefe, Y.; Boluk, Y.; Pichette, A. A spectrocolorimetric and chemical study on color modification of heat-treated wood during artificial weathering. Appl. Surf. Sci. 2012, 258, 5360–5369.
  6. Navickas, P.; Albrektas, D. Effect of Heat Treatment on Sorption Properties and Dimensional Stability of Wood. Mater. Sci.-Medzg. 2013, 19, 291–294.
  7. Bekhta, P.; Niemz, P. Effect of high temperature on the change in color, dimensional stability and mechanical properties of spruce wood. Holzforschung 2003, 57, 539–546.
  8. Wang, J.Y.; Cooper, P.A. Effect of oil type, temperature and time on moisture properties of hot oil-treated wood. Holz Als Roh-Und Werkst. 2005, 63, 417–422.
  9. Bayani, S.; Taghiyari, H.R.; Papadopoulos, A.N. Physical and Mechanical Properties of Thermally-Modified Beech Wood Impregnated with Silver Nano-Suspension and Their Relationship with the Crystallinity of Cellulose. Polymers 2019, 11, 1538.
  10. Herrera-Diaz, R.; Sepulveda-Villarroel, V.; Torres-Mella, J.; Salvo-Sepulveda, L.; Llano-Ponte, R.; Salinas-Lira, C.; Peredo, M.A.; Ananias, R.A. Influence of the wood quality and treatment temperature on the physical and mechanical properties of thermally modified radiata pine. Eur. J. Wood Wood Prod. 2019, 77, 661–671.
  11. Cai, X.; Riedl, B.; Zhang, S.Y.; Wan, H. Effects of nanofillers on water resistance and dimensional stability of solid wood modified by melamine-urea-formaldehyde resin. Wood Fiber Sci. 2007, 39, 307–318.
  12. Hussain, S.F.; Hussain, G.; Rahman, N. Artificial neural network modelling and optimization of elastic and an-elastic spring back in polymer parts produced through ISF. Int. J. Adv. Manuf. Technol. 2022, 118, 2163–2176.
  13. Prikeznik, M.; Srcic, S. Artificial neural networks for investigation of the most important factors of industrial tablet manufacturing on the dissolution of active pharmaceutical ingredients as critical quality attributes. Farmacia 2021, 69, 732–740.
  14. Shaik, N.B.; Mantrala, K.M.; Narayana, K.L. Prediction of corrosion properties of LENS (TM) deposited cobalt, chromium and molybdenum alloy using artificial neural networks. Int. J. Mater. Prod. Technol. 2021, 62, 4–15.
  15. Wang, C.-S.; Hsiao, Y.-H.; Chang, H.-Y.; Chang, Y.-J. Process Parameter Prediction and Modeling of Laser Percussion Drilling by Artificial Neural Networks. Micromachines 2022, 13, 529.
  16. Zhang, D.; Liu, Y.; Cao, J.; Sun, L. Neural Network Prediction Model of Wood Moisture Content for Drying Process. Sci. Silvae Sin. 2008, 44, 94–98.
  17. Yang, H.; Cheng, W.; Han, G. Wood Modification at High Temperature and Pressurized Steam: A Relational Model of Mechanical Properties Based on a Neural Network. Bioresources 2015, 10, 5758–5776.
  18. Chai, H.; Chen, X.; Cai, Y.; Zhao, J. Artificial Neural Network Modeling for Predicting Wood Moisture Content in High Frequency Vacuum Drying Process. Forests 2019, 10, 16.
  19. Hadavandi, E.; Mostafayi, S.; Soltani, P. A Grey Wolf Optimizer-based neural network coupled with response surface method for modeling the strength of siro-spun yarn in spinning mills. Appl. Soft Comput. 2018, 72, 1–13.
  20. Tian, Y.; Yu, J.; Zhao, A. Predictive model of energy consumption for office building by using improved GWO-BP. Energy Rep. 2020, 6, 620–627.
  21. Hu, R.; Wen, S.; Zeng, Z.; Huang, T. A short-term power load forecasting model based on the generalized regression neural network with decreasing step fruit fly optimization algorithm. Neurocomputing 2017, 221, 24–31.
  22. Wang, J.; Shi, P.; Jiang, P.; Hu, J.; Qu, S.; Chen, X.; Chen, Y.; Dai, Y.; Xiao, Z. Application of BP Neural Network Algorithm in Traditional Hydrological Model for Flood Forecasting. Water 2017, 9, 48.
  23. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  24. Yan, Y.; Ma, H.; Li, Z. An Improved Grasshopper Optimization Algorithm for Global Optimization. Chin. J. Electron. 2021, 30, 451–459.
  25. Lu, Q.Z.; Jiang, J.H.; Yu, R.Q.; Shen, G.L. A genetic algorithm based on prepotency evolution using chaotic initiation used for network training. J. Chem. Inf. Comput. Sci. 2003, 43, 1132–1137.
  26. Leriche, R.; Sienra, G. Dynamical Aspects of Piecewise Conformal Maps. Qual. Theory Dyn. Syst. 2019, 18, 1237–1261.
  27. Choi, J.-Y.; Im, D.K.; Park, J.; Choi, S. Prediction of Dynamic Stability Using Mapped Chebyshev Pseudospectral Method. Int. J. Aerosp. Eng. 2018, 2018, 2508153.
  28. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
  29. Sun, G.; Yang, B.; Yang, Z.; Xu, G. An adaptive differential evolution with combined strategy for global numerical optimization. Soft Comput. 2020, 24, 6277–6296.
  30. Das, S.; Mullick, S.S.; Suganthan, P.N. Recent advances in differential evolution—An updated survey. Swarm Evol. Comput. 2016, 27, 1–30.
  31. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce, Vienna, Austria, 28–30 November 2005.
  32. Tubishat, M.; Idris, N.; Shuib, L.; Abushariah, M.A.M.; Mirjalili, S. Improved Salp Swarm Algorithm based on opposition based learning and novel local search algorithm for feature selection. Expert Syst. Appl. 2020, 145, 113122.
  33. Dhargupta, S.; Ghosh, M.; Mirjalili, S.; Sarkar, R. Selective Opposition based Grey Wolf Optimization. Expert Syst. Appl. 2020, 151, 113389.
  34. Bai, H.R.; Chu, Z.Y.; Wang, D.W.; Bao, Y.; Qin, L.Y.; Zheng, Y.H.; Li, F.M. Predictive control of microwave hot-air coupled drying model based on GWO-BP neural network. Dry. Technol. 2022.
  35. Liang, Q.; Zhang, X.M.; Liu, X.; Li, Y.L. Prediction of high-temperature flow stress of HMn64-8-5-1.5 manganese brass alloy based on modified Zerilli-Armstrong, Arrhenius and GWO-BPNN model. Mater. Res. Express 2022, 9, 9.
  36. Ding, T.; Gu, L.; Li, T. Influence of steam pressure on physical and mechanical properties of heat-treated Mongolian pine lumber. Eur. J. Wood Wood Prod. 2011, 69, 121–126.
  37. Tiryaki, S.; Aydin, A. An artificial neural network model for predicting compression strength of heat treated woods and comparison with a multiple linear regression model. Constr. Build. Mater. 2014, 62, 102–108.
  38. Li, N.; Wang, W. Prediction of Mechanical Properties of Thermally Modified Wood Based on TSSA-BP Model. Forests 2022, 13, 160.
Figure 1. Flowchart of BP neural network.
Figure 2. Population initialization in CPM mapping: (a) scatter map; (b) frequency distribution histogram.
Figure 3. The changing curve of the convergence parameter a.
Figure 4. Flowchart of NAGGWO-BP algorithm.
Figure 5. Comparison between predicted results and actual values for different models: (a) compressive strength along grain; (b) flexural strength; (c) flexural modulus of elasticity; (d) radial hardness; (e) tangential hardness; (f) convergence curve of GWO-BP and NAGGWO-BP.
Table 1. Symbols and their meanings.

Symbol | Meaning
t | The number of current iterations
X(t) | The position vector of the gray wolf
X_p(t) | The position vector of the prey
A, D | The coefficient vectors
T_max | The maximum number of iterations
r_1, r_2, r_3, r_4, r_5 | Random variables
d | Control parameter of CPM mapping
a | Nonlinear control parameter of GWO
X_{i,a}, X_{i,b}, X_{i,c} | Individuals in the predator, wanderer, and searcher groups
n | The number of gray wolves
n_1, n_2 | Group boundaries of the gray wolves
W | Scaling factor
ξ | The oscillation operator
y_min, y_max | The bounds of the normalization interval
x_min, x_max | The minimum and maximum values of x
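The last two rows of Table 1 refer to the min-max normalization that is typically applied to inputs and targets before BP training. A minimal sketch, assuming a target interval of [-1, 1] (the specific interval used in the paper is not restated in this excerpt):

```python
def min_max_scale(xs, y_min=-1.0, y_max=1.0):
    # Maps each x from [x_min, x_max] onto [y_min, y_max]; assumes the
    # values in xs are not all identical (otherwise the span is zero).
    x_min, x_max = min(xs), max(xs)
    return [y_min + (y_max - y_min) * (x - x_min) / (x_max - x_min) for x in xs]

# Example: scaling three axial compressive strength values from Table A1
scaled = min_max_scale([39.2, 36.7, 41.9])
```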
Table 2. Model and error quantity.

Property | Model | MAE | MSE | MAPE
Axial Compressive Strength | NAGGWO-BP | 0.65 | 0.73 | 1.90%
Axial Compressive Strength | GWO-BP | 1.26 | 2.80 | 3.72%
Axial Compressive Strength | BP | 6.99 | 63.14 | 20.98%
Axial Compressive Strength | TSSA-BP | 1 | 1.48 | 2.90%
Bending Strength | NAGGWO-BP | 0.79 | 0.99 | 1.38%
Bending Strength | GWO-BP | 2.57 | 12.48 | 4.72%
Bending Strength | BP | 8.58 | 113.37 | 16.67%
Bending Strength | TSSA-BP | 3.68 | 16.99 | 6.81%
Bending Modulus of Elasticity | NAGGWO-BP | 103.52 | 16,174 | 1.27%
Bending Modulus of Elasticity | GWO-BP | 272.20 | 94,416 | 3.41%
Bending Modulus of Elasticity | BP | 406.44 | 289,351 | 5.25%
Bending Modulus of Elasticity | TSSA-BP | 282.56 | 235,795 | 3.12%
Radial Hardness | NAGGWO-BP | 0.37 | 0.21 | 3.27%
Radial Hardness | GWO-BP | 1.21 | 2.66 | 10.51%
Radial Hardness | BP | 3.90 | 19.32 | 37.39%
Radial Hardness | TSSA-BP | 0.66 | 0.66 | 5.64%
Tangential Hardness | NAGGWO-BP | 0.39 | 0.25 | 3.29%
Tangential Hardness | GWO-BP | 1.10 | 1.74 | 9.98%
Tangential Hardness | BP | 1.78 | 5.23 | 17.36%
Tangential Hardness | TSSA-BP | 0.99 | 1.57 | 9.61%
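The three error measures reported in Table 2 follow their standard definitions; a sketch of how each is computed from paired actual/predicted values:

```python
def mae(y_true, y_pred):
    # Mean absolute error
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    # Mean square error
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    # Mean absolute percentage error, in percent; requires nonzero y_true
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)
```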
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Ma, W.; Wang, W.; Cao, Y. Mechanical Properties of Wood Prediction Based on the NAGGWO-BP Neural Network. Forests 2022, 13, 1870. https://doi.org/10.3390/f13111870