Article

Prediction of Silicon Content in the Hot Metal of a Blast Furnace Based on FPA-BP Model

1 School of Metallurgical Engineering, Xi’an University of Architecture and Technology, Xi’an 710055, China
2 State Key Laboratory of New Technology of Iron and Steel Metallurgy, Beijing University of Science and Technology, Beijing 100083, China
* Authors to whom correspondence should be addressed.
Metals 2023, 13(5), 918; https://doi.org/10.3390/met13050918
Submission received: 6 March 2023 / Revised: 19 April 2023 / Accepted: 5 May 2023 / Published: 9 May 2023
(This article belongs to the Section Extractive Metallurgy)

Abstract

In the process of blast furnace smelting, the stability of the hearth thermal state is essential. By analyzing the silicon content in hot metal and its change trend, the operating status of the blast furnace can be judged to ensure its stable and smooth operation. Based on the error back-propagation (BP) neural network, the flower pollination algorithm (FPA) is used to optimize the weights and thresholds of the BP neural network, and a prediction model for silicon content is established. At the same time, principal component analysis is used to reduce the dimension of the input sequence and obtain the relevant indicators. These indicators are used as the input and the silicon content in the hot metal as the output; the model is trained on this data, and the trained model is then used for prediction. The results show that the hit rate of the proposed model is 16% higher than that of the non-optimized BP prediction model. The evaluation indicators and running speed of the model are also improved compared with the BP prediction model, so it can be applied more accurately to predict the silicon content of the hot metal.

1. Introduction

Blast furnace ironmaking is essential in iron and steel production. Iron ore is reduced and melted into slag and hot metal in the blast furnace through various physical and chemical changes. The stable thermal state of the hearth plays an essential role in the stable operation and reaction of the blast furnace [1,2]. In practice, the smelting process of the blast furnace is highly complex and closed, and it is difficult to measure the hearth temperature directly [3]. The change in the silicon content of hot metal is closely related to the thermal stability of the blast furnace hearth. Therefore, the shift in silicon content in the hot metal is generally used to indirectly reflect the thermal state change of the blast furnace hearth [4,5,6]. Reasonable control of silicon content in the hot metal can not only maintain the stability of the blast furnace, predict the fuel ratio of the blast furnace, and improve the utilization factor, but can also reduce the smelting slag for subsequent steelmaking links [7,8,9]. Therefore, the accurate prediction of the silicon content in hot metal is an essential prerequisite for maintaining the smooth operation of blast furnaces and a necessary guarantee to achieve energy conservation and emission reduction.
To date, scholars at home and abroad have undertaken much work on predicting the silicon content in hot metal. The prediction models adopted are generally divided into mechanism-based, empirical, and data-driven models [10]. Mechanism-based and empirical models rely on theoretical knowledge and artificial field experience for prediction and guidance. However, because the internal reactions of the blast furnace are complex and human judgment is highly subjective, these models can easily produce significant deviations in the prediction results. Data-driven models are mainly used by operators to analyze historical data, seek correlations between data, and fully mine the decision-making relationships behind the data; they can achieve better prediction accuracy and generalization performance, which is more in line with the blast furnace ironmaking process, and in recent years they have received widespread attention and made considerable progress. Using statistical knowledge, a prediction model of silicon content in hot metal based on rough set theory and a BP neural network [11], a weighted extreme learning machine prediction model [12], and a genetic-algorithm-optimized BP neural network prediction model [13] have been established, and good results have been achieved. In a prediction model for silicon content in hot metal, the rationality of the input sequence and the prediction method determines the prediction accuracy, training speed, and industrial adaptability of the model [14,15]. Previous studies on the prediction of silicon content in hot metal have mainly used relatively stable operating data. Due to the complexity of blast furnace smelting, furnace condition fluctuations occur from time to time, and prediction accuracy deteriorates when furnace conditions become significantly unstable [16,17,18,19].
Therefore, developing a new prediction model for silicon content in hot metal with a high prediction hit rate, good stability, adaptability to furnace condition fluctuations, and a small prediction error is significant.
This paper proposes a prediction model based on the combination of principal component analysis (PCA) and a flower pollination algorithm-error backpropagation neural network (FPA-BP). The input sequence of the model is determined by PCA dimension reduction processing. The prediction model of silicon content in hot metal is established based on the complex furnace condition fluctuation data of a steel plant to provide a new strategy for the stability of the blast furnace.

2. Data Analysis and Preprocessing

2.1. Data Source

The data used are from the actual operation of an iron and steel plant. There are 300 groups of data, and each group contains 64 attributes. The data are divided into five parts: charge structure, blast furnace operating parameters, tapping composition, gas composition, and slag composition. The selection of input parameters plays an important role in the prediction structure and results. If all attributes are taken as inputs, the model structure becomes extremely complex and difficult to realize; if too few attributes are selected, key factors affecting the model will be missing, leading to model failure. Therefore, it is crucial to select the input attributes of the model reasonably. Based on on-site data collection combined with manual experience, 21 attributes closely related to the silicon content of the hot metal were selected as influencing factors: oxygen enrichment, furnace coke ratio, hot blast temperature, furnace top temperature, furnace top pressure, permeability index, comprehensive smelting strength, inlet temperature, tuyere area, hot blast pressure, CO utilization rate, pressure difference, water temperature difference, molten iron temperature, material batch, coal injection ratio, wind speed, comprehensive coke ratio, outlet temperature, coal injection amount, and silicon content of the previous tap. The above variables were labeled zi (i = 1, 2, …, 21), and the silicon content of the hot metal was selected as the target variable. Part of the original data is shown in Table 1, below.

2.2. Data Missing Value Processing

Data recording can fail due to blast furnace sensor faults or operator error during the smelting process. Missing data were generally filled using Lagrange linear interpolation.
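As a sketch of this preprocessing step, missing values in a one-dimensional measurement series can be filled by linear interpolation between the nearest recorded neighbours (a simple stand-in for the Lagrange linear interpolation mentioned above; the function name and NumPy-based approach are illustrative, not the authors' code):

```python
import numpy as np

def fill_missing_linear(values):
    """Fill missing (NaN) entries of a measurement series by linear
    interpolation between the nearest recorded neighbours -- a simple
    stand-in for the Lagrange linear interpolation used in the paper."""
    values = np.asarray(values, dtype=float)
    idx = np.arange(len(values))
    mask = np.isnan(values)
    # np.interp evaluates the piecewise-linear fit through the known points
    values[mask] = np.interp(idx[mask], idx[~mask], values[~mask])
    return values
```

For example, a gap in a hot blast temperature series such as [1108, NaN, 1103] would be filled with the midpoint 1105.5.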

2.3. Data Normalization

The dimensions and units of different parameters are not uniform and often differ significantly, which complicates the analysis process and affects the results. Normalizing the data removes the influence of dimension between parameters: after normalization, all parameters are on the same scale, which is convenient for comparison and analysis.
The Z-score standardization method was used for normalization; it is simple and fast and has a certain anti-interference ability. The specific calculation is shown in Formula (1).
$z' = (z - \mu) / \delta$
where z′ is the value after normalization; z is the original value; μ is the mean of the population sample; and δ is the standard deviation of the population sample.
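A minimal sketch of Formula (1) (illustrative only; the paper does not publish its code):

```python
import numpy as np

def z_score(z):
    """Formula (1): z' = (z - mu) / delta, where mu is the sample mean
    and delta the sample standard deviation."""
    z = np.asarray(z, dtype=float)
    return (z - z.mean()) / z.std()
```

After this transformation every attribute has zero mean and unit standard deviation, so attributes of very different magnitudes (e.g. blast volume vs. silicon content) become directly comparable.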

2.4. Principal Component Analysis Screening Input Sequence

When using a neural network for prediction, the selection of the input sequence is critical: not only the number of input parameters, but also the correlation and coupling between parameters, must be considered [20]. PCA was used to screen the input parameters. The core idea of principal component analysis is to solve the correlation matrix of the input variables by orthogonal transformation and to obtain the cumulative variance contribution rate from the eigenvalues of the correlation matrix, thereby obtaining the principal components of the original variables. The multidimensional data were reduced to a set of comprehensive indicators while retaining the integrity of the information, improving the prediction accuracy of the neural network model.
The steps of principal component analysis are as follows:
Step 1: Calculate the normalized data correlation coefficient matrix R.
Use Equation (2) to calculate the correlation coefficient matrix $R = (r_{ij})_{21 \times 21}$ of the normalized data:
$r_{ij} = \dfrac{\sum_{k=1}^{300} (z_{ik} - \bar{z}_i)(z_{jk} - \bar{z}_j)}{\sqrt{\sum_{k=1}^{300} (z_{ik} - \bar{z}_i)^2 \cdot \sum_{k=1}^{300} (z_{jk} - \bar{z}_j)^2}}$
In the equation, rij is the correlation coefficient between factor i and factor j, where i, j = 1, 2, …, 21.
Step 2: Calculate the eigenvalues and eigenvectors of a matrix.
Calculate the eigenvalues of the correlation coefficient matrix and arrange them in descending order, $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_{21} \ge 0$, with corresponding eigenvectors $e_i$ (i = 1, 2, …, 21), thus obtaining a set of principal components $X_q$, as shown in Equation (3):
$X_q = \sum_{i=1}^{21} e_i \cdot z_i$
where $i, q = 1, 2, \dots, n$ and $\|e_i\| = 1$.
Step 3: Calculate the principal component contribution rate and cumulative contribution rate.
Calculate the contribution rate of each principal component, where the contribution rate of the k-th principal component is shown in Equation (4):
$g_k = \lambda_k \big/ \sum_{i=1}^{21} \lambda_i \quad (k = 1, 2, \dots, 21)$
The cumulative contribution rate of the first k principal components is then obtained, as shown in Equation (5):
$l_k = \sum_{i=1}^{k} \lambda_i \big/ \sum_{i=1}^{21} \lambda_i \quad (k = 1, 2, \dots, 21)$
Taking the eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_m$ whose cumulative contribution rate reaches 90%, the corresponding first m principal components were used to reduce the dimensionality of these 21 indicators. The results are shown in Table 2.
From Table 2, it can be seen that once the cumulative contribution rate of the first nine principal components reached 90%, extracting the first nine principal components could fully reflect the impact of input variables on the silicon content of molten iron.
Step 4: Calculate principal component load and principal component score.
Based on the previous calculation results, use Equations (6) and (7) to calculate the principal component loadings $\beta_i$ and corresponding scores $\gamma_i$ from the eigenvalues $\lambda_i$ and eigenvectors $e_i$, respectively:
$\beta_i = \sqrt{\lambda_i} \cdot e_i$
$\gamma_i = z' \cdot \beta_i$
The relationship between nine principal components and twenty-one variables is shown in Equation (8):
$X_1 = a_{1,1} \cdot z'_1 + a_{1,2} \cdot z'_2 + \dots + a_{1,21} \cdot z'_{21}$
$\quad\vdots$
$X_9 = a_{9,1} \cdot z'_1 + a_{9,2} \cdot z'_2 + \dots + a_{9,21} \cdot z'_{21}$
In the formula, z′i (i = 1, 2, …, 21) is the normalized value of the input variable and aij (i = 1, 2, …, 9, j = 1, 2, …, 21) is the main component coefficient matrix, as shown in Table 3.
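The four steps above can be sketched as follows (an illustrative NumPy reconstruction, not the authors' code; the function name `pca_screen` and its return values are assumptions):

```python
import numpy as np

def pca_screen(Z, target=0.90):
    """Steps 1-3 above: normalize, build the correlation matrix R,
    eigen-decompose it, and keep the first m components whose
    cumulative contribution rate reaches `target` (0.90 in the paper).
    Returns m and the principal component scores."""
    Zs = (Z - Z.mean(axis=0)) / Z.std(axis=0)   # Formula (1)
    R = np.corrcoef(Zs, rowvar=False)           # correlation matrix R
    eigval, eigvec = np.linalg.eigh(R)          # eigh returns ascending order
    order = np.argsort(eigval)[::-1]            # re-sort descending
    eigval, eigvec = eigval[order], eigvec[:, order]
    cum = np.cumsum(eigval) / eigval.sum()      # cumulative contribution rate
    m = int(np.searchsorted(cum, target) + 1)   # smallest m with cum >= target
    scores = Zs @ eigvec[:, :m]                 # component scores (Step 4)
    return m, scores
```

Applied to the 300 × 21 attribute matrix of this paper, such a routine would return m = 9 components, matching Table 2.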

3. Prediction Model and Method

3.1. BP Neural Network Prediction Model

The BP (backpropagation) neural network is a multi-layer feedforward neural network trained by error backpropagation. The network has a simple structure and substantial computing power and comprises an input layer, a hidden layer, and an output layer [21]. The specific network is shown in Figure 1, where Xi represents the input parameters and Y represents the output silicon content of the hot metal. After the data pass through the three-layer network, the output is compared with the target value; when the error does not meet the set value, it is backpropagated, and the network weights and thresholds are adjusted for re-calculation. The calculation ends when the error meets the set value or the set number of iterations is reached [22].
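A minimal sketch of such a three-layer BP network with sigmoid hidden units and gradient-descent weight updates (illustrative only; the 9-12-1 layer sizes and learning rate 0.03 follow Section 3.2.2, while the class name, initialization, and everything else are assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyBP:
    """Minimal 9-12-1 BP network sketch; weights and thresholds are
    updated by plain gradient descent on the mean squared error."""
    def __init__(self, n_in=9, n_hid=12, lr=0.03, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.5, (n_in, n_hid))
        self.b1 = np.zeros(n_hid)
        self.W2 = rng.normal(0, 0.5, (n_hid, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)   # hidden layer
        return self.h @ self.W2 + self.b2         # linear output layer

    def train_step(self, X, y):
        y_hat = self.forward(X)
        err = y_hat - y.reshape(-1, 1)            # output error
        dW2 = self.h.T @ err / len(X)             # backpropagate the error
        db2 = err.mean(axis=0)
        dh = err @ self.W2.T * self.h * (1 - self.h)
        dW1 = X.T @ dh / len(X)
        db1 = dh.mean(axis=0)
        for p, g in ((self.W1, dW1), (self.b1, db1),
                     (self.W2, dW2), (self.b2, db2)):
            p -= self.lr * g                      # gradient descent update
        return float((err ** 2).mean())           # loss before the update
```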

3.2. FPA-BP Neural Network Optimization Model

3.2.1. Flower Pollination Algorithm

The British scholar Yang proposed the flower pollination algorithm in 2012 by modeling a simplified form of flower pollination behavior in nature [23]. The reproduction of plants in nature generally depends on the pollination of flowers. According to the pollination object, there are generally two kinds: self-pollination and cross-pollination. Self-pollination refers to the transfer of pollen from a flower's stamen to the pistil of the same plant, while cross-pollination refers to pollination between flowers of different plants. According to the manner of pollination, pollination is generally divided into biotic and abiotic. Most flower pollination relies on bees, insects, and other organisms, while a few species rely on wind, water, and other abiotic means. The flower pollination algorithm is based on the following four principles [24,25]:
Principle 1: Biological cross-pollination is regarded as a global search behavior, and the propagation behavior is considered to conform to Levy flight distribution. The formula is shown in Formula (9).
$X_j^{i+1} = X_j^i + \alpha L \left( X_j^i - X_{\text{best}} \right)$
where $X_j^{i+1}$ is the solution of generation i + 1; $X_j^i$ represents the solution of the ith generation; $X_{\text{best}}$ is the current best solution; $\alpha$ is a weighting factor, usually 0.01; and L is the step size obtained from the Levy flight.
The step length L obtained from the Levy flight is expressed as:
$L(t, c) \approx \dfrac{\gamma \, \Gamma(\gamma) \sin(\pi\gamma/2)}{\pi} \cdot \dfrac{c}{t^{1+\gamma}}, \quad |t| \to +\infty$
where Γ(γ) is a standard gamma function; c is the regularization parameter of the distribution amplitude, which can be taken as 1 according to experience; and t is the step size generated by the nonlinear transformation.
The step size t generated by nonlinear transformation is expressed by Equation (11).
$t = U \big/ |V|^{1/\gamma}, \quad U \sim N(0, \delta^2), \quad V \sim N(0, 1)$
In Formula (11), $\delta^2$ satisfies the condition in Formula (12):
$\delta^2 = \left\{ \dfrac{\Gamma(1+\lambda) \sin(\pi\lambda/2)}{\lambda \, \Gamma[(1+\lambda)/2] \, 2^{(\lambda-1)/2}} \right\}^{1/\lambda}$
Principle 2: Biological self-pollination is regarded as a local search, and the process can be described by a formula, as shown in Formula (13).
$X_j^{i+1} = X_j^i + \varepsilon \left( X_j^i - X_k^i \right)$
where $X_j^i$ and $X_k^i$ are randomly selected solutions of the ith generation, and $\varepsilon$ represents the reproduction probability, a random number in the closed interval [0, 1].
Principle 3: The value of reproduction probability is proportional to the approximation of two flowers during pollination.
Principle 4: The switch between global and local pollination is controlled by a random probability p ∈ [0, 1].
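Formulas (10)-(12) amount to the Mantegna-style recipe for generating Levy-distributed step lengths, which can be sketched as follows (illustrative; `gamma = 1.5` is a typical choice, not a value stated in the text):

```python
import math
import numpy as np

def levy_step(gamma=1.5, size=1, rng=None):
    """Levy-distributed steps per Formulas (11)-(12):
    t = U / |V|**(1/gamma), with U ~ N(0, delta^2) and V ~ N(0, 1)."""
    rng = rng or np.random.default_rng()
    # Formula (12): the variance delta^2 of U
    delta = (math.gamma(1 + gamma) * math.sin(math.pi * gamma / 2)
             / (gamma * math.gamma((1 + gamma) / 2)
                * 2 ** ((gamma - 1) / 2))) ** (1 / gamma)
    U = rng.normal(0, delta, size)
    V = rng.normal(0, 1, size)
    return U / np.abs(V) ** (1 / gamma)   # Formula (11)
```

The heavy-tailed steps produced this way let the global search of Principle 1 make occasional long jumps while mostly taking small moves.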

3.2.2. Model Optimization

As the BP neural network is prone to be trapped in local optimization and unstable due to manual parameter adjustment, the flower pollination algorithm is introduced to optimize the neural network. The specific process is as follows:
(1) The basic parameters of the neural network and the flower pollination algorithm are set. The total number of training samples is 250, the maximum number of training iterations is 6000, the learning rate is 0.03, and the error threshold is 0.001. The weights and thresholds of the network are encoded as pollen individuals, so that each individual represents a network structure [26]. A BP neural network with a single hidden layer can map any function [27,28], so a single-hidden-layer network was used in this paper. The logarithmic sigmoid function was used as the transfer function between the hidden layer and the output layer, as shown in Equation (14).
$f(x) = \dfrac{1}{1 + e^{-x}}$
The number of neurons in the network input layer is determined by the principal component input sequence and was 9. The output parameter is the silicon content of hot metal, so the number of output-layer neurons was 1. The number of neurons in the hidden layer is determined by the empirical Formula (15) [29].
$N = \sqrt{n + i} + a$
where N is the number of neurons in the hidden layer; n is the number of neurons in the input layer; i is the number of neurons in the output layer; and a is a constant between 1 and 10.
The relationship between the number of neurons and the error is shown in Figure 2. When the number of neurons in the hidden layer is 12, the prediction error reaches the minimum value. Therefore, the number of neurons in the hidden layer is selected as 12.
(2) During the operation of the algorithm, the positions of the pollen particles are initialized randomly. Each pollen position is regarded as one weight assignment of the neural network. The fitness value of each individual is calculated using Equation (16), and the individual with the smallest fitness value is retained.
$F = \dfrac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{C} \left| Z_{j,i} - Y_{j,i} \right|$
where N is the total number of training samples; Zj.i is the target output value; Yj.i is the actual output value; and C is the number of output neurons in the network.
(3) According to the randomly generated conversion probability, all pollen particle positions are converted and updated between local and global searches. Calculate the fitness and find the optimal solution.
(4) The optimal solution is decoded into the weights and thresholds of the BP neural network, and then the training calculation is carried out. Judge whether the training conditions are met according to the final results. If yes, end the training, input samples, and make a prediction. If not, repeat (2) and (3) to continue the calculation.
The basic flow of the FPA-BP algorithm is shown in Figure 3.
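Steps (1)-(4) can be condensed into the following illustrative loop (a generic FPA minimizer, not the authors' implementation; a Cauchy draw stands in for the Levy step, and the population size, iteration count, and switch probability are assumed values):

```python
import numpy as np

def fpa_minimize(f, dim, n_pollen=20, iters=200, p=0.8, alpha=0.01, seed=0):
    """Compact flower pollination loop following steps (2)-(4):
    global Levy moves toward the best solution vs. local moves between
    two random flowers. In the paper, each pollen vector would encode
    the BP weights and thresholds and f would be the fitness of Eq. (16)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, (n_pollen, dim))          # random initialization
    fit = np.array([f(x) for x in X])
    best = X[fit.argmin()].copy()
    for _ in range(iters):
        for j in range(n_pollen):
            if rng.random() < p:                     # global pollination
                L = rng.standard_cauchy(dim)         # heavy-tailed Levy stand-in
                cand = X[j] + alpha * L * (X[j] - best)
            else:                                    # local pollination
                a, b = rng.integers(0, n_pollen, 2)
                cand = X[j] + rng.random() * (X[a] - X[b])
            fc = f(cand)
            if fc < fit[j]:                          # keep only improvements
                X[j], fit[j] = cand, fc
        best = X[fit.argmin()].copy()
    return best, fit.min()
```

Decoding `best` back into network weights and thresholds and continuing with BP training corresponds to step (4).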

3.3. Data Set Segmentation

After the model parameters had been determined, the data set was divided. Two hundred and fifty data groups were used as the training set, and fifty groups were used as the test set. Figure 4 shows hot metal silicon content data segmentation before and after abnormal value processing.

3.4. Selection of Model Evaluation Indicators

The model’s running time and the algorithm’s prediction accuracy were used as the evaluation indicators of model performance. The running time of the model determines the timeliness of the prediction results. The prediction accuracy of the algorithm is the core index of its effectiveness and should be measured and characterized from multiple aspects. In this model, the hit rate, mean absolute error, root mean square error, and mean absolute percentage error were used as evaluation indicators of the accuracy of the prediction model [30,31].
The hit rate (HR) was selected to characterize the reliability of the model prediction within the acceptable process range. The hit rate is the ratio of the number of samples whose absolute prediction error is less than or equal to p to the total number of samples; in this model, p = 0.1 is adopted.
$h_i = \begin{cases} 1 & |e_i| \le p \\ 0 & \text{otherwise} \end{cases}$
$m = \dfrac{1}{n} \sum_{i=1}^{n} h_i$
Mean absolute error (MAE) represents the total value of the overall deviation between the predicted value of the model and the measured value.
$\mathrm{MAE} = \dfrac{1}{n} \sum_{i=1}^{n} \left| \hat{y}_i - y_i \right|$
The root mean square error (RMSE) is selected to represent the fluctuation of the deviation between the predicted value of the model and the measured reference value.
$\mathrm{RMSE} = \sqrt{\dfrac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2}$
The mean absolute percentage error (MAPE) is selected to represent the relative value of the overall deviation between the predicted value and the measured value of the model.
$\mathrm{MAPE} = \dfrac{1}{n} \sum_{i=1}^{n} \left( \dfrac{|\hat{y}_i - y_i|}{y_i} \times 100\% \right)$
where m is the hit rate; n is the number of samples; ei is the prediction error; i is the test sample number; p is the required error value; y ^ i is the predicted value; and yi represents the actual value.
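The four indicators can be computed as in the following sketch (illustrative; the helper name and returned dictionary are assumptions):

```python
import numpy as np

def evaluate(y_true, y_pred, p=0.1):
    """Hit rate (HR), MAE, RMSE, and MAPE as defined above;
    p = 0.1 is the allowed absolute prediction error."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    e = y_pred - y_true
    hr = float(np.mean(np.abs(e) <= p))              # hit rate m
    mae = float(np.mean(np.abs(e)))                  # mean absolute error
    rmse = float(np.sqrt(np.mean(e ** 2)))           # root mean square error
    mape = float(np.mean(np.abs(e) / y_true) * 100)  # mean abs. % error
    return {"HR": hr, "MAE": mae, "RMSE": rmse, "MAPE": mape}
```

For instance, with true silicon contents [0.4, 0.5] and predictions [0.45, 0.7], only the first sample is within p = 0.1, so HR = 0.5.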

3.5. Model Prediction

The model was built on the Jupyter Notebook platform using Python language and validated with some input data. The results are shown in Table 4. It can be seen that the model prediction is feasible, and the FPA-BP model prediction results are more accurate than the BP prediction model.
The test set was fed into both the BP model and the FPA-BP model for prediction, and the prediction performance of the optimized model was compared with that of the non-optimized model. As shown in Figure 5 and Figure 6, the overall prediction trend of the BP model is consistent with the actual situation. When the data were stable, the predicted value was close to the actual value, but the error at inflection points was significant. When the furnace condition fluctuated, the difference between the predicted and actual values was substantial, meaning that the prediction effect could not meet the essential requirements and needed further improvement. The prediction results of the FPA-BP model show that its predicted values were closer to the actual values, and the model predicted well whether the blast furnace was stable or fluctuating. The FPA-BP prediction model was therefore superior to the BP model. The BP neural network updates its weights and thresholds by gradient descent, which easily falls into local extrema and converges slowly; FPA optimization significantly alleviates these problems while also improving the generalization ability and prediction accuracy.
The prediction performance of the BP and FPA-BP models can also be observed through the absolute error. The absolute error of the BP model prediction, shown in Figure 7, fluctuates between −0.2415 and 0.3430, while the maximum error is 0.1386; the absolute error fluctuates over a wide range and at a high frequency. As presented in Figure 8, the absolute error of the FPA-BP model ranges between −0.1178 and 0.1729, with a maximum error of 0.1186; the error is generally lower than that of the BP model, fluctuates over a smaller range, and does so at a lower frequency.
The prediction indicators and running times of the BP and FPA-BP models are shown in Table 5. The HR of the FPA-BP prediction model was 86%, superior to the 70% of the BP neural network prediction model. The MAPE, MAE, and RMSE of the FPA-BP model were 0.1305, 0.0444, and 0.0599, respectively, lower than the 0.1999, 0.0614, and 0.0932 of the BP neural network model. These results show that the prediction performance indicators of the FPA-BP model were significantly better than those of the BP model, and the model accuracy was higher. In addition, the running time of the FPA-BP prediction model was 0.3230 s, faster than the 0.8601 s of the BP neural network model.

4. Conclusions

In view of the difficulty in predicting the silicon content in hot metal under complex furnace conditions, a prediction model based on PCA and FPA-BP was proposed, and the following conclusions were drawn:
(1)
For the prediction of silicon content in hot metal, the FPA-BP prediction model was superior to the BP model in terms of hit rate, mean absolute error, root mean square error, mean absolute percentage error, and other indicators. The prediction results were more accurate, and the running speed was faster.
(2)
Principal component analysis was used to screen the input sequence that affects the silicon content in the hot metal of the blast furnace. With a principal component information retention rate of 0.9, the influencing parameters were reduced from twenty-one dimensions to nine, reducing the influence of an excessive number of parameters and of the correlations between them.
(3)
Under furnace fluctuation, the predicted value of the BP model deviated significantly from the actual value, whereas the predicted value of the optimized FPA-BP model remained closer to the actual value, making it suitable for complex furnace conditions.

Author Contributions

Conceptualization, X.X. and M.L.; methodology, Z.P.; software, J.S.; validation, X.X., Z.P. and J.S.; formal analysis, M.L.; investigation, J.S.; resources, X.X.; data curation, X.X.; writing—original draft preparation, J.S.; writing—review and editing, J.S.; visualization, Z.P.; supervision, X.X.; project administration, M.L.; funding acquisition, X.X. All authors have read and agreed to the published version of the manuscript.

Funding

The present work was financially supported by the Natural Science Basic Foundation of China (Program No. 52174325), the Key Research and Development Project of Shaanxi Province (Grant No. 2019TSLGY05-05) and the Shaanxi Provincial Innovation Capacity Support Plan (Grant No. 2023-CX-TD-53). The authors gratefully acknowledge their support.

Data Availability Statement

Some data are provided in the article. Further data are available from the authors on request.

Acknowledgments

Xi’an University of Architecture and Technology and Beijing University of Science and Technology have provided assistance and resources, making it possible for us to complete this research. We express our gratitude for their help.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Smith, M. Blast furnace ironmaking: View on future developments. Ironmak. Steelmak. 2016, 42, 734–742. [Google Scholar] [CrossRef]
  2. Yang, Y.L.; Zhang, S.; Yin, Y.X. A modified ELM algorithm for the prediction of silicon content in hot metal. Neural Comput. Appl. 2016, 27, 241–247. [Google Scholar] [CrossRef]
  3. Li, Z.; Yang, C.; Liu, W.; Zhou, H.; Li, Y. Research on hot metal Si-content prediction based on LSTM-RNN. CIESC J. 2018, 69, 992–997. [Google Scholar]
  4. Yi, L.Z.; Li, L.; Jiang, Z.H. Prediction of silicon content in hot metal using neural network and rough set theory. J. Iron Steel Res. 2019, 31, 689–695. [Google Scholar]
  5. Zhao, H.; Zhao, D.; Yue, Y.; Wang, H. Study on prediction method of hot metal temperature in blast furnace. In Proceedings of the 2017 IEEE International Conference on Mechatronics and Automation (ICMA), Takamatsu, Japan, 6–9 August 2017; pp. 316–323. [Google Scholar]
  6. Wang, Y.K.; Liu, X.Y.; Zhang, B.L. Feature selection based on SVM-RFE and blast furnace temperature tendency prediction in hot metal. In Proceedings of the 2018 Australian & New Zealand Control Conference (ANZCC), Melbourne, Australia, 7–8 December 2018; pp. 371–378. [Google Scholar]
  7. Zhang, H.; Zhang, S.; Yin, Y.; Chen, X. Prediction of the hot metal silicon content in blast furnace based on extreme learning machine. Int. J. Mach. Learn. Cybern. 2017, 9, 1697–1705. [Google Scholar] [CrossRef]
  8. Han, Y.; Li, J.; Yang, X.L.; Liu, W.X.; Zhang, Y.Z. Dynamic prediction research of silicon content in hot metal driven by big data in blast furnace smelting process under Hadoop cloud platform. Complexity 2018, 8, 8079697. [Google Scholar] [CrossRef]
  9. Spirin, N.A.; Polinov, A.A.; Gurin, I.A.; Beginyuk, V.A.; Pishnograev, S.N.; Istomin, A.S. Information System for Real-Time Prediction of the Silicon Content of Iron in a Blast Furnace. Metallurgist 2020, 63, 898–905. [Google Scholar] [CrossRef]
  10. Wang, Z.Y.; Jiang, D.W.; Wang, X.D.; Zhang, J.L.; Liu, Z.J.; Zhao, B.J. Prediction of blast furnace hot metal temperature based on support vector regression and limit learning machine. Chin. J. Eng. 2021, 43, 569–576. [Google Scholar]
  11. Wen, B.; Wu, S.; Zhou, H.; Gu, K. A BP neural network-based mathematical model for predicting Si content in hot metal from COREX process. J. Iron Steel Res. 2018, 30, 776–781. [Google Scholar]
  12. Cui, G.M.; Li, J.; Zhang, Y.; Li, Z.D.; Ma, X. Prediction Modeling Study for Blast Furnace Hot Metal Temperature Based on T-S Fuzzy Neural Network Model. Iron Steel 2013, 48, 11–15. [Google Scholar]
  13. Liang, W.; Wang, G.; Ning, X.; Zhang, J.; Li, Y.; Jiang, C.; Zhang, N. Application of BP neural network to predict coal ash melting characteristic temperature. Fuel 2020, 260, 56–63. [Google Scholar] [CrossRef]
  14. Tunckaya, Y. Performance assessment of permeability index prediction in an ironmaking process via soft computing techniques. J. Soc. Mech. Eng. Part II Electron. J. Process Mech. Eng. 2017, 231, 1101–1109. [Google Scholar] [CrossRef]
  15. Pan, D.; Jiang, Z.H.; Chen, Z.P. Temperature measurement and compensation method of blast furnace hot metal based on infrared computer vision. IEEE Trans. Instrum. Meas. 2018, 68, 3576–3582. [Google Scholar] [CrossRef]
  16. Song, J.H.; Yang, C.J.; Zhou, Z.; Liu, W.H.; Ma, S.Y. Application of improved EMD-Elman neural network to predict silicon content in hot metal. CIESC J. 2016, 67, 729–735. [Google Scholar]
  17. Jiang, K.; Jiang, Z.H.; Xie, Y.F.; Pan, D.; Gui, W.H. Intelligent Prediction of the Change Trend of Silicon Content in Blast Furnace. Control Eng. China 2020, 27, 540–546. [Google Scholar]
  18. Guo, D.W.; Zhou, P. Soft-sensor modeling of silicon content in hot metal based on sparse robust LS-SVR and multi-objective optimization. Chin. J. Eng. 2016, 38, 1233–1241. [Google Scholar]
  19. Zhang, H.G.; Zhang, S.; Yin, Y.X. Multi-class fault diagnosis of BF based on global optimization LS-SVM. Chin. J. Eng. 2017, 39, 39–47. [Google Scholar]
  20. Gan, L.Z.; Liu, H.K.; Sun, Y.X. Sparse least squares support vector machine for function estimation. In Proceedings of the 3rd International Symposium on Neural Networks, Chengdu, China, 28 May–1 June 2006; pp. 1016–1022.
  21. Wu, B.; Han, S.; Xiao, J.; Hu, X.; Fan, J. Error compensation based on BP neural network for airborne laser ranging. Optik 2016, 127, 4083–4088.
  22. Zhang, J.H.; Jin, Y.J.; Shen, F.M.; Su, X.L. Prediction model of silicon content in hot metal established using an optimized BP neural network. J. Iron Steel Res. 2007, 19, 60–63.
  23. Luo, Z.Y.; Zhu, Z.H.; Xie, Z.Q.; Sun, G.L. Multi-objective optimization algorithm of flower differential pollination workflow for cloud computing. Acta Electron. Sin. 2021, 49, 470–476.
  24. Niu, Q.; Cao, A.M.; Chen, X.Y.; Z, D. Short-term load forecasting based on flower pollination algorithm and BP neural network. Power Grid Clean Energy 2020, 36, 28–32.
  25. Liu, J.S.; Liu, L.; Li, Y. A differential evolution flower pollination algorithm with dynamic switch probability. Chin. J. Electron. 2019, 28, 737–747.
  26. Pham, Q.B.; Sammen, S.S.; Abba, S.I.; Mohammadi, B.; Shahid, S.; Abdulkadir, R.A. New hybrid model based on relevance vector machine with flower pollination algorithm for phycocyanin pigment concentration estimation. Environ. Sci. Pollut. Res. 2021, 28, 32–36.
  27. Zhang, L.; Wang, F.L.; Sun, T. A constrained optimization method based on BP neural network. Neurocomput. Its Appl. 2018, 29, 413–419.
  28. Chen, W.; Kong, F.; Wang, B.; Li, Y. Application of grey relational analysis and extreme learning machine method for predicting hot metal silicon content in blast furnace. Ironmak. Steelmak. 2019, 46, 974–979.
  29. Zheng, B.H. Material procedure quality forecast based on genetic BP neural network. Mod. Phys. Lett. B 2017, 31, 26–32.
  30. Zhang, S.Q.; Yang, Z.N.; Zhang, L.G.; Wan, S.Y.; Wang, Z.Y. Short-term power load forecast based on dimension reduction by elastic network and flower pollination algorithm optimized BP neural network. Chin. J. Sci. Instrum. 2019, 40, 47–54.
  31. Hu, J.H.; Huang, Y.L.; Zhang, J.; Wang, Q.H.; Zhou, X.L.; Ma, J.W. Prediction of the tension reducing steel pipe wall thickness based on sparrow search algorithm optimized double hidden layer BP neural network. J. Plast. Eng. 2022, 29, 145–151.
Figure 1. The basic structure of the BP neural network.
Figure 2. Error comparison of the number of hidden layer neurons.
Figure 3. Schematic diagram of the optimization model of the flower pollination algorithm.
Figure 4. Segmentation of silicon content data in hot metal after outlier processing.
Figure 5. Comparison between the actual value and BP prediction value.
Figure 6. Comparison between the actual value and predicted value of FPA-BP.
Figure 7. Prediction error of the BP model.
Figure 8. Prediction error of the FPA-BP model.
Table 1. Partial raw data table.

| z1 | z2 | z3 | z4 | z5 | z6 | z7 | z8 | z9 |
|---|---|---|---|---|---|---|---|---|
| 9384 | 303 | 1120 | 215 | 238 | 28 | 36 | 185 | 0.42 |
| 9367 | 299 | 1108 | 210 | 243 | 25 | 39 | 176 | 0.39 |
| 9421 | 304 | 1103 | 209 | 235 | 27 | 38 | 183 | 0.40 |
| 9388 | 306 | 1095 | 212 | 221 | 26 | 35 | 185 | 0.40 |
| 9455 | 300 | 1089 | 209 | 236 | 27 | 39 | 180 | 0.39 |
| 9623 | 298 | 1076 | 223 | 247 | 25 | 35 | 184 | 0.38 |
| 10,653 | 301 | 1107 | 226 | 258 | 25 | 40 | 183 | 0.39 |
| 9202 | 303 | 1132 | 230 | 232 | 29 | 33 | 179 | 0.41 |
| 9324 | 305 | 1139 | 233 | 227 | 31 | 37 | 182 | 0.41 |
| 9405 | 305 | 1152 | 238 | 241 | 25 | 36 | 185 | 0.42 |
Table 2. Principal component analysis of the data.

| Principal Component | Characteristic Value | Contribution Rate | Cumulative Contribution Rate |
|---|---|---|---|
| X1 | 8.2157 | 0.3043 | 0.3043 |
| X2 | 4.3256 | 0.1545 | 0.4588 |
| X3 | 2.9567 | 0.1056 | 0.5644 |
| X4 | 2.5236 | 0.0901 | 0.6545 |
| X5 | 1.9856 | 0.0709 | 0.7254 |
| X6 | 1.5268 | 0.0545 | 0.7799 |
| X7 | 1.3258 | 0.0474 | 0.8273 |
| X8 | 1.1023 | 0.0394 | 0.8667 |
| X9 | 0.9852 | 0.0352 | 0.9018 |
| … | … | … | … |
| X21 | 0.032 | 0.0021 | |
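As a sketch of how Table 2 is obtained: the contribution rate of each principal component is its eigenvalue divided by the eigenvalue sum of the correlation matrix, and components are retained until the cumulative rate reaches the desired coverage (about 90% here, yielding X1–X9). The code below illustrates this on synthetic data, not the paper's dataset; the 0.90 target follows Table 2.

```python
import numpy as np

def pca_contribution(data, target=0.90):
    """Eigen-decompose the correlation matrix of the standardized inputs and
    return eigenvalues, contribution rates, cumulative rates, and the number
    of components needed to reach the target cumulative contribution."""
    z = (data - data.mean(axis=0)) / data.std(axis=0)  # standardize each variable
    corr = np.corrcoef(z, rowvar=False)                # 21 x 21 correlation matrix
    eigvals = np.linalg.eigvalsh(corr)[::-1]           # eigenvalues, descending
    contrib = eigvals / eigvals.sum()                  # contribution rate
    cumulative = np.cumsum(contrib)                    # cumulative contribution rate
    k = int(np.searchsorted(cumulative, target) + 1)   # components needed
    return eigvals, contrib, cumulative, k

# toy example: 200 samples of 21 correlated process variables (synthetic stand-in)
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 21)) @ rng.normal(size=(21, 21))
eigvals, contrib, cumulative, k = pca_contribution(data)
print(k, cumulative[k - 1])
```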
Table 3. Principal component coefficient matrix.

|  | X1 | X2 | X3 | X4 | X5 | X6 | X7 | X8 | X9 |
|---|---|---|---|---|---|---|---|---|---|
| z′1 | 0.0815 | −0.0654 | 0.2099 | 0.0858 | 0.0612 | 0.0144 | −0.0205 | −0.0155 | 0.1255 |
| z′2 | −0.0632 | 0.3246 | 0.0625 | 0.0024 | −0.1202 | −0.1032 | 0.0254 | 0.0194 | 0.0919 |
| z′3 | 0.1983 | −0.0342 | −0.1041 | 0.2856 | 0.0318 | 0.0248 | 0.0256 | 0.1306 | −0.2260 |
| z′4 | −0.0984 | 0.1346 | 0.2205 | 0.0588 | −0.0180 | 0.0138 | −0.1135 | −0.1585 | 0.0343 |
| z′5 | 0.1128 | 0.1367 | 0.0206 | 0.0726 | −0.0144 | 0.0428 | 0.0211 | 0.0111 | 0.1211 |
| z′6 | −0.0092 | 0.2623 | 0.0063 | −0.0144 | 0.1678 | −0.1644 | 0.0283 | −0.0213 | 0.0326 |
| z′7 | 0.0167 | −0.2876 | −0.1640 | 0.2764 | 0.0842 | 0.0149 | 0.1235 | 0.2035 | −0.1378 |
| z′8 | −0.0962 | 0.2201 | 0.0385 | 0.0984 | 0.1052 | 0.1379 | 0.2455 | 0.0735 | 0.1129 |
| z′9 | 0.1326 | 0.0984 | 0.1015 | 0.1032 | −0.1116 | 0.0271 | 0.0295 | 0.2165 | 0.1388 |
| z′10 | 0.3387 | 0.1128 | 0.0605 | −0.0084 | −0.0598 | 0.0377 | −0.0095 | 0.0345 | 0.0536 |
| z′11 | 0.0097 | −0.3092 | −0.0232 | 0.0304 | 0.0166 | 0.0228 | 0.0242 | −0.0172 | 0.2838 |
| z′12 | 0.3423 | 0.1627 | 0.0414 | 0.0136 | 0.1314 | −0.2696 | 0.0254 | 0.0199 | 0.0018 |
| z′13 | 0.0341 | 0.0962 | −0.0366 | 0.012 | 0.3628 | 0.1044 | −0.2006 | −0.0316 | −0.1260 |
| z′14 | −0.1092 | 0.1325 | 0.1135 | 0.0144 | −0.1166 | 0.2509 | 0.1135 | 0.0255 | 0.0643 |
| z′15 | 0.1110 | −0.0237 | −0.1001 | 0.1152 | 0.1868 | 0.2239 | 0.1111 | 0.1021 | −0.0811 |
| z′16 | −0.0192 | 0.0097 | 0.0145 | −0.0044 | −0.0019 | 0.2682 | 0.0343 | 0.2063 | 0.0266 |
| z′17 | 0.2814 | 0.1423 | −0.1264 | 0.0248 | −0.0332 | −0.1409 | −0.0155 | −0.0235 | −0.1378 |
| z′18 | 0.1082 | −0.1341 | −0.0135 | 0.1224 | 0.0816 | 0.1379 | 0.0145 | 0.0763 | 0.0729 |
| z′19 | −0.2232 | 0.3232 | 0.1075 | −0.2114 | −0.1016 | 0.0271 | −0.0295 | 0.0265 | 0.1388 |
| z′20 | 0.3189 | −0.1209 | 0.0685 | −0.1064 | −0.1332 | 0.1377 | 0.0395 | 0.1455 | −0.0536 |
| z′21 | −0.0216 | −0.0923 | 0.2527 | 0.0125 | 0.0818 | −0.0028 | 0.1242 | 0.0762 | 0.0838 |
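The coefficients in Table 3 define each principal component as a weighted sum of the 21 standardized input variables. As a small illustration, the X1 score of a sample is the dot product of its standardized variable vector with the first column of Table 3; the sample vectors used below are hypothetical.

```python
import numpy as np

# First column of Table 3: coefficients mapping z'1..z'21 to component X1.
c_x1 = np.array([
     0.0815, -0.0632,  0.1983, -0.0984,  0.1128, -0.0092,  0.0167,
    -0.0962,  0.1326,  0.3387,  0.0097,  0.3423,  0.0341, -0.1092,
     0.1110, -0.0192,  0.2814,  0.1082, -0.2232,  0.3189, -0.0216,
])

# Stand-in standardized sample: all variables at their mean (z' = 0).
z_sample = np.zeros(21)
x1_score = float(c_x1 @ z_sample)  # X1 score of this sample
print(x1_score)                    # 0.0 for the mean sample
```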
Table 4. Model prediction results and some original data.

| Number | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Hot air temperature | 1120 | 1108 | 1103 | 1095 | 1089 |
| Furnace top temperature | 215 | 210 | 209 | 212 | 209 |
| Furnace top pressure | 238 | 243 | 235 | 221 | 236 |
| Air permeability index | 28 | 25 | 27 | 26 | 27 |
| Coal injection rate | 36 | 39 | 38 | 35 | 39 |
| Differential pressure | 185 | 176 | 183 | 185 | 180 |
| … | … | … | … | … | … |
| Hot air pressure | 0.42 | 0.39 | 0.4 | 0.4 | 0.39 |
| Si content | 0.56 | 0.6 | 0.58 | 0.53 | 0.62 |
| BP predicted value | 0.63 | 0.55 | 0.43 | 0.65 | 0.51 |
| FPA-BP predicted value | 0.59 | 0.62 | 0.55 | 0.57 | 0.55 |
Table 5. Comprehensive quantitative characterization of prediction results of the FPA-BP and BP algorithms.

| Model | HR | MAPE | MAE | RMSE | Time/s |
|---|---|---|---|---|---|
| BP | 70.00% | 0.1999 | 0.0614 | 0.0932 | 0.8601 |
| FPA-BP | 86.00% | 0.1305 | 0.0444 | 0.0599 | 0.3230 |
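The metrics of Table 5 can be computed from predictions as follows. This sketch uses only the five FPA-BP samples of Table 4 (so the values differ from the full test-set figures in Table 5) and assumes a hit-rate tolerance of ±0.1 mass%, a common criterion for silicon-content prediction that is not stated explicitly in this excerpt.

```python
import numpy as np

# Five samples from Table 4: measured Si content and FPA-BP predictions.
actual = np.array([0.56, 0.60, 0.58, 0.53, 0.62])
fpa_bp = np.array([0.59, 0.62, 0.55, 0.57, 0.55])

err = fpa_bp - actual
hr = np.mean(np.abs(err) <= 0.1)      # hit rate (assumed +/-0.1 tolerance)
mape = np.mean(np.abs(err) / actual)  # mean absolute percentage error
mae = np.mean(np.abs(err))            # mean absolute error
rmse = np.sqrt(np.mean(err ** 2))     # root mean square error
print(hr, mape, mae, rmse)
```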
Song, J.; Xing, X.; Pang, Z.; Lv, M. Prediction of Silicon Content in the Hot Metal of a Blast Furnace Based on FPA-BP Model. Metals 2023, 13, 918. https://doi.org/10.3390/met13050918
