# Combustion Optimization for Coal Fired Power Plant Boilers Based on Improved Distributed ELM and Distributed PSO


## Abstract

The distributed particle swarm optimization algorithm based on MapReduce is used to optimize the input parameters of the boiler combustion model, and a weighted-coefficient method is used to handle the multi-objective optimization problem (boiler combustion efficiency and NO_{x} emissions). The experimental results show that the method can optimize boiler combustion efficiency and NO_{x} emissions by combining different weight coefficients as needed.

## 1. Introduction

NO_{x} emissions and boiler combustion efficiency are modeled separately using the proposed IDELM. Two parameters ($L$ and $A$) of the model are analyzed to select the optimal parameter combination. The two models are then combined to build a multi-objective boiler combustion model. Finally, the adjustable input parameters of the boiler combustion model are optimized with the MapReduce-based distributed particle swarm optimization (MR-PSO) algorithm to obtain the optimal combination, which is used to guide the regulation of the boiler combustion system.

## 2. Review of MapReduce, ELM and PSO

#### 2.1. MapReduce

#### 2.2. Extreme Learning Machine

Given $N$ arbitrary distinct training samples $({x}_{i},{t}_{i})$, where ${x}_{i}={({x}_{i1},{x}_{i2},\cdots ,{x}_{in})}^{\mathrm{T}}\in {R}^{n}$ and ${t}_{i}={({t}_{i1},{t}_{i2},\cdots ,{t}_{im})}^{\mathrm{T}}\in {R}^{m}$. In ELM, the input weights and hidden biases are randomly generated instead of tuned. Then, the nonlinear system can be converted to a linear system $H\beta =T$, where $H$ is the hidden layer output matrix, $\beta$ is the output weight matrix, and $T$ is the target matrix.
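As a concrete illustration of this training scheme, a minimal single-machine ELM can be sketched as follows (illustrative code, not the authors' implementation; the sigmoid activation, weight scale, and toy data are assumptions):

```python
import numpy as np

def elm_train(X, T, L, seed=0):
    """Basic ELM: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = 4.0 * rng.standard_normal((n, L))   # random input weights (never tuned)
    b = rng.standard_normal(L)              # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # hidden layer output matrix H
    beta = np.linalg.pinv(H) @ T            # least-squares solution of H @ beta = T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# smoke test: fit a smooth 1-D function
X = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
T = np.sin(2.0 * np.pi * X)
W, b, beta = elm_train(X, T, L=50)
err = np.max(np.abs(elm_predict(X, W, b, beta) - T))
```

Because only $\beta$ is solved for, training reduces to a single linear least-squares problem, which is what makes a distributed variant of ELM tractable.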

#### 2.3. Particle Swarm Optimization Algorithm

## 3. Improved Distributed Extreme Learning Machine (IDELM) Algorithm

#### 3.1. Preliminaries

#### 3.2. IDELM

Algorithm 1: Improved Distributed ELM (IDELM) S and D calculation steps

```
 1  class SandD
 2      // INITIALIZE
 3      int L, m
 4      s = new ASSOCIATIVEARRAY
 5      d = new ASSOCIATIVEARRAY
 6      class Mapper
 7          h = new ASSOCIATIVEARRAY
 8          x = new ASSOCIATIVEARRAY
 9          method map(context)
10              while (context.nextKeyValue())
11                  (x, t) = Parse(context)
12                  for i = 1 to L do
13                      h[i] = g(w_i · x + b_i)
14                  for i = 1 to L do
15                      for j = 1 to L do
16                          s[i][j] = s[i][j] + h[i] · h[j]
17                      for j = 1 to m do
18                          d[i][j] = d[i][j] + h[i] · t[j]
19              for i = 1 to L do
20                  for j = 1 to L do
21                      context.write(triple('S', i, j), s[i][j])
22                  for j = 1 to m do
23                      context.write(triple('D', i, j), d[i][j])
24          method run(context)
25              map(context)
26      class Reducer
27          method reduce(context)
28              while (context.nextKey())
29                  sd = 0
30                  for (val : values)
31                      sd += val.get()
32                  context.write(key, sd)
33          method run(context)
34              reduce(context)
```

- Step 1: Initialize the number of hidden layer nodes $L$, the number of output labels $m$, and two arrays $s$ and $d$, which store the computed elements of the matrices $S$ and $D$ (Lines 3–5);
- Step 2: In the **Mapper** class, initialize the local variables $h$ and $x$ (Lines 6–8);
- Step 3: In the **map** method, a while loop first reads a sample and splits it into the training input $x$ and the corresponding training target $t$. The input $x$ is used to compute the partial result $h$ of the hidden-layer output matrix $H$. From the computed $h$ and Formulas (14) and (15), the partial sums of the elements of $S$ and $D$ are accumulated and stored in the arrays $s$ and $d$. When all the <key, value> pairs in the data block have been processed, the final sums are held in $s$ and $d$ (Lines 9–18);
- Step 4: Write $s$ and $d$ to HDFS as key-value pairs (Lines 19–23);
- Step 5: The **run** method of the **Mapper** class is overridden (Lines 24–25);
- Step 6: The **reduce** method uses a while loop to read a key, initializes a temporary variable, and merges the intermediate results with the same key from different Mappers to obtain the final accumulated sum for that key (Lines 27–31);
- Step 7: Store the result in HDFS (Line 32);
- Step 8: The **run** method of the **Reduce** class is overridden (Lines 33–34).
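On a single machine, the map/reduce flow of Algorithm 1 can be simulated to check the decomposition $S={H}^{\mathrm{T}}H$, $D={H}^{\mathrm{T}}T$; the helper names and toy data below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def hidden_layer(X, W, b):
    """Sigmoid hidden-layer output H for one data block."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def mapper(X_block, T_block, W, b):
    """Emit this block's partial sums of S = H^T H and D = H^T T."""
    H = hidden_layer(X_block, W, b)
    return {"S": H.T @ H, "D": H.T @ T_block}

def reducer(partials):
    """Add up the partial matrices that share the same key."""
    return sum(p["S"] for p in partials), sum(p["D"] for p in partials)

rng = np.random.default_rng(1)
X, T = rng.random((300, 5)), rng.random((300, 2))
W, b = rng.standard_normal((5, 20)), rng.standard_normal(20)

# three simulated map tasks, one reduce task
parts = [mapper(Xb, Tb, W, b)
         for Xb, Tb in zip(np.array_split(X, 3), np.array_split(T, 3))]
S, D = reducer(parts)

# the distributed sums must match the single-machine computation
H = hidden_layer(X, W, b)
ok = np.allclose(S, H.T @ H) and np.allclose(D, H.T @ T)
```

The decomposition works because $S$ and $D$ are sums over samples, so each data block contributes an independent partial sum.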

Algorithm 2: Improved Distributed ELM (IDELM)

```
for i = 1 to L do
    randomly generate the hidden layer node parameters (w_i, b_i)
in MapReduce, calculate S = H^T H and D = H^T T
calculate the output weight vector β = (I/A + S)^(−1) D
forecast result: f(x) = h(x) β
```
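Once $S$ and $D$ are aggregated, the final step of Algorithm 2 is a small dense solve; a sketch assuming the regularized form $\beta ={(I/A+S)}^{-1}D$ (toy data, illustrative names):

```python
import numpy as np

def idelm_beta(S, D, A):
    """Output weights beta = (I/A + S)^(-1) D from the reduced S and D."""
    L = S.shape[0]
    return np.linalg.solve(np.eye(L) / A + S, D)

# toy check with L = 10 hidden nodes and m = 3 outputs
rng = np.random.default_rng(2)
H = rng.random((100, 10))
T = rng.random((100, 3))
beta = idelm_beta(H.T @ H, H.T @ T, A=2**8)
```

The $L\times L$ system is tiny compared with the $N\times L$ matrix $H$, which is why only $S$ and $D$ need to travel through MapReduce.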

#### 3.3. Algorithm Performance Analysis

#### 3.3.1. Experiments Setup


#### 3.3.2. The Effect of Number of Hidden Layer Nodes on the Performance of IDELM Algorithm

#### 3.3.3. The Effect of Training Sample Number on the Performance of IDELM Algorithm

## 4. Distributed Particle Swarm Optimization Algorithm

#### 4.1. Initialization Stage

Each particle's fitness is evaluated to determine the personal best ${P}_{iBest}$ and the global best ${g}_{Best}$. The global optimal particles are then stored in the Distributed File System (DFS) as key-value pairs <Key, Value>. The contents of the key-value pair <Key, Value> are shown in Figure 5. The values $i$, ${x}_{i}$, ${v}_{i}$, ${P}_{iBest}$, ${g}_{Best}$, $fitness({x}_{i})$, $fitness({P}_{iBest})$, $fitness({g}_{Best})$ are separated by semicolons. They represent, respectively, the particle index $i$, the position of particle $i$, the velocity of particle $i$, the personal best position of particle $i$, the global optimum position, the fitness value of particle $i$, the fitness value of the personal best of particle $i$, and the fitness value of the global best particle.
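The <Key, Value> record described above can be sketched as a semicolon-separated string; the field layout follows the description of Figure 5, while the function names and the comma-separated vector encoding are assumptions for illustration:

```python
def encode_particle(i, x, v, p_best, g_best, fit_x, fit_p, fit_g):
    """Serialize one particle's state as the semicolon-separated Value string."""
    fields = [
        str(i),
        ",".join(map(str, x)),        # position of particle i
        ",".join(map(str, v)),        # velocity of particle i
        ",".join(map(str, p_best)),   # personal best position of particle i
        ",".join(map(str, g_best)),   # global best position
        str(fit_x), str(fit_p), str(fit_g),
    ]
    return ";".join(fields)

def decode_particle(value):
    """Recover the particle state from the Value string."""
    i, x, v, p, g, fx, fp, fg = value.split(";")
    parse = lambda s: [float(t) for t in s.split(",")]
    return int(i), parse(x), parse(v), parse(p), parse(g), float(fx), float(fp), float(fg)

record = encode_particle(3, [1.0, 2.0], [0.1, -0.2], [1.0, 2.0], [0.5, 0.5], 4.2, 4.2, 3.9)
roundtrip = decode_particle(record)
```

A flat string format like this keeps each particle self-contained, so any map task can parse and update it independently.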

#### 4.2. MapReduce Stage

Algorithm 3: MR-PSO algorithm steps

```
class Mapper
    method initialize()
        position = new ASSOCIATIVEARRAY
        velocity = new ASSOCIATIVEARRAY
    method map(key: id, value: particle)
        lparticle_Best = None
        (position, velocity, fitness) = Parse(particle)
        for each particle do
            velocity_new = velocity.update(velocity)
            position_new = position.update(position)
            position_new_fitness = Fitness(position_new)
            if position_new_fitness > Fitness(lparticle_Best) then
                lparticle_Best = position_new
            end if
            context.write(id, (position_new, velocity_new))
        end for
        context.write(localbest, lparticle_Best)

class Reducer
    method reduce(key: localbest, ValList)
        gBestparticle = None
        for each lparticle_Best in ValList do
            lparticle_Best = ParticleLocalBest(ValList)
            if Fitness(lparticle_Best) > Fitness(gBestparticle) then
                gBestparticle = lparticle_Best
            end if
        end for
        context.write(gBestID, gBestparticle)
```

- (1) The input data are divided into a number of data blocks and stored in the distributed file system (HDFS). These data are fed into a MapReduce job as key-value pairs, and each Map function is assigned one data block;
- (2) The program in each Mapper updates the velocity and position of every particle in its data block, and then evaluates the fitness function on the updated position to obtain the fitness value;
- (3) The fitness values of the new and old particles are compared; if the new particle's fitness is better, the original particle position is replaced with the current position and stored in HDFS;
- (4) The fitness value of the best particle found so far in each Map (lparticle_{Best}) is compared with each updated particle's fitness value. If a particle's fitness is better than that of lparticle_{Best}, the particle's position and velocity replace those of lparticle_{Best};
- (5) Through these replacements, the local optimal solution of the current Map task is obtained and stored in HDFS as a <key, value> pair, where the key is a fixed value and the value is the local optimal particle position;
- (6) When all Map tasks have finished, their outputs enter a single Reduce task. The main task of Reduce is to find the global best particle (${g}_{Best}$) among the lparticle_{Best} values produced by all the Maps, replace the previous global optimal particle, and store the final result in HDFS.
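Steps (1)–(6) can be simulated in-process to show the map/reduce split of the swarm; this sketch minimizes a simple sphere function instead of the boiler model, and the inertia and acceleration constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
fitness = lambda x: -np.sum(x**2)   # higher is better; optimum at the origin

def map_task(pos, vel, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """Update one partition of particles and return the partition's local best."""
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (p_best - pos) + c2 * r2 * (g_best - pos)
    pos = pos + vel
    better = np.array([fitness(x) for x in pos]) > np.array([fitness(x) for x in p_best])
    p_best = np.where(better[:, None], pos, p_best)   # keep improved personal bests
    return pos, vel, p_best, max(p_best, key=fitness)

def reduce_task(local_bests, g_best):
    """Choose the global best among all local bests and the old global best."""
    return max(list(local_bests) + [g_best], key=fitness)

pos = rng.uniform(-5.0, 5.0, (30, 2))
vel = np.zeros_like(pos)
p_best = pos.copy()
g_best = max(pos, key=fitness).copy()

for _ in range(50):                                   # one MapReduce job per iteration
    locals_ = []
    for idx in np.array_split(np.arange(30), 3):      # three simulated map tasks
        pos[idx], vel[idx], p_best[idx], lb = map_task(pos[idx], vel[idx], p_best[idx], g_best)
        locals_.append(lb)
    g_best = reduce_task(locals_, g_best)             # one simulated reduce task
```

Each outer iteration corresponds to one MapReduce job: the map tasks update their partitions and emit local bests, and the reduce task keeps the best of those and the previous global best.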

#### 4.3. Conditional Judgment Stage

## 5. Boiler Combustion Model Based on IDELM Algorithm

#### 5.1. Model and Experiments Setup

The input parameters of the boiler combustion efficiency and NO_{x} emission models can be divided into three categories: adjustable input parameters, non-adjustable input parameters, and measurable but non-adjustable parameters.

The samples are normalized by min–max scaling, where ${x}_{\mathrm{max}}$ and ${x}_{\mathrm{min}}$ are the maximum and the minimum values in the original sample, respectively.
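A sketch of this normalization, assuming the standard min–max form $x\prime =(x-{x}_{\mathrm{min}})/({x}_{\mathrm{max}}-{x}_{\mathrm{min}})$ applied per input column (the sample values are illustrative):

```python
import numpy as np

def min_max_normalize(X):
    """Scale each column of the sample matrix into [0, 1]."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min), x_min, x_max

def min_max_restore(Xn, x_min, x_max):
    """Map normalized values back to the original range."""
    return Xn * (x_max - x_min) + x_min

# two columns: NOx emission and efficiency samples (illustrative values)
X = np.array([[412.0, 93.39], [330.42, 93.81], [374.35, 93.945]])
Xn, lo, hi = min_max_normalize(X)
```

Keeping `lo` and `hi` allows optimized parameter values to be mapped back to physical units after optimization.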

#### 5.2. Parameters L and A on IDELM Model

Through the above analysis, the parameters of the NO_{x} emission model and the boiler combustion efficiency model are determined as shown in Table 1.

#### 5.3. The Effect of Prediction

The root-mean-square error (RMSE), the mean relative error (MRE), and the determination coefficient R^{2} are used to evaluate the predictive ability of the model.
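Assuming the standard definitions of these metrics — $\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum {(y-\hat{y})}^{2}}$, $\mathrm{MRE}=\frac{1}{N}\sum |y-\hat{y}|/|y|$, and ${R}^{2}=1-S{S}_{res}/S{S}_{tot}$ — they can be sketched as:

```python
import numpy as np

def rmse(y, y_hat):
    """Root-mean-square error."""
    return np.sqrt(np.mean((y - y_hat) ** 2))

def mre(y, y_hat):
    """Mean relative error."""
    return np.mean(np.abs(y - y_hat) / np.abs(y))

def r_squared(y, y_hat):
    """Determination coefficient: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# a perfect prediction gives RMSE = MRE = 0 and R^2 = 1
y = np.array([412.0, 330.42, 374.35, 340.6])
perfect = r_squared(y, y)
```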

The prediction results of the NO_{x} test samples on the IDELM model are shown in Figure 7. It can be seen that the predicted values of NO_{x} and the actual measured values of NO_{x} are generally distributed around the diagonal, indicating that the IDELM model predicts NO_{x} emissions very well.

Table 2 lists the RMSE, MRE, and R^{2} for the training and test samples of NO_{x} emissions. It can be seen that the errors are small, with the test-sample errors even slightly smaller than the training-sample errors. The determination coefficient R^{2} of the model's training-sample predictions is 0.863 and that of the test samples is 0.891. These results show that the model has a good ability to fit and predict.

For the RMSE, MRE, and R^{2} of the training and test samples, the maximum (Max), minimum (Min), and average (Means) values are given; the reported values are averages over 20 experimental runs. Smaller RMSE and MRE indicate higher model precision, and an R^{2} closer to 1 indicates a better fit. The table shows that the maximum RMSE and MRE of the test samples are smaller than those of the training samples, and the averages are much smaller, which indicates that the model generalizes well. The determination coefficient R^{2} of the test samples is 0.9538, showing that the model has good fitting and predictive ability.

## 6. The Realization of Boiler Combustion Optimization

#### 6.1. Optimization Problem Description

#### 6.1.1. Boiler Combustion Optimization Function Design

The goal of boiler combustion optimization is to achieve both low NO_{x} emissions and high boiler combustion efficiency, which requires combining the two technical indicators. However, NO_{x} emissions and boiler combustion efficiency have different dimensions. To reduce their mutual influence, the two optimization objectives are normalized to the same order of magnitude before optimization. In practice, each power station places different requirements on NO_{x} emissions and boiler combustion efficiency, so the individual objective functions in the multi-objective function are weighted. Because the objective of this paper is the lowest NO_{x} emissions and the highest combustion efficiency, the difference between the normalized NO_{x} emissions and combustion efficiency is used as the objective function, so that both objectives are optimized in the same direction. Finally, the two objective functions are combined into a comprehensive objective function according to a certain weight ratio, as follows:

where ${f}_{N{O}_{x}}({x}_{\mathrm{max}})$ and ${f}_{N{O}_{x}}({x}_{\mathrm{min}})$ are the maximum and minimum values of the actual NO_{x} emission, ${f}_{\eta}({x}_{\mathrm{max}})$ and ${f}_{\eta}({x}_{\mathrm{min}})$ are the maximum and minimum values of the actual boiler combustion efficiency, and $\alpha$ and $\beta$ are the weights of the two technical indicators, with $\alpha +\beta =1$.
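One plausible form of such a combined objective, consistent with the description above (min–max normalize both indicators, then minimize $F(x)=\alpha \cdot \mathrm{NO}_{x}^{norm}-\beta \cdot {\eta}^{norm}$), can be sketched as follows; the exact functional form and the bound values are assumptions for illustration:

```python
def combined_objective(nox, eta, alpha, beta,
                       nox_min, nox_max, eta_min, eta_max):
    """Weighted single objective: minimize alpha*NOx_norm - beta*eta_norm.

    Assumed (plausible) form; normalization puts both indicators on [0, 1].
    """
    assert abs(alpha + beta - 1.0) < 1e-9          # weights must sum to 1
    nox_norm = (nox - nox_min) / (nox_max - nox_min)
    eta_norm = (eta - eta_min) / (eta_max - eta_min)
    return alpha * nox_norm - beta * eta_norm

# lower NOx with higher efficiency must give a lower (better) objective value
f_bad = combined_objective(412.0, 93.39, 0.9, 0.1, 300.0, 450.0, 92.0, 95.0)
f_good = combined_objective(330.42, 93.81, 0.9, 0.1, 300.0, 450.0, 92.0, 95.0)
```

Lower $F$ is better: reducing NO_{x} or raising efficiency both decrease $F$, and the $\alpha :\beta$ ratio sets the trade-off explored in Section 6.3.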

#### 6.1.2. Constraints

#### 6.2. Combustion Optimization Based on MR-PSO Algorithm

The MR-PSO algorithm is used to optimize the combined IDELM model of NO_{x} emission and boiler combustion efficiency. The flow chart for optimization using the MR-PSO algorithm on the IDELM model is shown in Figure 9.

- (1) The initialization range of the initial population is determined from the actual operating data of the boiler combustion system collected for this study; $n$ particles are then generated randomly within the constraint ranges according to a uniform distribution and stored in the distributed file system (HDFS);
- (2) The master node splits the input files in HDFS and distributes the resulting data slices to the Map tasks;
- (3) Each Map task parses the particle information from its input file, extracts the velocity and position, and updates them according to the velocity Equation (10) and the position Equation (11). The updated position is then substituted into the IDELM boiler combustion model to obtain the prediction result, and the fitness value is computed by Equation (22). Based on the fitness value, it is decided whether to replace the original particle velocity and position, which are stored in HDFS in a fixed order. Every particle is also compared with the map-local optimal particle, whose value is replaced and stored in HDFS whenever a better one is found, thereby obtaining the local optimal particle of that Map;
- (4) The Reduce task compares the local optimal particles produced by the Map tasks, obtains the global optimal position of the entire swarm, and stores it in HDFS as a key-value pair;
- (5) After the Reduce task completes, the algorithm checks whether the maximum number of iterations has been reached. If not, it returns to step (3) and continues the Map tasks until the maximum number of iterations is satisfied.

#### 6.3. Analysis of Optimization Results under Different Weights

A working condition with high NO_{x} emissions is selected from the sample data, so as to generate a set of parameters that can reduce NO_{x} production and improve boiler combustion efficiency. The 10 parameter values that are not adjustable under the operating conditions are shown in Table 5. The corresponding NO_{x} emissions, boiler combustion efficiency, and boiler load of the 18,001-th group are 412 mg/m^{3}, 93.39%, and 349.1 MW, respectively.

In practice, more attention is usually paid to reducing NO_{x} emissions, so the weight of NO_{x} is set greater than the weight of boiler combustion efficiency. To verify the effect of the weight ratio on NO_{x} emissions and boiler combustion efficiency, we analyze ($\alpha =0.9,\beta =0.1$), ($\alpha =0.8,\beta =0.2$), ($\alpha =0.7,\beta =0.3$), ($\alpha =0.6,\beta =0.4$), and ($\alpha =0.5,\beta =0.5$).

Figure 11 shows the optimization curves of NO_{x} emissions and boiler combustion efficiency at different weighting-factor ratios for the boiler combustion optimization model under a load of 349.1 MW.

In Figure 11a, the curves of NO_{x} emissions and boiler combustion efficiency change sharply at the beginning and stabilize after 20 generations, resulting in NO_{x} emissions of 330.42 mg/m^{3} and a boiler combustion efficiency of 93.81%. Compared with the 412 mg/m^{3} of NO_{x} emissions and 93.39% boiler combustion efficiency under the initial conditions, after optimization the NO_{x} emissions are reduced by 19.8% and the boiler combustion efficiency is improved by 0.45%, in line with the purpose of reducing NO_{x} emissions and improving boiler combustion efficiency.

In Figure 11b, NO_{x} emissions and boiler combustion efficiency also change sharply at the beginning and stabilize after about 25 generations. The optimized NO_{x} emissions are 333.29 mg/m^{3} and the boiler combustion efficiency is 93.84%. Compared with Figure 11a, both the NO_{x} emissions and the boiler combustion efficiency increase, but not significantly. Compared with the data under the original conditions, the optimized NO_{x} emissions are reduced by 19.1%, and the optimized boiler combustion efficiency is improved by 0.48%.

The variation of NO_{x} emissions and boiler combustion efficiency in Figure 11c is significantly smaller than in Figure 11a,b. The boiler combustion efficiency stays within [93.84%, 93.87%] and stabilizes after about 35 generations. Finally, the NO_{x} emissions optimization result is 345.56 mg/m^{3} and the boiler combustion efficiency is 93.85%. Compared with the pre-optimization data, the optimized NO_{x} emissions are reduced by 16.1% and the optimized boiler combustion efficiency is improved by 0.49%.

In Figure 11d, the optimized NO_{x} emissions result is 340.6 mg/m^{3} and the boiler combustion efficiency is 93.86%. Compared with the pre-optimization data, the optimized NO_{x} emissions are reduced by 17.3% and the boiler combustion efficiency is improved by 0.5%.

In Figure 11e, the decline in NO_{x} emissions is also significantly smaller than in the previous figures, and the final optimization result is 374.35 mg/m^{3}. This is because the weight ratio of NO_{x} emissions to boiler combustion efficiency is 5:5, so NO_{x} reduction is no longer emphasized. Compared with the pre-optimization data, the optimized NO_{x} emissions are reduced by 9.14% and the boiler combustion efficiency is improved by 0.6%.

In summary, NO_{x} emissions decrease after optimization, and the optimized NO_{x} emissions increase slowly as the weight $\alpha$ decreases. Boiler combustion efficiency increases after optimization and grows as the weight $\beta$ increases.

The decrease of secondary air flow indicates that the oxygen in the furnace decreases, which can reduce the formation of thermal NO_{x} and fuel NO_{x}, and thereby the overall NO_{x} formation. Varying the damper openings to different degrees is conducive to the burnout of incomplete combustion products, which can reduce NO_{x} emissions and prevent furnace coking. The burnout damper is closed before optimization, which is why the pre-optimization NO_{x} emissions are very high and the boiler combustion efficiency is relatively low. The amount of pulverized coal from the coal mills remains essentially unchanged.

## 7. Conclusions

Boiler combustion optimization seeks both low NO_{x} emissions and high boiler combustion efficiency, which involves multi-objective optimization. The multi-objective function of the boiler combustion system is established by a weight-coefficient method, and the multi-objective boiler combustion model is built with the IDELM algorithm, in order to reduce NO_{x} emissions and improve boiler combustion efficiency as much as possible. The particle swarm optimization (PSO) algorithm is given a distributed implementation on the MapReduce framework, and the adjustable input parameters of the boiler combustion model are optimized by MR-PSO, finally obtaining the optimal combination of adjustable input parameters that achieves low NO_{x} emissions and high boiler combustion efficiency. The results reveal the good performance of the proposed method for the optimization of power plant boiler combustion.

## Author Contributions

## Funding

## Conflicts of Interest


**Figure 2.** The effect of the number of hidden layer nodes on (**a**) the running time and (**b**) the speedup of the algorithm.

**Figure 10.**The change curve of the weighted objective function value corresponding to the weight ratio.

**Figure 11.** Optimization results of NO_{x} emissions and boiler combustion efficiency under different weight ratios.

| Model | Parameter | Value |
|---|---|---|
| NO_{x} emission model | Regularization term (A) | 2^{8} |
| | Hidden layer nodes (L) | 1010 |
| Boiler combustion efficiency model | Regularization term (A) | 2^{6} |
| | Hidden layer nodes (L) | 1000 |

| NO_{x} | RMSE | MRE | R^{2} |
|---|---|---|---|
| Training set | 0.0436 | 0.0481 | 0.863 |
| Testing set | 0.0313 | 0.0423 | 0.891 |

| Boiler Combustion Efficiency Data | RMSE | | | MRE | | | R^{2} | | |
|---|---|---|---|---|---|---|---|---|---|
| | Max | Min | Means | Max | Min | Means | Max | Min | Means |
| Training set | 0.0189 | 0.0065 | 0.0152 | 0.0361 | 0.0101 | 0.0278 | 0.9876 | 0.8944 | 0.9256 |
| Test set | 0.0153 | 0.0087 | 0.012 | 0.0227 | 0.0141 | 0.0187 | 0.9766 | 0.9275 | 0.9538 |

| Parameter Limit | SAPB/kPa | SAS/m·s^{−1} | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|
| | SP | A(R) | A(L) | C(R) | C(L) | C | D(R) | D(L) | D | E(R) | E(L) |
| Upper | −0.25 | 97.17 | 81.52 | 97.49 | 83.51 | 175.5 | 104.39 | 100.56 | 202.86 | 104.29 | 73.4 |
| Lower | −0.7 | 67.28 | 58.09 | 45.62 | 31.89 | 77.35 | 13.73 | 20.66 | 36.91 | 22.37 | 16.12 |

| Parameter Limit | SAS/m·s^{−1} | | | | SAT/°C | | SAF/t/h | | | | SAPB/kPa | SAF/t/h |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | E | F(R) | F(L) | F | A | B | R1 | R2 | R3 | R | | L1 |
| Upper | 178 | 128 | 91.7 | 216.8 | 28.1 | 34.4 | 328 | 325.7 | 326 | 326.1 | 1.09 | 341.54 |
| Lower | 39.8 | 42.3 | 55.4 | 99.57 | 22.3 | 22.2 | 288 | 290.9 | 288 | 290.9 | 0.3 | 302.53 |

| Parameter Limit | SAF/t/h | | | SAPB/kPa | CFCP/t·h^{−1} | | | | | | PAS/m·s^{−1} | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | L2 | L3 | L | | A | B | C | D | E | F | A | B |
| Upper | 340 | 339 | 339.2 | 2.06 | 54.95 | 54.63 | 56.79 | 58.69 | 51.91 | 54.12 | 106.6 | 99.73 |
| Lower | 301 | 301 | 301.6 | 0.74 | 20.23 | 31.93 | 0 | 0 | 0 | 0 | 57.62 | 68.9 |

| Parameter Limit | PAS/m·s^{−1} | | | | DOP/% | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | C | D | E | F | R1 | L1 | R2 | L2 | R3 | L3 | R4 | L4 |
| Upper | 100.06 | 104.2 | 102.1 | 92.02 | 50.18 | 50.58 | 50.31 | 55.29 | 99.25 | 98.7 | 100 | 100 |
| Lower | 56.88 | 59.88 | 50.08 | 55.52 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

| Working Condition | Boiler Load/MW | Exhaust Temperature/°C | Coal Quality Parameters | | | Oxygen Content of Flue Gas/% | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | | | Total Moisture/% | Air-Dried Moisture/% | Dry Base Ash/% | Entrance A | Entrance B | Entrance C | Exit A | Exit B |
| 18,001 | 349.1 | 129.58 | 10.6 | 2.71 | 31.93 | 5.35 | 5.45 | 5.54 | 8.37 | 6.99 |

| Weights | Pre or Post | NO_{x} Emission/mg/m^{3} | Boiler Combustion Efficiency/% | SAPB/kPa | SAS/m·s^{−1} | | | | |
|---|---|---|---|---|---|---|---|---|---|
| | | | | SP | A(R) | A(L) | C(R) | C(L) | C |
| | Pre | 412 | 93.39 | −0.47 | 77.92 | 68.51 | 46.43 | 34.01 | 81.34 |
| α = 0.9, β = 0.1 | Post | 330.42 | 93.81 | −0.7 | 67.28 | 58.09 | 73.45 | 31.89 | 163.3 |
| α = 0.8, β = 0.2 | Post | 333.29 | 93.84 | −0.25 | 67.28 | 58.09 | 97.49 | 54.53 | 119.8 |
| α = 0.7, β = 0.3 | Post | 345.56 | 93.85 | −0.25 | 67.28 | 58.09 | 45.62 | 31.89 | 147.6 |
| α = 0.6, β = 0.4 | Post | 340.6 | 93.86 | −0.25 | 67.28 | 58.09 | 97.49 | 83.51 | 175.5 |
| α = 0.5, β = 0.5 | Post | 374.35 | 93.945 | −0.47 | 67.28 | 58.09 | 79.57 | 83.51 | 175.5 |

| Weights | Pre or Post | SAS/m·s^{−1} | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | | D(R) | D(L) | D | E(R) | E(L) | E | F(R) | F(L) | F |
| | Pre | 84.99 | 85.84 | 169.39 | 89.63 | 64.62 | 153.6 | 49.28 | 63.08 | 111.39 |
| α = 0.9, β = 0.1 | Post | 13.73 | 20.66 | 36.91 | 22.37 | 16.12 | 39.78 | 42.29 | 55.4 | 99.57 |
| α = 0.8, β = 0.2 | Post | 13.73 | 20.66 | 36.91 | 22.37 | 16.12 | 39.78 | 42.29 | 55.4 | 99.57 |
| α = 0.7, β = 0.3 | Post | 13.73 | 20.66 | 36.91 | 22.37 | 16.12 | 39.78 | 42.29 | 55.4 | 99.57 |
| α = 0.6, β = 0.4 | Post | 13.73 | 20.66 | 36.91 | 22.37 | 16.12 | 39.78 | 127.78 | 55.4 | 99.57 |
| α = 0.5, β = 0.5 | Post | 13.73 | 100.56 | 36.91 | 22.37 | 16.12 | 39.78 | 42.29 | 55.4 | 99.57 |

| Weights | Pre or Post | SAT/°C | | SAF/t/h | | | | SAPB/kPa | SAF/t/h |
|---|---|---|---|---|---|---|---|---|---|
| | | A | B | R1 | R2 | R3 | R | | L1 |
| | Pre | 23.8 | 24.75 | 308.81 | 308.52 | 307.58 | 308.52 | 0.42 | 317.1 |
| α = 0.9, β = 0.1 | Post | 22.3 | 22.18 | 288.02 | 290.84 | 287.83 | 290.84 | 0.66 | 341.54 |
| α = 0.8, β = 0.2 | Post | 28.1 | 28.789 | 288.02 | 290.84 | 287.83 | 290.84 | 1.09 | 341.54 |
| α = 0.7, β = 0.3 | Post | 28.1 | 22.18 | 288.02 | 290.84 | 287.83 | 290.84 | 0.82 | 341.54 |
| α = 0.6, β = 0.4 | Post | 27.7 | 22.18 | 288.02 | 290.84 | 287.83 | 290.84 | 0.85 | 341.54 |
| α = 0.5, β = 0.5 | Post | 28.1 | 29.87 | 288.02 | 290.84 | 287.83 | 290.84 | 0.79 | 341.54 |

| Weights | Pre or Post | SAF/t/h | | | SAPB/kPa | CFCP/t·h^{−1} | | | |
|---|---|---|---|---|---|---|---|---|---|
| | | L2 | L3 | L | | A | B | C | D |
| | Pre | 315.45 | 314.91 | 315.73 | 0.99 | 42.55 | 49.2 | 0 | 53.96 |
| α = 0.9, β = 0.1 | Post | 339.44 | 338.27 | 339.2 | 1.52 | 20.23 | 51.7 | 56.79 | 20.2 |
| α = 0.8, β = 0.2 | Post | 339.44 | 331.42 | 339.2 | 2.06 | 20.23 | 54.7 | 0 | 18.68 |
| α = 0.7, β = 0.3 | Post | 323.97 | 332.24 | 339.2 | 1.31 | 28.53 | 42 | 56.79 | 0 |
| α = 0.6, β = 0.4 | Post | 339.44 | 301.12 | 339.2 | 2.06 | 20.23 | 53.5 | 0 | 58.69 |
| α = 0.5, β = 0.5 | Post | 333.43 | 338.27 | 339.2 | 0.74 | 54.95 | 54.6 | 56.79 | 18.69 |

| Weights | Pre or Post | CFCP/t·h^{−1} | | PAS/m·s^{−1} | | | | | |
|---|---|---|---|---|---|---|---|---|---|
| | | E | F | A | B | C | D | E | F |
| | Pre | 40.56 | 0 | 87.07 | 90.17 | 60.04 | 95.37 | 79.02 | 60.05 |
| α = 0.9, β = 0.1 | Post | 0 | 38.23 | 106.6 | 68.9 | 100.06 | 104.21 | 50.08 | 80.9 |
| α = 0.8, β = 0.2 | Post | 51.91 | 44.12 | 57.62 | 99.73 | 100.06 | 68.21 | 50.08 | 84.74 |
| α = 0.7, β = 0.3 | Post | 51.91 | 0 | 106.6 | 68.9 | 100.06 | 104.21 | 50.08 | 92.02 |
| α = 0.6, β = 0.4 | Post | 0 | 54.12 | 106.6 | 99.73 | 100.06 | 59.88 | 97.53 | 55.52 |
| α = 0.5, β = 0.5 | Post | 26.9 | 0 | 106.6 | 68.9 | 100.06 | 104.21 | 102.04 | 79.17 |

| Weights | Pre or Post | DOP/% | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|
| | | R1 | L1 | R2 | L2 | R3 | L3 | R4 | L4 |
| | Pre | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| α = 0.9, β = 0.1 | Post | 30.43 | 50.58 | 50.31 | 55.29 | 99.25 | 98.7 | 100 | 100 |
| α = 0.8, β = 0.2 | Post | 50.18 | 50.58 | 50.31 | 55.29 | 99.25 | 98.7 | 0 | 100 |
| α = 0.7, β = 0.3 | Post | 50.18 | 50.58 | 50.31 | 55.29 | 99.25 | 98.7 | 100 | 100 |
| α = 0.6, β = 0.4 | Post | 50.18 | 50.58 | 50.31 | 55.29 | 99.25 | 98.7 | 100 | 100 |
| α = 0.5, β = 0.5 | Post | 50.18 | 0 | 50.31 | 55.29 | 99.25 | 98.7 | 0 | 100 |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Xu, X.; Chen, Q.; Ren, M.; Cheng, L.; Xie, J.
Combustion Optimization for Coal Fired Power Plant Boilers Based on Improved Distributed ELM and Distributed PSO. *Energies* **2019**, *12*, 1036.
https://doi.org/10.3390/en12061036
