Article

Short-Term Wind Power Prediction by an Extreme Learning Machine Based on an Improved Hunter–Prey Optimization Algorithm

1 School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin 300384, China
2 Tianjin Key Laboratory for Control Theory & Application in Complicated Systems, Tianjin 300384, China
* Authors to whom correspondence should be addressed.
Sustainability 2023, 15(2), 991; https://doi.org/10.3390/su15020991
Submission received: 18 November 2022 / Revised: 1 January 2023 / Accepted: 3 January 2023 / Published: 5 January 2023

Abstract
Considering the volatility and randomness of wind speed, this research proposes a short-term wind power prediction model based on an extreme learning machine (ELM) optimized by an improved hunter–prey optimization (IHPO) algorithm, to increase short-term wind power prediction accuracy. The model applies the partial least squares variable importance in projection (PLS-VIP) and normalized mutual information (NMI) methods to the original wind power history data from the wind farm to achieve feature extraction and data dimensionality reduction. Adaptive inertia weights are added to the HPO algorithm's search process to speed up convergence, and the initial population is modified to improve the algorithm's global search ability. The optimal parameters found by the enhanced algorithm are used to set the extreme learning machine's weights and thresholds, yielding accurate wind power prediction. Validation with measured data from a wind turbine in Inner Mongolia, China, demonstrates that the method predicts wind power accurately.

1. Introduction

Wind farms have an increasingly large effect on the power grid as the installed wind power capacity expands. However, because wind power is unstable and intermittent, dispatching wind power is becoming more challenging [1,2]. High-precision power prediction ensures the power system's secure and reliable operation and can significantly reduce the pressure on the system's regulators [3]. The choice and processing of the original data, the design of the forecasting model, and the properties of the prediction objects themselves are the factors that most influence the accuracy of wind power prediction.
Physical and artificial intelligence models are the primary wind power forecasting techniques [4]. In a physical model, information from numerical weather prediction (NWP), such as barometric pressure and temperature, is used to build a conversion model between the NWP output and wind power [5]. This model is limited by the speed of NWP updates and is more commonly used for wind turbine maintenance and commissioning. The most widely used models instead employ artificial intelligence, trained on large amounts of data, to learn the relationship between wind speed and other wind turbine characteristic factors. The feedforward neural network has been widely employed in various disciplines because of its strong learning capability [6]. However, most conventional feedforward neural networks update their weight parameters by iterative gradient descent, which makes the models slow to build and degrades generalization performance [7]. The ELM algorithm was developed to address these issues. Because the weights and thresholds of an ELM are randomly generated before training, the desired accuracy and stability cannot be guaranteed, so the ELM is typically optimized [8]. Swarm intelligence optimization algorithms offer strong parallelism, independent exploration, ease of use, and fast convergence, and they handle many nonlinear optimization problems well [9]. The HPO algorithm strikes a good balance between the exploration and exploitation stages, has good scalability, and is very competitive with most well-known and recent optimization algorithms [10]. Many researchers are interested in improving ELM networks, and most studies optimize the ELM using swarm intelligence optimization algorithms.
A wind power prediction model combining Adaboost and a particle swarm optimization (PSO)-optimized ELM was proposed by An et al. [11]. Ding et al. [12] employed a modified grey wolf optimization (GWO) algorithm to optimize the ELM and build a short-term wind power forecast model, which increased the ELM's generalization capacity. Li et al. [13] optimized the ELM using an improved sparrow search algorithm (SSA); the upgraded model showed improved prediction accuracy on four regression data sets. Historical wind power data often contain a sizable amount of high-dimensional, redundant information [14], so it is crucial to eliminate extraneous feature variables and simplify the prediction model. To reduce the redundancy of the input variables, Wang et al. [15] employed the Pearson correlation coefficient (PCC) to identify the weather variables related to wind power. Li et al. [16] improved the quality of the model input data and reduced the dimension of the wind power data using principal component analysis (PCA). Convolutional neural networks (CNN) were used by Meng et al. [17] to extract essential features from the input data. PLS-VIP is one of the most popular feature selection techniques because it is straightforward and practical, requires little computing power, and has few adjustable parameters [18]. The correlation and redundancy between feature variables can be quantified using normalized mutual information (NMI) [19].
Random reverse learning and adaptive weight strategies are added to enhance the HPO algorithm's performance by speeding up convergence and improving global search performance. A short-term wind power forecast model based on the PLS-VIP index, the NMI coefficient, and IHPO-ELM is then proposed. Specifically, the PLS-VIP index and NMI coefficient are used to extract features from historical wind power data and reduce their dimensionality. The processed historical wind power data serve as the model's input, and the improved hunter–prey optimization algorithm optimizes the initial weights and thresholds of the ELM to increase the generalization capacity of the ELM model. The key contributions of this study, compared to the current literature, can be summarized as follows:
  • The PLS-VIP and NMI methods efficiently reduce the data dimension, preserve the informative features in the data, and lessen the complexity of the prediction model.
  • The IHPO algorithm improves global search performance by adding random reverse learning and an adaptive weight method, as shown on single-peak and multi-peak benchmark functions.
  • Using measured data from a wind farm in Inner Mongolia across several seasons, the results demonstrate that the IHPO-ELM model outperforms previous prediction techniques in forecast accuracy.

2. Selection of Model Input Data

Because the various wind turbine components are closely related and the Supervisory Control and Data Acquisition (SCADA) system's monitored parameters are highly correlated, different characteristic parameters may influence the output variables used to monitor the operating status of the turbine. When only one variable's impact on the results is taken into account, the model's accuracy frequently fails to satisfy practical requirements. Using all variables as input complicates the model and degrades computational efficiency, reducing the model's generalization capability. In this study, the PLS-VIP approach extracts the factors most strongly associated with wind power, and the NMI method is used to eliminate interference from redundant variables.

2.1. PLS-VIP Method for Filtering Feature Variables

The PLS-VIP method is reliable and performs well when reducing the dimensionality of high-dimensional variables and analyzing the canonical correlation between two groups of variables, which can increase the precision and stability of model prediction. The PLS-VIP method can precisely quantify the influence of each variable on the target variable [20]. The basic principles of the PLS method are as follows:
X = TP^{T} + E, \quad Y = UQ^{T} + F \qquad (1)
U = bT + e \qquad (2)
In Equations (1) and (2), the wind turbine generator (WTG) data form an n × m matrix that is fitted by PLS regression; E and F are the residual matrices of X and Y, respectively. X is an n × (m − 1) matrix of the other WTG parameters, while Y is an n × 1 matrix of wind power. T and U are the score matrices of X and Y, respectively (typically, a linear relationship exists between T and U); P and Q are the corresponding loading matrices; b is the regression coefficient of the PLS regression, and e is the corresponding residual.
Wold [21] proposed the variable importance in projection indicator, which constructs a new low-dimensional data space after the PLS decomposition of the data and can thus be used to filter variables. This indicator measures the importance of the ith explanatory variable xi of the wind turbine for the wind power Y. For the ith independent variable, VIPi is defined as:
\mathrm{VIP}_i = \sqrt{ \frac{ m \sum_{h=1}^{k} \omega_{hi}^{2} \, r_h^{2}(Y; t_h) }{ \sum_{h=1}^{k} r_h^{2}(Y; t_h) } } \qquad (3)
In Equation (3), m is the total number of independent variables and k is the number of PLS components; ω_hi is the weight of the ith independent variable on the hth principal component, and r_h(Y; t_h) is the correlation coefficient between the score column t_h of the hth principal component and the dependent variable matrix Y. The higher the VIP value, the stronger the relationship between the variable and the wind power, and the more substantial the effect on wind power.
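The VIP screening of Equation (3) can be sketched in code. The following is a minimal illustration, not the authors' implementation: it uses a basic NIPALS PLS1 decomposition (numpy only) and toy data in which only the first two variables drive the response; all variable names and data are hypothetical.

```python
import numpy as np

def pls_vip(X, y, k=3):
    """Sketch of the PLS-VIP score of Equation (3), built on a
    minimal NIPALS PLS1 decomposition (single response variable)."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    n, m = X.shape
    W = np.zeros((m, k))   # weight vectors omega_h (unit norm)
    T = np.zeros((n, k))   # score vectors t_h
    Xr, yr = X.copy(), y.astype(float).copy()
    for h in range(k):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)
        t = Xr @ w
        p = Xr.T @ t / (t @ t)
        Xr -= np.outer(t, p)                # deflate X
        yr -= ((yr @ t) / (t @ t)) * t      # deflate y
        W[:, h], T[:, h] = w, t
    # r_h^2(Y; t_h): squared correlation between y and each score column
    r2 = np.array([np.corrcoef(y, T[:, h])[0, 1] ** 2 for h in range(k)])
    # VIP_i = sqrt( m * sum_h omega_hi^2 r_h^2 / sum_h r_h^2 )
    return np.sqrt(m * (W ** 2 @ r2) / r2.sum())

# Hypothetical example: variables 0 and 1 drive y, so their VIP should exceed 1
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=200)
vip = pls_vip(X, y)
keep = np.where(vip > 1)[0]   # the VIP > 1 screening rule used later in the paper
```

A useful sanity check is that the VIP scores satisfy sum_i VIP_i^2 = m, so values above 1 mark variables that carry more than an average share of the explained variance.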

2.2. NMI Eliminates Redundant Feature Variables

The NMI coefficient was used to remove redundant feature variables and select the model input variables with the strongest correlation to the output power and the lowest mutual redundancy [22]. In information theory, mutual information measures how dependent two random variables are on one another [23]. The mutual information of two variables is determined by their entropies. Entropy gauges the amount of information carried by a system variable: the higher the uncertainty of an event, the higher its entropy. Assuming that X and Y are two random variables, the mutual information is calculated as follows:
I(X;Y) = H(X) - H(X|Y) = H(Y) - H(Y|X) = H(X) + H(Y) - H(X,Y) \qquad (4)
In Equation (4), I(X; Y) can be used to gauge how closely two variables are correlated. The joint entropy is H(X, Y), and the conditional entropies are H(X|Y) and H(Y|X) [24]. The association between variables X and Y gets closer as the value of I (X; Y) becomes larger.
Normalizing the mutual information to [0, 1] makes it easy to compare variable pairs on a common scale. The most common normalization is as follows:
\mathrm{NMI}(X;Y) = \frac{2\, I(X;Y)}{H(X) + H(Y)} \qquad (5)
In Equation (5), factor X and factor Y are more closely related when the NMI(X; Y) value approaches 1.
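Equations (4) and (5) can be computed directly from entropy estimates. The sketch below, which is an illustration rather than the paper's code, estimates entropies from value counts of discretized series; the example arrays are hypothetical.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy H(X) of a discrete (or discretized) variable, in nats."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def nmi(x, y):
    """Normalized mutual information, Equation (5):
    NMI = 2 I(X;Y) / (H(X) + H(Y)), with I(X;Y) from Equation (4)."""
    # joint entropy H(X, Y) from the histogram of value pairs
    pairs = np.stack([x, y], axis=1)
    _, counts = np.unique(pairs, axis=0, return_counts=True)
    p = counts / counts.sum()
    h_xy = -np.sum(p * np.log(p))
    hx, hy = entropy(x), entropy(y)
    i_xy = hx + hy - h_xy          # Equation (4)
    return 2.0 * i_xy / (hx + hy)

# Hypothetical usage: identical variables give NMI = 1
x = np.array([0, 0, 1, 1, 2, 2])
identical = nmi(x, x)
```

Two identical variables yield NMI = 1, and a variable paired with a constant yields NMI = 0, matching the interpretation of Equation (5).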

2.3. Input Selection for Wind Power Prediction Models

Historical wind power data were gathered via a SCADA system for a wind turbine with a rated capacity of 2.5 MW in Inner Mongolia, China. After the VIP values between all the factors and the output power were calculated using the PLS-VIP metric, the variables with a VIP value greater than 1 were chosen as the initial screening results for the model input variables. The preliminarily screened variables were then processed with normalized mutual information to choose the input data with the strongest association with the output power.

3. Principle of the ELM

The ELM is a fast learning algorithm with good generalization capability for training single-hidden-layer feedforward neural networks [16]. The basic idea is that the input layer weights and thresholds are chosen randomly during training, and the output layer weights are obtained from generalized inverse matrix theory. Once the ELM has finished learning and all the network nodes have weights and thresholds, the network output can be computed from the learned output layer weights to complete the data prediction [25]. For a training sample of N wind power data points (xi, yi), (i = 1, 2, …, N), with xi = [xi1, xi2, …, xin]^T ∈ R^n and yi = [yi1, yi2, …, yim]^T ∈ R^m, an ELM model with L cells in the hidden layer can be expressed as:
\sum_{j=1}^{L} \beta_j \, g(w_j \cdot x_i + b_j) = y_i \qquad (6)
In Equation (6), wj is the input weight, βj the output weight, and bj the threshold of the jth hidden layer neuron; yi indicates the output value, and g(x) is the activation function, a nonlinear piecewise continuous function that fulfills the universal approximation theorem of the ELM.
To achieve the training goal, w, b, and β must satisfy:
\left\| H(w_j, b_j)\,\beta_j - Y \right\| = \min_{w, b, \beta} \left\| H(w_j, b_j)\,\beta_j - Y \right\| \qquad (7)
In Equation (7), H is the output matrix of the hidden layer, and Y is the target output matrix. As a result, training an ELM can be translated into solving a linear system, and its core is finding the least-squares solution to the equation Hβ = Y [26].
Unlike traditional neural networks [27], where the parameters are adjusted repeatedly over multiple iterations, the ELM solution is derived directly from generalized inverse matrices. Therefore, the initial weights w and thresholds b significantly impact the prediction accuracy of the ELM, and choosing appropriate values for w and b is particularly essential [28].
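The one-shot training scheme of Equations (6) and (7) can be sketched compactly. The following is a minimal illustration, not the paper's implementation: the [-5, 5] initialization range, the toy signal, and the class name are assumptions.

```python
import numpy as np

class ELM:
    """Minimal ELM sketch: random input weights w and thresholds b,
    output weights beta from the least-squares solution of H beta = Y
    (Equation (7)). The [-5, 5] initialization range is an assumption."""
    def __init__(self, n_hidden=30, seed=0):
        self.L, self.rng = n_hidden, np.random.default_rng(seed)

    def _hidden(self, X):
        # sigmoid activation g(x) applied to the hidden-layer inputs
        return 1.0 / (1.0 + np.exp(-(X @ self.w.T + self.b)))

    def fit(self, X, y):
        self.w = self.rng.uniform(-5, 5, (self.L, X.shape[1]))  # input weights
        self.b = self.rng.uniform(-5, 5, self.L)                # thresholds
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y   # Moore-Penrose pseudo-inverse
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Hypothetical usage on a toy one-dimensional signal
X = np.linspace(0, 1, 200)[:, None]
y = np.sin(2 * np.pi * X).ravel()
pred = ELM(n_hidden=30, seed=0).fit(X, y).predict(X)
```

Note that no iterative weight updates occur: one matrix pseudo-inverse replaces the gradient descent loop of a conventional feedforward network, which is the speed advantage the section describes.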

4. IHPO-ELM Prediction Model

4.1. Principle of the Hunter–Prey Optimization Algorithm

Naruei et al. [10] proposed the hunter–prey optimization (HPO) algorithm in 2022. The HPO algorithm simulates animal hunting behavior and has the advantages of fast convergence and a powerful search capability for finding the best possible solution. Specifically, the algorithm is described by two mechanisms: the predator's search mechanism, modeled in Equation (8), and the update of the prey position, modeled in Equation (9):
x_{i,j}(t+1) = x_{i,j}(t) + 0.5 \left[ \left( 2 C Z P_{\mathrm{pos}}(j) - x_{i,j}(t) \right) + \left( 2 (1 - C) Z \mu(j) - x_{i,j}(t) \right) \right] \qquad (8)
In Equation (8), Ppos is the location of the prey, x(t) is the present predator location, x(t + 1) is the predator's location in the subsequent iteration, μ is the mean of all positions, and Z is an adaptive parameter.
x_{i,j}(t+1) = T_{\mathrm{pos}}(j) + C Z \cos(2 \pi R_4) \left[ T_{\mathrm{pos}}(j) - x_{i,j}(t) \right] \qquad (9)
In Equation (9), Tpos is the optimal global position, and x(t) and x(t + 1) are the prey’s current and following iteration positions, respectively; Z is the algorithm’s adaptive factor, and R4 is a random value between −1 and 1. The value of C, which combines exploitation and exploration, decreases as the process converges.
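The two update rules translate almost directly into code. The sketch below is an illustration of Equations (8) and (9) only, with Z simplified to a scalar; it is not the full HPO loop.

```python
import numpy as np

def hunter_update(x, prey_pos, mu, C, Z):
    """Predator search step, Equation (8): move toward the prey position
    and the population mean, blended by the exploration parameter C."""
    return x + 0.5 * ((2 * C * Z * prey_pos - x) + (2 * (1 - C) * Z * mu - x))

def prey_update(x, T_pos, C, Z, R4):
    """Prey step, Equation (9): oscillate around the global best T_pos;
    R4 is a random number in [-1, 1]."""
    return T_pos + C * Z * np.cos(2 * np.pi * R4) * (T_pos - x)

# Limiting cases that follow from the equations: with C = 1 and Z = 1 the
# hunter lands exactly on the prey, and with C = 0 the prey collapses onto
# the global best position
x = np.array([1.0, 2.0, 3.0])
p = np.array([4.0, 5.0, 6.0])
mu = np.array([0.0, 0.0, 0.0])
on_prey = hunter_update(x, p, mu, 1.0, 1.0)
```

These limiting cases show how C mediates between exploration (following the population mean) and exploitation (closing in on the best position) as it decreases over the iterations.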

4.2. Improved Hunter–Prey Optimization (IHPO) Algorithm

4.2.1. Stochastic Reverse Learning

The starting population of the HPO algorithm is generated randomly, which frequently causes the population to settle into a local optimum. Therefore, this paper initializes the population using a stochastic reverse-learning technique, which increases the diversity of the population in the target region and boosts the algorithm's global search performance. The stochastic reverse-learning approach generates a random reverse solution from the current solution during the population search, compares the objective function values of the two solutions, and keeps the better one for the next iteration. Its calculation formula is as follows:
X_{\mathrm{rand}} = LB + UB - r \cdot X \qquad (10)
In Equation (10), Xrand represents the random reverse solution, r is a random number between 0 and 1, X is the present solution, and X ∈ [LB, UB].
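The initialization described above can be sketched as follows. This is an illustrative implementation under assumptions: reverse points are clipped back into the bounds, and the sphere objective in the usage example is hypothetical.

```python
import numpy as np

def reverse_learning_init(n_pop, dim, lb, ub, fitness, rng):
    """Stochastic reverse-learning initialization, Equation (10):
    for each random individual X, form X_rand = LB + UB - r * X and keep
    whichever member of the pair has the better (lower) fitness."""
    X = rng.uniform(lb, ub, (n_pop, dim))
    r = rng.uniform(0, 1, (n_pop, dim))
    X_rev = np.clip(lb + ub - r * X, lb, ub)   # keep reverse points in bounds
    f_x = np.apply_along_axis(fitness, 1, X)
    f_rev = np.apply_along_axis(fitness, 1, X_rev)
    return np.where((f_rev < f_x)[:, None], X_rev, X)

# Hypothetical usage with a sphere objective on [-100, 100]^5
rng = np.random.default_rng(0)
pop = reverse_learning_init(30, 5, -100.0, 100.0, lambda x: float(np.sum(x ** 2)), rng)
```

By construction, the resulting population is never worse than a plain random draw, since each individual is replaced only when its reverse solution improves the objective.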

4.2.2. Adaptive Inertia Weights

Inertia weights are an essential class of control parameters in intelligent optimization algorithms, with both linear and nonlinear adjustment techniques used to boost convergence [29]. Because nonlinear relationships are abundant in practical optimization problems, nonlinear strategies are more widely applicable; they give the algorithm a good balance between global and local search in the prey iteration stage and accelerate convergence to the optimal solution. In this paper, a nonlinear decreasing inertia weight based on a concave function is applied to the HPO algorithm, represented as follows:
\omega = \omega_{\min} + (\omega_{\max} - \omega_{\min}) \frac{1}{1 + c \, t / Iter_{\max}} \qquad (11)
In Equation (11), ωmin and ωmax are the weight adjustment bounds, c is an adjustment parameter, t is the current iteration, and Itermax is the total number of iterations. Introducing the inertia weight into Equation (9) yields Equation (12), which improves the prey's fit with the iterative process and hastens the algorithm's convergence toward the best solution.
x_{i,j}(t+1) = \omega \cdot T_{\mathrm{pos}}(j) + C Z \cos(2 \pi R_4) \left[ T_{\mathrm{pos}}(j) - x_{i,j}(t) \right] \qquad (12)
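The weight schedule and the weighted prey step can be sketched as below. The specific values of ωmin, ωmax, and c are assumed tuning choices, not values from the paper.

```python
import numpy as np

def inertia_weight(t, iter_max, w_min=0.4, w_max=0.9, c=10.0):
    """Nonlinear decreasing inertia weight in the spirit of Equation (11);
    w_min, w_max, and c are assumed tuning values."""
    return w_min + (w_max - w_min) / (1.0 + c * t / iter_max)

def weighted_prey_update(x, T_pos, C, Z, R4, t, iter_max):
    """Prey step of Equation (12): Equation (9) with T_pos scaled by omega."""
    w = inertia_weight(t, iter_max)
    return w * T_pos + C * Z * np.cos(2 * np.pi * R4) * (T_pos - x)

# The weight starts at w_max and decays smoothly toward w_min, so early
# iterations explore widely while late iterations refine around the best
ws = [inertia_weight(t, 100) for t in range(101)]
step = weighted_prey_update(np.array([1.0]), np.array([0.0]), 0.0, 1.0, 0.3, 0, 100)
```

The concave shape keeps the weight high for a comparatively long initial phase before falling off, which is what gives the prey stage its global/local balance.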

4.3. Construction of the IHPO-ELM Prediction Model

The initial weights and thresholds are crucial to the extreme learning machine's ability to forecast accurately. In this study, we use the PLS-VIP and NMI methods to process the historical wind power data and optimize the ELM weights and thresholds with the improved hunter–prey optimization algorithm, to build a wind power prediction model with higher accuracy.
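The overall IHPO search can be sketched as a compact loop combining the pieces from Sections 4.1 and 4.2. This is a simplified illustration, not the paper's algorithm: the prey position is reduced to the global best, Z to a random scalar, and the 0.3 hunter probability and parameter values are assumptions.

```python
import numpy as np

def ihpo(fitness, dim, lb, ub, n_pop=30, iter_max=200, seed=0):
    """Compact IHPO sketch: random-reverse initialization (Equation (10)),
    hunter steps (Equation (8)), and weighted prey steps (Equation (12))."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_pop, dim))
    X_rev = np.clip(lb + ub - rng.uniform(0, 1, (n_pop, dim)) * X, lb, ub)
    f = np.apply_along_axis(fitness, 1, X)
    f_rev = np.apply_along_axis(fitness, 1, X_rev)
    X = np.where((f_rev < f)[:, None], X_rev, X)   # keep the better of each pair
    f = np.minimum(f, f_rev)
    i_best = int(np.argmin(f))
    best, f_best = X[i_best].copy(), f[i_best]
    for t in range(iter_max):
        C = 1.0 - 0.98 * t / iter_max                 # exploration parameter
        w = 0.4 + 0.5 / (1.0 + 10.0 * t / iter_max)   # inertia weight sketch
        mu = X.mean(axis=0)
        for i in range(n_pop):
            Z = rng.uniform(0, 1)
            if rng.uniform() < 0.3:   # hunter (search) step, Equation (8)
                X[i] += 0.5 * ((2 * C * Z * best - X[i])
                               + (2 * (1 - C) * Z * mu - X[i]))
            else:                     # weighted prey step, Equation (12)
                R4 = rng.uniform(-1, 1)
                X[i] = w * best + C * Z * np.cos(2 * np.pi * R4) * (best - X[i])
            X[i] = np.clip(X[i], lb, ub)
            fi = fitness(X[i])
            if fi < f_best:
                best, f_best = X[i].copy(), fi
    return best, f_best

# Hypothetical usage on a sphere objective
best, val = ihpo(lambda x: float(np.sum(x ** 2)), dim=5, lb=-10.0, ub=10.0)
```

In the paper's setting, the fitness function would instead evaluate an ELM trained with the candidate weight/threshold vector, so each individual encodes one (w, b) configuration.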

5. Simulation Analysis

5.1. IHPO Algorithm Performance Verification

In this study, six benchmark functions are chosen to evaluate the IHPO algorithm's performance; Table 1 lists their essential details. HPO, GWO, northern goshawk optimization (NGO), the whale optimization algorithm (WOA), and IHPO were compared on the same benchmark functions. Each experiment was run independently 30 times; Table 2 displays the results. The significance of the differences between each algorithm and IHPO was examined using the Wilcoxon signed-rank test; Table 3 shows the resulting p-values.
Table 2 demonstrates that, compared to other algorithms, the IHPO method exhibits superior precision in the six benchmark test functions. IHPO and HPO achieve the best solution in terms of the best convergence value for functions F4 and F6, outperforming other techniques. Table 3 shows that all the p-values are less than 0.05, demonstrating that IHPO’s statistical characteristics are significantly superior to those of HPO, GWO, NGO, and WOA.

5.2. The Training IHPO-ELM Model’s Network Performance

The ELM can achieve its learning objectives by adjusting the number of hidden layer nodes [30]. Table 4 depicts the association between the training set's coefficient of determination and the number of IHPO-ELM hidden layer nodes. Over the range of 8–35 hidden layer nodes, the coefficient of determination is closest to 1 with 30 nodes. Experience indicates that choosing a sigmoidal activation function improves generalization performance [31]. Therefore, the number of nodes in the hidden layer of the ELM is set at 30.
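The node sweep behind Table 4 can be reproduced in outline. The following is a sketch on hypothetical toy data, not the paper's data or code; the [-5, 5] weight range is an assumption.

```python
import numpy as np

def elm_train_r2(X, y, L, seed=0):
    """Train a one-shot ELM with L hidden sigmoid nodes and return the
    training coefficient of determination R^2."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(-5, 5, (L, X.shape[1]))
    b = rng.uniform(-5, 5, L)
    H = 1.0 / (1.0 + np.exp(-(X @ w.T + b)))
    pred = H @ (np.linalg.pinv(H) @ y)   # least-squares output weights
    return 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

# Sweep 8-35 hidden nodes on toy data and pick the best-scoring count,
# mirroring how Table 4 compares node counts by R^2
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (300, 3))
y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1]
scores = {L: elm_train_r2(X, y, L) for L in range(8, 36)}
best_L = max(scores, key=scores.get)
```

Training R^2 alone favors larger networks, so in practice the sweep would be paired with a validation split to guard against overfitting.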

5.3. Model Performance Evaluation

5.3.1. Input Data Preprocessing

To generate short-term projections of seasonal wind power scenarios, this study uses data from a wind farm in Inner Mongolia. During actual operation, practically every wind farm's records contain outliers, due to faults in measurement and data transmission. In this study, wind turbine code and density clustering are used to handle abnormal wind power data.

5.3.2. Feature Extraction

The VIP values of wind direction, wind speed, and other characteristic parameters on the wind turbine output power were calculated using the PLS method described in Section 2.1 of this paper. Figure 1 illustrates how each input’s characteristic parameter impacts wind power.
In Figure 1, the y-coordinate represents the VIP value of each characteristic variable on the output power of the wind turbine, and the x-coordinate represents the 117 characteristic variables gathered for this wind farm. Variables with VIP values of less than 1 are viewed as irrelevant variables and are removed to preserve as much helpful information in the original data as possible. Forty-seven characteristic variables, including wind speed and power factor, are among the chosen parameters. The typical redundant parameters of the model input were removed using the NMI method described in Section 2.2 of this paper, to choose the input variables with the strongest and most reasonable correlation with the output power. The NMI coefficient diagram is displayed below.
The closer the NMI coefficient is to 1, the more strongly the two variables are related. Based on the coefficients between the feature variables shown in Figure 2, the filtered model input feature variables are listed in Table 5.

5.3.3. Forecast Accuracy Assessment Indicators

To assess the performance of the prediction model used in this research, the mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE) are used as evaluation indicators, defined in Equation (13). The lower the indicator values, the higher the prediction model's accuracy.
\mathrm{MAE} = \frac{1}{n} \sum_{t=1}^{n} \left| actual(t) - forecast(t) \right|
\mathrm{RMSE} = \sqrt{ \frac{1}{n} \sum_{t=1}^{n} \left( actual(t) - forecast(t) \right)^2 }
\mathrm{MAPE} = \frac{1}{n} \sum_{t=1}^{n} \left| \frac{actual(t) - forecast(t)}{actual(t)} \right| \times 100\% \qquad (13)
In Equation (13), in the test data set, n is the number of samples that were analyzed, and actual(t) and forecast(t) show the wind power value for the tth actual sample and the tth forecast sample, respectively.
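The three indicators of Equation (13) are straightforward to implement; the check values in the usage example are hypothetical.

```python
import numpy as np

def mae(actual, forecast):
    """Mean absolute error, Equation (13)."""
    return np.mean(np.abs(actual - forecast))

def rmse(actual, forecast):
    """Root mean square error, Equation (13)."""
    return np.sqrt(np.mean((actual - forecast) ** 2))

def mape(actual, forecast):
    """Mean absolute percentage error, Equation (13).
    Undefined where actual == 0, so zero-power samples should be masked."""
    return np.mean(np.abs((actual - forecast) / actual)) * 100.0

# Hypothetical check on a three-sample series
actual = np.array([100.0, 200.0, 300.0])
forecast = np.array([110.0, 190.0, 330.0])
```

The MAPE caveat matters for wind power specifically, since calm periods produce near-zero actual values that would otherwise blow up the percentage error.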

5.3.4. Analysis of Experimental Results

To verify the prediction performance of the proposed model, wind turbine scenarios in spring, summer, autumn, and winter were selected for analysis, and three artificial intelligence algorithms, EEMD-Tent-SSA-LS-SVM [32], ELM, and HPO-ELM, were introduced for comparison. The prediction results are shown in Figure 3, and the prediction error indicators are shown in Table 6.
As seen in Figure 3, the predicted and actual values of the four approaches generally follow the same trend. The predicted values of the IHPO-ELM model are closer to the real values at the extremes, where the wind power changes significantly, indicating that the IHPO-ELM network predicts wind power better under extreme conditions and has strong robustness and adaptability. Table 6 shows the evaluation indicators of the four models: the IHPO-ELM model has significantly smaller prediction errors and higher accuracy than the other models. To further confirm the model's predictive capability, the prediction cycle is lengthened; the prediction outcomes and error indicators follow.
The output power of the wind turbines shows a large peak–valley difference across the four seasons, while the forecast trends of the models differ little (Figure 4). After the prediction period is increased, the IHPO-ELM values remain closer to the true values. As seen in Table 7, the evaluation index values of the proposed model are smaller than those of the other models. Overall, the proposed prediction model generalizes better across different wind power fluctuations.

6. Conclusions and Prospects

This paper proposes an improved hunter–prey optimization algorithm to optimize the ELM for short-term wind power prediction and thereby increase prediction accuracy. Through example validation, the following findings are reached:
  • Including stochastic reverse learning and nonlinear adaptive weights boosts the global search efficiency and convergence speed of the HPO algorithm. According to the simulation findings, the IHPO method offers higher optimization accuracy.
  • After proposing a method for extracting the characteristic variables using PLS-VIP and NMI, this research uses the IHPO-ELM model for prediction. The simulation results show that the proposed prediction model has better prediction accuracy.
  • There is still room for improvement in wind power prediction accuracy.

Author Contributions

Writing—original draft, X.W.; Supervision, J.L.; Project administration, L.S., H.L. and L.R.; Funding acquisition, L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 52277016.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The simulation data are unavailable due to data privacy restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. An, J.; Yin, F.; Wu, M.; She, J.; Chen, X. Multisource Wind Speed Fusion Method for Short-Term Wind Power Prediction. IEEE Trans. Ind. Inform. 2021, 17, 5927–5937.
  2. Liu, Y.; Zheng, J.; Chen, Q.; Duan, Z.; Tian, Y.; Ban, M.; Li, Z. MMC-STATCOM supplementary wide-band damping control to mitigate subsynchronous control interaction in wind farms. Int. J. Electr. Power Energy Syst. 2022, 141, 108171.
  3. Zhang, J.; Wei, Y.; Tan, Z. An adaptive hybrid model for short term wind speed forecasting. Energy 2020, 190, 115615.
  4. Son, N.; Yang, S.; Na, J. Hybrid Forecasting Model for Short-Term Wind Power Prediction Using Modified Long Short-Term Memory. Energies 2019, 12, 3901.
  5. He, B.; Ye, L.; Pei, M.; Lu, P.; Dai, B.; Li, Z.; Wang, K. A combined model for short-term wind power forecasting based on the analysis of numerical weather prediction data. Energy Rep. 2022, 8, 929–939.
  6. Ahmad, I.; Basheri, M.; Iqbal, M.J.; Rahim, A. Performance Comparison of Support Vector Machine, Random Forest, and Extreme Learning Machine for Intrusion Detection. IEEE Access 2018, 6, 33789–33795.
  7. Wang, W.; Zhang, B.; Chai, S.; Xia, Y. An Extreme Learning Machine-Based Community Detection Algorithm in Complex Networks. Complexity 2018, 2018, 10.
  8. Welper, G. Universality of gradient descent neural network training. Neural Netw. 2022, 150, 259–273.
  9. Slowik, A.; Kwasnicka, H. Nature Inspired Methods and Their Industry Applications—Swarm Intelligence Algorithms. IEEE Trans. Ind. Inform. 2018, 14, 1004–1015.
  10. Naruei, I.; Keynia, F.; Sabbagh Molahosseini, A. Hunter–prey optimization: Algorithm and applications. Soft Comput. 2022, 26, 1279–1314.
  11. An, G.; Jiang, Z.; Cao, X.; Liang, Y.; Zhao, Y.; Li, Z.; Dong, W.; Sun, H. Short-Term Wind Power Prediction Based On Particle Swarm Optimization-Extreme Learning Machine Model Combined With Adaboost Algorithm. IEEE Access 2021, 9, 94040–94052.
  12. Ding, J.; Chen, G.; Yuan, K. Short-Term Wind Power Prediction Based on Improved Grey Wolf Optimization Algorithm for Extreme Learning Machine. Processes 2020, 8, 109.
  13. Li, J.; Wu, Y. Improved Sparrow Search Algorithm with the Extreme Learning Machine and Its Application for Prediction. Neural Process. Lett. 2022, 54, 4189–4209.
  14. Zhang, Y.; Zhang, C.; Zhao, Y.; Gao, S. Wind speed prediction with RBF neural network based on PCA and ICA. J. Electr. Eng. 2018, 69, 148–155.
  15. Wang, Y.; Wang, J.; Cao, M.; Li, W.; Yuan, L.; Wang, N. Prediction method of wind farm power generation capacity based on feature clustering and correlation analysis. Electr. Power Syst. Res. 2022, 212, 108634.
  16. Li, H.; Zou, H. Short-Term Wind Power Prediction Based on Data Reconstruction and Improved Extreme Learning Machine. Arab. J. Sci. Eng. 2022, 47, 3669–3682.
  17. Meng, Y.; Chang, C.; Huo, J.; Zhang, Y.; Al-Neshmi, H.M.M.; Xu, J.; Xia, T. Research on Ultra-Short-Term Prediction Model of Wind Power Based on Attention Mechanism and CNN-BiGRU Combined. Front. Energy Res. 2022, 10, 920835.
  18. Ndisya, J.; Gitau, A.; Mbuge, D.; Arefi, A.; Bădulescu, L.; Pawelzik, E.; Hensel, O.; Sturm, B. Vis-NIR Hyperspectral Imaging for Online Quality Evaluation during Food Processing: A Case Study of Hot Air Drying of Purple-Speckled Cocoyam (Colocasia esculenta (L.) Schott). Processes 2021, 9, 1804.
  19. Thejas, G.S.; Joshi, S.R.; Iyengar, S.S.; Sunitha, N.R.; Badrinath, P. Mini-Batch Normalized Mutual Information: A Hybrid Feature Selection Method. IEEE Access 2019, 7, 116885.
  20. Afanador, N.L.; Tran, T.N.; Buydens, L.M.C. An assessment of the jackknife and bootstrap procedures on uncertainty estimation in the variable importance in the projection metric. Chemom. Intell. Lab. Syst. 2014, 137, 162–172.
  21. Wold, H. Soft Modelling by Latent Variables: The Non-Linear Iterative Partial Least Squares (NIPALS) Approach. J. Appl. Probab. 1975, 12, 117–142.
  22. Wang, X.; Zhou, Y. Multi-Label Feature Selection with Conditional Mutual Information. Comput. Intell. Neurosci. 2022, 2022, 9243893.
  23. Tang, C.; Yang, X.; Lv, J. Zero-shot learning by mutual information estimation and maximization. Knowl. Based Syst. 2020, 194, 105490.
  24. Han, Y.; Ordentlich, O.; Shayevitz, O. Mutual Information Bounds via Adjacency Events. IEEE Trans. Inf. Theory 2016, 62, 6068–6080.
  25. Liu, T.; Fan, Q.; Kang, Q.; Niu, L. Extreme Learning Machine Based on Firefly Adaptive Flower Pollination Algorithm Optimization. Processes 2020, 8, 1583.
  26. Sun, Z.; Zhao, S.; Zhang, J. Short-Term Wind Power Forecasting on Multiple Scales Using VMD Decomposition, K-Means Clustering and LSTM Principal Computing. IEEE Access 2019, 7, 166917–166929.
  27. Gamal, H.; Elkatatny, S. Prediction Model Based on an Artificial Neural Network for Rock Porosity. Arab. J. Sci. Eng. 2022, 47, 11211–11221.
  28. Li, L.; Liu, Z.; Tseng, M.; Jantarakolica, K.; Lim, M.K. Using enhanced crow search algorithm optimization-extreme learning machine model to forecast short-term wind power. Expert Syst. Appl. 2021, 184, 115579.
  29. Zdiri, S.; Chrouta, J.; Zaafouri, A. An Expanded Heterogeneous Particle Swarm Optimization Based on Adaptive Inertia Weight. Math. Probl. Eng. 2021, 2021, 24.
  30. Yang, L.; Fang, X.; Wang, X.; Li, S.; Zhu, J. Risk Prediction of Coal and Gas Outburst in Deep Coal Mines Based on the SAPSO-ELM Algorithm. Int. J. Environ. Res. Public Health 2022, 19, 12382.
  31. Liu, W.; Wang, Z.; Yuan, Y.; Zeng, N.; Hone, K.; Liu, X. A Novel Sigmoid-Function-Based Adaptive Weighted Particle Swarm Optimizer. IEEE Trans. Cybern. 2021, 51, 1085–1093.
  32. Li, Z.; Luo, X.; Liu, M.; Cao, X.; Du, S.; Sun, H. Wind power prediction based on EEMD-Tent-SSA-LS-SVM. Energy Rep. 2022, 8, 3234–3243.
Figure 1. VIP values for each variable.
Figure 2. A plot of the NMI correlation coefficients.
Figure 3. Scenario of four seasons: (a) Scenario in spring; (b) scenario in summer; (c) scenario in autumn; (d) scenario in winter (1 day).
Figure 4. Scenario of four seasons: (a) Scenario in spring; (b) scenario in summer; (c) scenario in autumn; (d) scenario in winter (1 week).
Table 1. The six reference functions.

Function | Range | Optimum
$F_1 = \sum_{i=1}^{n}\left(\sum_{j=1}^{i} x_j\right)^2$ | [−100, 100] | 0
$F_2 = \max\{|x_i|,\ 1 \le i \le n\}$ | [−100, 100] | 0
$F_3 = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$ | [−1.28, 1.28] | 0
$F_4 = \sum_{i=1}^{n}\left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$ | [−5.12, 5.12] | 0
$F_5 = -20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e$ | [−32, 32] | 0
$F_6 = \tfrac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\tfrac{x_i}{\sqrt{i}}\right) + 1$ | [−600, 600] | 0
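For readers reimplementing the comparison, the six benchmark functions in Table 1 can be written out directly. The sketch below is a minimal NumPy version; the function names f1–f6 mirror the table but are our own, and F3's additive noise term makes that function stochastic by design.

```python
import numpy as np

def f1(x):
    # F1: Schwefel 1.2 -- sum of squared cumulative sums
    return np.sum(np.cumsum(x) ** 2)

def f2(x):
    # F2: largest absolute coordinate
    return np.max(np.abs(x))

def f3(x):
    # F3: index-weighted quartic plus uniform noise on [0, 1)
    n = len(x)
    return np.sum(np.arange(1, n + 1) * x ** 4) + np.random.uniform(0.0, 1.0)

def f4(x):
    # F4: Rastrigin
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def f5(x):
    # F5: Ackley
    n = len(x)
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def f6(x):
    # F6: Griewank
    i = np.arange(1, len(x) + 1)
    return np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1
```

All six attain their optimum of 0 at the origin (F3 up to its noise term), which is a quick sanity check before running any optimizer against them.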
Table 2. The convergence value of the algorithm.

Function | Algorithm | Optimal Convergence Value | Worst Convergence Value | Average Convergence Value
F1 | IHPO | 1.2182 × 10−26 | 2.4647 × 10−21 | 1.9875 × 10−22
F1 | HPO | 5.6303 × 10−15 | 2.7493 × 10−9 | 1.5267 × 10−10
F1 | GWO | 980.7121 | 5733.5445 | 3040.2407
F1 | NGO | 16.4937 | 346.4933 | 80.0268
F1 | WOA | 54,467.3047 | 164,031.2511 | 97,579.3826
F2 | IHPO | 9.3678 × 10−14 | 9.0612 × 10−12 | 1.1818 × 10−12
F2 | HPO | 2.4916 × 10−9 | 5.4666 × 10−6 | 6.837 × 10−7
F2 | GWO | 2.8225 | 14.6876 | 8.9671
F2 | NGO | 0.011698 | 0.026445 | 0.018307
F2 | WOA | 11.3475 | 91.2522 | 68.499
F3 | IHPO | 9.6073 × 10−5 | 0.0044818 | 0.0016135
F3 | HPO | 0.00071259 | 0.039474 | 0.0075724
F3 | GWO | 0.028365 | 0.10382 | 0.066081
F3 | NGO | 0.0016744 | 0.019488 | 0.0075643
F3 | WOA | 0.0014469 | 0.015519 | 0.0067311
F4 | IHPO | 0 | 0 | 0
F4 | HPO | 0 | 0.00028381 | 9.4613 × 10−6
F4 | GWO | 38.4681 | 101.5996 | 59.9697
F4 | NGO | 0.00014318 | 0.34931 | 0.047396
F4 | WOA | 2.3541 × 10−6 | 2.3541 × 10−6 | 27.324
F5 | IHPO | 4.4409 × 10−15 | 3.4861 × 10−12 | 5.0502 × 10−13
F5 | HPO | 3.3507 × 10−10 | 4.7186 × 10−7 | 4.5896 × 10−8
F5 | GWO | 1.9998 | 3.8124 | 3.2196
F5 | NGO | 0.00050121 | 0.0037818 | 0.0018254
F5 | WOA | 0.0005012 | 0.0037818 | 0.0018254
F6 | IHPO | 0 | 0 | 0
F6 | HPO | 0 | 8.5953 × 10−6 | 2.8651 × 10−7
F6 | GWO | 1.0633 | 1.6272 | 1.2007
F6 | NGO | 3.913 × 10−5 | 0.0041577 | 0.00032095
F6 | WOA | 3.913 × 10−5 | 0.0041577 | 0.00032095
Table 3. Wilcoxon signed-rank test (α = 0.05).

No. | Comparison | p | Significant
1 | IHPO vs. HPO | 7.5552 × 10−10 | Yes
2 | IHPO vs. GWO | 7.5569 × 10−10 | Yes
3 | IHPO vs. NGO | 7.5570 × 10−10 | Yes
4 | IHPO vs. WOA | 7.5484 × 10−10 | Yes
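A comparison like Table 3 can be reproduced with SciPy's paired signed-rank test. The sketch below uses placeholder per-run fitness values (the paper's raw run data are not reproduced here); the variable names and run count of 30 are our assumptions.

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder per-run best fitness values for two optimizers on one
# benchmark function; real values would come from the experiments
# summarized in Table 2 (30 independent runs per algorithm).
rng = np.random.default_rng(0)
ihpo_runs = rng.uniform(1e-26, 1e-21, size=30)
hpo_runs = rng.uniform(1e-15, 1e-9, size=30)

# Two-sided Wilcoxon signed-rank test on the paired differences.
stat, p = wilcoxon(ihpo_runs, hpo_runs)
significant = p < 0.05  # alpha = 0.05, as in Table 3
```

The test is non-parametric and paired, which suits per-function run comparisons where fitness values span many orders of magnitude and normality cannot be assumed.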
Table 4. Relationship between the node number and coefficient of determination of the IHPO-ELM training set.

Node Number | Coefficient of Determination
8 | 0.98924
9 | 0.98609
10 | 0.98943
11 | 0.98799
12 | 0.98892
13 | 0.9885
14 | 0.99028
15 | 0.98877
16 | 0.98954
17 | 0.99054
18 | 0.99162
19 | 0.99095
20 | 0.99227
21 | 0.99253
22 | 0.99222
23 | 0.99231
24 | 0.99273
25 | 0.9929
26 | 0.99298
27 | 0.99285
28 | 0.99309
29 | 0.99304
30 | 0.99334
31 | 0.993
32 | 0.99326
33 | 0.99315
34 | 0.99314
35 | 0.99305
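The node-count sweep behind Table 4 can be sketched with a bare-bones ELM: random input weights and biases, a sigmoid hidden layer, and output weights solved by Moore–Penrose pseudo-inverse. Everything below is illustrative, not the paper's implementation: the toy data stand in for the wind-farm training set, and the function names are ours.

```python
import numpy as np

def train_elm(X, y, n_hidden, rng):
    # Random input weights/biases; output weights by pseudo-inverse.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

def r2(y, yhat):
    # Coefficient of determination, as in Table 4.
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

# Toy regression data standing in for the wind-farm training set.
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

# Sweep hidden-node counts 8..35, matching the range in Table 4.
scores = {n: r2(y, predict_elm(X, *train_elm(X, y, n, rng)))
          for n in range(8, 36)}
best_n = max(scores, key=scores.get)
```

A sweep like this is how a node count is typically chosen before handing the weights and thresholds over to the optimizer; in Table 4 the training R² plateaus around 30 nodes.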
Table 5. The model input characteristic variables.

Number | Characteristic Variable
1 | Power factor
2 | Wind speed
3 | Acceleration in the X-direction
4 | Full voltage of variable-pitch ultracapacitors 1, 2, 3
5 | Sub-low voltage of variable-pitch ultracapacitors 1, 3
6 | Sub-high voltage of variable-pitch ultracapacitors 1, 2
7 | Variable pitch angle 1
8 | Variable-pitch 24 V brake voltage 1, 2, 3
9 | Grid-side inverter temperature of the converter
10 | Temperature of inverter #1 on the converter side
Table 6. Evaluation indicators for each model (1 day).

Scene | Model | MAE (kW) | RMSE (kW) | MAPE (%)
Spring | ELM | 166.75 | 210.65 | 0.08
Spring | HPO-ELM | 82.57 | 111.44 | 0.03
Spring | EEMD-Tent-SSA-LS-SVM | 43.97 | 62.83 | 0.02
Spring | IHPO-ELM | 38.36 | 50.55 | 0.02
Summer | ELM | 172.1 | 251.17 | 0.14
Summer | HPO-ELM | 121.74 | 173.35 | 0.09
Summer | EEMD-Tent-SSA-LS-SVM | 105.67 | 154 | 0.08
Summer | IHPO-ELM | 78.53 | 110.2 | 0.06
Autumn | ELM | 380.4 | 432.11 | 5.76
Autumn | HPO-ELM | 132.99 | 172.96 | 1.46
Autumn | EEMD-Tent-SSA-LS-SVM | 123.94 | 176.87 | 1.3
Autumn | IHPO-ELM | 109.7 | 171.6 | 0.68
Winter | ELM | 157.16 | 201.04 | 0.19
Winter | HPO-ELM | 93.55 | 131.76 | 0.11
Winter | EEMD-Tent-SSA-LS-SVM | 91.1 | 129.22 | 0.11
Winter | IHPO-ELM | 87.31 | 121.35 | 0.08
Table 7. Evaluation indicators for each model (1 week).

Scene | Model | MAE (kW) | RMSE (kW) | MAPE (%)
Spring | ELM | 159.81 | 203.11 | 0.43
Spring | HPO-ELM | 53.58 | 72.1 | 0.23
Spring | EEMD-Tent-SSA-LS-SVM | 47.84 | 70.33 | 0.19
Spring | IHPO-ELM | 35.81 | 47.04 | 0.14
Summer | ELM | 138.37 | 183.79 | 0.58
Summer | HPO-ELM | 80.6 | 112.34 | 0.36
Summer | EEMD-Tent-SSA-LS-SVM | 73.79 | 105.47 | 0.31
Summer | IHPO-ELM | 62.77 | 93.38 | 0.23
Autumn | ELM | 195.01 | 265.34 | 0.67
Autumn | HPO-ELM | 133.52 | 181.39 | 0.45
Autumn | EEMD-Tent-SSA-LS-SVM | 125.2 | 186.82 | 0.44
Autumn | IHPO-ELM | 111.65 | 159.16 | 0.39
Winter | ELM | 121.54 | 161.09 | 0.1
Winter | HPO-ELM | 79.07 | 105.8 | 0.07
Winter | EEMD-Tent-SSA-LS-SVM | 70.04 | 95.4 | 0.06
Winter | IHPO-ELM | 63.17 | 86.01 | 0.05
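The MAE, RMSE, and MAPE columns of Tables 6 and 7 follow the standard error-metric definitions. A minimal sketch, with illustrative sample arrays rather than the paper's data (MAPE is reported in percent, matching the tables, and assumes the measured power is nonzero at every point):

```python
import numpy as np

def mae(y, yhat):
    # Mean absolute error, in the units of y (kW here).
    return np.mean(np.abs(y - yhat))

def rmse(y, yhat):
    # Root mean squared error; penalizes large deviations more than MAE.
    return np.sqrt(np.mean((y - yhat) ** 2))

def mape(y, yhat):
    # Mean absolute percentage error; undefined where y == 0.
    return 100 * np.mean(np.abs((y - yhat) / y))

# Illustrative measured vs. predicted power (kW), not the paper's data.
y = np.array([1000.0, 1200.0, 900.0])
yhat = np.array([990.0, 1210.0, 915.0])
```

Because RMSE squares the residuals, it is always at least as large as MAE on the same series, which is why the RMSE column dominates the MAE column in every row of Tables 6 and 7.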
Wang, X.; Li, J.; Shao, L.; Liu, H.; Ren, L.; Zhu, L. Short-Term Wind Power Prediction by an Extreme Learning Machine Based on an Improved Hunter–Prey Optimization Algorithm. Sustainability 2023, 15, 991. https://doi.org/10.3390/su15020991