Article

A Fuzzy Seasonal Long Short-Term Memory Network for Wind Power Forecasting

Chin-Wen Liao, I-Chi Wang, Kuo-Ping Lin and Yu-Ju Lin

1 Department of Industrial Education and Technology, National Changhua University of Education, Changhua 50007, Taiwan
2 Department of Industrial Engineering and Enterprise Information, Tunghai University, Taichung 40704, Taiwan
3 Faculty of Finance and Banking, Ton Duc Thang University, Ho Chi Minh City 758307, Vietnam
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(11), 1178; https://doi.org/10.3390/math9111178
Submission received: 14 April 2021 / Revised: 19 May 2021 / Accepted: 21 May 2021 / Published: 23 May 2021
(This article belongs to the Special Issue Fuzzy Applications in Industrial Engineering)

Abstract

To protect the environment and achieve the Sustainable Development Goals (SDGs), governments worldwide have actively promoted the reduction of greenhouse gas emissions. Clean energy, such as wind power, has therefore become a very important topic for these governments. However, accurately forecasting wind power output is not a straightforward task. The present study develops a fuzzy seasonal long short-term memory network (FSLSTM) that combines a fuzzy decomposition method with the long short-term memory network (LSTM) to forecast a monthly wind power output dataset. LSTM technology has been successfully applied to forecasting problems, especially time series problems. This study first incorporates the fuzzy seasonal index into the fuzzy LSTM model, which effectively extends traditional LSTM technology. The FSLSTM, LSTM, autoregressive integrated moving average (ARIMA), generalized regression neural network (GRNN), back propagation neural network (BPNN), least squares support vector regression (LSSVR), and seasonal autoregressive integrated moving average (SARIMA) models are then used to forecast monthly wind power output datasets in Taiwan. The empirical results indicate that the FSLSTM achieves better forecasting accuracy than the other methods. Therefore, the FSLSTM can efficiently provide credible prediction values for Taiwan’s wind power output datasets.

1. Introduction

Wind power generation, which converts large-scale air flow into electricity by using wind to drive wind turbines, is increasingly replacing conventional power generation. In 2000, to protect the environment, the Taiwanese government began actively promoting the use of clean energy to reduce the greenhouse gas emissions generated by traditional power generation methods such as thermal power generation. The Taiwanese government’s expectations for wind power generation are very high. The government has developed an offshore wind power facility, the main goal of which is to generate enough electricity so that renewable energy can replace nuclear power generation. The vision of the Taiwanese government is to build a strong supporting industry by manufacturing the necessary wind turbine components, towers, and underwater cables for coastal engineering; by building underwater foundation piles; and by installing generators several miles offshore. According to statistics from the International Energy Agency, offshore wind power currently accounts for only 0.3% of global power generation, but experts have noted that wind power generation is expected to grow rapidly in the next 20 years, representing a business opportunity of up to one trillion US dollars. Therefore, the accuracy of wind power forecasting is a very important issue that could help governments engage in effective policy planning. In recent years, many studies have investigated wind speed and power forecasting and adopted various prediction models to improve wind power generation forecasting. Liu et al. [1,2] used the Takagi–Sugeno fuzzy model to predict wind speed and power. Yu et al. [3] developed hybrid models that combine the wavelet transform (WT) with the support vector machine (SVM), gated recurrent unit network (GRU), standard recurrent neural network (RNN), and LSTM models for wind-speed forecasting. The WT decomposes the original wind time series into several subseries with better behavior and greater predictability. The results indicated that the hybrid WT-RNN-SVM, WT-LSTM-SVM, and WT-GRU-SVM models obtained the best performance. Zjavka and Mišák [4] noted that wind output power forecasting entails chaotic large-scale patterns and is highly correlated with atmospheric circulation processes. The authors adopted the polynomial decomposition of the general differential equation, which represents the elementary Laplace transformations of a searched function, to predict daily wind power. The results showed that their method obtains lower errors owing to the decomposition method. Liu et al. [5] combined wavelet packet decomposition with a convolutional neural network and a convolutional long short-term memory network to forecast one-day wind speed. For the one-day wind speed time series, the model was able to obtain robust and effective performance. Toubeau et al. [6] adopted the LSTM to efficiently capture the complex temporal dynamics needed for wind power prediction.
When an RNN is applied to long-term dependence, the processing unit continues to add and accumulate previously memorized information, causing the gradients to explode or vanish and eventually causing the network to fail. Recurrent neural networks are therefore weak in terms of learning long-term dependence [7]. To improve on this shortcoming of recurrent neural networks, Hochreiter and Schmidhuber [8] proposed the long short-term memory (LSTM) network in 1997. The long short-term memory model was developed on the basis of recurrent neural networks and is of a cyclical type. LSTM is a neural network architecture that adds a forget gate, an input gate, and an output gate to the processing unit in the hidden layer, the purpose of which is to read more past information. The model first determines whether the information is useful and then decides whether to add or delete it, which increases the ability of the neural network to remain reliable over a long period.
Comparing the recurrent neural network and the long short-term memory model, the recurrent neural network only receives the information calculated in the previous pass, while the long short-term memory model receives not only the information calculated in the last iteration but also all past messages. The long short-term memory model thus retains the advantages of the recurrent neural network while also being able to handle long-term dependencies. Therefore, the LSTM model can solve many tasks that the recurrent neural network could not solve in the past [7]. The development of long short-term memory models has thus far aided various processing tasks, and they are widely used in fields such as speech recognition [9,10], handwriting recognition [11], and prediction [12,13,14]. Table 1 summarizes LSTM models used for prediction in the related literature since 2015. Tian and Pan [15] used LSTM to predict car traffic time series, dividing the prediction time interval into four types and comparing the prediction accuracy of five models. The study indicated that the prediction accuracy of LSTM was the best, showing that LSTM has high predictive and generalization ability and outperforms the RNN. Liu et al. [16] explored the prediction accuracy of LSTM on neonatal brainwave maps with different numbers of neurons. To verify the feasibility of LSTM, the study also compared LSTM with the RNN; the results showed that LSTM is better than the RNN. Janardhanan and Barrett [17] used LSTM to predict the CPU usage time series of machines in Google’s data center. Across multiple machines, the mean absolute percentage error (MAPE) of the LSTM model was 17–23%, whereas that of the ARIMA model was 37–42%. Siami-Namini et al. [18] adopted LSTM to predict six financial indexes and six economic indexes. In the average financial and economic forecast results, the error rates of LSTM forecasting were 87% and 84% lower than those of ARIMA, respectively. Phyo et al. [19] used LSTM and the deep belief network (DBN) to predict the power load in Thailand and compared the prediction accuracy of the two models; the results showed that LSTM offers better predictive ability. Fan et al. [20] developed an integrated method that combines the ARIMA model with the LSTM model. Shahid et al. [21] developed a novel genetic long short-term memory (GLSTM) method, which improved wind power predictions by 6% to 30% compared to existing techniques. Zhang et al. [22] developed a convolutional neural network model based on a deep factorization machine and attention mechanism (FA-CNN). The results indicated that FA-CNN obtained better performance than the traditional LSTM.
In this study, the prediction model adopts three LSTMs with fuzzy seasonal indexes to estimate the lower-bound, mode, and upper-bound prediction values of the fuzzy set. This is a novel prediction model for wind power output forecasting. The rest of this paper is organized as follows. Section 2 introduces the fuzzy seasonal LSTM (FSLSTM) in detail, including fuzzy seasonal decomposition and fuzzy LSTM technology. Section 3 presents the experimental results of the FSLSTM for wind power output prediction. Section 4 discusses the managerial implications, and, finally, Section 5 draws conclusions and makes suggestions for future research.

2. Fuzzy Seasonal LSTM for Wind Power Output

In this study, the wind power output dataset is examined. The dataset is divided into training, validation, and testing subsets. Firstly, the fuzzy seasonal index is calculated by seasonal trend decomposition. This method defines the fuzzy seasonal membership function of the time series, after which the fuzzy trend dataset can be estimated using the multiplicative model. The LSTM is employed to predict the fuzzy trend datasets for the upper-bound, lower-bound, and mode values. Based on the fuzzy LSTM and the fuzzy seasonal index, the final forecasting report can be obtained using the accuracy measures. A flowchart of the fuzzy seasonal LSTM for wind power output is shown in Figure 1.

2.1. Fuzzy Seasonal Decomposition

Chang [23] proposed the fuzzy seasonality index $S_k^*$, which is defined as possessing a triangular membership function, from the seasonality index set and determined $S_k^*$ as follows:
$$S_k^* = \left(s_k^L,\ s_k^M,\ s_k^U\right) = \left(\min\left(s_{k+(T-W+1)\times m},\ s_{k+(T-W+2)\times m},\ \ldots,\ s_{k+T\times m}\right),\ s_k,\ \max\left(s_{k+(T-W+1)\times m},\ s_{k+(T-W+2)\times m},\ \ldots,\ s_{k+T\times m}\right)\right), \quad k = 1, \ldots, m$$
where $s_k^L$, $s_k^M$, and $s_k^U$ are the W-period lower bound, the W-period smoothing operator (1 ≤ W ≤ T), and the W-period upper bound, respectively.
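To make the construction concrete, the following minimal NumPy sketch (not taken from the paper) builds a triangular fuzzy seasonality index from raw seasonal ratios. The way the raw ratios and the mode $s_k^M$ are estimated here (dividing each observation by its yearly average and averaging across periods) is an illustrative assumption; the paper only specifies the min/mode/max construction over the last W periods.

```python
import numpy as np

def fuzzy_seasonal_index(series, m=12, W=2):
    """Triangular fuzzy seasonality index (s_L, s_M, s_U) for each season k = 1..m.

    series : 1-D array with T*m observations (monthly data -> m = 12).
    W      : number of most recent periods used for the min/max spread (1 <= W <= T).
    The raw seasonal ratios are estimated by dividing each observation by its
    yearly average, and the mode is their average over all periods; both steps
    are illustrative assumptions, not the paper's exact smoothing operator.
    """
    series = np.asarray(series, dtype=float)
    T = len(series) // m                        # number of complete periods (years)
    x = series[:T * m].reshape(T, m)            # rows = periods t, columns = seasons k
    ratios = x / x.mean(axis=1, keepdims=True)  # raw seasonal ratios s_{k + t*m}

    s_mode = ratios.mean(axis=0)                # s_k^M (assumed smoothing: plain average)
    recent = ratios[T - W:, :]                  # ratios of the last W periods only
    s_low = recent.min(axis=0)                  # s_k^L
    s_up = recent.max(axis=0)                   # s_k^U
    return s_low, s_mode, s_up

# Example: three years of synthetic monthly data with a winter peak.
rng = np.random.default_rng(0)
demo = (1.0 + 0.5 * np.cos(2 * np.pi * np.arange(36) / 12)) * 100 + rng.normal(0, 5, 36)
sL, sM, sU = fuzzy_seasonal_index(demo, m=12, W=2)
print(np.round(sL, 3), np.round(sM, 3), np.round(sU, 3))
```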
In time series problems, reducing seasonality before making predictions is very important. The fuzzy seasonality index has been shown to effectively obtain good performance in time series predictions. Therefore, this study proposes a novel fuzzy seasonal LSTM that uses the fuzzy seasonality index and a decomposition method to solve the seasonal time series problem. The multiplicative model is employed for the time series problem. Moreover, the IFLR with a spread-unrestricted model is combined with symmetrical triangular FNs for forecasting and can obtain an accurately estimated value using Equation (3). Therefore, in the proposed model, a multiplicative model is used to obtain FNs based on the fuzzy seasonality index, as follows:
$$\tilde{F}_{k+(T+v)} = \left(f_{k+(T+v)}^{LTr} \times s_k^L \times \varepsilon,\ \ f_{k+(T+v)}^{MTr} \times s_k^M \times \varepsilon,\ \ f_{k+(T+v)}^{UTr} \times s_k^U \times \varepsilon\right)$$
where $\tilde{F}_{k+(T+v)}$ represents the fuzzy seasonal LSTM forecast value, $f_{k+(T+v)}^{LTr}$ is the lower-bound estimated value of the trend, $f_{k+(T+v)}^{MTr}$ is the mode estimated value of the trend, $f_{k+(T+v)}^{UTr}$ is the upper-bound estimated value of the trend, and $\varepsilon$ is the model noise. The proposed fuzzy seasonal LSTM model can effectively use fuzzy seasonal decomposition to reduce seasonal effects in time series problems.
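The multiplicative recomposition itself is a single element-wise operation. The sketch below is a hypothetical illustration of that step (function and variable names are mine, not the paper's); the noise term ε defaults to 1 here, which is an assumption.

```python
import numpy as np

def recompose_fuzzy_forecast(trend_L, trend_M, trend_U, s_L, s_M, s_U,
                             season_of_step, eps=1.0):
    """Multiplicative recomposition of the fuzzy seasonal forecast.

    trend_L/M/U    : de-seasonalized trend forecasts from the three LSTMs.
    s_L/M/U        : fuzzy seasonality index per season (length-m arrays).
    season_of_step : 0-based season index k for each forecast step.
    eps            : model noise term, set to 1.0 by default (assumption).
    Returns the triangular fuzzy forecast (F_L, F_M, F_U) for each step.
    """
    k = np.asarray(season_of_step)
    F_L = np.asarray(trend_L) * np.asarray(s_L)[k] * eps
    F_M = np.asarray(trend_M) * np.asarray(s_M)[k] * eps
    F_U = np.asarray(trend_U) * np.asarray(s_U)[k] * eps
    return F_L, F_M, F_U
```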

2.2. Fuzzy Seasonal LSTM Model

In the fuzzy seasonal LSTM model, the fuzzy trend lower-bound, upper-bound, and mode values must be trained. Therefore, the fuzzy LSTM method constructed with the lower-bound input $\{(x_i, f_i^{LTr}),\ i = 1, 2, \ldots, N\}$, the mode input $\{(x_i, f_i^{MTr}),\ i = 1, 2, \ldots, N\}$, and the upper-bound input $\{(x_i, f_i^{UTr}),\ i = 1, 2, \ldots, N\}$ can be represented as follows:
$$f^{LTr}(x_i) = Y_i^L = o_{Li} \tanh(c_{Li})$$
$$f^{MTr}(x_i) = Y_i^M = o_{Mi} \tanh(c_{Mi})$$
$$f^{UTr}(x_i) = Y_i^U = o_{Ui} \tanh(c_{Ui})$$
where the fuzzy long-term state $\tilde{c}_i = (c_{Li}, c_{Mi}, c_{Ui})$ and the output gate $\tilde{o}_i = (o_{Li}, o_{Mi}, o_{Ui})$ can be estimated.
A fully connected fuzzy LSTM unit contains four layers, as in the traditional LSTM, and the fuzzy input vector $x_i$ and the previous fuzzy short-term memory $\tilde{h}_{i-1}$ are imported into these four layers (Figure 2). $\tilde{g}_i$ is the main layer of the fuzzy LSTM and uses the tanh activation function, and the fuzzy output data are stored in the fuzzy long-term memory $\tilde{c}_i$. The other three layers use logistic activation functions, and their outputs range from 0 to 1. $\tilde{f}_i$ is the fuzzy forget gate that controls which parts of the long-term memory should be deleted. $\tilde{I}_i$ is the fuzzy input gate that determines which parts of the fuzzy input should be added. $\tilde{o}_i$ is the gate that controls which parts of the fuzzy long-term memory should be read and output at this time step, with $f_i^{LTr} = h_i^L$, $f_i^{MTr} = h_i^M$, and $f_i^{UTr} = h_i^U$.
The operation can be written as follows:
Fuzzy input gate:
$$\tilde{I}_i = \left(\sigma\!\left(W_{xLI}^T x_{Li} + W_{hLI}^T h_{L(i-1)} + b_{LI}\right),\ \sigma\!\left(W_{xMI}^T x_{Mi} + W_{hMI}^T h_{M(i-1)} + b_{MI}\right),\ \sigma\!\left(W_{xUI}^T x_{Ui} + W_{hUI}^T h_{U(i-1)} + b_{UI}\right)\right)$$
Fuzzy forget gate:
$$\tilde{f}_i = \left(\sigma\!\left(W_{xLf}^T x_{Li} + W_{hLf}^T h_{L(i-1)} + b_{Lf}\right),\ \sigma\!\left(W_{xMf}^T x_{Mi} + W_{hMf}^T h_{M(i-1)} + b_{Mf}\right),\ \sigma\!\left(W_{xUf}^T x_{Ui} + W_{hUf}^T h_{U(i-1)} + b_{Uf}\right)\right)$$
Output gate:
$$\tilde{o}_i = \left(\sigma\!\left(W_{xLo}^T x_{Li} + W_{hLo}^T h_{L(i-1)} + b_{Lo}\right),\ \sigma\!\left(W_{xMo}^T x_{Mi} + W_{hMo}^T h_{M(i-1)} + b_{Mo}\right),\ \sigma\!\left(W_{xUo}^T x_{Ui} + W_{hUo}^T h_{U(i-1)} + b_{Uo}\right)\right)$$
Neuron input and cell input:
$$\tilde{g}_i = \left(\tanh\!\left(W_{xLg}^T x_{Li} + W_{hLg}^T h_{L(i-1)} + b_{Lg}\right),\ \tanh\!\left(W_{xMg}^T x_{Mi} + W_{hMg}^T h_{M(i-1)} + b_{Mg}\right),\ \tanh\!\left(W_{xUg}^T x_{Ui} + W_{hUg}^T h_{U(i-1)} + b_{Ug}\right)\right)$$
where $\tilde{W}_{xI} = (W_{xLI}^T, W_{xMI}^T, W_{xUI}^T)$, $\tilde{W}_{xf} = (W_{xLf}^T, W_{xMf}^T, W_{xUf}^T)$, $\tilde{W}_{xo} = (W_{xLo}^T, W_{xMo}^T, W_{xUo}^T)$, and $\tilde{W}_{xg} = (W_{xLg}^T, W_{xMg}^T, W_{xUg}^T)$ are the fuzzy weight matrices of each of the four layers for their connections with the fuzzy input vector, and the corresponding fuzzy weight matrices $\tilde{W}_{hI}$, $\tilde{W}_{hf}$, $\tilde{W}_{ho}$, and $\tilde{W}_{hg}$ are connected to the fuzzy short-term state $\tilde{h}_{i-1}$. $\tilde{b}_I = (b_{LI}, b_{MI}, b_{UI})$, $\tilde{b}_f = (b_{Lf}, b_{Mf}, b_{Uf})$, $\tilde{b}_o = (b_{Lo}, b_{Mo}, b_{Uo})$, and $\tilde{b}_g = (b_{Lg}, b_{Mg}, b_{Ug})$ are the bias terms of each of the four layers; tanh is the hyperbolic tangent function $(e^{x} - e^{-x})/(e^{x} + e^{-x})$, and $\sigma$ is the sigmoid function $1/(1 + e^{-x})$.
Finally, the long-term and short-term states are calculated as follows:
Fuzzy long-term state:
$$\tilde{c}_i = \tilde{f}_i \times \tilde{c}_{i-1} + \tilde{I}_i \times \tilde{g}_i$$
Fuzzy short-term state:
$$\tilde{Y}_i = \tilde{h}_i = \tilde{o}_i \times \tanh(\tilde{c}_i)$$
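Read this way, the fuzzy LSTM is the standard LSTM cell recursion applied three times in parallel, once per component (lower bound, mode, upper bound). The NumPy sketch below illustrates a single time step under that reading; the weight shapes and dictionary layout are placeholders of my own, not the paper's trained parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, c_prev, W, b):
    """One LSTM cell step for a single fuzzy component (L, M, or U).

    W : dict of weight matrices W['xI'], W['hI'], W['xf'], W['hf'],
        W['xo'], W['ho'], W['xg'], W['hg'] (placeholder shapes).
    b : dict of bias vectors b['I'], b['f'], b['o'], b['g'].
    Implements the gate equations of Section 2.2 for one component.
    """
    i_t = sigmoid(W['xI'] @ x_t + W['hI'] @ h_prev + b['I'])   # input gate
    f_t = sigmoid(W['xf'] @ x_t + W['hf'] @ h_prev + b['f'])   # forget gate
    o_t = sigmoid(W['xo'] @ x_t + W['ho'] @ h_prev + b['o'])   # output gate
    g_t = np.tanh(W['xg'] @ x_t + W['hg'] @ h_prev + b['g'])   # cell input
    c_t = f_t * c_prev + i_t * g_t                             # long-term state
    h_t = o_t * np.tanh(c_t)                                   # short-term state / output
    return h_t, c_t

def fuzzy_lstm_step(x_fuzzy, h_fuzzy, c_fuzzy, params):
    """Apply the cell step to the (L, M, U) components in parallel."""
    out = [lstm_cell_step(x, h, c, p['W'], p['b'])
           for x, h, c, p in zip(x_fuzzy, h_fuzzy, c_fuzzy, params)]
    h_new = tuple(h for h, _ in out)
    c_new = tuple(c for _, c in out)
    return h_new, c_new
```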
Moreover, the FSLSTM adopts the adaptive moment estimation (Adam) optimization algorithm of Kingma and Ba [24], a stochastic optimization method, to search for the proper parameters of the FSLSTM. The Adam optimization algorithm has been empirically demonstrated to converge in line with the theoretical analysis, so the proposed FSLSTM can achieve robust performance when trained with it. The maximum number of epochs, initial learning rate, gradient threshold, learning-rate drop period, and learning-rate drop factor of the FSLSTM are 250, 0.005, 1, 125, and 0.2, respectively.
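The paper does not state its implementation framework, so the following Keras sketch is only an assumed mapping of the reported hyperparameters: the hidden size and input window are hypothetical, and the gradient threshold of 1 is interpreted here as gradient-norm clipping.

```python
import tensorflow as tf

def step_decay(epoch, lr):
    # Drop the initial rate of 0.005 by a factor of 0.2 every 125 epochs,
    # mirroring the reported learning-rate drop period and factor.
    return 0.005 * (0.2 ** (epoch // 125))

def build_trend_lstm(timesteps=12, hidden_units=32):
    # One such network is trained per fuzzy component (lower, mode, upper).
    # hidden_units and timesteps are assumed values, not from the paper.
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(hidden_units, input_shape=(timesteps, 1)),
        tf.keras.layers.Dense(1),
    ])
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.005, clipnorm=1.0)
    model.compile(optimizer=optimizer, loss="mse")
    return model

# Typical training call (X_* shaped as [samples, timesteps, 1]):
# model = build_trend_lstm()
# model.fit(X_train, y_train, epochs=250, validation_data=(X_val, y_val),
#           callbacks=[tf.keras.callbacks.LearningRateScheduler(step_decay)])
```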

3. A Wind Power Output Example and Empirical Results

Energy conservation and carbon reduction are very important management issues for the global power industry. To demonstrate its concern regarding global warming and to comply with the government’s Sustainable Energy Guidelines, the Taiwanese government is actively promoting the use of clean energy. The Taipower company has built 17 wind energy power stations in Taiwan that record monthly data on the total power output. All experimental data can be downloaded from the National Development Council in Taiwan (https://data.gov.tw (accessed on 21 July 2020)). In this study, we selected three wind energy power stations: the Shimen, Taichung, and Mailiao wind power plants. Figure 3 and Table 2 depict the monthly generated output power (units: kilowatt-hours) from these wind power stations during the period from January 2017 to June 2020 (42 observations in total). The monthly data were divided into three sets: firstly, a training set was employed to determine the optimum forecasting model during the period from January 2017 to December 2018 (24 samples); secondly, a validation set was employed to prevent the overfitting of the different models during the period from January 2019 to December 2019 (12 samples); finally, a testing set was employed to investigate the performance of the different models during the period from January 2020 to June 2020 (6 samples). The percentages of the training, validation, and testing sets were thus 57%, 29%, and 14%, respectively.
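The chronological split described above is simple to reproduce; the sketch below assumes the 42 monthly observations are already loaded in order from January 2017 to June 2020.

```python
import numpy as np

def split_monthly_series(series, n_train=24, n_val=12, n_test=6):
    """Chronological 57%/29%/14% split of the 42 monthly observations."""
    series = np.asarray(series, dtype=float)
    assert len(series) == n_train + n_val + n_test
    train = series[:n_train]                # 2017/01-2018/12, model fitting
    val = series[n_train:n_train + n_val]   # 2019/01-2019/12, overfitting check
    test = series[n_train + n_val:]         # 2020/01-2020/06, out-of-sample evaluation
    return train, val, test
```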
Figure 3 clearly indicates that the measured time series exhibit seasonality and three different cycle patterns for the three wind power plants. The Mailiao wind power plant generates more power because it features the largest number of wind-driven generators in Taiwan. Moreover, a larger power output is obtained during winter in Taiwan as a result of the northeast monsoon. Table 3 depicts the fuzzy seasonality index, with k ranging from 1 to 12, for the selected wind power stations. In addition, the mean absolute percentage error, MAPE(%), was used to measure the forecasting accuracy. Equation (12) illustrates the expression of MAPE(%):
$$\mathrm{MAPE}(\%) = \frac{100}{M} \sum_{i=1}^{M} \left|\frac{A_i - P_i}{A_i}\right|$$
where $M$ is the number of forecasting periods, $A_i$ is the actual production value at period $i$, and $P_i$ is the forecasted production value at period $i$. Moreover, the RMSE is employed to evaluate the training error of the FSLSTM, which can be expressed as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{M} \sum_{i=1}^{M} \left(A_i - P_i\right)^2}$$
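Both accuracy measures are straightforward to implement; the NumPy snippet below follows the two definitions above and checks the MAPE(%) of the FSLSTM-L forecasts against the value reported for the Shimen plant in Table 5.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def rmse(actual, predicted):
    """Root mean squared error."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((actual - predicted) ** 2))

# Shimen 2020 test period (Table 5): actual values vs. FSLSTM-L forecasts.
actual = [482289, 398281, 345802, 531022, 192987, 195061]
fslstm_l = [458776, 564286, 345594, 426518, 297130, 343187]
print(round(mape(actual, fslstm_l), 2))  # about 32.7, matching the reported 32.69%
```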
Figure 4 shows the training error of the FSLSTM at the three wind power stations when adopting the Adam algorithm. We can observe that the FSLSTM obtains a low RMSE training error (smaller than 0.2) at the three wind power stations.
In this study, the FSLSTM, LSTM [8], ARIMA(1, 0, 0) [25], generalized regression neural network (GRNN) [26], back propagation neural network (BPNN) [27], least squares support vector regression (LSSVR) [28], and seasonal autoregressive integrated moving average (SARIMA(1, 0, 0)(1, 0, 0)12) [25] models were used to forecast the monthly wind power output datasets of the selected stations in Taiwan. The construction of the LSTM is similar to that of the FSLSTM (see Section 2.2). The LSTM network also adopted the Adam optimization algorithm to search for optimal parameters. The ARIMA model is similar to the SARIMA model (see Appendix B), differing only in the seasonal parameters. The construction and parameter (σ) of the GRNN are shown in Appendix A; the parameter (σ) of the GRNN was set to 1. In this study, a well-known intelligent computing machine, the BPNN, was also adopted as a comparison prediction model. In the BPNN, the input layer has one input neuron to receive the input patterns, the hidden layer has ten neurons to propagate the intermediate signals, and the output layer has one neuron. For the BPNN training settings, the hyperbolic tangent sigmoid function is employed as the activation function in the hidden layer, a linear (pure-line) transfer function is employed as the activation function in the output layer, and gradient-based training is adopted as the learning algorithm. The LSSVR is a popular prediction model for time series problems. For the main constructs of the LSSVR, readers can refer to [28]; the regularization parameter in the experiment was set to 1. The radial basis function (RBF) kernel was employed in the LSSVR, and the parameter (σ) of the RBF was set to 0.01. Table 4 depicts the training errors of the various prediction models. The proposed FSLSTM, LSTM, and GRNN approaches obtained lower training errors, which means that the training models of these three approaches achieved better performance.
Table 5 illustrates the actual values and the experimental results of the FSLSTM model with the mode (M), upper-bound (U), and lower-bound (L) values from January 2020 to June 2020 for the Shimen wind power plant. Figure 5a makes a point-to-point comparison of the actual values and the predicted values of the FSLSTM. As shown in Figure 5, the peak power output occurred in April 2020, which was not easily observed in the training dataset. Figure 5b shows a comparison of the actual values and the predicted values of the ARIMA, SARIMA, GRNN, BPNN, LSTM, and FSLSTM-L models for the Shimen wind power plant.
Table 5 also shows the MAPE(%) obtained by the various models. The ranking of MAPE(%) is as follows: GRNN < FSLSTM-L < FSLSTM-M < LSTM < ARIMA < SARIMA < FSLSTM-U < BPNN < LSSVR. Table 5 indicates that the GRNN obtained the smallest MAPE(%), showing the best performance. However, Figure 5 shows that the predicted values of the GRNN could not capture the trend of power output at the Shimen wind power plant. The FSLSTM-L model was able to efficiently capture the trends of the data by using the fuzzy seasonal index, although the MAPE(%) of the FSLSTM-L was higher than that of the GRNN in this example. Thus, the proposed FSLSTM model is suggested to serve as a prediction model of power output for the Shimen wind power plant.
Table 6 illustrates the actual values and experimental results of the proposed model for the Taichung wind power plant. Figure 6a makes a point-to-point comparison between the actual values and the predicted values of the FSLSTM at the Taichung wind power plant. Downward trends of power output can be observed in Figure 6. Figure 6b illustrates a comparison between the actual values and the predicted values of the ARIMA, SARIMA, GRNN, BPNN, LSTM, and FSLSTM-M models for the Taichung wind power plant.
Table 6 also shows the MAPE(%) obtained by the various models for the Taichung wind power plant. The ranking of MAPE(%) is FSLSTM-M < SARIMA < FSLSTM-U < GRNN < LSTM < LSSVR < FSLSTM-L < BPNN < ARIMA. Table 6 indicates that the FSLSTM-M obtained the smallest MAPE(%), which means that FSLSTM-M achieved the best performance in this example. Moreover, Figure 6 shows that the predicted values of the FSLSTM-M were able to capture the trend of power output at the Taichung wind power plant. The two seasonal models, FSLSTM-M and SARIMA, obtained better performance than the other models, possibly because the power output at the Taichung wind power plant has a seasonal influence. Thus, the proposed FSLSTM model is also suggested to serve as a prediction model for power output at the Taichung wind power plant.
Table 7 illustrates the actual values and experimental results of the proposed model for the Mailiao wind power plant. Figure 7a also makes a point-to-point comparison of the actual values and the predicted values of the FSLSTM at the Mailiao wind power plant. As with the Taichung wind power plant, downward trends of power output can be observed in Figure 7 for the Mailiao wind power plant. Because the Mailiao wind power plant has a larger wind power generation capacity than the other power plants, Mailiao outputs more power in kWh. Figure 7b shows a comparison between the actual values and the predicted values of the ARIMA, SARIMA, GRNN, BPNN, LSTM, and FSLSTM-M models at the Mailiao wind power plant.
Table 7 also shows the MAPE(%) obtained using the various models for the Mailiao wind power plant. The ranking of MAPE(%) is FSLSTM-M < SARIMA < FSLSTM-U < GRNN < LSTM < LSSVR < FSLSTM-L < BPNN < ARIMA. Table 7 indicates that the FSLSTM-M obtained the smallest MAPE(%), which means that the FSLSTM-M achieved the best performance in this example. Both seasonal models, FSLSTM-M and SARIMA, obtained better performance than the other models and were able to capture the trend of power output for the Mailiao wind power plant, possibly for the same reasons as at the Taichung wind power plant. Moreover, the Mailiao region is very close to the Taichung region in Taiwan. Thus, the ranking of MAPE(%) in the Mailiao region is the same as that in the Taichung region. Again, the proposed FSLSTM model is suggested to serve as a prediction model for wind power output at the Mailiao wind power plant.
By reviewing the three forecasting examples using the FSLSTM model, several findings can be summarized as follows: (1) the FSLSTM model can efficiently handle seasonal influences; in the Taichung and Mailiao regions, the seasonal influence on wind power output can be observed. (2) In all examples, the FSLSTM model obtained better performance and more accurately captured the trends of wind power output; this was not observed for the traditional LSTM in the three examples. (3) For the three different types of wind power output, the FSLSTM-M model obtained better performance than almost all of the other models. The FSLSTM-M model is thus recommended as a prediction model for wind power output.

4. Managerial Implications

In this study, a wind power forecasting system is implemented for the government. The government can use the forecasts of monthly wind power output to reduce the risk of an insufficient power supply, because the system provides decision-makers with an early warning when a shortfall is likely. The government has developed an offshore wind power facility, the main goal of which is to generate enough electricity so that renewable energy can replace nuclear power generation. Since the early-warning mechanism of the proposed wind power forecasting system can accurately predict wind power output, government decision-makers can conduct proper planning to avoid the risk of an insufficient power supply.

5. Conclusions

Due to the Sustainable Development Goals, wind power prediction has become increasingly crucial in Taiwan. Moreover, LSTM models have been successfully used in time series forecasting problems; however, they have not been widely explored for seasonal time series prediction. This study developed a novel FSLSTM model that exploits the combined strengths of the fuzzy seasonal index and the LSTM technique in order to predict wind power output in Taiwan. In all examples, the FSLSTM model obtained better performance and more accurately captured the trends of wind power output, which was not observed for the traditional LSTM in the three examples. Overall, the results indicate that the FSLSTM model represents a promising alternative for analyzing wind power output in Taiwan. The superior performance of the FSLSTM model can be ascribed to two causes: first, the FSLSTM benefits from the advantages of the LSTM and can effectively capture the time series dataset through the mechanism of a recurrent neural network; second, the fuzzy decomposition method enhances the ability of the FSLSTM model to capture seasonal nonlinear data patterns in an uncertain environment. The limitation of the FSLSTM model is that it is only suitable for strong monthly seasonal patterns. Forecasting other types of time series data using an LSTM-related model would be a challenging issue for future studies. Future research could also consider using data preprocessing techniques to improve the forecasting accuracy of the FSLSTM model for seasonal time series data, and the parameters of the FSLSTM could be tuned with a heuristic algorithm to further improve performance.

Author Contributions

Conceptualization, I.-C.W. and K.-P.L.; methodology, K.-P.L.; validation, Y.-J.L.; investigation, Y.-J.L.; writing—original draft preparation, I.-C.W. and K.-P.L.; writing—review and editing, C.-W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Technology of the Republic of China, Taiwan, grant number MOST-109-2221-E-029-016.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Generalized Regression Neural Network (GRNN)

The GRNN is based on nonlinear regression theory and is a well-established statistical technique for function estimation. By definition, the regression of a dependent variable y on an independent variable x estimates the most probable value of y, given x and a training set. The training set consists of values for x, each with a corresponding value for y (x and y are, in general, vectors). The variable y may be corrupted by additive noise; despite this, the regression method produces the estimated value of y that minimizes the mean squared error. The GRNN is, in essence, a method for estimating the joint density f(x, y) given only a training set. Because the probability density function is derived from the data with no preconceptions about its form, the system is perfectly general. There is no problem if the functions are composed of multiple disjointed non-Gaussian regressions in any number of dimensions, as well as those of simpler distributions. The output value $y_j$ is estimated optimally as follows:
$$\hat{y}_j = \left.\sum_{i=1}^{n} h_i w_{ij} \right/ \sum_{i=1}^{n} h_i$$
where $w_{ij}$ is the target output corresponding to input training vector $x_i$ and output $j$; $h_i = \exp\!\left[-D_i^2/(2\sigma^2)\right]$ is the output of a hidden-layer neuron; $D_i^2 = (x - u_i)^T (x - u_i)$ is the squared distance between the input vector $x$ and the training vector $u_i$; $x$ is the input vector (a column vector); $u_i$ is the training vector of neuron $i$, i.e., the center of neuron $i$ (a column vector); and $\sigma$ is a constant controlling the size of the respective region. Equation (A1) is a radial basis function with normalization. However, it differs from the radial basis network in that the target values are used as the weights of the output network.
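A compact NumPy sketch of this estimator is shown below for the single-output case; the function name and array layout are illustrative assumptions, and σ corresponds to the smoothing parameter set to 1 in Section 3.

```python
import numpy as np

def grnn_predict(x, train_X, train_y, sigma=1.0):
    """GRNN estimate of y for input x (Equation (A1), single-output case).

    train_X : (n, d) array of training input vectors u_i.
    train_y : (n,) array of target outputs w_i.
    sigma   : smoothing parameter controlling the radial basis width.
    """
    x = np.asarray(x, dtype=float)
    diffs = np.asarray(train_X, dtype=float) - x         # x - u_i for every training vector
    d2 = np.sum(diffs ** 2, axis=1)                      # squared distances D_i^2
    h = np.exp(-d2 / (2.0 * sigma ** 2))                 # hidden-layer outputs h_i
    return np.dot(h, np.asarray(train_y, dtype=float)) / np.sum(h)  # normalized RBF average
```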

Appendix B. Seasonal Autoregressive Integrated Moving Average Model (SARIMA)

The SARIMA model is a popular tool in time series forecasting for data with a seasonal pattern. The SARIMA(p, d, q) × (P, D, Q)S process generates a time series $\{X_t,\ t = 1, 2, \ldots, N\}$ with mean $\mu$ satisfying the Box–Jenkins time series model
$$\varphi(B)\,\phi(B^S)\,(1 - B)^d (1 - B^S)^D (X_t - \mu) = \theta(B)\,\Theta(B^S)\, a_t$$
where p, d, q, P, D, and Q are nonnegative integers; S is the seasonal length; $\varphi(B) = 1 - \varphi_1 B - \varphi_2 B^2 - \cdots - \varphi_p B^p$ represents a regular autoregressive operator of order p; $\phi(B^S) = 1 - \phi_1 B^S - \phi_2 B^{2S} - \cdots - \phi_P B^{PS}$ is a seasonal autoregressive operator of order P; $\theta(B) = 1 - \theta_1 B - \theta_2 B^2 - \cdots - \theta_q B^q$ denotes a regular moving average operator of order q; and $\Theta(B^S) = 1 - \Theta_1 B^S - \Theta_2 B^{2S} - \cdots - \Theta_Q B^{QS}$ expresses a seasonal moving average operator of order Q. Additionally, B indicates the backward shift operator, d denotes the number of regular differences, D represents the number of seasonal differences, and $a_t$ is the forecast residual at time t. When fitting a SARIMA model to data, the first task is to estimate the values of d and D, the orders of differencing needed to make the series stationary and to remove most of the seasonality. Suitable values of p, P, q, and Q can then be evaluated from the autocorrelation function and partial autocorrelation function of the differenced series. The parameter selection of the SARIMA model includes the following four iterative steps (a minimal sketch of this fitting procedure, using a standard statistics library, follows the list):
(a) Identifying a tentative SARIMA model;
(b) Estimating parameters in the tentative model;
(c) Evaluating the adequacy of the tentative model;
(d) If an appropriate model is obtained, then applying this model for forecasting; otherwise, returning to step (a).
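Assuming a standard statistics library is acceptable, the fitting-and-forecasting part of this procedure can be sketched as follows; the SARIMA(1, 0, 0)(1, 0, 0)12 orders are those used in Section 3, while identification and adequacy checking (steps (a) and (c)) are left to the usual autocorrelation diagnostics.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_sarima_and_forecast(series, steps=6, order=(1, 0, 0),
                            seasonal_order=(1, 0, 0, 12)):
    """Fit a SARIMA(p, d, q)(P, D, Q)_s model and forecast `steps` ahead.

    The (1, 0, 0)(1, 0, 0)_12 orders follow Section 3; in general, d/D and the
    remaining orders are chosen from (partial) autocorrelation plots as
    described in steps (a)-(d) above.
    """
    model = SARIMAX(np.asarray(series, dtype=float),
                    order=order, seasonal_order=seasonal_order)
    results = model.fit(disp=False)        # maximum-likelihood parameter estimation
    return results.forecast(steps=steps)   # out-of-sample forecasts
```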

References

1. Liu, F.; Li, R.; Dreglea, A. Wind speed and power ultra short-term robust forecasting based on Takagi–Sugeno fuzzy model. Energies 2019, 12, 3551.
2. Liu, F.; Li, R.; Li, Y.; Cao, Y.; Panasetsky, D.; Sidorov, D. Short-term wind power forecasting based on TS fuzzy model. In Proceedings of the 2016 IEEE PES Asia-Pacific Power and Energy Engineering Conference (APPEEC), Xi’an, China, 25–28 October 2016; pp. 414–418.
3. Yu, C.; Li, Y.; Bao, Y.; Tang, H.; Zhai, G. A novel framework for wind speed prediction based on recurrent neural networks and support vector machine. Energy Convers. Manag. 2018, 178, 137–145.
4. Zjavka, L.; Mišák, S. Direct wind power forecasting using a polynomial decomposition of the general differential equation. IEEE Trans. Sustain. Energy 2018, 9, 1529–1539.
5. Liu, H.; Mi, X.; Li, Y. Smart deep learning based wind speed prediction model using wavelet packet decomposition, convolutional neural network and convolutional long short term memory network. Energy Convers. Manag. 2018, 166, 120–131.
6. Toubeau, J.-F.; Dapoz, P.-D.; Bottieau, J.; Wautier, A.; De Grève, Z.; Vallée, F. Recalibration of recurrent neural networks for short-term wind power forecasting. Electr. Power Syst. Res. 2021, 190, 106639.
7. Gers, F.A.; Schmidhuber, J.; Cummins, F. Learning to forget: Continual prediction with LSTM. In Proceedings of the Ninth International Conference on Artificial Neural Networks, Edinburgh, UK, 7–10 September 1999; pp. 850–855.
8. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
9. Graves, A.; Mohamed, A.; Hinton, G. Speech recognition with deep recurrent neural networks. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 6645–6649.
10. Sak, H.; Senior, A.; Beaufays, F. Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv 2014, arXiv:1402.1128.
11. Graves, A.; Schmidhuber, J. Offline Arabic handwriting recognition with multidimensional recurrent neural networks. In Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–11 December 2008; pp. 1–8.
12. Duan, Y.; Lv, Y.; Wang, F. Travel time prediction with LSTM neural network. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 1053–1058.
13. Nelson, D.M.Q.; Pereira, A.C.M.; de Oliveira, R.A. Stock market’s price movement prediction with LSTM neural networks. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 1419–1426.
14. Abdel-Nasser, M.; Mahmoud, K. Accurate photovoltaic power forecasting models using deep LSTM-RNN. Neural Comput. Appl. 2019, 31, 2727–2740.
15. Tian, Y.; Pan, L. Predicting short-term traffic flow by long short-term memory recurrent neural network. In Proceedings of the 2015 IEEE International Conference on Smart City/SocialCom/SustainCom (SmartCity), Chengdu, China, 19–21 December 2015; pp. 153–158.
16. Liu, L.; Chen, W.; Cao, G. Prediction of neonatal amplitude-integrated EEG based on LSTM method. In Proceedings of the 2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Shenzhen, China, 15–18 December 2016; pp. 497–500.
17. Janardhanan, D.; Barrett, E. CPU workload forecasting of machines in data centers using LSTM recurrent neural networks and ARIMA models. In Proceedings of the 12th International Conference for Internet Technology and Secured Transactions (ICITST), Cambridge, UK, 11–14 December 2017; pp. 55–60.
18. Siami-Namini, S.; Tavakoli, N.; Namin, A.S. A comparison of ARIMA and LSTM in forecasting time series. In Proceedings of the 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA, 17–20 December 2018; pp. 1394–1401.
19. Phyo, P.P.; Jeenanunta, C.; Hashimoto, K. Electricity load forecasting in Thailand using deep learning models. Int. J. Electr. Electron. Eng. Telecommun. 2019, 8, 221–225.
20. Fan, D.; Sun, H.; Yao, J.; Zhang, K.; Yan, X.; Sun, Z. Well production forecasting based on ARIMA-LSTM model considering manual operations. Energy 2021, 220, 119708.
21. Shahid, F.; Zameer, A.; Muneeb, M. A novel genetic LSTM model for wind power forecast. Energy 2021, 223, 120069.
22. Zhang, X.; Liu, S.; Zheng, X. Stock price movement prediction based on a deep factorization machine and the attention mechanism. Mathematics 2021, 9, 800.
23. Chang, P.-T. Fuzzy seasonality forecasting. Fuzzy Sets Syst. 1997, 90, 1–10.
24. Kingma, D.P.; Ba, J.L. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015; pp. 1–13.
25. Box, G.; Jenkins, G. Time Series Analysis: Forecasting and Control; Holden-Day: San Francisco, CA, USA, 1976.
26. Specht, D.F. A general regression neural network. IEEE Trans. Neural Netw. 1991, 2, 568–576.
27. Pai, P.-F.; Lin, K.-P. Application of neural fuzzy systems in reliability prediction. Qual. Reliab. Eng. Int. 2006, 22, 199–211.
28. Van Gestel, T.; Suykens, J.A.K.; Baestaens, D.-E.; Lambrechts, A.; Lanckriet, G.; Vandaele, B.; De Moor, B.; Vandewalle, J. Financial time series prediction using least squares support vector machines within the evidence framework. IEEE Trans. Neural Netw. 2001, 12, 809–821.
Figure 1. A flowchart of the fuzzy seasonal LSTM prediction model.
Figure 2. A construction diagram of the fuzzy LSTM prediction model.
Figure 3. Illustration of the actual power output of the three wind power stations.
Figure 4. Illustration of the training error of the FSLSTM for the three wind power stations.
Figure 5. Illustration of the actual and forecasted power outputs of various models for the Shimen wind power plant.
Figure 6. Illustration of the actual and forecasted power outputs of various models for the Taichung wind power plant.
Figure 7. Illustration of the actual and forecasted power output of various models for the Mailiao wind power plant.
Table 1. Long short-term memory (LSTM) applications from 2015.
Author | Applied Field | Methodology | Compared Methodology
Tian and Pan [15] | Traffic | LSTM | SVR, Random Walker (RW), Fuzzy Neural Network (FNN), and Stacked Autoencoder (SAE)
Liu et al. [16] | Electroencephalography | LSTM | RNN
Janardhanan and Barrett [17] | CPU usage of Google’s data center | LSTM | ARIMA
Siami-Namini et al. [18] | Financial and economic indexes | LSTM | ARIMA
Phyo et al. [19] | Power load | LSTM | Deep belief network (DBN)
Fan et al. [20] | Production forecasting | Integrated ARIMA-LSTM model | ARIMA
Shahid et al. [21] | Wind power | GLSTM | LSTM
Zhang et al. [22] | Stock price movement prediction | FA-CNN | LSTM
Table 2. The actual power output of the three wind power stations (units: kilowatt-hours).
Date | Shimen | Taichung | Mailiao
2017/01 | 463,283 | 696,640 | 16,804,318
2017/02 | 407,258 | 630,978 | 14,938,148
2017/03 | 419,363 | 440,900 | 10,740,085
2017/04 | 336,234 | 220,980 | 6,313,574
2017/05 | 350,142 | 166,390 | 3,919,309
2017/06 | 195,069 | 124,600 | 4,899,275
2017/07 | 172,429 | 78,330 | 2,342,427
2017/08 | 349,341 | 92,620 | 2,677,669
2017/09 | 330,570 | 109,600 | 2,426,662
2017/10 | 1,239,911 | 599,700 | 13,307,824
2017/11 | 741,605 | 516,784 | 14,782,631
2017/12 | 987,529 | 688,270 | 12,469,649
2018/01 | 699,627 | 578,820 | 14,875,834
2018/02 | 573,615 | 527,320 | 12,316,626
2018/03 | 428,185 | 339,634 | 8,437,488
2018/04 | 267,650 | 169,866 | 4,483,353
2018/05 | 260,618 | 127,692 | 3,150,546
2018/06 | 268,476 | 217,353 | 6,132,870
2018/07 | 461,025 | 113,979 | 2,889,834
2018/08 | 197,814 | 81,125 | 3,290,543
2018/09 | 391,113 | 289,650 | 8,800,633
2018/10 | 631,983 | 627,877 | 14,854,486
2018/11 | 498,720 | 381,956 | 9,282,346
2018/12 | 791,605 | 722,624 | 18,553,027
2019/01 | 737,047 | 719,896 | 18,322,239
2019/02 | 520,684 | 408,360 | 9,729,809
2019/03 | 651,798 | 328,996 | 7,658,895
2019/04 | 388,595 | 207,114 | 5,157,800
2019/05 | 386,277 | 199,886 | 4,300,127
2019/06 | 159,193 | 109,585 | 2,964,455
2019/07 | 211,329 | 149,040 | 3,339,550
2019/08 | 339,098 | 178,123 | 5,414,371
2019/09 | 589,619 | 418,682 | 9,185,376
2019/10 | 491,982 | 541,496 | 11,115,453
2019/11 | 845,222 | 659,646 | 15,558,639
2019/12 | 664,395 | 340,702 | 14,464,274
2020/01 | 482,289 | 421,826 | 12,276,955
2020/02 | 398,281 | 396,284 | 9,756,250
2020/03 | 345,802 | 328,177 | 8,338,542
2020/04 | 531,022 | 347,579 | 6,938,541
2020/05 | 192,987 | 81,714 | 2,937,135
2020/06 | 195,061 | 63,884 | 3,445,273
Table 3. The fuzzy seasonality index of the three wind power stations.
Fuzzy Seasonality Index | Shimen (s_k^L, s_k^M, s_k^U) | Taichung (s_k^L, s_k^M, s_k^U) | Mailiao (s_k^L, s_k^M, s_k^U)
k = 1 | 0.6785, 1.4709, 1.4709 | 0.5272, 1.9039, 1.9039 | 0.5243, 1.8802, 1.8802
k = 2 | 0.6785, 1.1148, 1.1148 | 0.4536, 1.3659, 1.3659 | 0.4055, 1.2493, 1.2493
k = 3 | 0.4611, 1.1029, 1.1029 | 0.4536, 0.9512, 0.9512 | 0.4055, 0.8875, 0.8875
k = 4 | 0.4464, 0.6785, 0.6810 | 0.2734, 0.5272, 0.5272 | 0.3072, 0.5243, 0.5243
k = 5 | 0.4464, 0.6810, 0.6848 | 0.2497, 0.4536, 0.4667 | 0.3072, 0.4055, 0.4979
k = 6 | 0.4464, 0.4611, 0.8694 | 0.2497, 0.4667, 0.5819 | 0.3072, 0.4979, 0.9466
k = 7 | 0.4464, 0.4464, 1.3626 | 0.2497, 0.2734, 1.8128 | 0.3072, 0.3072, 1.5730
k = 8 | 0.6848, 0.6848, 1.4587 | 0.2497, 0.2497, 1.8128 | 0.3484, 0.3484, 1.7477
k = 9 | 0.8694, 0.8694, 1.6686 | 0.5819, 0.5819, 2.0790 | 0.9466, 0.9466, 1.7477
k = 10 | 1.3626, 1.3626, 1.6686 | 1.3342, 1.8128, 2.0790 | 1.5730, 1.5730, 1.8802
k = 11 | 1.1148, 1.4587, 1.6686 | 1.3342, 1.3342, 2.0790 | 1.2493, 1.7477, 1.8802
k = 12 | 1.1029, 1.6686, 1.6686 | 0.9512, 2.0790, 2.0790 | 0.8875, 1.6318, 1.8802
Table 4. Comparison of the training error (RMSE) with various prediction models for different wind power plants.
Plant | FSLSTM | LSTM | BPNN | GRNN | LSSVR | ARIMA | SARIMA
Shimen | 0.0770 | 0.1689 | 508,039 | 60 | 121,188 | 1,307,183 | 866,451
Mailiao | 0.1324 | 0.0592 | 498,950 | 30 | 2,065,173 | 22,174,287 | 15,002,000
Taichung | 0.4032 | 0.0129 | 208,901 | 0 | 90,886 | 980,862 | 591,422
Table 5. Comparison of the forecasting results for the Shimen wind power plant.
Date | Actual Value | ARIMA | SARIMA | GRNN | BPNN | LSSVR | LSTM | FSLSTM-M | FSLSTM-U | FSLSTM-L
2020/01 | 482,289 | 265,130 | 807,339 | 463,283 | 484,678 | 538,345 | 663,547 | 644,173 | 506,533 | 458,776
2020/02 | 398,281 | 264,905 | 536,142 | 407,258 | 484,678 | 495,248 | 545,863 | 488,201 | 599,497 | 564,286
2020/03 | 345,802 | 264,996 | 447,263 | 419,363 | 484,678 | 486,974 | 390,044 | 482,980 | 472,889 | 345,594
2020/04 | 531,022 | 264,959 | 267,414 | 336,234 | 484,678 | 486,252 | 274,106 | 297,131 | 430,324 | 426,518
2020/05 | 192,987 | 264,974 | 265,935 | 350,142 | 484,678 | 486,224 | 291,349 | 298,220 | 438,839 | 297,130
2020/06 | 195,061 | 264,968 | 264,577 | 195,069 | 484,678 | 486,223 | 263,500 | 201,939 | 270,340 | 343,187
MAPE (%) |  | 37.51 | 42.41 | 24.26 | 61.78 | 64.40 | 36.97 | 32.98 | 46.20 | 32.69
Ranking |  | (5) | (6) | (1) | (8) | (9) | (4) | (3) | (7) | (2)
Table 6. Comparison of the forecasting results for the Taichung wind power plant.
Date | Actual Value | ARIMA | SARIMA | GRNN | BPNN | LSSVR | LSTM | FSLSTM-M | FSLSTM-U | FSLSTM-L
2020/01 | 421,826 | 325,087 | 901,020 | 696,640 | 355,700 | 355,520 | 366,148 | 294,629 | 678,346 | 209,460
2020/02 | 396,284 | 310,188 | 319,967 | 630,978 | 355,700 | 355,510 | 698,414 | 297,072 | 479,634 | 340,958
2020/03 | 328,177 | 295,972 | 370,365 | 440,900 | 355,700 | 355,500 | 466,354 | 247,767 | 297,040 | 799,896
2020/04 | 347,579 | 282,408 | 186,543 | 220,980 | 355,700 | 355,491 | 615,672 | 149,127 | 170,964 | 942,933
2020/05 | 81,714 | 269,465 | 209,163 | 166,390 | 355,700 | 355,481 | 462,467 | 133,024 | 502,598 | 327,882
2020/06 | 63,884 | 257,115 | 104,727 | 124,600 | 355,700 | 355,471 | 411,278 | 141,578 | 161,424 | 268,070
MAPE (%) |  | 100.9099 | 68.6577 | 65.6349 | 138.1218 | 151.4939 | 203.0705 | 53.5326 | 134.9830 | 166.70
Ranking |  | (9) | (2) | (4) | (8) | (6) | (5) | (1) | (3) | (7)
Table 7. Comparison of the forecasting results for the Mailiao wind power plant.
Date | Actual Value | ARIMA | SARIMA | GRNN | BPNN | LSSVR | LSTM | FSLSTM-M | FSLSTM-U | FSLSTM-L
2020/01 | 12,276,955 | 14,382,852 | 19,238,928 | 16,804,318 | 8,886,115 | 10,655,725 | 11,490,891 | 15,802,902 | 17,649,361 | 2,738,426
2020/02 | 9,756,250 | 14,301,888 | 9,495,570 | 14,938,148 | 8,886,115 | 9,381,110 | 9,797,061 | 8,654,354 | 11,218,379 | 7,435,086
2020/03 | 8,338,542 | 14,221,379 | 7,698,419 | 10,740,085 | 8,886,115 | 9,060,788 | 3,481,336 | 8,168,194 | 6,652,353 | 21,668,492
2020/04 | 6,938,541 | 14,141,324 | 5,138,157 | 6,313,574 | 8,886,115 | 9,027,759 | 1,163,317 | 5,596,075 | 3,765,754 | 16,162,712
2020/05 | 2,937,135 | 14,061,720 | 4,296,410 | 3,919,309 | 8,886,115 | 9,026,361 | 5,262,665 | 3,750,842 | 2,366,479 | 2,354,684
2020/06 | 3,445,273 | 13,982,563 | 2,959,347 | 4,899,275 | 8,886,115 | 9,026,337 | 3,460,068 | 3,445,273 | 3,011,638 | 2,864,925
MAPE (%) |  | 153.7846 | 25.5645 | 33.9068 | 71.9399 | 70.8555 | 37.9852 | 14.8515 | 26.1184 | 71.8270
Ranking |  | (9) | (2) | (4) | (8) | (6) | (5) | (1) | (3) | (7)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
