Article

Artificial Intelligence-Based Prediction of Spanish Energy Pricing and Its Impact on Electric Consumption

by Marcos Hernández Rodríguez 1, Luis Gonzaga Baca Ruiz 2,*, David Criado Ramón 1 and María del Carmen Pegalajar Jiménez 1

1 Department of Computer Science and Artificial Intelligence, University of Granada, 18014 Granada, Spain
2 Department of Software Engineering, University of Granada, 18014 Granada, Spain
* Author to whom correspondence should be addressed.
Mach. Learn. Knowl. Extr. 2023, 5(2), 431-447; https://doi.org/10.3390/make5020026
Submission received: 20 March 2023 / Revised: 1 May 2023 / Accepted: 1 May 2023 / Published: 2 May 2023
(This article belongs to the Section Network)

Abstract:
The energy supply sector faces significant challenges, such as the COVID-19 pandemic and the ongoing conflict in Ukraine, which affect the stability and efficiency of the energy system. In this study, we highlight the importance of electricity pricing and the need for accurate models to estimate electricity consumption and prices, with a focus on Spain. Using hourly data, we implemented various machine learning models, including linear regression, random forest, XGBoost, LSTM, and GRU, to forecast electricity consumption and prices. Our findings have important policy implications. Firstly, our study demonstrates the potential of using advanced analytics to enhance the accuracy of electricity price and consumption forecasts, helping policymakers anticipate changes in energy demand and supply and ensure grid stability. Secondly, we emphasize the importance of having access to high-quality data for electricity demand and price modeling. Finally, we provide insights into the strengths and weaknesses of different machine learning algorithms for electricity price and consumption modeling. Our results show that the LSTM and GRU artificial neural networks are the best models for price and consumption modeling, with no significant difference between them.

1. Introduction

Energy pricing and electric consumption are two of the most important factors that affect the functioning of modern societies [1]. The energy sector is constantly evolving, and it is essential to have accurate predictions of energy prices and consumption to ensure stability and affordability [2]. In recent years, artificial intelligence (AI) has emerged as a powerful tool for making predictions in various fields, including energy [3].
There are several factors that can contribute to an increase in electricity prices, such as fuel costs, supply and demand, infrastructure investment, government policies, or natural disasters [4,5,6]. The energy industry is currently facing several difficulties, including the need to address climate change by reducing greenhouse gas emissions and transitioning to clean energy sources, which can affect energy costs; government regulations that can greatly impact electricity prices, leading to conflicting opinions on the best course of action; and geopolitical conflict that can also have a major impact on both energy pricing and supply [7]. A report by Fitch Ratings [8] states that 2023 electricity forward prices in most Western European countries are about three times higher than the historical European average. The report also expects gas and electricity prices to remain much higher than historical levels in 2023 and 2024. Another report conducted by Ember [9] highlights the proposed 45% renewable energy goal for 2030, which would see 69% of the EU’s electricity generated from renewables by that year. It also mentions that EU electricity generation is still heavily reliant on fossil fuels. These challenges highlight the importance of continued innovation and investment in the energy sector to ensure a reliable and affordable energy supply.
From December 2020 to the present, wholesale electricity prices have experienced a substantial increase, reaching double their previous levels. This increase is largely attributed to European Union policies regarding the reduction of CO2 emissions, the significant appreciation of natural gas prices, and the current conflict in Ukraine as of February 2022 [10,11].
The ongoing conflict between Russia and Ukraine has highlighted the need for increased stability in the energy markets and the importance of ensuring a consistent and affordable energy supply. In this context, the use of AI models to predict energy pricing and electric consumption is particularly relevant [12,13]. The prediction of real-time prices has been previously proposed as a potential solution for enhancing the efficiency of electric planning, budget preparation, and network performance [14,15].
In the current energy market situation in Spain, which is characterized by high levels of renewable energy penetration and price volatility, there is a need for accurate and reliable models to predict electricity prices and consumption. In this context, our study aims to address this need by evaluating the performance of various machine learning algorithms for electricity price and consumption modeling in the Spanish market. Specifically, we analyze and compare the performance of linear regression, random forest, XGBoost, LSTM, and GRU algorithms using real-life data on Spanish electricity consumption and prices from 1 January 2014 to 30 April 2022. Our study provides valuable insights for the energy market in Spain. Firstly, our analysis indicates that using advanced methods, specifically LSTM and GRU artificial neural networks, can significantly enhance the accuracy and reliability of electricity price modeling. This finding can inform the development of more effective pricing strategies for electricity in Spain. Secondly, our study highlights the importance of having access to high-quality data for electricity demand and price modeling, emphasizing the need for policymakers to prioritize the development of reliable and up-to-date Spanish energy data systems. Finally, our comparison of machine learning algorithms for electricity consumption modeling suggests that XGBoost can be the most accurate method for forecasting energy demand in Spain. This information can be used to improve energy demand forecasting and inform decision-making in the Spanish energy market.
We would like to point out that while day-ahead markets are crucial in determining electricity prices in advance, intraday markets also play a key role in the electricity market, offering flexibility to market participants to adjust their positions in response to changing demand and supply conditions within the same day. These markets enable electricity traders to manage their risks and optimize their profits by providing real-time price signals that reflect the current market conditions. Intraday markets are particularly important in this context, as renewable energy resources can introduce greater volatility and uncertainty into the electricity supply [16]. By allowing for short-term adjustments to supply and demand, intraday markets can help ensure the stability and reliability of the power system. The use of advanced forecasting methods to predict intraday electricity prices is becoming increasingly important, as it can provide market participants with valuable information for their trading strategies [17]. Nevertheless, intraday electricity market data prediction is a topic that has been little explored in the literature. The majority of studies focus on daily, weekly, or even monthly forecasting. There is limited research on hourly electricity prices and consumption predictions. Our study fills this gap in the literature by using hourly data, which provides a more detailed analysis of intraday market behavior.
Our study makes several original contributions to the field of intraday electricity market data prediction. Firstly, we focus on the prediction of hourly electricity prices and consumption, which has been a relatively unexplored area in the literature. Using this level of detail, we provide a more comprehensive analysis of intraday market behavior. Secondly, we compare the performance of different machine learning algorithms for this purpose. Our results provide insights into the strengths and weaknesses of these algorithms for this specific task. Finally, we use real-life data on Spanish electricity consumption and prices on an hourly basis, which has not been previously analyzed in the literature. Overall, our study contributes to the understanding of intraday electricity market data prediction and provides valuable insights for energy policymakers and industry practitioners. Our study contributes to the literature by addressing the gaps in existing research on electricity price and consumption modeling in the Spanish market and providing valuable insights into the potential of AI to improve energy efficiency and inform policy decisions related to energy in Spain.

Related Works

Countless authors acknowledge the challenges associated with predicting electricity prices, including its volatility and uncertainty [18,19] and the difficulties in applying it at a large scale within the electric market [20,21]. Electric demand is influenced by various factors, such as local meteorological conditions, the intensity of commercial and daily activities, energy supply and distribution strategies, and the variability of renewable energy production [18,20,22].
According to Lu et al. [20], the goals of electricity price prediction can be divided into two categories: point predictions and probabilistic predictions. Probabilistic predictions assign a probability to each possible forecast outcome. When the output variable is not discrete, the forecast is usually made using intervals. On the other hand, point predictions are deterministic estimates that provide an exact result, for example, the electricity price at every 30-minute interval for the next 24 h, resulting in 48 data points. The authors assert that most studies in this field [23,24,25,26] focus on point predictions and use evaluation metrics such as root mean squared error (RMSE) and mean absolute error (MAE) to assess the accuracy of their predictions [27,28].
An important reference to consider is the study conducted by [29], which reviews the state-of-the-art algorithms and best practices for forecasting day-ahead electricity prices and proposes an open-access benchmark for the evaluation of new predictive algorithms. Assessing the accuracy of electricity price forecasting models is crucial, but it is equally important to determine whether any difference in accuracy is statistically significant. This is essential to ensure that the difference in accuracy is not due to chance variations between the forecasts. However, statistical testing is often overlooked in the literature on electricity price forecasting [18]. Many studies focus solely on comparing the accuracy of models based on error metrics and do not evaluate the statistical significance of differences in accuracy. This approach should be revised to ensure that forecasting methods are compared with the necessary statistical rigor. Lehna et al. [30] report that more than two-thirds of studies on electricity price prediction make use of time series techniques, artificial neural networks (ANNs), or a combination of both. According to the authors in [10], autoregressive models are the most commonly used models for electricity price forecasting. In [31], a method for predicting next-day electricity prices using ARIMA models was presented, with results from both mainland Spain and California markets. A day-ahead electricity price forecasting model in the Denmark-West region using ARIMA and ANNs was presented in [32]. Keles et al. [33] analyzed a predictive system using ANNs to estimate electricity prices on a daily basis. Similarly, Panapakidis and Dagoumas [34] proposed diverse ANN topologies based on clustering algorithms to make their predictions. Many other techniques can be found in the literature for the same purpose [35]; some examples are deep learning [25,36], fuzzy logic [37], and tree-based [38] solutions. In [39], the authors presented a hybrid model called EA-XGB for building energy prediction and compared its performance with ARIMA and XGB models. The experiment showed that the EA-XGB hybrid model performed best in forecasting building energy consumption using the dataset provided by the US National Renewable Energy Laboratory. The study [40] introduced a deep learning framework for building energy consumption forecasts that combines convolutional neural networks (CNN) and long short-term memory (LSTM) networks. The proposed framework was tested on two datasets and showed better performance than traditional machine learning models. Additionally, in [41], the authors proposed a multi-energy load forecasting method based on parallel architecture CNN-GRU and transfer learning for data-deficient integrated energy systems. The proposed method was tested on two datasets and showed better performance than other traditional machine learning models.
In this study, we aimed to provide a comprehensive comparison of different machine learning techniques and their performance in predicting Spanish energy pricing and consumption. To achieve this, we include several machine learning techniques in our analysis, such as linear regression [42], random forests [43], XGBoost [39,44], LSTM [45], and GRU [46]. The inclusion of these models allowed us to evaluate their strengths and weaknesses and identify the most suitable approach for our problem. By including a range of models with varying levels of complexity, we were able to provide a more complete picture of the performance of different machine learning approaches in the context of Spanish energy pricing and consumption.
Our study differs from previous research in several ways. Firstly, while most papers in this area use daily or monthly data, our analysis is based on hourly data. This level of granularity provides a more accurate representation of energy consumption patterns and allows for a more precise analysis of the relationship between consumption and prices. Furthermore, our study is unique because it examines the relationship between energy consumption and prices simultaneously, whereas previous research typically focused on either consumption or prices alone. This approach allows for a more comprehensive understanding of the factors that influence energy consumption in the Spanish market. Therefore, our study contributes to the literature by providing a more detailed analysis of energy consumption patterns and their relationship with prices, which can help inform energy policies and improve energy efficiency in Spain.
The rest of the document is structured as follows: The proposed methodology is detailed in Section 2. Section 3 introduces the experiments conducted. Section 4 presents the main results. Finally, the conclusions are gathered in Section 5.

2. Methodology

This section outlines the methodology adopted in this study, including the data description, pre-processing, and evaluation of the predictive models. Figure 1 illustrates the overall procedure followed in our study. The flowchart outlines the different stages involved in data collection, processing, and analysis. First, the dataset is downloaded from the ESIOS API and stored locally. Next, the data are pre-processed, which includes adding the decomposition of the time series, lag features, and normalizing the data. Thirdly, the walk-forward validation method is employed to evaluate the performance of the models. Fourthly, an experimental hyperparameter search is iteratively performed to identify the optimal hyperparameters for each model. Finally, the results are obtained and analyzed.

2.1. Dataset

In this study, data were obtained from the Spanish Electricity Network (SEN) through the REData and Esios APIs. The SEN website provides various tools for extracting information, including a calendar to select specific days, a graph for visualizing daily demand, a data table for numerical information, accumulated demand from different energy sources, and the option to display different electrical systems. We gathered data covering the period from 1 January 2014 to 30 April 2022, at hourly intervals, resulting in a total of 73,119 observations. While the data used in our study are publicly available on the official website, we suggest using the dataset we utilized for future studies and comparisons. It can be downloaded from [47].
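A minimal sketch of loading the hourly dataset with pandas is shown below; the file name (esios_hourly.csv) and column names (datetime, price, consumption) are illustrative placeholders rather than the actual layout of the repository in [47]:

```python
import pandas as pd

# Hypothetical file and column names; the repository layout of [47] may differ.
df = pd.read_csv("esios_hourly.csv", parse_dates=["datetime"])
df = df.set_index("datetime").sort_index()

# Restrict to the study period: 1 January 2014 to 30 April 2022, hourly.
df = df.loc["2014-01-01":"2022-04-30"]
print(df.shape)  # roughly 73,000 hourly rows expected
```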
The decision to focus solely on Spain in our study was intentional, as we wanted to investigate the unique context of the Spanish energy market and consumption trends. Furthermore, the lack of updated public datasets in the literature made it challenging to compare our results with those of other research studies in different countries. Therefore, in this study, we focused on Spain, where we were able to obtain the required data. We recommend other researchers use our dataset to address this issue for future studies and enable better comparisons across different research projects.
While our study is primarily focused on Spain, we believe that the presented approach and methodology can be applicable to other regions as well. Nonetheless, it is important to note that the success of our approach in other regions may depend on various factors, including the similarities in energy market structure and regulations as well as the availability and quality of relevant data.

2.2. Preprocessing

The first step of our data preparation was to decompose the original time series into trend, season, and residual. This process allows us to separate the underlying patterns of the data from the random fluctuations, providing a more accurate representation of the time series. The trend component represents the long-term behavior of the series and captures any upward or downward trend over time. The seasonal component represents the recurring patterns in the data. The residual component captures the unexplained variability or noise in the time series that is not accounted for by the other two components. As an example, Figure 2 illustrates the decomposition of the electricity price time series, including the trend, seasonal, and residual components. It is important to note that these components were solely used for analytical purposes and not incorporated into the models presented in this study, which utilize the original time series data.
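A minimal sketch of this decomposition, assuming the additive seasonal_decompose implementation from statsmodels and a daily (24 h) seasonal period, is shown below; the paper does not state which implementation was used:

```python
from statsmodels.tsa.seasonal import seasonal_decompose

# "price" is the hourly electricity price column from the frame loaded above;
# a daily period (24 hours) is assumed for the seasonal component.
decomposition = seasonal_decompose(df["price"], model="additive", period=24)

trend = decomposition.trend        # long-term behavior of the series
seasonal = decomposition.seasonal  # recurring (daily) pattern
residual = decomposition.resid     # unexplained noise
decomposition.plot()               # view similar in spirit to Figure 2
```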
As opposed to other problems, time series observations are not independent of each other. Hence, we will not split the data randomly. Instead, the data will be divided chronologically into three parts: a training set, a validation set, and a test set, to preserve the temporal relationship between observations. To improve the performance of supervised learning models, lag features may be created by adding columns that represent previous time stamps t−1, t−2, t−3, etc., to the dataset in order to provide additional information for the current time stamp t. In time series analysis, "lag feature" refers to a variable that is delayed or shifted in time relative to another variable. That is to say, it is the value of a variable at a previous time step that is included as a predictor in a model to capture temporal dependencies and autocorrelation in the data. The creation of lag features in time series data is a commonly used preprocessing step in predictive modeling. The idea behind this is that past samples of a time series contain information that can be useful for predicting future values. By adding columns to the dataset that represent the values of previous time stamps, the model can use this information to make better estimates. The assumption is that the relationship between past and future values is not completely random and that past patterns can be used to inform predictions about forthcoming values [48].
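The following sketch illustrates lag-feature construction and the chronological split with pandas; the 24 lags and the 70/15/15 split proportions are illustrative choices, not the exact values used in the study:

```python
import pandas as pd

def add_lags(series: pd.Series, n_lags: int) -> pd.DataFrame:
    """Build a supervised frame with predictors t-1 ... t-n_lags and target t."""
    frame = pd.DataFrame({"y": series})
    for lag in range(1, n_lags + 1):
        frame[f"lag_{lag}"] = series.shift(lag)
    return frame.dropna()

data = add_lags(df["price"], n_lags=24)

# Chronological split (no shuffling), preserving the temporal order.
n = len(data)
train = data.iloc[: int(0.7 * n)]
valid = data.iloc[int(0.7 * n): int(0.85 * n)]
test = data.iloc[int(0.85 * n):]
```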
After creating the lag features, the next step in the preprocessing stage is normalizing the data. Data normalization is important in order to ensure that all the features have the same scale, which helps the predictors perform better. In this study, normalization was performed in the range [0, 1], which is a common range used in machine learning. We used the following equation to this end:
Y_i = \frac{X_i - \min(X)}{\max(X) - \min(X)}
where Y_i is the normalized value, X_i is the value of the series, and max(X) and min(X) are the maximum and minimum of the time series. Following Nielsen’s recommendations [49], we normalized the data for each feature individually, scaling the values so that they fall within the range [0, 1]. This method is useful when each feature has a different scale and units, as it allows them to be compared and processed on a similar basis.
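A minimal sketch of this per-feature scaling with scikit-learn's MinMaxScaler is shown below; fitting the scaler on the training portion only is a common safeguard against information leakage that we assume here:

```python
from sklearn.preprocessing import MinMaxScaler

# Each feature is scaled to [0, 1] independently, fitted on the training set only.
scaler_x = MinMaxScaler(feature_range=(0, 1))
X_train = scaler_x.fit_transform(train.drop(columns="y"))
X_valid = scaler_x.transform(valid.drop(columns="y"))
X_test = scaler_x.transform(test.drop(columns="y"))

scaler_y = MinMaxScaler(feature_range=(0, 1))
y_train = scaler_y.fit_transform(train[["y"]])
# Predictions can later be mapped back with scaler_y.inverse_transform(),
# so that MAE/RMSE/MAPE are reported on denormalized values.
```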

2.3. Techniques

The current section briefly introduces the models used in this research. We implemented linear regression (LR), random forest (RF), extreme gradient boosting (XGB), long short-term memory (LSTM), and gated recurrent unit (GRU) algorithms.
LR is a commonly used statistical model for predictive tasks. It assumes a linear relationship between the dependent and independent variables and aims to fit a line or a hyperplane to the data. The goal is to use the relationship established by the fitted model to make predictions about the dependent variable based on the values of the explanatory variables. LR is simple to implement and interpret, making it a popular choice for many regression problems [50,51].
RF is a type of ensemble machine learning algorithm that combines the predictions of multiple decision trees to make a final prediction. It was introduced by Breiman [52] as an improvement over traditional decision trees. RF algorithms are known for their ability to generalize well, reduce overfitting, and capture a wider variety of patterns in the data, making them suitable for both regression and classification problems [53].
The XGB was the third algorithm implemented in this study. XGB is a gradient-boosting tree method that combines decision trees in an ensemble model, where the prediction of one tree serves as input for the next tree. This sequential learning process can lead to improved predictions compared to single decision trees. The algorithm has been successful in both regression and classification problems [38,39] and is known for its ability to handle a large number of features and its ability to capture non-linear relationships in data.
Two neural network-based models were implemented in this study, LSTM and GRU. LSTM networks are a type of recurrent neural network (RNN) designed to handle the issue of vanishing gradients in traditional RNNs. LSTMs are well suited for tasks involving sequences of data, such as time series prediction, language translation, and speech recognition [24,30,45]. The LSTM architecture allows them to remember important information from the past for an extended period of time, making them ideal for long-term dependencies in time series data.
GRUs are another type of RNN, similar in concept to LSTM. Both use gate mechanisms to control the flow of information. The main difference between these two is how information is retained over time. While LSTMs use three gates: an input, output, and forget gate, GRUs use two gates: an update gate and a reset gate. This makes GRU faster and computationally more efficient compared to LSTM. Nevertheless, GRU may not perform as well as LSTM on very long sequences, as they may struggle to retain information over extended periods [54,55].
The machine learning algorithms employed in this study were chosen for their ability to handle complex nonlinear relationships and temporal dependencies in the data. Linear regression was included as a baseline model to provide a benchmark for comparison with the more advanced machine learning algorithms. Random forest and XGBoost were chosen for their ability to capture complex interactions between variables and handle large feature spaces. LSTM and GRU were chosen for their ability to model time series data with long-term dependencies, which are characteristic of electricity price and consumption data. The inclusion of lagged variables in the models allowed us to capture the persistence of electricity prices and consumption over time and to account for seasonality and other temporal effects. Therefore, all the models, including LSTM and GRU, will make use of the delayed inputs. As mentioned, the use of lagged inputs can capture the dependencies of past observations on future values, which is important in the forecasting task at hand.
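The following Keras sketch illustrates how a single-layer LSTM (or GRU) can be fed the lagged inputs as a sequence; the layer size, patience, and learning rate reflect values explored in Section 3, but the exact architecture shown here is an illustrative assumption rather than the original implementation:

```python
import numpy as np
import tensorflow as tf

n_lags, n_samples = 24, 1000
X = np.random.rand(n_samples, n_lags, 1).astype("float32")  # placeholder lagged inputs
y = np.random.rand(n_samples, 1).astype("float32")          # placeholder targets

def build_rnn(cell: str = "lstm", n_neurons: int = 8, lr: float = 0.001) -> tf.keras.Model:
    recurrent = tf.keras.layers.LSTM if cell == "lstm" else tf.keras.layers.GRU
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_lags, 1)),  # lagged values as a sequence
        recurrent(n_neurons),
        tf.keras.layers.Dense(1),                  # next-hour estimate
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr), loss="mse")
    return model

early_stop = tf.keras.callbacks.EarlyStopping(patience=20, restore_best_weights=True)
model = build_rnn("lstm")
model.fit(X, y, epochs=100, batch_size=16, validation_split=0.2,
          callbacks=[early_stop], verbose=0)
```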

3. Experiments

In this section, we describe the experiments carried out to evaluate the performance of the implemented models for predicting electricity consumption and electricity prices.
To ensure the reproducibility of our experiments, we provide details on the technologies used in our study. We conducted our experiments on a machine with an Intel(R) Core (TM) i7-10750H CPU @ 2.60GHz, 2592 MHz, 6 processors, and 12 logic processors. The operating system used was Microsoft Windows 10 Home version 21H2. The machine had 32 GB of RAM memory and a 1 TB HDD (model SAMSUNG MZVLB1T0HBLR-000H1). Additionally, the machine had a dedicated NVIDIA GeForce RTX 2060 GPU and integrated Intel(R) UHD Graphics. For our data processing, we used Python 3.9.7, Numpy, Pandas, and JSON. To visualize our results, we used Matplotlib and Seaborn. For traditional machine learning, we used Scikit-learn, while for neural networks, we used Tensorflow and Keras.
For the simpler models, LR, RF, and XGB, we conducted a series of experiments to evaluate their performance. In these experiments, we tested different configurations of the models, with a focus on the number of lags used in the features, as this is usually an important factor in the performance of time series models. For the LR, we conducted experiments to evaluate the intercept parameter, the number of jobs, and the sign of the coefficients. In the case of the RF, we tested several hyperparameters, including the number of estimators, the Gini, entropy, and log loss criteria, the maximum depth of the tree, and the minimum number of samples required to split an internal node. We also conducted experiments to test the XGB model using similar hyperparameters to the RF, with a particular focus on the number of estimators. However, due to space limitations, we only report the most important results in the paper. For the LSTM and GRU, we conducted a more extensive hyperparameter search, including the number of epochs, patience, learning rate, batch size, and number of neurons. We evaluated the impact of these hyperparameters on the predictive performance of the models. Due to limited space, we could not include all the results, though we present a summary of the most significant outcomes in the next section. In summary, we conducted a basic grid search to optimize our models. Specifically, for the LR algorithm we utilized all available processors and evaluated configurations with and without the intercept, as well as constraining the coefficients to be positive. Regarding the RF algorithm, we employed 500 estimators and the squared error criterion. The maximum tree depth was 7, and we set the maximum number of features to 0.8 and the maximum number of samples to 0.6. XGB also employed 500 estimators; the learning rate used to weight each model was 0.4, with a maximum depth of 5, and the fraction of samples used in each tree was 0.7. The parameters of the remaining two models are described in more detail in the following section.
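The following sketch summarizes the baseline configurations listed above, assuming scikit-learn and xgboost implementations; the parameter names are therefore ours and do not necessarily match the original code:

```python
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor

# Intercept fitting and positivity constraints on the coefficients were varied
# during the search; shown here with scikit-learn defaults.
lr_model = LinearRegression(n_jobs=-1)

rf_model = RandomForestRegressor(
    n_estimators=500,
    criterion="squared_error",
    max_depth=7,
    max_features=0.8,   # fraction of features considered per split
    max_samples=0.6,    # fraction of samples drawn per tree
    n_jobs=-1,
)

xgb_model = XGBRegressor(
    n_estimators=500,
    learning_rate=0.4,
    max_depth=5,
    subsample=0.7,      # fraction of samples used in each tree
)
```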
In order to evaluate the predictions made by the models, it is important to consider certain relevant elements. These elements will help determine the accuracy and performance of the models.
Firstly, the evaluation of the models’ estimates was conducted using the walk-forward validation method. This method consists of dividing the time series into several folds, training the model with a portion of the data, and then evaluating the performance on a validation set. A sliding window approach was used to select the different subsets of data for validation. The time series was divided into multiple windows, and for each window, the previous windows were used for training and the current window was used for validation. This process is repeated several times, each time using a different subset of the data as validation and the remaining data as training. In doing so, we can assess the models’ ability to generalize to new unseen data, making them particularly appropriate for time series forecasting. This approach ensures that the model is tested on different time periods and reduces the risk of overfitting to a specific period.
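A minimal sketch of this validation scheme is shown below; scikit-learn's TimeSeriesSplit implements an expanding-window variant of walk-forward validation and is used here only to illustrate the idea, since the exact windowing of the original study may differ:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import TimeSeriesSplit

X = np.random.rand(1000, 24)  # placeholder lag features
y = np.random.rand(1000)      # placeholder target

rmse_per_fold = []
for train_idx, valid_idx in TimeSeriesSplit(n_splits=5).split(X):
    # Each fold trains on all previous windows and validates on the next one.
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    pred = model.predict(X[valid_idx])
    rmse_per_fold.append(np.sqrt(mean_squared_error(y[valid_idx], pred)))

print(np.mean(rmse_per_fold))
```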
The accuracy of the predictions made by the models is assessed by comparing the estimated values with the actual values. To evaluate the performance of the models, three metrics were used: the mean absolute error (MAE):
MAE = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|
The root mean squared error (RMSE):
RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}
Additionally, the mean absolute percentage error (MAPE):
MAPE = \frac{100}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|
where y_i is the actual value, ŷ_i is the estimation, and n is the number of samples. These three metrics provide different perspectives on the performance of the models and help to understand the accuracy of the predictions.
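These metrics translate directly into NumPy, computed on the denormalized values:

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    # Root mean squared error
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    # Mean absolute percentage error (in %)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
```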

4. Results

In the following section, we present the evaluation of the prediction performance of our implemented models, LR, RF, XGB, LSTM, and GRU. The prediction of electricity prices and electricity consumption is evaluated using different evaluation metrics: MAE, RMSE, and MAPE. For each model, we provide two tables to showcase the forecasting results: one for the electricity pricing estimation and another for the prediction of electricity consumption. The metrics were calculated with the denormalized values of the data, providing a comprehensive assessment of the prediction’s performance.
Table 1 presents the performance of the electricity price and consumption predictions obtained using LR. Each row shows the results for a different number of lags used as input. The input delays in our models are sequential. For instance, when we set the lag to be 4, the input features used in the models will be x_{t−1}, x_{t−2}, x_{t−3}, and x_{t−4}. This means that we consider the values of the variable in the previous four time steps as inputs to the model. The columns related to the price show that the three errors (MAE, RMSE, and MAPE) remain consistent across different lag values, indicating that the model is stable. On the other hand, the consumption errors show some variation across lag values. For example, for lags 12 and 16, the MAE and RMSE are significantly higher compared to other lags. The minimum RMSE for price modeling is 28.70, which corresponds to a lag of 24. Similarly, the best RMSE for consumption was obtained with 24 lags. However, for the remaining two errors, the optimal parameter is obtained with fewer lags, specifically with 2 and 4. It should be noted that there does not appear to be a clear trend in the errors for either the price or consumption models.
The results of the electricity price and consumption forecasting using RF are shown in Table 2. The results of the analysis indicate a clear increasing trend in the errors for both the price and consumption models, suggesting that as the number of lags increases, the models tend to perform worse. Interestingly, the RF model shows better performance with fewer lags. It is important to note that hyperparameters can have a significant impact on the results. The performance of a model is highly dependent on the choice of hyperparameters, and therefore it is essential to carefully tune them to achieve the best performance. In fact, the results indicate that RF with fewer lags has even better errors than LR. This highlights the importance of selecting an appropriate number of lags when using this model. Furthermore, it is noteworthy that the behavior of the errors is similar for both the price and consumption models. This may suggest that there are underlying factors influencing both variables in a similar way and that the models are capturing these factors to some extent.
Table 3 shows the results of the electricity price and consumption prediction using XGB. The results of the analysis show that the XGB model exhibits more variability in errors compared to the previous models. Additionally, the results indicate that, in most cases, XGB performs worse with fewer lags. Surprisingly, increasing the number of lags up to 12 tends to reduce the errors, but beyond this point, the errors tend to become worse. This may be caused by overfitting in the model. These findings suggest that careful consideration should be given to the selection of appropriate parameters when using XGB in order to avoid overfitting and obtain more reliable results.
Table 4 shows the prediction results of electricity prices using LSTM with various hyperparameter settings. In this case, the hyperparameters are the number of epochs, patience, learning rate, batch size, and number of neurons. Upon examination of this table, there does not seem to be a clear pattern or trend in the performance of the prediction with different hyperparameters. The best configuration was found with 100 epochs, a patience of 20, a learning rate of 0.001, a batch size of 16, and 8 neurons, with an MAE of 9.17, an RMSE of 12.83, and a MAPE of 4.73.
The corresponding table for electricity consumption using LSTM can be seen in Table 5. Based on the results, it appears that increasing the number of epochs generally leads to improvement in prediction, with the lowest MAE and MAPE being achieved when the number of epochs is between 400 and 600. Regarding other parameters, it is not possible to draw a clear pattern. The best number of neurons and batch size, as well as the optimal learning rate and patience values, vary and seem to depend on other parameters as well.
Finally, we evaluated the performance of GRU, as can be seen in Table 6. Based on the data presented in the table, it can be concluded that there is no clear pattern in the prediction of electricity prices using GRU. Nevertheless, some observations can be made. The number of epochs, patience, and learning rate do not appear to have a significant impact on the prediction. The batch size and the number of neurons seem to have some effect; the lowest MAE and MAPE values were both achieved with a batch size of 64 and 4 neurons.
The last table, Table 7, displays the prediction performance of the GRU model for electricity consumption. In this case, the results suggest a consistent pattern of lower prediction error with smaller batch sizes and a higher number of neurons. Specifically, the best configuration according to the lowest MAE and MAPE values was achieved with a batch size of 16 and a number of neurons equal to 8 when using 700 epochs with 150 patience and a learning rate of 0.001.
Finally, we gathered the best results of each model in Table 8 in order to determine which one performed best overall. After further analysis, it can be observed that the performance of the models varied depending on the evaluation metric used. For the price modeling task, the LSTM model showed the best performance according to both the MAE and RMSE metrics. However, when considering the MAPE metric, the best results were obtained with the GRU model. Additionally, it is worth noting that although both LSTM and GRU models showed similar and good results for price modeling, XGB was able to achieve even better results for the consumption task. This suggests that different models may perform better for different tasks and that careful consideration should be given to selecting the most appropriate model for the specific application at hand. Overall, these findings demonstrate the utility of using multiple models and evaluation metrics to gain a comprehensive understanding of the performance of different time series prediction models.
We compared the performance of these five models using a two-tailed t-test. We computed the p-value for each pair of models and set a significance level of 0.05, using the RMSE metric. The results in Table 9 show that, for both price and consumption prediction, the differences between LR and XGB were not statistically significant, nor were the differences between RF and XGB. Furthermore, LSTM and GRU did not show significant differences in their performance for either price or consumption prediction. However, the statistical tests revealed that all the remaining model pairs differed significantly for both price and consumption prediction.
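For illustration, the following sketch shows such a pairwise comparison with SciPy; whether the original test was paired or unpaired over per-fold errors is not detailed, so a paired test on illustrative per-fold RMSE values is assumed:

```python
from scipy import stats

# Illustrative per-fold RMSE values for two models (not the study's data).
rmse_lstm = [12.8, 13.1, 12.6, 12.9, 13.0]
rmse_gru = [12.9, 13.2, 12.7, 12.8, 13.1]

t_stat, p_value = stats.ttest_rel(rmse_lstm, rmse_gru)  # paired, two-tailed by default
print(t_stat, p_value, p_value < 0.05)  # compare against the 0.05 significance level
```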
It is worth noting that XGB achieved the best results in consumption prediction, which is an interesting finding. However, the statistical tests showed that there was no significant difference between the performance of RF and XGB in both price and consumption prediction. This is an important result, as it suggests that RF, which is a more interpretable model than XGB, may be a good alternative to XGB in some applications. It is also worth mentioning that the tests showed that LSTM and GRU differed significantly from the other models in both price and consumption prediction. Therefore, these results provide valuable insights into the strengths and weaknesses of different machine learning algorithms for intraday electricity price and consumption prediction, which can inform the development of more effective energy policies and pricing strategies.
Figure 3 illustrates a comparison of price and consumption predictions from five different models. Figure 3a depicts the predicted values of all models against the actual price values for future time points t + 1 ,   t + 2 ,   t + 4 ,   t + 8 ,   t + 24 , and t + 32 h. Figure 3b shows the same comparison but for the consumption data. The figure provides a visual representation of the performance of each model and its ability to capture the dynamics of the underlying data. The comparison allows us to identify the models that performed best in terms of accuracy and precision. This figure illustrates a useful summary of the model predictions and their ability to forecast the future values of the two different time series.
As a final remark, it is worth noting that pricing and consumption of electricity are not independent variables. Rather, they are closely related and can influence each other in various ways. For instance, when electricity prices are high, consumers may adjust their behavior to reduce their costs, which can lead to a decrease in energy consumption. Furthermore, electricity suppliers often offer different price ranges at different times of the day, which can encourage consumers to use electricity during off-peak hours, resulting in a reduction in total energy consumption. Therefore, investigating the relationship between pricing and consumption can provide insights into the drivers of electricity demand and inform policies aimed at promoting energy efficiency and reducing energy consumption.

5. Conclusions

In this study, we evaluated the performance of various machine learning methods for predicting electricity consumption and electricity prices. The models included LR, RF, XGB, LSTM, and GRU. Our results showed that LSTM and GRU were the best models for predicting electricity prices, with similar performance and high accuracy, suggesting that they are well-suited for this task. However, for electricity consumption modeling, XGB achieved the best results, indicating that it is a strong contender for this application. Despite these differences, the results of all three models (LSTM, GRU, and XGB) were relatively close, with low error rates and high accuracy, highlighting the potential of machine learning methods for predicting electricity consumption and pricing. In contrast, the LR model had significantly worse performance than the other models, with a relatively high error rate.
In conclusion, this research highlights the importance of using machine learning techniques for the prediction of electricity prices and consumption and the superior performance of XGB, LSTM, and GRU models compared to other machine learning methods. It stresses the potential of these models for real-world applications and provides a foundation for future research in the energy field. Future work can focus on exploring new methods, such as fuzzy neural networks, to efficiently handle uncertainty in prediction tasks. Additionally, the proposed methodology might be tested and applied to other regions with similar characteristics. A more exhaustive hyperparameter search can also be performed to improve model performance.
Finally, we suggest including additional performance measures that provide information about the behavior of the prediction errors in the tails. The Kupiec test or other tail tests could be used to assess the model’s performance beyond the mean. These measures can provide valuable insights into the model’s behavior, especially in cases where the tails of the prediction error distribution are of interest.

Author Contributions

M.H.R., L.G.B.R., D.C.R. and M.d.C.P.J. have contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the I+D+i FEDER 2020 project B-TIC-42-UGR20 "Consejería de Universidad, Investigación e Innovación de la Junta de Andalucía" and from "the Ministerio de Ciencia e Innovación" (Spain) (Grant PID2020-112495RB-C21 funded by MCIN/ AEI /10.13039/501100011033).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available on https://github.com/eleion/Papers/tree/main/Energy%20pricing (accessed on 19 March 2023).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

AI      Artificial Intelligence
ANN     Artificial Neural Network
GRU     Gated Recurrent Unit
LR      Linear Regression
LSTM    Long Short-Term Memory
MAE     Mean Absolute Error
MAPE    Mean Absolute Percentage Error
RF      Random Forest
RMSE    Root Mean Squared Error
RNN     Recurrent Neural Network
SEN     Spanish Electricity Network
XGB     Extreme Gradient Boosting

References

  1. Xu, X.; Wei, Z.; Ji, Q.; Wang, C.; Gao, G. Global renewable energy development: Influencing factors, trend predictions and countermeasures. Resour. Policy 2019, 63, 101470. [Google Scholar] [CrossRef]
  2. Battistini, N.; Di Nino, V.; Dossche, M.; Kolndrekaj, A. Energy prices and private consumption: What are the channels? Econ. Bull. Artic 2022, 3. Available online: https://www.ecb.europa.eu/pub/economic-bulletin/articles/2022/html/ecb.ebart202203_01~f7466627b4.en.html (accessed on 15 April 2023).
  3. Martínez-Álvarez, F.; Troncoso, A.; Asencio-Cortés, G.; Riquelme, J.C. A survey on data mining techniques applied to electricity-related time series forecasting. Energies 2015, 8, 13162–13193. [Google Scholar] [CrossRef]
  4. Mulder, M.; Scholtens, B. The impact of renewable energy on electricity prices in the netherlands. Renew. Energy 2013, 57, 94–100. [Google Scholar] [CrossRef]
  5. Kolb, S.; Dillig, M.; Plankenbühler, T.; Karl, J. The impact of renewables on electricity prices in germany—An update for the years 2014–2018. Renew. Sustain. Energy Rev. 2020, 134, 110307. [Google Scholar] [CrossRef]
  6. Hirth, L. What caused the drop in european electricity prices? A factor decomposition analysis. Energy J. 2018, 39, 143–157. [Google Scholar] [CrossRef]
  7. Nerlinger, M.; Utz, S. The impact of the russia-ukraine conflict on energy firms: A capital market perspective. Financ. Res. Lett. 2022, 50, 103243. [Google Scholar] [CrossRef]
  8. Fitch Ratings. Investment-Grade EMEA Generation Companies: Relative Credit Analysis. 2023. Available online: http://www.fitchratings.com/research/corporate-finance/investment-grade-emea-generation-companies-relative-credit-analysis-13-04-2023 (accessed on 15 April 2023).
  9. Ember. Ember’s analysis of the eu electricity transition in 2022: What happened in 2022, what can we expect for 2023? Eur. Electr. Rev. 2023. Available online: http://ember-climate.org/insights/research/european-electricity-review-2023/ (accessed on 15 April 2023).
  10. Gürtler, M.; Paulsen, T. Forecasting performance of time series models on electricity spot markets: A quasi-meta-analysis. Int. J. Energy Sect. Manag. 2018, 12, 103–129. [Google Scholar] [CrossRef]
  11. Steffen, B.; Patt, A. A historical turning point? Early evidence on how the russia-ukraine war changes public support for clean energy policies. Energy Res. Soc. Sci. 2022, 91, 102758. [Google Scholar] [CrossRef]
  12. Patel, H.; Shah, M. Energy consumption and price forecasting through data-driven analysis methods: A review. SN Comput. Sci. 2021, 2, 315. [Google Scholar] [CrossRef]
  13. Lu, H.; Ma, X.; Ma, M.; Zhu, S. Energy price prediction using data-driven models: A decade review. Comput. Sci. Rev. 2021, 39, 100356. [Google Scholar] [CrossRef]
  14. Müller, G.; Seibert, A. Bayesian estimation of stable carma spot models for electricity prices. Energy Econ. 2019, 78, 267–277. [Google Scholar] [CrossRef]
  15. Bose, A.; Hsu, C.-H.; Roy, S.S.; Lee, K.C.; Mohammadi-ivatloo, B.; Abimannan, S. Forecasting stock price by hybrid model of cascading multivariate adaptive regression splines and deep neural network. Comput. Electr. Eng. 2021, 95, 107405. [Google Scholar] [CrossRef]
  16. Kremer, M.; Kiesel, R.; Paraschiv, F. An econometric model for intraday electricity trading. Philos. Trans. R. Soc. A 2021, 379, 20190624. [Google Scholar] [CrossRef]
  17. Narajewski, M.; Ziel, F. Econometric modelling and forecasting of intraday electricity prices. J. Commod. Mark. 2020, 19, 100107. [Google Scholar] [CrossRef]
  18. Weron, R. Electricity price forecasting: A review of the state-of-the-art with a look into the future. Int. J. Forecast. 2014, 30, 1030–1081. [Google Scholar] [CrossRef]
  19. Özen, K.; Yıldırım, D. Application of bagging in day-ahead electricity price forecasting and factor augmentation. Energy Econ. 2021, 103, 105573. [Google Scholar] [CrossRef]
  20. Lu, X.; Qiu, J.; Lei, G.; Zhu, J. Scenarios modelling for forecasting day-ahead electricity prices: Case studies in australia. Appl Energ 2022, 308, 118296. [Google Scholar] [CrossRef]
  21. Guo, Y.; Luo, Y.; He, J.; He, Y. Real-time deep learning-based market demand forecasting and monitoring. Comput. Electr. Eng. 2022, 100, 107878. [Google Scholar] [CrossRef]
  22. Deng, S.; Chen, F.; Wu, D.; He, Y.; Ge, H.; Ge, Y. Quantitative combination load forecasting model based on forecasting error optimization. Comput. Electr. Eng. 2022, 101, 108125. [Google Scholar] [CrossRef]
  23. Dong, Y.; Wang, J.; Jiang, H.; Wu, J. Short-term electricity price forecast based on the improved hybrid model. Energy Convers. Manag. 2011, 52, 2987–2995. [Google Scholar] [CrossRef]
  24. Peng, L.; Liu, S.; Liu, R.; Wang, L. Effective long short-term memory with differential evolution algorithm for electricity price prediction. Energy 2018, 162, 1301–1314. [Google Scholar] [CrossRef]
  25. Lago, J.; De Ridder, F.; De Schutter, B. Forecasting spot electricity prices: Deep learning approaches and empirical comparison of traditional algorithms. Appl. Energy 2018, 221, 386–405. [Google Scholar] [CrossRef]
  26. Nowotarski, J.; Weron, R. On the importance of the long-term seasonal component in day-ahead electricity price forecasting. Energy Econ. 2016, 57, 228–235. [Google Scholar] [CrossRef]
  27. Jonsson, T.; Pinson, P.; Nielsen, H.A.; Madsen, H.; Nielsen, T.S. Forecasting electricity spot prices accounting for wind power predictions. IEEE Trans. Sustain. Energy 2013, 4, 210–218. [Google Scholar] [CrossRef]
  28. Yang, Z.; Ce, L.; Lian, L. Electricity price forecasting by a hybrid model, combining wavelet transform, arma and kernel-based extreme learning machine methods. Appl. Energy 2017, 190, 291–305. [Google Scholar] [CrossRef]
  29. Lago, J.; Marcjasz, G.; De Schutter, B.; Weron, R. Forecasting day-ahead electricity prices: A review of state-of-the-art algorithms, best practices and an open-access benchmark. Appl. Energy 2021, 293, 116983. [Google Scholar] [CrossRef]
  30. Lehna, M.; Scheller, F.; Herwartz, H. Forecasting day-ahead electricity prices: A comparison of time series and neural network models taking external regressors into account. Energy Econ. 2022, 106, 105742. [Google Scholar] [CrossRef]
  31. Contreras, J.; Espinola, R.; Nogales, F.J.; Conejo, A.J. Arima models to predict next-day electricity prices. IEEE Trans. Power Syst. 2003, 18, 1014–1020. [Google Scholar] [CrossRef]
  32. Karabiber, O.A.; Xydis, G. Electricity price forecasting in the danish day-ahead market using the tbats, ann and arima methods. Energies 2019, 12, 928. [Google Scholar] [CrossRef]
  33. Heydari, A.; Majidi Nezhad, M.; Pirshayan, E.; Astiaso Garcia, D.; Keynia, F.; De Santoli, L. Short-term electricity price and load forecasting in isolated power grids based on composite neural network and gravitational search optimization algorithm. Appl. Energy 2020, 277, 115503. [Google Scholar] [CrossRef]
  34. Panapakidis, I.P.; Dagoumas, A.S. Day-ahead electricity price forecasting via the application of artificial neural network based models. Appl. Energy 2016, 172, 132–151. [Google Scholar] [CrossRef]
  35. Huang, J.; Koroteev, D.D.; Rynkovskaya, M. Building energy management and forecasting using artificial intelligence: Advance technique. Comput. Electr. Eng. 2022, 99, 107790. [Google Scholar] [CrossRef]
  36. Hadjout, D.; Torres, J.F.; Troncoso, A.; Sebaa, A.; Martínez-Álvarez, F. Electricity consumption forecasting based on ensemble deep learning with application to the algerian market. Energy 2022, 243, 123060. [Google Scholar] [CrossRef]
  37. Itaba, S.; Mori, H. A fuzzy-preconditioned grbfn model for electricity price forecasting. Procedia Comput. Sci. 2017, 114, 441–448. [Google Scholar] [CrossRef]
  38. Agrawal, R.K.; Muchahary, F.; Tripathi, M.M. Ensemble of relevance vector machines and boosted trees for electricity price forecasting. Appl. Energy 2019, 250, 540–548. [Google Scholar] [CrossRef]
  39. Yucong, W.; Bo, W. Research on Ea-Xgboost Hybrid Model for Building Energy Prediction. J. Phys. Conf. Ser. 2020, 1518, 012082. [Google Scholar] [CrossRef]
  40. Somu, N.; Raman M R, G.; Ramamritham, K. A deep learning framework for building energy consumption forecast. Renew. Sustain. Energy Rev. 2021, 137, 110591. [Google Scholar] [CrossRef]
  41. Li, C.; Li, G.; Wang, K.; Han, B. A multi-energy load forecasting method based on parallel architecture cnn-gru and transfer learning for data deficient integrated energy systems. Energy 2022, 259, 124967. [Google Scholar] [CrossRef]
  42. López, G.; Arboleya, P. Short-term wind speed forecasting over complex terrain using linear regression models and multivariable lstm and narx networks in the andes mountains, ecuador. Renew. Energy 2022, 183, 351–368. [Google Scholar] [CrossRef]
  43. Veeramsetty, V.; Reddy, K.R.; Santhosh, M.; Mohnot, A.; Singal, G. Short-term electric power load forecasting using random forest and gated recurrent unit. Electr. Eng. 2022, 104, 307–329. [Google Scholar] [CrossRef]
  44. Li, X.; Ma, L.; Chen, P.; Xu, H.; Xing, Q.; Yan, J.; Lu, S.; Fan, H.; Yang, L.; Cheng, Y. Probabilistic solar irradiance forecasting based on xgboost. Energy Rep. 2022, 8, 1087–1095. [Google Scholar] [CrossRef]
  45. Ordóñez, F.J.; Roggen, D. Deep convolutional and lstm recurrent neural networks for multimodal wearable activity recognition. Sensors 2016, 16, 115. [Google Scholar] [CrossRef]
  46. Zhao, Z.; Yun, S.; Jia, L.; Guo, J.; Meng, Y.; He, N.; Li, X.; Shi, J.; Yang, L. Hybrid vmd-cnn-gru-based model for short-term forecasting of wind power considering spatio-temporal features. Eng. Appl. Artif. Intell. 2023, 121, 105982. [Google Scholar] [CrossRef]
  47. Hernández-Rodríguez, M.; Ruiz, L.G.B.; Criado-Ramón, D.; Pegalajar, M. Dataset Esios—Energy Pricing and Consumption. Available online: https://github.com/eleion/Papers/tree/main/Energy%20pricing (accessed on 15 April 2023).
  48. Lazzeri, F. Machine Learning for Time Series Forecasting with Python; John Wiley & Sons: Hoboken, NJ, USA, 2020. [Google Scholar]
  49. Nielsen, A. Practical Time Series Analysis: Prediction with Statistics and Machine Learning; O’Reilly Media: Sebastopol, CA, USA, 2019. [Google Scholar]
  50. El Mouatasim, A. Simple and multi linear regression model of verbs in quran. Am. J. Comput. Math. 2018, 8, 68–77. [Google Scholar] [CrossRef]
  51. Saber, A.Y.; Alam, A.K.M.R. Short term load forecasting using multiple linear regression for big data. In Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, USA, 27 November–1 December 2017; pp. 1–6. [Google Scholar]
  52. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  53. Liu, J.; Li, Y. Study on environment-concerned short-term load forecasting model for wind power based on feature extraction and tree regression. J. Clean. Prod. 2020, 264, 121505. [Google Scholar] [CrossRef]
  54. Lee, K.; Kim, J.; Kim, J.; Hur, K.; Kim, H. Cnn and gru combination scheme for bearing anomaly detection in rotating machinery health monitoring. In Proceedings of the 2018 1st IEEE International Conference on Knowledge Innovation and Invention (ICKII), Jeju Island, Republic of Korea, 23–27 July 2018; pp. 102–105. [Google Scholar]
  55. Potter, K.C.; Brunhart-Lupo, N.J.; Bush, B.W.; Gruchalla, K.M.; Bugbee, B.; Krishnan, V.K. Coupling Visualization, Simulation, and Deep Learning for Ensemble Steering of Complex Energy Models: Preprint; National Renewable Energy Lab. (NREL): Golden, CO, USA, 2017. [Google Scholar]
Figure 1. Flowchart depicting the overall procedure followed in the study.
Figure 2. Decomposition of the electricity price time series into trend, seasonal, and residual components: raw price data, trend component, seasonal component, and residual component.
Figure 3. Comparison of Price and Consumption Predictions: (a) Predicted values of the five different models compared to the actual price values for future time points t + 1, t + 2, t + 4, t + 8, t + 24, and t + 32 h. (b) Same comparison, but for the consumption data.
Table 1. Performance of the electricity pricing and consumption according to the number of lags used in the linear regression model. Best error in bold.

Lag | Price MAE | Price RMSE | Price MAPE | Consumption MAE | Consumption RMSE | Consumption MAPE
1 | 24.67 | 28.80 | 13.76 | 2875.59 | 3207.53 | 14.41
2 | 24.66 | 28.79 | 13.75 | 2874.78 | 3206.80 | 14.40
4 | 24.66 | 28.79 | 13.75 | 2874.89 | 3206.86 | 14.40
8 | 29.16 | 34.14 | 16.43 | 3004.14 | 3320.84 | 15.01
12 | 27.56 | 33.34 | 15.62 | 4460.86 | 4705.04 | 21.89
16 | 27.61 | 33.41 | 15.66 | 5238.58 | 5443.24 | 25.58
20 | 25.33 | 30.40 | 14.32 | 3275.93 | 3504.54 | 16.16
24 | 24.90 | 28.70 | 13.98 | 2909.11 | 3087.00 | 14.19
Table 2. Performance of the electricity pricing and consumption according to the number of lags used in the random forest model. Best error in bold.

Lag | Price MAE | Price RMSE | Price MAPE | Consumption MAE | Consumption RMSE | Consumption MAPE
1 | 20.61 | 25.32 | 11.59 | 1892.33 | 2384.21 | 9.72
2 | 21.08 | 26.63 | 11.94 | 1527.43 | 1886.15 | 7.72
4 | 20.11 | 26.02 | 11.42 | 1920.42 | 2408.46 | 9.78
8 | 21.06 | 26.93 | 11.97 | 2002.17 | 2582.91 | 10.26
12 | 21.27 | 26.82 | 12.02 | 1920.35 | 2403.98 | 9.75
16 | 21.91 | 27.37 | 12.37 | 1984.45 | 2443.71 | 10.05
20 | 22.23 | 27.89 | 12.55 | 2290.89 | 2644.07 | 11.50
24 | 22.21 | 29.59 | 12.28 | 3400.89 | 4041.84 | 16.30
Table 3. Performance of the electricity pricing and consumption according to the number of lags used in the XGBoost model. Best error in bold.

Lag | Price MAE | Price RMSE | Price MAPE | Consumption MAE | Consumption RMSE | Consumption MAPE
1 | 37.30 | 41.49 | 20.70 | 2517.95 | 2714.67 | 12.04
2 | 22.45 | 28.20 | 11.08 | 4334.61 | 4731.31 | 20.80
4 | 40.77 | 41.90 | 21.69 | 4153.40 | 4878.90 | 19.50
8 | 41.55 | 49.56 | 23.32 | 2756.07 | 3109.01 | 13.06
12 | 10.69 | 14.40 | 5.77 | 432.91 | 594.96 | 2.07
16 | 15.43 | 18.64 | 8.67 | 1967.10 | 2280.30 | 9.35
20 | 23.15 | 30.04 | 12.41 | 692.41 | 1070.68 | 3.24
24 | 24.07 | 31.68 | 13.48 | 1181.68 | 1689.58 | 5.54
Table 4. Performance of the electricity pricing according to several parameters used in the LSTM model. P is patience, and N is the number of neurons. Best error in bold.

Epochs | P | Learning Rate | Batch Size | N | MAE | RMSE | MAPE
50 | 5 | 0.0001 | 32 | 4 | 9.57 | 13.27 | 4.96
50 | 5 | 0.0001 | 32 | 8 | 9.98 | 13.44 | 5.15
50 | 5 | 0.0001 | 32 | 16 | 9.74 | 13.51 | 5.04
50 | 5 | 0.001 | 32 | 16 | 9.36 | 12.79 | 4.86
100 | 5 | 0.001 | 32 | 16 | 9.36 | 12.79 | 4.86
100 | 10 | 0.001 | 32 | 16 | 9.31 | 12.8 | 4.82
100 | 20 | 0.001 | 32 | 16 | 9.32 | 12.84 | 4.81
100 | 20 | 0.0001 | 32 | 16 | 9.54 | 13.32 | 4.94
100 | 20 | 0.0001 | 16 | 16 | 9.53 | 13.15 | 4.92
100 | 20 | 0.001 | 16 | 16 | 9.48 | 13.01 | 4.86
100 | 20 | 0.001 | 16 | 8 | 9.17 | 12.83 | 4.73
100 | 20 | 0.001 | 16 | 4 | 9.55 | 12.89 | 4.91
100 | 20 | 0.001 | 8 | 8 | 9.7 | 13.25 | 4.93
200 | 30 | 0.001 | 16 | 8 | 9.25 | 12.87 | 4.77
Table 5. Performance of the energy consumption forecasting according to several parameters used in the LSTM model. P is patience, and N is the number of neurons. Best error in bold.

Epochs | P | Learning Rate | Batch Size | N | MAE | RMSE | MAPE
100 | 10 | 0.001 | 32 | 4 | 554.47 | 681.84 | 2.63
100 | 10 | 0.001 | 32 | 8 | 582.76 | 690.7 | 2.77
100 | 10 | 0.001 | 16 | 8 | 623.67 | 735.24 | 2.95
100 | 10 | 0.001 | 16 | 4 | 661.41 | 784.48 | 3.11
100 | 10 | 0.001 | 8 | 4 | 796.85 | 943.94 | 3.71
100 | 10 | 0.0001 | 64 | 4 | 659.28 | 802.59 | 3.16
200 | 20 | 0.001 | 32 | 4 | 548.34 | 668.88 | 2.61
200 | 20 | 0.05 | 32 | 4 | 658.52 | 741.55 | 3.15
200 | 20 | 0.001 | 32 | 8 | 570.54 | 664.71 | 2.73
200 | 50 | 0.001 | 32 | 2 | 606.06 | 763.31 | 2.86
300 | 50 | 0.001 | 4 | 4 | 731.92 | 946.65 | 3.44
400 | 200 | 0.001 | 32 | 4 | 534.79 | 695.74 | 2.53
600 | 300 | 0.001 | 32 | 4 | 507.56 | 672.84 | 2.39
Table 6. Performance of the electricity pricing according to several parameters used in the GRU model. P is patience, and N is the number of neurons. Best error in bold.

Epochs | P | Learning Rate | Batch Size | N | MAE | RMSE | MAPE
600 | 100 | 0.001 | 32 | 4 | 9.47 | 12.94 | 4.86
600 | 100 | 0.001 | 32 | 8 | 9.34 | 12.92 | 4.79
400 | 50 | 0.001 | 64 | 4 | 9.20 | 13.1 | 4.76
400 | 50 | 0.001 | 128 | 4 | 9.35 | 13.1 | 4.87
400 | 50 | 0.001 | 128 | 8 | 9.22 | 13.21 | 4.82
600 | 100 | 0.001 | 64 | 4 | 9.48 | 13.54 | 4.91
400 | 50 | 0.001 | 16 | 4 | 9.35 | 13.03 | 4.83
400 | 50 | 0.005 | 16 | 4 | 14.37 | 18.52 | 7.15
400 | 50 | 0.0005 | 16 | 4 | 9.47 | 12.89 | 4.88
400 | 50 | 0.0005 | 64 | 4 | 9.28 | 13.14 | 4.83
Table 7. Performance of the energy consumption forecasting according to several parameters used in the GRU model. P is patience, and N is the number of neurons. Best error in bold.

Epochs | P | Learning Rate | Batch Size | N | MAE | RMSE | MAPE
400 | 50 | 0.001 | 64 | 4 | 597.78 | 761.04 | 2.82
400 | 50 | 0.001 | 64 | 8 | 594.35 | 768.1 | 2.78
400 | 50 | 0.001 | 32 | 8 | 524.93 | 631.75 | 2.51
400 | 50 | 0.001 | 16 | 8 | 522.14 | 620.34 | 2.47
400 | 50 | 0.001 | 16 | 12 | 578.71 | 712.59 | 2.77
400 | 50 | 0.001 | 8 | 8 | 576.16 | 678.36 | 2.73
400 | 50 | 0.005 | 16 | 8 | 733.8 | 854.21 | 3.5
400 | 50 | 0.0005 | 16 | 8 | 629.11 | 768.05 | 2.96
600 | 100 | 0.001 | 16 | 8 | 637.61 | 718.23 | 3.04
700 | 150 | 0.001 | 16 | 8 | 517.05 | 626.61 | 2.44
Table 8. Comparative analysis of price and consumption prediction performance using LR, RF, XGB, LSTM, and GRU models. Best error in bold.

Model | Price MAE | Price RMSE | Price MAPE | Consumption MAE | Consumption RMSE | Consumption MAPE
LR | 24.66 | 28.70 | 13.75 | 2874.78 | 3087.00 | 14.19
RF | 20.11 | 25.32 | 11.42 | 1527.43 | 1886.15 | 7.72
XGB | 10.69 | 14.40 | 5.77 | 432.91 | 594.96 | 2.07
LSTM | 9.17 | 12.79 | 4.73 | 507.56 | 664.71 | 2.39
GRU | 9.20 | 12.89 | 2.39 | 517.05 | 620.34 | 2.44
Table 9. Results of a statistical test comparing performance metrics for five different machine learning models (LR, RF, XGB, LSTM, and GRU) on the dataset.

Model Comparison | Price t-Value | Price p-Value | Price Result | Consumption t-Value | Consumption p-Value | Consumption Result
LR vs. RF | 3.843 | 0.002 | Significant | 2.926 | 0.011 | Significant
LR vs. XGB | −0.276 | 0.786 | Not significant | 1.694 | 0.112 | Not significant
LR vs. LSTM | 20.493 | 0.000 | Significant | 9.521 | 0.000 | Significant
LR vs. GRU | 15.561 | 0.000 | Significant | 9.635 | 0.000 | Significant
RF vs. XGB | −1.155 | 0.267 | Not significant | −0.057 | 0.955 | Not significant
RF vs. LSTM | 29.865 | 0.000 | Significant | 8.253 | 0.000 | Significant
RF vs. GRU | 16.260 | 0.000 | Significant | 8.415 | 0.000 | Significant
XGB vs. LSTM | 4.463 | 0.001 | Significant | 3.374 | 0.005 | Significant
XGB vs. GRU | 4.245 | 0.001 | Significant | 3.432 | 0.004 | Significant
LSTM vs. GRU | −1.018 | 0.326 | Not significant | 0.754 | 0.463 | Not significant
