Review

Analysis of Neural Networks Used by Artificial Intelligence in the Energy Transition with Renewable Energies

by Íñigo Manuel Iglesias-Sanfeliz Cubero 1, Andrés Meana-Fernández 1, Juan Carlos Ríos-Fernández 1,*, Thomas Ackermann 2 and Antonio José Gutiérrez-Trashorras 1
1 Department of Energy, University of Oviedo, 33203 Gijon, Spain
2 Department of Building Physics and Construction, University of Bielefeld, 32427 Minden, Germany
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(1), 389; https://doi.org/10.3390/app14010389
Submission received: 14 November 2023 / Revised: 7 December 2023 / Accepted: 28 December 2023 / Published: 31 December 2023

Highlights

  • The application of different types of ANNs is very effective in the energy transition.
  • ANNs are a very effective tool in the fight against climate change.
  • ANNs show a high capacity to make predictions under different meteorological conditions.

Abstract

Artificial neural networks (ANNs) have become key methods for achieving global climate goals. The aim of this review is to carry out a detailed analysis of the applications of ANNs to the energy transition all over the world. Thus, the applications of ANNs to renewable energies such as solar, wind, and tidal energy, as well as to the prediction of greenhouse gas emissions, were studied. This review was conducted through keyword searches on publishers and research platforms such as Science Direct, Research Gate, Google Scholar, IEEE Xplore, Taylor and Francis, and MDPI. The most recent studies analyzed date from 2018 for wind energy, 2022 for solar energy, 2021 for tidal energy, and 2021 for the prediction of greenhouse gas emissions. The results obtained were classified according to the structure and architecture used, the inputs/outputs, the region studied, the activation function, and the training algorithms, as the main criteria for synthesizing the results. In total, 96 investigations were reviewed, and among them, the predominant structure was that of the multilayer perceptron, with purelin and sigmoid as the most used activation functions.

1. Introduction

Currently, the most accurate, efficient, and powerful machine for performing operations is the human brain, which can provide solutions to problems that computers are not capable of solving. Researchers and scientists have developed artificial intelligence (AI) models to reproduce, to some extent, the processes that take place in the human brain [1]. AI currently comprises different groups of methods, including artificial neural networks (ANNs) and various hybrid systems. Among them, ANNs are considered the best method, as they are accurate, fast, and simple and have the ability to model multivariate systems [2].
The neural network (NN) concept has more than half a century of history; however, most of its applications have been developed only in the last 20 years, in fields such as defense, engineering, mathematics, economics, medicine, and meteorology, among many others.
The history of neural networks dates back to the 1940s. Warren McCulloch and Walter Pitts were the first to build a very simple neural network using electrical circuits [3]. Later, Donald Hebb proposed that neural pathways strengthen with each use, an important concept in human learning [4]. Then, in the 1950s, Nathaniel Rochester of the IBM Research Laboratories made the first attempts to simulate complex neural networks [5]. In 1959, Bernard Widrow and Marcian Hoff developed the models called “ADALINE” and “MADALINE” [6,7]. After the publication of the book “Perceptrons” by Marvin Minsky and Seymour Papert in 1969, there was a period of slowdown in research, as the book argued that the single-layer perceptron approach could not be effectively extended to multilayer neural networks [8]. In the 1970s, two competing paradigms emerged in the conception of neural networks, symbolism and connectionism [9]. The controversy ended with the acceptance of the symbolic paradigm as the most viable line of research. In the early 1980s, however, connectionism resurfaced, building on Werbos’s 1974 studies, which enabled the rapid development of the training of multilayer neural networks using the so-called “backpropagation” algorithm [10,11]. Since then, the field of neural networks has seen significant advances, including the introduction and development of max-pooling for three-dimensional data recognition [12] and the development of deep learning and its application to a wide variety of fields such as renewable energy [12].
However, ANNs are not the only models that learn by example. Other approaches include supervised learning, which trains algorithms on sample input and output data labeled by humans [13]; deep learning, which uses neural networks to learn from data and improves its performance as the number of available training samples grows [14]; and unsupervised classification paradigms such as conceptual clustering [15], an unsupervised learning method focused on generating concept descriptions for the generated classes. Other machine learning paradigms that learn by example are semi-supervised learning [16], active learning [17], transfer learning [18], and online learning [19].
The data collection needed to train ANNs must be a sufficiently complete and consistent set of information [20]. The development of machine learning models requires historical data from several years, supplemented by information that is more recent. This information amounts to thousands of data points [20]. In the context of different energy transition scenarios and geographical locations, it is essential to ensure that the data collection is as complete, impartial, and representative as possible. This is achieved by managing diverse and reliable sources of information. Some of the most used sources are public repositories, data from official agencies and organizations, research centers, and geographic databases [21,22]. To ensure that the data are impartial, complete, and representative of all the energy transition scenarios analyzed, it is necessary to perform careful data selection and apply measures such as domain adaptation and data augmentation [23]. Model performance can also be improved, and training data can be augmented by using pretrained models in other domains [23]. The authors, based on the objective pursued and the exact geographical location, have analyzed all data obtained in the various studies, with latitude and longitude coordinates provided in many of them. Similarly, much of the data provided are based on measurements made by the authors themselves when using AI.
ANNs have gained momentum to the point where they have become popular and useful models for classification, clustering, recognition, and prediction in a wide variety of applications [24]. ANNs are increasingly being used for different applications due to their ability and effectiveness in solving different problems. They have proven to be very efficient when it is difficult to sift through a large mass of existing data, for example, in the evaluation of public transport of people and goods [25], image recognition [26], medical analysis [27], efficiency analysis in nonlinear contexts, or the fitting of production functions, among other applications [28,29].
An ANN consists, in most cases, of an input layer, at least one hidden layer (in the case of a simplified model), an output layer, the weights, the connection biases, the activation function, and the summation node. The layers are in turn made up of several connected units (called neurons) [30], considered the fundamental building blocks for the correct functioning of a neural network. Neurons are linked by so-called connecting links [2]. The basic diagram of a neuron is shown in Figure 1 [31].
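As a minimal illustration of the neuron scheme in Figure 1, the following Python sketch computes the output of a single artificial neuron: the weighted sum of the inputs plus the bias is passed through an activation function (a logistic sigmoid is assumed here; the input values, weights, and bias are hypothetical placeholders, not taken from any reviewed study).

```python
import numpy as np

def sigmoid(z):
    """Logistic (logsig-type) activation function."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(inputs, weights, bias):
    """Basic artificial neuron: weighted sum of the inputs plus the bias,
    passed through the activation function."""
    z = np.dot(weights, inputs) + bias   # summation node
    return sigmoid(z)                    # activation function

# Hypothetical example: three inputs with arbitrary weights and bias
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
b = 0.2
print(neuron_output(x, w, b))
```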
The main distinguishing feature of ANNs compared to other approaches is their ability to learn by example. ANNs can be applied to any situation in which there is a relationship between input and output variables [32]. The procedure followed during the learning process is known as the learning algorithm, the purpose of which is to alter the synaptic weights of the network in order to achieve a previously set goal [33].
ANNs must be trained by feeding the network a set of quantified data so that it learns to produce the desired output from a pool of input data [34]. The learning process continues until the NN output matches the expected output [35]. The problem with ANN models lies precisely in overtraining, i.e., when the network capacity for training is too high or too many training iterations are allowed per network [36]. The training accuracy obtained in the different applications of the ANN technique is very high, with error tolerances on the order of 10⁻⁵ used to complete the training process [2]. NNs can be grouped into different categories depending on their structure [37]. This classification is shown in Figure 2. The most commonly used are single-layer feed-forward networks, multilayer feed-forward networks, radial basis networks, and dynamic (differential) or recurrent neural networks. Of these, single-layer feed-forward networks are the best known and most widely used; they were the first and simplest networks devised. Information travels in only one direction: from input nodes, through hidden nodes, to output nodes. This type of NN can be built from different units, among which the perceptron is the most famous and simplest example [38]. Rosenblatt introduced the perceptron in 1958, together with its training algorithm [39]. The perceptron is composed of a single neuron with adjustable synaptic weights and thresholds [40]. The most frequently used training algorithm is the so-called backpropagation (BP) algorithm [41], which consists of iteratively correcting the weights until the error function falls below the desired tolerance limit [37].
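As a hedged sketch of the training procedure described above (not the implementation used in any of the reviewed studies), the following example trains a single-hidden-layer MLP with a tanh (tansig-type) hidden layer and a linear (purelin-type) output by gradient descent with backpropagation, stopping when the mean squared error falls below a tolerance on the order of 10⁻⁵; the data set and hyperparameters are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 200 samples, 3 inputs, 1 output
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X.sum(axis=1, keepdims=True))          # synthetic target

n_hidden, lr, tol = 10, 0.05, 1e-5                # error tolerance on the order of 1e-5
W1 = rng.normal(0, 0.5, size=(3, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

for epoch in range(20000):
    # Forward pass: tanh hidden layer, linear output
    H = np.tanh(X @ W1 + b1)
    y_hat = H @ W2 + b2
    err = y_hat - y
    mse = np.mean(err ** 2)
    if mse < tol:                                  # stop once the error is below tolerance
        break
    # Backpropagation of the error to the weights
    dW2 = H.T @ err / len(X); db2 = err.mean(axis=0)
    dH = err @ W2.T * (1 - H ** 2)                 # derivative of tanh
    dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0)
    # Gradient-descent weight update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"Stopped at epoch {epoch} with MSE = {mse:.2e}")
```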
ANN models have proven to be a useful tool with great applications in different engineering systems. Unlike mathematical models, ANNs are able to adapt to real-world conditions [42]. Applications of NNs include forecasting [43,44], control [45], modeling [46], and pattern classification [47]. ANNs have been applied to different branches of engineering. In this work, given the wide variety of applications, it has been decided to classify them by energy type.
This article aims to provide a comprehensive review, as the literature lacks a work that unifies in a single article the different ways of applying ANNs to the energy transition. Such a review is of vital importance given the increasingly frequent and significant environmental impacts and the need to achieve the United Nations (UN) Sustainable Development Goals, and it allows researchers to know the current state of the art on the different ways of applying ANNs in the field of the energy transition.
This review will focus on the contribution of renewable energies to the energy transition as the main contributors to mitigating the effects of climate change, as well as on the applications of renewable energies to specific sectors such as buildings and transport. Finally, the different ways of using ANNs for the prediction of greenhouse gas emissions will be studied. The review has been carried out through keyword searches on publishers and research platforms such as Science Direct, Research Gate, Google Scholar, IEEE Xplore, Taylor and Francis, and MDPI.
The study models had different applications in terms of scale. Some were designed to illuminate a single household, while others were designed for large-scale networks. The only difference lies in the size of the inputs provided by the authors to the neural networks. The models are highly versatile and have the potential to address energy planning needs and contribute to improvements in the energy transition. To ensure the reliability and robustness of neural networks in predicting or optimizing renewable energy systems, various methods and metrics can be used. Some of these methods are adversarial robustness evaluation [48]; robustness measurement and assessment (RoMA) [49]; extreme value theory approach [50]; and weight alterations [51]. These methods and metrics help evaluate the reliability and robustness of neural networks, especially in the context of adversarial attacks and environmental uncertainty.
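As an illustration of the weight-alteration idea mentioned above [51], the following hedged sketch perturbs the weights of an already trained network with small Gaussian noise and measures how much its predictions deviate; the model, noise level, and data are hypothetical and merely indicate how such a robustness check could be set up.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Hypothetical data: inputs could stand for meteorological variables, output for a power value
X = rng.uniform(0, 1, size=(500, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.1 * rng.normal(size=500)

model = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                     max_iter=2000, random_state=0).fit(X, y)
baseline = model.predict(X)

# Perturb every weight with small Gaussian noise and measure the output deviation
deviations = []
for trial in range(20):
    original = [w.copy() for w in model.coefs_]
    for w in model.coefs_:
        w += rng.normal(0.0, 0.01, size=w.shape)   # weight alteration
    deviations.append(np.mean(np.abs(model.predict(X) - baseline)))
    model.coefs_ = original                        # restore the unperturbed weights

print(f"Mean absolute output deviation under weight noise: {np.mean(deviations):.4f}")
```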
The use of AI-powered neural networks in energy transition planning raises several ethical considerations. These considerations include decision making and accountability [52], as AI technology raises ethical questions related to decision making and accountability [53]; fairness and bias [54]; data privacy and security [55]; social responsibility [56], including job displacement and changes in economic structures; and environmental and climate implications [53], as the energy consumption of training large amounts of neural networks can be significant. By taking these ethical considerations into account, it would be possible to ensure that the use of AI-powered neural networks in energy transition planning is responsible, fair, and consistent with social values and environmental sustainability. Human experts are essential in the design, programming, and operation of AI to avoid unpredictable errors and to ensure that decisions made by AI are traceable [57]. In addition, AI development must adhere to principles such as accountability, transparency, verifiability, and predictability to serve society and fulfill human rights [57]. The concept of collaborative intelligence is important, where AI enhances human creativity and capabilities [58]. All of this achieves a responsible and ethical balance between the human experience and the capabilities of AI to serve the needs and preferences of society.

2. Applications of ANNs to Renewable Energies

This section details the different research carried out in the field of renewable energies and more specifically in wind, solar, and tidal energy. These three types of renewable energies have been selected as they are the ones that have the largest contributions to the national energy balances [59] as well as due to the greater abundance of works found.

2.1. Applications of ANNs for Wind Power and Speed Prediction

Within the renewable energy mix, wind energy is currently considered the most economical way to generate electricity. Recently, there has been new research into methods capable of predicting wind speed. This is of great importance due to the continuous growth of wind power generation worldwide [60]. For proper operation of wind farms, a constant stream of data about wind speed and wind direction is required. Artificial neural networks are an excellent method for short-, medium-, and long-term wind speed forecasting.
Table 1 summarizes the main studies found when performing the review. The studies have been classified according to the ANN structure, journal and region, inputs and outputs of the network, and the activation function employed.
The applications have different characteristics in several aspects, such as the ANN structure, the input data, the activation function used, and the training algorithm. As can be seen from the literature, various studies on wind speed prediction have been carried out for more than twenty years in different parts of the world, most of them located in Turkey, India, China, or Iran.
The main characteristics of the networks studied are detailed below.
  • ANN type: from the 23 references analyzed, the MLP network has been used in 17 of them, followed by the RBF in two of them. Five of them did not specify the type of ANN used.
  • Structure of the ANN: the predominant structure is a simple one with a single hidden layer (70%), followed by structures with two hidden layers (26%), with the exception of the investigation in [41], which uses three hidden layers. The number of neurons in the hidden layer is usually around 15, although in some cases more than 63 neurons are selected [44].
  • Amount of data: the percentage of research that makes use of data for validation is 8.69%.
  • I/O configuration: the inputs to the models usually consist of in situ measured features such as past wind speeds, temperature, relative humidity, altitude, month, or pressure.
  • Activation function: only 13 of the 23 cases detail the activation function used. In the hidden layer, nonlinear functions such as tansig and logsig are the most commonly used, while in the output layer, linear functions of the purelin type are adopted.
Figure 3 details the most common inputs and outputs used by ANNs in wind power and wind speed prediction and the operating scheme.
ANNs are highly recommended for predicting wind speed and power generation for several reasons, including their self-learning capability, low error, and high prediction efficiency [84].
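The following hedged sketch reproduces the typical configuration summarized above for wind speed prediction (an MLP with a single hidden layer of about 15 neurons, a tansig-type hidden activation, and a purelin-type, i.e., linear, output), using scikit-learn as one possible implementation; the input features follow the common I/O configuration listed above, but the data and target relationship are synthetic placeholders rather than any of the reviewed models.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Hypothetical inputs: past wind speed, temperature, relative humidity, pressure, month
n = 1000
X = np.column_stack([
    rng.uniform(0, 25, n),      # past wind speed (m/s)
    rng.uniform(-5, 35, n),     # temperature (degrees C)
    rng.uniform(10, 100, n),    # relative humidity (%)
    rng.uniform(950, 1050, n),  # pressure (hPa)
    rng.integers(1, 13, n),     # month
])
y = 0.8 * X[:, 0] + 0.05 * (X[:, 1] - 15) + rng.normal(0, 1.0, n)  # synthetic wind speed target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# One hidden layer of 15 neurons with tanh (tansig-like) activation;
# scikit-learn's regressor output layer is linear, analogous to purelin
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(15,), activation="tanh",
                 solver="lbfgs", max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)
print("Test R^2:", model.score(X_test, y_test))
```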

2.2. Applications of ANNs for Solar Energy Systems

Within solar energy, the ANN technique has proven to be an alternative to conventional methods, providing great benefits in terms of precision, performance, and modeling. The literature indicates that the advantage of ANN techniques over conventional ones is that they do not require knowledge of internal system parameters, involve less computational effort, and offer robust solutions to multivariate problems. NN modeling requires data representing the history and current performance of the real system, as well as a correct selection of the NN model. Mellit et al. [85] conducted an overview of the different AI techniques for sizing PV systems. Their research shows that one of the advantages of AI in modeling PV systems is that it allows good optimization in isolated areas, where meteorological data are not always available. Mellit and Kalogirou [86] applied AI techniques to model, predict, simulate, optimize, and control photovoltaic systems.
The applications of ANNs to solar energy go beyond that, as there is also research such as the one carried out in [42], in which the application of the ANN technique seeks to optimize and predict the performance of the different devices involved in a solar energy system such as solar collectors, heat pumps, or solar air. The research shows how the application of ANNs can save time and reduce the financial costs of the system since it is not necessary to carry out so many experimental tests to determine the relationship between the input and output variables. Another application of ANNs is shown in the research of [87], where the performance of solar collectors is predicted, thus improving the efficiency of the system as a whole. The developed model also showed advantages over conventional computational methods in terms of calculation and prediction time.
Solar radiation data are very important, but in most cases they are not available due to the lack of a meteorological station. It is therefore necessary to have techniques to accurately predict solar radiation, and ANNs offer a solution to the problems of conventional methods [88].
Different ANN models have been applied for solar irradiance prediction, such as the MLP neural network, the RBF neural network, or the general regression neural network (GRNN). The different studies have been classified, taking into account different factors such as network structure and type, input/output configuration, or the activation function and tuning algorithm employed, as is shown in Table 2.
In contrast to the previous case, there is more literature available and the existing research from 1998 to 2012 has been collected. Most of the studies focus on countries that enjoy strong and prolonged hot climates such as the countries bordering the Mediterranean Sea as well as Saudi Arabia and China. As in the previous section and as mentioned at the beginning, the different applications are analyzed. The main characteristics of the networks studied are detailed below.
  • ANN type: in most of the investigations, the MLP network has been used (24 out of 29 cases) followed by the RBF.
  • Structure of the ANN: most studies use simple structures with a single hidden layer (96%), and the remaining with two hidden layers. The number of neurons in the hidden layer is usually in the order of 10, reaching 50 neurons in the research of [116]. In some cases, the number of neurons in the hidden layer is not specified, as in [104,105].
  • Amount of data: the percentage of studies that make use of data for validation is 6.9%.
  • I/O configuration: altitude, latitude, longitude, relative humidity, or month of the year are used as the most common inputs.
  • Activation function: only 19 of the 29 investigations detail the activation function used. In the hidden layer, nonlinear functions such as tansig and logsig are the most commonly used, while in the output layer, linear functions of the purelin type are adopted.
Figure 4 details the most common inputs and outputs used by ANNs in solar energy prediction and the operating scheme.
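As a hedged sketch of the typical solar radiation predictor described above (a single hidden layer of about 10 neurons and the common geographic and meteorological inputs), the following example also reserves part of the training data for validation-based early stopping, the practice that, as noted above, only a minority of the reviewed studies report; all features, data, and parameter values are synthetic placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Hypothetical inputs: altitude (m), latitude, longitude, relative humidity (%), month
n = 1500
X = np.column_stack([
    rng.uniform(0, 2500, n),
    rng.uniform(36, 42, n),
    rng.uniform(-9, 4, n),
    rng.uniform(20, 90, n),
    rng.integers(1, 13, n),
])
# Synthetic stand-in for daily global solar radiation (kWh/m^2/day)
y = 6 - 0.04 * X[:, 3] + 1.5 * np.sin(np.pi * X[:, 4] / 12) + rng.normal(0, 0.3, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)

# One hidden layer of 10 neurons with a logsig-type (logistic) activation;
# early_stopping holds out 15% of the training data as an internal validation set
model = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                     early_stopping=True, validation_fraction=0.15,
                     max_iter=3000, random_state=0)
model.fit(scaler.transform(X_train), y_train)
print("Test R^2:", model.score(scaler.transform(X_test), y_test))
```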

2.3. Applications of ANNs for Wave Prediction

Tidal energy, like other renewable energies, is fundamental to achieving the European climate targets for 2030 and 2050. Recently, the use of NNs for wave height (H) and period prediction has gained importance. ANNs have also been applied in different fields of ocean, coastal, and environmental engineering [118]. Table 3 summarizes the main studies found when performing the review and shows the H predictions; the studies have been classified according to the ANN structure, journal and region, inputs and outputs of the network, and the activation function employed.
Although studies have appeared in the literature since 2001, unlike in the two previous cases, the number of studies carried out has increased in recent years. Most of the research is concentrated in India, Canada, and the United States and applies to both lakes and the open sea. The main characteristics of the networks studied are detailed below.
  • ANN type: as in previous cases, the MLP has been the structure chosen by most researchers (17 out of 20 cases). Research using the DNN [124] and RBF [133] has also been found.
  • Structure of the ANN: most of the studies analyzed use simple structures with a single hidden layer. Research has also been found that uses two hidden layers or even the research of [124], which uses three. The number of neurons in the hidden layer is usually in the order of 10, reaching 300 neurons in the research of [125].
  • Amount of data: in most research, the volume of data is in the order of hundreds or thousands. Normally, a major part of the data is used for training, with the remainder applied to testing. The percentage of studies that make use of data for validation is 5%.
  • I/O configuration: temperature, wind speed, wind direction, and historical wave data are normally used as inputs. Outputs predict wave heights from one hour to 24 h in advance.
  • Activation function: the activation function is specified in 15 out of the 20 studies. In the hidden layer, nonlinear functions such as tansig and logsig are the most commonly used, while in the output layer, functions of the purelin and sigmoid types are adopted.
As a summary of all the previous sections, in the case of renewable energies, the predominant choice is the multilayer perceptron with one or two hidden layers, because it can act as a universal function approximator. In addition, together with the backpropagation algorithm, it is able to learn any continuous mapping between a set of input and output variables.
Figure 5 details the most common inputs and outputs used by ANNs in wave height prediction and the operating scheme.
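To illustrate the I/O configuration described for wave prediction (historical wave and wind data as inputs, wave height several hours ahead as output), the following hedged sketch builds lagged features from a hypothetical hourly series and trains a small MLP to predict the significant wave height a configurable number of hours ahead; the series, number of lags, and forecast horizon are arbitrary placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Hypothetical hourly series: significant wave height (m) and wind speed (m/s)
hours = np.arange(5000)
wave = 1.5 + 0.8 * np.sin(2 * np.pi * hours / 300) + 0.2 * rng.normal(size=hours.size)
wind = 8 + 3 * np.sin(2 * np.pi * hours / 300 + 0.5) + rng.normal(size=hours.size)

def make_dataset(wave, wind, n_lags=6, horizon=6):
    """Inputs: the last n_lags hours of wave height and wind speed.
    Output: the wave height `horizon` hours ahead."""
    X, y = [], []
    for t in range(n_lags, len(wave) - horizon):
        X.append(np.concatenate([wave[t - n_lags:t], wind[t - n_lags:t]]))
        y.append(wave[t + horizon])
    return np.array(X), np.array(y)

X, y = make_dataset(wave, wind, n_lags=6, horizon=6)   # predict 6 h ahead
split = int(0.8 * len(X))                              # time-ordered train/test split
model = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                     max_iter=3000, random_state=0).fit(X[:split], y[:split])
print("Test R^2 (6 h ahead):", model.score(X[split:], y[split:]))
```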

3. Applications of ANNs for GHG Prediction

Traditionally, different techniques have been used to estimate greenhouse gas (GHG) emissions, such as the GAINS (greenhouse gas and air pollution interactions and synergies) model [139]. The ANN technique differs from the GAINS model in that ANNs are less complex, require a smaller amount of input data, and the inputs may be undetermined [140]. The different research in the literature from 1996 to 2021 is detailed in Table 4.
Due to limitations on the length of the article, only 24 investigations have been selected as the most representative. However, other findings, such as those of [165,166,167,168], are also in the same direction. The main characteristics of the networks studied are detailed below.
  • ANN type: as in all previous sections, the MLP has been the structure chosen by most researchers (21 out of 24 cases). Research has also been found that has made use of the GRNN [149] and the Elman NN [146]. The use of the GRNN is motivated by the fact that it only requires the selection of a smoothing parameter [169], does not need iterative training, and works well with small data sets [170] (a minimal sketch of its prediction rule is given after this list).
  • Structure of the ANN: most of the studies analyzed use simple structures with a single hidden layer (21 out of 24 cases), with two hidden layers used in all other cases. The number of neurons in the hidden layer is of the order of 10.
  • Amount of data: in most research, the volume of data is in the order of hundreds or thousands. Normally, a major part of the data is used for training, with the remainder being applied to testing. The percentage of research that makes use of data for validation is 35.3% (6 out of 17 cases).
  • I/O configuration: the networks take different greenhouse gases such as CO2, CO, CH4, NOx, SO2, O3, PM10, and F-gases as output, while in most research, the inputs are macroeconomic or meteorological variables.
  • Activation function: only 3 of the 24 investigations do not specify the activation function used. In the hidden layer, nonlinear functions such as tansig and logsig are the most used, while in the output layer, the purelin and sigmoid types are adopted.
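Since the GRNN is highlighted above as requiring only a smoothing parameter and no iterative training, the following minimal NumPy sketch implements its basic prediction rule, a Gaussian-kernel weighted average of the stored training targets; the emission-like data are synthetic placeholders and the value of the smoothing parameter is arbitrary.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General regression neural network (GRNN): each prediction is a
    Gaussian-kernel weighted average of the training targets, controlled
    by a single smoothing parameter sigma (no iterative training needed)."""
    preds = []
    for x in X_query:
        d2 = np.sum((X_train - x) ** 2, axis=1)    # squared distances to stored patterns
        w = np.exp(-d2 / (2.0 * sigma ** 2))       # pattern-layer activations
        preds.append(np.dot(w, y_train) / (np.sum(w) + 1e-12))
    return np.array(preds)

rng = np.random.default_rng(5)
# Hypothetical macroeconomic/meteorological inputs and a CO2-emission-like target
X = rng.uniform(0, 1, size=(300, 4))
y = 10 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(0, 0.2, 300)

X_train, y_train, X_test, y_test = X[:250], y[:250], X[250:], y[250:]
y_hat = grnn_predict(X_train, y_train, X_test, sigma=0.3)
print("Mean absolute error:", np.mean(np.abs(y_hat - y_test)))
```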

4. Content Analysis

In this section, the results obtained from the analysis of trends in the use of ANNs in renewable energies and for GHG prediction, detailed in the previous sections, are compared. As confirmed by Srisamranrungruang and Hiyama in 2022, artificial neural networks (ANNs) are an essential element of deep learning in artificial intelligence (AI) and have reached enormous relevance in applications such as the use of renewable energy.
In addition, Olanrewaju et al. concluded in 2022 that ANNs were very useful when modeling renewable energy systems, such as complex mapping of energy resources, and demonstrated that the errors that could be made with the use of ANNs were within the limits of acceptable tolerances.

4.1. Analysis of the Research Trend Based on the Year of Publication, Country, and Number of Publications by Type of Application

The review of the research articles revealed that the level of publications for the different aspects analyzed was similar. Wind power and speed prediction generated 25 articles, very similar to the 24 articles for GHG prediction, followed by 20 articles on wave prediction and 17 on solar energy prediction. The first research article was published in 1996 and, except for 1997 and 2017, research was published on the topics analyzed in every year until 2021.
The annual trend for each type of research is shown in Figure 6.

4.2. Countries with the Highest Number of Studies

Research carried out in 30 countries has been published, and interest in the subject extends to all continents. Figure 7 shows that the interest in the different research analyzed maintains a constant distribution. However, a special interest in wave prediction is observed in the USA as well as in India. The latter country also stands out for its number of publications on GHG prediction. Turkey’s first place as the country with the highest number of publications on solar energy prediction is also particularly relevant.

4.3. Methodological Preferences in Research Carried out with ANNs and Types for the Different Applications

Figure 8 presents the different network structures used. Most of the investigations opted for MLP as the ANN type. The use of MLP is especially predominant in GHG prediction and solar energy prediction.

4.4. Trends in Publishing on Applications of Artificial Neural Networks to Energy Transition and Journals with Higher Productivity

The interest in the subject matter is evident from the large number of prestigious journals that have published research articles, as can be seen in Figure 9, Figure 10 and Figure 11. Although some journals are repeated across the different applications analyzed, each application presents a set of journals influenced by its area of research.

4.5. Most Used Activation Functions

Figure 12 shows the trend regarding the use of the 37 types of activation functions employed in the different investigations. The functions purelin (output layer) and tansig (hidden layer) turned out to be the most used for wind power and speed prediction. For wave prediction, purelin (output layer) and sigmoid (output layer) stood out, while for solar energy prediction, the most used were purelin (output layer), tansig (hidden layer), and logsig (hidden layer). For GHG prediction, the most used were purelin (output layer), tansig (hidden layer), and sigmoid (hidden layer).
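For reference, the activation functions most frequently named throughout this review correspond, following the usual MATLAB naming, to

$$\mathrm{purelin}(x) = x, \qquad \mathrm{tansig}(x) = \frac{2}{1+e^{-2x}} - 1 = \tanh(x), \qquad \mathrm{logsig}(x) = \mathrm{sigmoid}(x) = \frac{1}{1+e^{-x}},$$

so that tansig and logsig are the nonlinear (sigmoid-type) functions typically placed in the hidden layer, while purelin is the linear function typically placed in the output layer.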

5. Conclusions

ANNs turned out to be reliable and mature models for forecasting energy resources and the level of associated pollutant emissions.
The great interest in the subject has led to the publication of research in a wide variety of prestigious scientific journals. In particular, the productivity of two journals stands out: Renewable Energy and Applied Energy, with 18% and 10% of the total publications, respectively. Both journals belong to the Elsevier publishing house. Their areas of knowledge are energy and fuels and green and sustainable science and technology for the former, and energy and fuels and chemical engineering for the latter, and both are JCR-indexed journals in the Q1 quartile.
The descriptive analysis of the journal articles, together with the analysis of their content, allowed similarities to be found regarding the ANN type used. Both in the applications of ANNs to renewable energies and in those for GHG prediction, the most used ANN type was the MLP, employed in 80% of the studies.
This article aims to provide a detailed description of the existing literature on the different applications of ANNs to the energy transition. It includes relevant aspects such as the use of renewable energies and the reduction of GHG emissions. Additionally, the analysis of results will provide future research with data on current trends in the use of ANNs and their existing limitations.
Although the research results have been published throughout the world and in 30 different countries, 80% of the research is concentrated in India, Turkey, Iran, USA, China, Spain, Italy, and Saudi Arabia. Especially relevant is the contribution of India with almost 20% of the total.
The analyzed research began in 1996, focusing on GHG prediction, and extended until 2022. The years with the highest number of publications were 2005 and 2009. In addition, the rate of publications remained constant over the years, with a significant resurgence of publications in 2020, confirming the interest in the topic. In this way, the use of ANNs has had sufficient temporal development to demonstrate its usefulness and to compete with other algorithms. Some variables, such as environmental variables, were difficult to model with ANNs due to the large number of associated uncertainties; however, ANNs turned out to be a valid prediction aid. Thus, ANNs occupy a central place in the transition due to their ability to learn and decide and are increasingly present in intelligent systems. As has been shown, ANNs are very useful in the field of renewable energies for predicting wind speed, solar radiation, or wave height. This is of great importance for the design of efficient energy systems capable of anticipating and adapting to future phenomena. Therefore, it is considered necessary for further research to reinforce efforts to develop networks capable of a wider range of predictions with a smaller number of variables.
Likewise, there is also abundant literature for predicting energy consumption in buildings and traffic as well as forecasting the pollutant emissions of a given activity. Although a large number of contributions have been found in the aforementioned sectors, it would be necessary to extend the use of ANNs to all sectors that make up a country’s energy scenario, such as manufacturing, construction, agriculture, and mining, among others. This is really useful as it opens the way to the development of new applications capable of predicting the energy of a complete national energy system, thus making our energy systems more competitive, safe, and sustainable. As a result, world energy systems will be in a position to meet current global environmental commitments.
On the other hand, the type of architecture most frequently chosen by the researchers was the multilayer perceptron, while the algorithms most used for training were backpropagation and Levenberg–Marquardt. The choice of the MLP as the most used network is based on the fact that it is the most suitable ANN for classification and prediction. Moreover, the most used activation functions were purelin (output layer) and tansig (hidden layer), which stood out in the four application areas analyzed. In addition, logsig (hidden layer) was also widely used in the case of solar energy prediction, as was sigmoid (output layer) for wave prediction and sigmoid (hidden layer) for GHG prediction. It is also noteworthy that the application with the greatest variety of activation functions was wave prediction, with nine different activation functions, compared to solar energy prediction with only four types.
The results and analysis performed with ANNs in this study can serve as a guide for the parties involved in the energy transition, in decision making, and in planning concrete actions to achieve the goals of increasing the contribution of renewable energy sources to the energy system, reducing GHG, and improving energy efficiency. Several methods can be used to communicate ANN results and recommendations to stakeholders. These can range from detailed technical reports for technical stakeholders to user manuals or graphs and diagrams for less technical or business-oriented end users. Data and information can also be reflected in technical equipment, through control panels of generation and distribution units, feedback loops, or other types of technical and informational channels.
As has been seen, the largest number of contributions of ANNs to the energy transition are focused on both renewable energy and greenhouse gas emissions. While these are the pillars of EU and international energy policy, it would be desirable to further develop new applications with a view to achieving fully efficient, secure, and sustainable energy systems.
Future research work could analyze research trends with ANNs in other aspects related to the energy transition, such as the generation and use of energy-recoverable biomass. In addition, the study of these techniques in the use of both residual biomass and specific crops could be useful to properly value the use of this resource and avoid increases in food prices. Another line of work could be the analysis of the application of the ANN in new sustainable mobility. Thus, the efficient control of freight traffic and private travel would be of great importance in reducing the level of polluting emissions associated with transport. Other future lines of research could be to try to find sufficient data to measure the diagnostic performance of ANNs using the receiver operating characteristic (ROC) method.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

ANNs	artificial neural networks
MDPI	Multidisciplinary Digital Publishing Institute
AI	artificial intelligence
GA	genetic algorithms
FL	fuzzy logic
BP	backpropagation
UN	United Nations
MLP	multilayer perceptron
Ws	wind speed
Lon	longitude
Lat	latitude
A	altitude
RBF	radial basis function
Wd	wind direction
Wp	wind power
M	month
RP	resilient propagation
LM	Levenberg–Marquardt
ICA	imperialist competitive algorithm
PSO	particle swarm optimization
BR	Bayesian regularized
T	temperature
TAVG	average temperature
Tmax	maximum temperature
Tmin	minimum temperature
Pair	air pressure
G	solar irradiance
Tair	air temperature
SCG	scaled conjugate gradient
IEEE	Institute of Electrical and Electronics Engineers
P	pressure
I/O	input/output
GRNN	general regression neural network
Pw	power
S	sunshine duration
Y	nebulosity
DGP	differential pressure
KT	clearness index
t	hour or day
GD	daily global solar radiation
kt	hourly clearness index
S0d	theoretical sunshine duration
TCC	total cloud cover
KD	diffuse fraction
ε4	surface emissivity
ε5	surface emissivity
Ra	terrestrial radiation
L	location
H	height
DNN	deep neural network
Ho	deep-water wave height
Te	wave energy period
Hb	breaking wave height
db	water depth at the time of breaking
Tp	H and zero-up-crossing peak wave period
Fe	energy flux
W	weather station index
U	wind shear velocity
ANFIS	adaptive neuro-fuzzy inference system
H1/3	significant wave height
H1/10	highest one-tenth wave height
Hmax	highest wave height
Hmean	mean wave height
CGB	conjugate gradient Powell–Beale
BFG	Broyden–Fletcher–Goldfarb
Tp	wave direction
GHG	greenhouse gases
GAINS	greenhouse gas and air pollution interactions and synergies
N	nitrogen
P2O5	phosphate
K2O	potassium
FYM	farmyard manure
O3	ozone
CO2	carbon dioxide
NO	nitric oxide
NO2	nitrogen dioxide
NOX	oxide of nitrogen
LOW	low cloud amount
BASE	base of lowest cloud
VIS	visibility
CO	carbon monoxide
CH4	methane
GDP	gross domestic product
GIEC	gross inland energy consumption
BLH	boundary layer height
PrlT	pre-injection timing
MIT	main injection timing
PIT	post-injection timing
UBHC	unburned hydrocarbon
TP	throttle position
LHV	lower heating value
BSFC	brake-specific fuel consumption
BTh	brake thermal efficiency
ηv	volumetric efficiency
EGT	exhaust gas temperature
CFS	correlation-based feature selection

References

  1. Youssef, A.; El-Telbany, M.; Zekry, A. The role of artificial intelligence in photo-voltaic systems design and control: A review. Renew. Sustain. Energy Rev. 2017, 78, 72–79. [Google Scholar] [CrossRef]
  2. Jani, D.; Mishra, M.; Sahoo, P. Application of artificial neural network for predicting performance of solid desiccant cooling systems—A review. Renew. Sustain. Energy Rev. 2017, 80, 352–366. [Google Scholar] [CrossRef]
  3. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  4. Hebb, D.O. The first stage of perception: Growth of the assembly. Organ. Behav. 1949, 4, 60–78. [Google Scholar]
  5. Miller, D.D.; Brown, E.W. Artificial intelligence in medical practice: The question to the answer? Am. J. Med. 2018, 131, 129–133. [Google Scholar] [CrossRef] [PubMed]
  6. Widrow, B.; Hoff, M.E. Adaptive switching circuits. In IRE WESCON Convention Record; Institute of Radio Engineers: New York, NY, USA, 1960; pp. 96–104. [Google Scholar]
  7. Widrow, B. Ancient History. In Cybernetics 2.0: A General Theory of Adaptivity and Homeostasis in the Brain and in the Body; Springer International Publishing: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
  8. Minsky, M.; Papert, S. Perceptrons. An Introduction to Computational Geometry; The MIT Press Ltd.: Cambridge, MA, USA, 1969. [Google Scholar]
  9. Olazaran, M. A sociological history of the neural network controversy. Adv. Comput. 1993, 37, 335–425. [Google Scholar]
  10. Werbos, P. Beyond Regression: New Tools for Prediction and Analysts in the Behavioral Sciences. Ph.D. Thesis, Harvard University, Cambridge, MA, USA, 1974. [Google Scholar]
  11. Werbos, P.J. Generalization of backpropagation with application to a recurrent gas market model. Neural Netw. 1988, 1, 339–356. [Google Scholar] [CrossRef]
  12. Gholamalinezhad, H.; Khosravi, H. Pooling methods in deep neural networks, a review. arXiv 2020, arXiv:2009.07485. [Google Scholar]
  13. Zhu, X.; Goldberg, A.B. Introduction to Semi-Supervised Learning; Springer Nature: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
  14. Matsuo, Y.; LeCun, Y.; Sahani, M.; Precup, D.; Silver, D.; Sugiyama, M.; Uchibe, E.; Morimoto, J. Deep learning, reinforcement learning, and world models. Neural Netw. 2022, 152, 267–275. [Google Scholar] [CrossRef]
  15. Pérez-Suárez, A.; Martínez-Trinidad, J.F.; Carrasco-Ochoa, J.A. A review of conceptual clustering algorithms. Artif. Intell. Rev. 2019, 52, 1267–1296. [Google Scholar] [CrossRef]
  16. Dang, W.; Guo, J.; Liu, M.; Liu, S.; Yang, B.; Yin, L.; Zheng, W. A Semi-Supervised Extreme Learning Machine Algorithm Based on the New Weighted Kernel for Machine Smell. Appl. Sci. 2022, 12, 9213. [Google Scholar] [CrossRef]
  17. Nguyen, V.L.; Shaker, M.H.; Hüllermeier, E. How to measure uncertainty in uncertainty sampling for active learning. Mach. Learn. 2022, 111, 89–122. [Google Scholar] [CrossRef]
  18. Peirelinck, T.; Kazmi, H.; Mbuwir, B.V.; Hermans, C.; Spiessens, F.; Suykens, J.; Deconinck, G. Transfer learning in demand response: A review of algorithms for data-efficient modelling and control. Energy AI 2022, 7, 100126. [Google Scholar] [CrossRef]
  19. Wang, J.; Lu, S.; Wang, S.H.; Zhang, Y.D. A review on extreme learning machine. Multimed. Tools Appl. 2022, 81, 41611–41660. [Google Scholar] [CrossRef]
  20. Whang, S.E.; Roh, Y.; Song, H.; Lee, J.G. Data collection and quality challenges in deep learning: A data-centric AI perspective. VLDB J. 2023, 32, 791–813. [Google Scholar] [CrossRef]
  21. Sze, V.; Chen, Y.H.; Yang, T.J.; Emer, J.S. Efficient processing of deep neural networks: A tutorial and survey. Proc. IEEE 2017, 105, 2295–2329. [Google Scholar] [CrossRef]
  22. Roh, Y.; Heo, G.; Whang, S.E. A survey on data collection for machine learning: A big data—AI integration perspective. IEEE Trans. Knowl. Data Eng. 2019, 33, 1328–1347. [Google Scholar] [CrossRef]
  23. Su, X.; Zhao, Y.; Bethard, S. A comparison of strategies for source-free domain adaptation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland, 22–27 May 2022; pp. 8352–8367. [Google Scholar]
  24. Abiodun, O.; Jantan, A.; Omolara, A.; Dada, K.; Mohamed, N.; Arshad, H. State-of-the-art in artificial neural network applications: A survey. Heliyon 2018, 4, e00938. [Google Scholar] [CrossRef]
  25. Costa, Á.; Markellos, R.N. Evaluating public transport efficiency with neural network models. Transp. Res. Part C Emerg. Technol. 1997, 5, 301–312. [Google Scholar] [CrossRef]
  26. Abdou, M.A. Literature review: Efficient deep neural networks techniques for medical image analysis. Neural Comput. Appl. 2022, 34, 5791–5812. [Google Scholar] [CrossRef]
  27. Le, T.H. Applying Artificial Neural Networks for Face Recognition. Adv. Artif. Neural Syst. 2011, 2011, 673016. [Google Scholar] [CrossRef]
  28. Santín, D.; Delgado, F.J.; Valiño, A. The measurement of technical efficiency: A neural network approach. Appl. Econ. 2004, 36, 627–635. [Google Scholar] [CrossRef]
  29. Labidi, J.; Pelach, M.A.; Turon, X.; Mutje, P. Predicting flotation efficiency using neural networks. Chem. Eng. Process. Process Intensif. 2007, 46, 314–322. [Google Scholar] [CrossRef]
  30. Li, B.; Delpha, C.; Diallo, D.; Migan-Dubois, A. Application of Artificial Neural Networks to photovoltaic fault detection and diagnosis: A review. Renew. Sustain. Energy Rev. 2021, 138, 110512. [Google Scholar] [CrossRef]
  31. Abarghouei, A.A.; Ghanizadeh, A.; Shamsuddin, S.M. Advances of soft computing methods in edge detection. Int. J. Adv. Soft Comput. Its Appl. 2009, 1, 162–203. [Google Scholar]
  32. Hamad, K.; Khalil, M.A.; Shanableh, A. Modeling roadway traffic noise in a hot climate using artificial neural networks. Transp. Res. Part D Transp. Environ. 2017, 53, 161–177. [Google Scholar] [CrossRef]
  33. Karabacak, K.; Cetin, N. Artificial neural networks for controlling wind–PV power systems: A review. Renew. Sustain. Energy Rev. 2014, 29, 804–827. [Google Scholar] [CrossRef]
  34. Rojas, R. Neural Networks: A Systematic Introduction; Springer Science Y Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  35. Mohanraj, M.; Jayaraj, S.; Muraleedharan, C. Applications of artificial neural networks for refrigeration, air-conditioning and heat pump systems—A review. Renew. Sustain. Energy Rev. 2012, 16, 1340–1358. [Google Scholar] [CrossRef]
  36. Yin, C.; Rosendahl, L.; Luo, Z. Methods to improve prediction performance of ANN models. Simul. Model. Pract. Theory 2003, 11, 211–222. [Google Scholar] [CrossRef]
  37. Yang, K. Artificial Neural Networks (ANNs): A New Paradigm for Thermal Science and Engineering. J. Heat Transf. 2008, 130, 093001. [Google Scholar] [CrossRef]
  38. Poznyak, T.I.; Oria, I.C.; Poznyak, A.S. Chapter 3—Background on dynamic neural networks. In Ozonation and Biodegradation in Environmental Engineering; Elsevier: Amsterdam, The Netherlands, 2019; pp. 57–74. [Google Scholar]
  39. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386–408. [Google Scholar] [CrossRef] [PubMed]
  40. Poznyak, A.S.; Sanchez, E.N.; Yu, W. Differential Neural Networks for Robust Nonlinear Control: Identification, State Estimation and Trajectory Tracking; World Scientific: Singapore, 2001. [Google Scholar]
  41. Hecht-Nielsen, R. Theory of the backpropagation neural network. In Neural Networks for Perception; Academic Press: Cambridge, MA, USA, 1992; pp. 65–93. [Google Scholar]
  42. Elsheikh, A.H.; Sharshir, S.W.; Abd Elaziz, M.; Kabeel, A.E.; Guilan, W.; Haiou, Z. Modeling of solar energy systems using artificial neural network: A comprehensive review. Sol. Energy 2019, 180, 622–639. [Google Scholar] [CrossRef]
  43. Wang, S.; Zhou, R.; Zhao, L. Forecasting Beijing transportation hub areas’s pedestrian flow using modular neural network. Discret. Dyn. Nat. Soc. 2015, 2015, 749181. [Google Scholar] [CrossRef]
  44. Bhaskar, K.; Singh, S.N. AWNN-assisted wind power forecasting using feed-forward neural network. IEEE Trans. Sustain. Energy 2012, 3, 306–315. [Google Scholar] [CrossRef]
  45. Tran, D.; Tan, Y.K. Sensorless illumination control of a networked LED-lighting system using feddforward neural network. IEEE Trans. Ind. Electron. 2013, 61, 2113–2121. [Google Scholar] [CrossRef]
  46. Suykens, J.; Vandewalle, J.; Moor, B. Artificial Neural Networks for Modelling and Control of Non-Linear Systems; Springer Science Y Business Media: Berlin/Heidelberg, Germany, 1995. [Google Scholar]
  47. Karthigayani, P.; Sridhar, S. Decision tree based occlusion detection in face recognition and estimation of human age using back propagation neural network. J. Comput. Sci. 2014, 10, 115–127. [Google Scholar] [CrossRef]
  48. Buzhinsky, I.; Nerinovsky, A.; Tripakis, S. Metrics and methods for robustness evaluation of neural networks with generative models. Mach. Learn. 2021, 112, 3977–4012. [Google Scholar] [CrossRef]
  49. Levy, N.; Katz, G. Roma: A method for neural network robustness measurement and assessment. In Proceedings of the International Conference on Neural Information Processing, Virtual, 22–26 November 2022; pp. 92–105. [Google Scholar]
  50. Kamel, A.R.; Alqarni, A.A.; Ahmed, M.A. On the Performance Robustness of Artificial Neural Network Approaches and Gumbel Extreme Value Distribution for Prediction of Wind Speed. Int. J. Sci. Res. Math. Stat. Sci. 2022, 9, 5–22. [Google Scholar]
  51. Savva, A.G.; Theocharides, T.; Nicopoulos, C. Robustness of Artificial Neural Networks Based on Weight Alterations Used for Prediction Purposes. Algorithms 2023, 16, 322. [Google Scholar] [CrossRef]
  52. Nishant, R.; Kennedy, M.; Corbett, J. Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. Int. J. Inf. Manag. 2020, 53, 102104. [Google Scholar] [CrossRef]
  53. Danish, M.S.S.; Senjyu, T. Shaping the future of sustainable energy through AI-enabled circular economy policies. Circ. Econ. 2023, 2, 100040. [Google Scholar] [CrossRef]
  54. Camilleri, M.A. Artificial intelligence governance: Ethical considerations and implications for social responsibility. Expert Syst. 2023, e13406. [Google Scholar] [CrossRef]
  55. Roberts, H.; Zhang, J.; Bariach, B.; Cowls, J.; Gilburt, B.; Juneja, P.; Tsamados, A.; Ziosi, M.; Taddeo, M.; Floridi, L. Artificial intelligence in support of the circular economy: Ethical considerations and a path forward. AI Soc. 2022, 1–14. [Google Scholar] [CrossRef]
  56. Crane, A.; McWilliams, A.; Matten, D.; Moon, J.; Siegel, D.S. The Oxford Handbook of Corporate Social Responsibility; OUP Oxford: Oxford, UK, 2008. [Google Scholar]
  57. Tai, M.-T. The impact of artificial intelligence on human society and bioethics. Tzu Chi Med. J. 2020, 32, 339–343. [Google Scholar] [CrossRef]
  58. Wilson, H.J.; Daugherty, P.R. Collaborative intelligence: Humans and AI are joining forces. Harv. Bus. Rev. 2018, 96, 114–123. [Google Scholar]
  59. Eurostat. Eurostat Statics Explained. 2019. Available online: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Renewable_energy_statistics#Wind_and_water_provide_most_renewable_electricity.3B_solar_is_the_fastest-growing_energy_source (accessed on 27 December 2023).
  60. Noorollahi, Y.; Jokar, M.A.; Kalhor, A. Using artificial neural networks for temporal and spatial wind speed forecasting in Iran. Energy Convers. Manag. 2016, 115, 17–25. [Google Scholar] [CrossRef]
  61. Mabel, M.C.; Fernandez, E. Analysis of wind power generation and prediction using ANN: A case study. Renew. Energy 2008, 33, 986–992. [Google Scholar] [CrossRef]
  62. Çam, E.; Arcaklıoğlu, E.; Çavuşoğlu, A.; Akbıyık, B. A classification mechanism for determining average wind speed and power in several regions of Turkey using artificial neural networks. Renew. Energy 2005, 30, 227–239. [Google Scholar] [CrossRef]
  63. Bilgili, M.; Sahin, B.; Yasar, A. Application of artificial neural networks for the wind speed prediction of target using reference stations data. Renew. Energy 2007, 32, 2350–2360. [Google Scholar] [CrossRef]
  64. Assareh, E.; Biglari, M. A novel approach to capture the maximum power from variable speed wind turbines using PI controller, RBF neural netowork and GSA evolutionary algorithm. Renew. Sustain. Energy Rev. 2015, 51, 1023–1037. [Google Scholar] [CrossRef]
  65. Mohandes, M.; Halawani, T.; Rehman, S.; Hussain, A.A. Support vector machines for wind speed prediction. Renew. Energy 2004, 29, 939–947. [Google Scholar] [CrossRef]
  66. Bigdeli, N.; Afshar, K.; Gazafroudi, A.S.; Ramandi, M.Y. A comparative study of optimal hybrid methods for wind power prediction in wind farm of Alberta, Canada. Renew. Sustain. Energy Rev. 2013, 27, 20–29. [Google Scholar] [CrossRef]
  67. Manobel, B.; Sehnke, F.; Lazzús, J.A.; Salfate, I.; Felder, M.; Montecinos, S. Wind turbine power curve modeling based on Gaussian Processes and Artificial Neural Netoworks. Renew. Energy 2018, 125, 1015–1020. [Google Scholar] [CrossRef]
  68. Li, G.; Shi, J. On comparing three artificial neural networks for wind speed forecasting. Appl. Energy 2010, 87, 2313–2320. [Google Scholar] [CrossRef]
  69. Blonbou, R. Very short-term wind power forecasting with neural networks and adaptive Bayesian learning. Renew. Energy 2011, 36, 1118–1124. [Google Scholar] [CrossRef]
  70. Liu, H.; Tian, H.-Q.; Chen, C.; Li, Y.-F. A hybrid statistical method to predict wind speed and wind power. Renew. Energy 2010, 35, 1857–1861. [Google Scholar] [CrossRef]
  71. Salcedo-Sanz, S.; Pérez-Bellido, Á.M.; Ortiz-García, E.G.; Portilla-Figueras, A.; Prieto, L.; Paredes, D. Hybridizing the fifth generation mesoscale model with artificial neural networks for short-term wind speed prediction. Renew. Energy 2009, 34, 1451–1457. [Google Scholar] [CrossRef]
  72. Cadenas, E.; Rivera, W. Short term wind speed forecasting in La Venta, Oaxaca, México, using artificial neural networks. Renew. Energy 2009, 34, 274–278. [Google Scholar] [CrossRef]
  73. Flores, P.; Tapia, A.; Tapia, G. Application of a control algorithm for wind speed prediction and active power generation. Renew. Energy 2005, 30, 523–536. [Google Scholar] [CrossRef]
  74. Monfared, M.; Rastegar, H.; Kojabadi, H.M. A new strategy for wind speed forecasting using artificial intelligent methods. Renew. Energy 2009, 34, 845–848. [Google Scholar] [CrossRef]
  75. Grassi, G.; Vecchio, P. Wind energy prediction using a two-hidden layer neural network. Commun. Nonlinear Sci. Numer. Simul. 2010, 15, 2262–2266. [Google Scholar] [CrossRef]
  76. Ramasamy, P.; Chandel, S.; Yadav, A.K. Wind speed prediction in the mountainous region of India using an artificial neural network model. Renew. Energy 2015, 80, 338–347. [Google Scholar] [CrossRef]
  77. Fadare, D. The application of artificial neural networks to mapping of wind speed profile for energy application in Nigeria. Appl. Energy 2010, 87, 934–942. [Google Scholar] [CrossRef]
  78. Fonte, P.M.; Silva, G.X.; Quadrado, J.C. Wind speed prediction using artificial neural networks. WSEAS Trans. Syst. 2005, 4, 379–384. [Google Scholar]
  79. Kalogirou, S.; Neocleous, C.; Pashiardis, S.; Schizas, C. Wind speed prediction using artificial neural networks. In Proceedings of the European Symposium on Intelligent Techniques, Crete, Greece, 3–4 June 1999. [Google Scholar]
  80. Öztopal, A. Artificial neural network approach to spatial estimation of wind velocity data. Energy Convers. Manag. 2006, 47, 395–406. [Google Scholar] [CrossRef]
  81. Ghorbani, M.A.; Khatibi, R.; Hosseini, B.; Bilgili, M. Relative importance of parameters affecting wind speed prediction using artificial neural networks. Theor. Appl. Clim. 2013, 114, 107–114. [Google Scholar] [CrossRef]
  82. Guo, Z.-H.; Wu, J.; Lu, H.-Y.; Wang, J.-Z. A case study on a hybrid wind speed forecasting method using BP neural network. Knowl.-Based Syst. 2011, 24, 1048–1056. [Google Scholar] [CrossRef]
  83. Lodge, A.; Yu, X. Short term wind speed prediction using artificial neural networks. In Proceedings of the 4th IEEE International Conference on Information Science and Technology, Shenzhen, China, 26–28 April 2014. [Google Scholar]
  84. Le, X.C.; Duong, M.Q.; Le, K.H. Review of the Modern Maximum Power Tracking Algorithms for Permanent Magnet Synchronous Generator of Wind Power Conversion Systems. Energies 2022, 16, 402. [Google Scholar] [CrossRef]
  85. Mellit, A.; Kalogirou, S.; Hontoria, L.; Shaari, S. Artificial intelligence techniques for sizing photovoltaic systems: A review. Renew. Sustain. Energy Rev. 2009, 13, 406–419. [Google Scholar] [CrossRef]
  86. Mellit, A.; Kalogirou, S.A. Artificial intelligence techniques for photovoltaic applications: A review. Prog. Energy Combust. Sci. 2008, 34, 574–632. [Google Scholar] [CrossRef]
  87. Ghritlahre, H.K.; Prasad, R.K. Application of ANN technique to predict the performance of solar collector systems—A review. Renew. Sustain. Energy Rev. 2018, 84, 75–88. [Google Scholar] [CrossRef]
  88. Yadav, A.K.; Chandel, S. Solar radiation prediction using Artificial Neural Network techniques: A review. Renew. Sustain. Energy Rev. 2014, 33, 772–781. [Google Scholar] [CrossRef]
  89. Chen, C.; Duan, S.; Cai, T.; Liu, B. Online 24-h solar power forecasting based on weather type classification using artificial neural network. Sol. Energy 2011, 85, 2856–2870. [Google Scholar] [CrossRef]
  90. Almonacid, F.; Rus, C.; Pérez, P.J.; Hontoria, L. Estimation of the energy of a PV generator using artificial neural network. Renew. Energy 2009, 34, 2743–2750. [Google Scholar] [CrossRef]
  91. Voyant, C.; Muselli, M.; Paoli, C.; Nivet, M.-L. Optimization of an artificial neural network dedicated to the multivariate forecasting of daily global radiation. Energy 2011, 36, 348–359. [Google Scholar] [CrossRef]
  92. Paoli, C.; Voyant, C.; Muselli, M.; Nivet, M.-L. Forecasting of preprocessed daily solar radiation time series using neural networks. Sol. Energy 2010, 84, 2146–2160. [Google Scholar] [CrossRef]
  93. Mellit, A.; Pavan, A.M. A 24-h forecast of solar irradiance using artificial neural network: Application for performance prediction of a grid-connected PV plant at Trieste, Italy. Sol. Energy 2010, 84, 807–821. [Google Scholar] [CrossRef]
  94. Benghanem, M.; Mellit, A. Radial Basis Function Network-based prediction of global solar radiation data: Application for sizing of a stand-alone photovoltaic system at Al-Madinah, Saudi Arabia. Energy 2010, 35, 3751–3762. [Google Scholar] [CrossRef]
  95. Sözen, A.; Arcaklioǧlu, E.; Özalp, M.; Kanit, E. Use of artificial neural networks for mapping of solar potential in Turkey. Appl. Energy 2004, 77, 273–286. [Google Scholar] [CrossRef]
  96. Ouammi, A.; Zejli, D.; Dagdougui, H.; Benchrifa, R. Artificial neural network analysis of Moroccan solar potential. Renew. Sustain. Energy Rev. 2012, 16, 4876–4889. [Google Scholar] [CrossRef]
  97. Rehman, S.; Mohandes, M. Estimation of Diffuse Fraction of Global Solar Radiation Using Artificial Neural Networks. Energy Sources Part A Recovery Util. Environ. Eff. 2009, 31, 974–984. [Google Scholar] [CrossRef]
  98. Koca, A.; Oztop, H.F.; Varol, Y.; Koca, G.O. Estimation of solar radiation using artificial neural networks with different input parameters for Mediterranean region of Anatolia in Turkey. Expert Syst. Appl. 2011, 38, 8756–8762. [Google Scholar] [CrossRef]
  99. Fadare, D. Modelling of solar energy potential in Nigeria using an artificial neural network model. Appl. Energy 2009, 86, 1410–1422. [Google Scholar] [CrossRef]
  100. Khatib, T.; Mohamed, A.; Sopian, K.; Mahmoud, M. Solar Energy Prediction for Malaysia Using Artificial Neural Networks. Int. J. Photoenergy 2012, 2012, 419504. [Google Scholar] [CrossRef]
  101. Yadav, A.K.; Chandel, S.S. Artificial Neural Network based prediction of solar radiation for Indian stations. Int. J. Comput. Appl. 2012, 50, 1–4. [Google Scholar] [CrossRef]
  102. Elminir, H.K.; Azzam, Y.A.; Younes, F.I. Prediction of hourly and daily diffuse fraction using neural network, as compared to linear regression models. Energy 2007, 32, 1513–1523. [Google Scholar] [CrossRef]
  103. Hontoria, L.; Aguilera, J.; Zufiria, P. An application of the multilayer perceptron: Solar radiation maps in Spain. Sol. Energy 2005, 79, 523–530. [Google Scholar] [CrossRef]
  104. Elminir, H.K.; Areed, F.F.; Elsayed, T.S. Estimation of solar radiation components incident on Helwan site using neural networks. Sol. Energy 2005, 79, 270–279. [Google Scholar] [CrossRef]
  105. Tymvios, F.S.; Jacovides, C.P.; Michaelides, S.C.; Scouteli, C. Comparative study of Ångström’s and artificial neural networks’ methodologies in estimating global solar radiation. Sol. Energy 2005, 78, 752–762. [Google Scholar] [CrossRef]
  106. Alam, S.; Kaushik, S.C.; Garg, S.N. Computation of beam solar radiation at normal incidence using artificial neural network. Renew. Energy 2006, 31, 1483–1491. [Google Scholar] [CrossRef]
  107. Jiang, Y. Prediction of monthly mean daily diffuse solar radiation using artificial neural networks and comparison with other empirical models. Energy Policy 2008, 36, 3833–3837. [Google Scholar] [CrossRef]
  108. Mubiru, J.; Banda, E. Estimation of monthly average daily global solar irradiation using artificial neural networks. Sol. Energy 2008, 82, 181–187. [Google Scholar] [CrossRef]
  109. Şenkal, O.; Kuleli, T. Estimation of solar radiation over Turkey using artificial neural network and satellite data. Appl. Energy 2009, 86, 1222–1228. [Google Scholar] [CrossRef]
  110. Şenkal, O. Modeling of solar radiation using remote sensing and artificial neural network in Turkey. Energy 2010, 35, 4795–4801. [Google Scholar] [CrossRef]
  111. Azadeh, A.; Maghsoudi, A.; Sohrabkhani, S. An integrated artificial neural networks approach for predicting global radiation. Energy Convers. Manag. 2009, 50, 1497–1505. [Google Scholar] [CrossRef]
  112. Sözen, A.; Arcaklioǧlu, E. Solar potential in Turkey. Appl. Energy 2005, 80, 35–45. [Google Scholar] [CrossRef]
  113. Rahimikhoob, A. Estimating global solar radiation using artificial neural network and air temperature data in a semi-arid environment. Renew. Energy 2010, 35, 2131–2135. [Google Scholar] [CrossRef]
  114. Hasni, A.; Sehli, A.; Draoui, B.; Bassou, A.; Amieur, B. Estimating Global Solar Radiation Using Artificial Neural Network and Climate Data in the South-western Region of Algeria. Energy Procedia 2012, 18, 531–537. [Google Scholar] [CrossRef]
  115. Rumbayan, M.; Abudureyimu, A.; Nagasaka, K. Mapping of solar energy potential in Indonesia using artificial neural network and geographical information system. Renew. Sustain. Energy Rev. 2012, 16, 1437–1449. [Google Scholar] [CrossRef]
  116. Rehman, S.; Mohandes, M. Splitting Global Solar Radiation into Diffuse and Direct Normal Fractions Using Artificial Neural Networks. Energy Sources Part A Recovery Util. Environ. Eff. 2012, 34, 1326–1336. [Google Scholar] [CrossRef]
  117. Al-Alawi, S.M.; Al-Hinai, H.A. An ANN-based approach for predicting global radiation in locations with no direct measurement instrumentation. Renew. Energy 1998, 14, 199–204. [Google Scholar] [CrossRef]
  118. Makarynskyy, O.; Pires-Silva, A.; Makarynska, D.; Ventura-Soares, C. Artificial neural networks in wave predictions at the west coast of Portugal. Comput. Geosci. 2005, 31, 415–424. [Google Scholar] [CrossRef]
  119. Londhe, S.N.; Panchang, V. One-Day Wave Forecasts Based on Artificial Neural Networks. J. Atmos. Ocean. Technol. 2006, 23, 1593–1603. [Google Scholar] [CrossRef]
  120. Deo, M.; Jagdale, S. Prediction of breaking waves with neural networks. Ocean Eng. 2003, 30, 1163–1178. [Google Scholar] [CrossRef]
  121. Makarynskyy, O. Improving wave predictions with artificial neural networks. Ocean Eng. 2004, 31, 709–724. [Google Scholar] [CrossRef]
  122. Hadadpour, S.; Etemad-Shahidi, A.; Kamranzad, B. Wave energy forecasting using artificial neural networks in the Caspian Sea. Proc. Inst. Civ. Eng.-Marit. Eng. 2014, 167, 42–52. [Google Scholar]
  123. Deo, M.C.; Jha, A.; Chaphekar, A.S.; Ravikant, K. Neural networks for wave forecasting. Ocean Eng. 2001, 28, 889–898. [Google Scholar] [CrossRef]
  124. Bento, P.; Pombo, J.; Mendes, R.; Calado, M.; Mariano, S. Ocean wave energy forecasting using optimised deep learning neural networks. Ocean Eng. 2021, 219, 108372. [Google Scholar] [CrossRef]
  125. Feng, X.; Ma, G.; Su, S.-F.; Huang, C.; Boswell, M.K.; Xue, P. A multi-layer perceptron approach for accelerated wave forecasting in Lake Michigan. Ocean Eng. 2020, 211, 107526. [Google Scholar] [CrossRef]
  126. Agrawal, J.; Deo, M. Wave parameter estimation using neural networks. Mar. Struct. 2004, 17, 536–550. [Google Scholar] [CrossRef]
  127. Kamranzad, B.; Etemad-Shahidi, A.; Kazeminezhad, M.H. Wave height forecasting in Dayyer, the Persian Gulf. Ocean Eng. 2011, 38, 248–255. [Google Scholar] [CrossRef]
  128. Castro, A.; Carballo, R.; Iglesias, G.; Rabuñal, J.R. Performance of artificial neural networks in nearshore wave power prediction. Appl. Soft Comput. 2014, 23, 194–201. [Google Scholar] [CrossRef]
  129. Malekmohamadi, I.; Bazargan-Lari, M.R.; Kerachian, R.; Nikoo, M.R.; Fallahnia, M. Evaluating the efficacy of SVMs, BNs, ANNs and ANFIS in wave height prediction. Ocean Eng. 2011, 38, 487–497. [Google Scholar] [CrossRef]
  130. Sánchez, A.S.; Rodrigues, D.A.; Fontes, R.M.; Martins, M.F.; de Araújo Kalid, R.; Torres, E.A. Wave resource characterization through in-situ measurement followed by artificial neural networks’ modeling. Renew. Energy 2018, 115, 1055–1066. [Google Scholar] [CrossRef]
  131. Avila, D.; Marichal, G.N.; Padrón, I.; Quiza, R.; Hernández, Á. Forecasting of wave energy in Canary Islands based on Artificial Intelligence. Appl. Ocean Res. 2020, 101, 102189. [Google Scholar] [CrossRef]
  132. Jain, P.; Deo, M. Real-time wave forecasts off the western Indian coast. Appl. Ocean Res. 2007, 29, 72–79. [Google Scholar] [CrossRef]
  133. Kalra, R.; Deo, M.; Kumar, R.; Agarwal, V.K. RBF network for spatial mapping of wave heights. Mar. Struct. 2005, 18, 289–300. [Google Scholar] [CrossRef]
  134. Tsai, C.P.; Lin, C.; Shen, J.N. Neural network for wave forecasting among multi-stations. Ocean Eng. 2002, 29, 1683–1695. [Google Scholar] [CrossRef]
  135. Agrawal, J.; Deo, M. On-line wave prediction. Mar. Struct. 2002, 15, 57–74. [Google Scholar] [CrossRef]
  136. Londhe, S.N.; Shah, S.; Dixit, P.R.; Nair, T.B.; Sirisha, P.; Jain, R. A Coupled Numerical and Artificial Neural Network Model for Improving Location Specific Wave Forecast. Appl. Ocean Res. 2016, 59, 483–491. [Google Scholar] [CrossRef]
  137. Mahjoobi, J.; Etemad-Shahidi, A.; Kazeminezhad, M. Hindcasting of wave parameters using different soft computing methods. Appl. Ocean Res. 2008, 30, 28–36. [Google Scholar] [CrossRef]
  138. Etemad-Shahidi, A.; Mahjoobi, J. Comparison between M5′ model tree and neural networks for prediction of significant wave height in Lake Superior. Ocean Eng. 2009, 36, 1175–1181. [Google Scholar] [CrossRef]
  139. Lin, X.; Yang, R.; Zhang, W.; Zeng, N.; Zhao, Y.; Wang, G.; Li, T.; Cai, Q. An integrated view of correlated emissions of greenhouse gases and air pollutants in China. Carbon Balance Manag. 2023, 18, 9. [Google Scholar] [CrossRef]
  140. Antanasijević, D.Z.; Ristić, M.Đ.; Perić-Grujić, A.A.; Pocajt, V.V. Forecasting GHG emissions using an optimized artificial neural network model based on correlation and principal component analysis. Int. J. Greenh. Gas Control 2014, 20, 244–253. [Google Scholar] [CrossRef]
  141. Khoshnevisan, B.; Rafiee, S.; Omid, M.; Mousazadeh, H.; Rajaeifar, M.A. Application of artificial neural networks for prediction of output energy and GHG emissions in potato production in Iran. Agric. Syst. 2014, 123, 120–127. [Google Scholar] [CrossRef]
  142. Yi, J.; Prybutok, V.R. A neural network model forecasting for prediction of daily maximum ozone concentration in an industrialized urban area. Environ. Pollut. 1996, 92, 349–357. [Google Scholar] [CrossRef] [PubMed]
  143. Gardner, M.W.; Dorling, S.R. Neural network modelling and prediction of hourly NOx and NO2 concentrations in urban air in London. Atmos. Environ. 1999, 33, 2627–2636. [Google Scholar] [CrossRef]
  144. Jorquera, H.; Pérez, R.; Cipriano, A.; Espejo, A.; Letelier, M.V.; Acuña, G. Forecasting ozone daily maximum levels at Santiago, Chile. Atmos. Environ. 1998, 32, 3415–3424. [Google Scholar] [CrossRef]
  145. Andretta, M.; Eleuteri, A.; Fortezza, F.; Manco, D.; Mingozzi, L.; Serra, R.; Tagliaferri, R. Neural networks for sulphur dioxide ground level concentrations forecasting. Neural Comput. Appl. 2000, 9, 93–100. [Google Scholar] [CrossRef]
  146. Chelani, A.B.; Rao, C.C.; Phadke, K.; Hasan, M. Prediction of sulphur dioxide concentration using artificial neural networks. Environ. Model. Softw. 2002, 17, 159–166. [Google Scholar] [CrossRef]
  147. Agirre-Basurko, E.; Ibarra-Berastegi, G.; Madariaga, I. Regression and multilayer perceptron-based models to forecast hourly O3 and NO2 levels in the Bilbao area. Environ. Model. Softw. 2006, 21, 430–446. [Google Scholar] [CrossRef]
  148. Elkamel, A.; Abdul-Wahab, S.; Bouhamra, W.; Alper, E. Measurement and prediction of ozone levels around a heavily industrialized area: A neural network approach. Adv. Environ. Res. 2001, 5, 47–59. [Google Scholar] [CrossRef]
  149. Antanasijević, D.Z.; Pocajt, V.V.; Povrenović, D.S.; Ristić, M.; Perić-Grujić, A.A. PM10 emission forecasting using artificial neural networks and genetic algorithm input variable optimization. Sci. Total Environ. 2013, 443, 511–519. [Google Scholar] [CrossRef] [PubMed]
  150. Hooyberghs, J.; Mensink, C.; Dumont, G.; Fierens, F.; Brasseur, O. A neural network forecast for daily average PM10 concentrations in Belgium. Atmos. Environ. 2005, 39, 3279–3289. [Google Scholar] [CrossRef]
  151. Wang, G.; Awad, O.I.; Liu, S.; Shuai, S.; Wang, Z. NOx emissions prediction based on mutual information and back propagation neural network using correlation quantitative analysis. Energy 2020, 198, 117286. [Google Scholar] [CrossRef]
  152. Babu, D.; Thangarasu, V.; Ramanathan, A. Artificial neural network approach on forecasting diesel engine characteristics fuelled with waste frying oil biodiesel. Appl. Energy 2020, 263, 114612. [Google Scholar] [CrossRef]
  153. Arcaklioğlu, E.; Çelıkten, I. A diesel engine’s performance and exhaust emissions. Appl. Energy 2005, 80, 11–22. [Google Scholar] [CrossRef]
  154. Najafi, G.; Ghobadian, B.; Tavakoli, T.; Buttsworth, D.; Yusaf, T.; Faizollahnejad, M. Performance and exhaust emissions of a gasoline engine with ethanol blended gasoline fuels using artificial neural network. Appl. Energy 2009, 86, 630–639. [Google Scholar] [CrossRef]
  155. Sayin, C.; Ertunc, H.M.; Hosoz, M.; Kilicaslan, I.; Canakci, M. Performance and exhaust emissions of a gasoline engine using artificial neural network. Appl. Therm. Eng. 2007, 27, 46–54. [Google Scholar] [CrossRef]
  156. Shivakumar; Pai, P.S.; Rao, B.S. Artificial Neural Network based prediction of performance and emission characteristics of a variable compression ratio CI engine using WCO as a biodiesel at different injection timings. Appl. Energy 2011, 88, 2344–2354. [Google Scholar] [CrossRef]
  157. Mohammadhassani, J.; Dadvand, A.; Khalilarya, S.; Solimanpur, M. Prediction and reduction of diesel engine emissions using a combined ANN–ACO method. Appl. Soft Comput. 2015, 34, 139–150. [Google Scholar] [CrossRef]
  158. Bevilacqua, V.; Intini, F.; Kühtz, S. A model of artificial neural network for the analysis of climate change. In Proceedings of the 28th International Symposium on Forecasting, Nice, France, 22–25 June 2008. [Google Scholar]
  159. Bakay, M.S.; Ağbulut, Ü. Electricity production based forecasting of greenhouse gas emissions in Turkey with deep learning, support vector machine and artificial neural network algorithms. J. Clean. Prod. 2021, 285, 125324. [Google Scholar] [CrossRef]
  160. Gallo, C.; Conto, F.; Fiore, M. A Neural Network Model for Forecasting CO2 Emission. AGRIS On-line Pap. Econ. Inform. 2014, 6, 31–36. [Google Scholar]
  161. Abbasi, T.; Luithui, C.; Abbasi, S.A. A Model to Forecast Methane Emissions from Tropical and Subtropical Reservoirs on the Basis of Artificial Neural Networks. Water 2020, 12, 145. [Google Scholar] [CrossRef]
  162. Heo, J.S.; Kim, D.S. A new method of ozone forecasting using fuzzy expert and neural network systems. Sci. Total Environ. 2004, 325, 221–237. [Google Scholar] [CrossRef] [PubMed]
  163. Cai, M.; Yin, Y.; Xie, M. Prediction of hourly air pollutant concentrations near urban arterials using artificial neural network approach. Transp. Res. Part D Transp. Environ. 2009, 14, 32–41. [Google Scholar] [CrossRef]
  164. Azeez, O.S.; Pradhan, B.; Shafri, H.Z.M.; Shukla, N.; Lee, C.-W.; Rizeei, H.M. Modeling of CO Emissions from Traffic Vehicles Using Artificial Neural Networks. Appl. Sci. 2019, 9, 313. [Google Scholar] [CrossRef]
  165. Ahmadi, M.H.; Jashnani, H.; Chau, K.-W.; Kumar, R.; Rosen, M.A. Carbon dioxide emissions prediction of five Middle Eastern countries using artificial neural networks. Energy Sources Part A Recovery Util. Environ. Eff. 2019, 45, 9513–9525. [Google Scholar] [CrossRef]
  166. Nabavi-Pelesaraei, A.; Rafiee, S.; Hosseinzadeh-Bandbafha, H.; Shamshirband, S. Modeling energy consumption and greenhouse gas emissions for kiwifruit production using artificial neural networks. J. Clean. Prod. 2016, 133, 924–931. [Google Scholar] [CrossRef]
  167. Fang, D.; Zhang, X.; Yu, Q.; Jin, T.C.; Tian, L. A novel method for carbon dioxide emission forecasting based on improved Gaussian processes regression. J. Clean. Prod. 2018, 173, 143–150. [Google Scholar] [CrossRef]
  168. Stamenković, L.J.; Antanasijević, D.Z.; Ristić, M.Đ.; Perić-Grujić, A.A.; Pocajt, V.V. Prediction of nitrogen oxides emissions at the national level based on optimized artificial neural network model. Air Qual. Atmos. Health 2017, 10, 15–23. [Google Scholar] [CrossRef]
  169. Lee, W.-Y.; House, J.M.; Kyong, N.-H. Subsystem level fault diagnosis of a building’s air-handling unit using general regression neural networks. Appl. Energy 2004, 77, 153–170. [Google Scholar] [CrossRef]
  170. Koziel, S.; Leifsson, L.; Couckuyt, I.; Dhaene, T. Reliable reduced cost modeling and design optimization of microwave filters using co-kriging. Int. J. Numer. Model. Electron. Netw. Devices Fields 2013, 26, 493–505. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of a neuron [31].
Figure 2. Classification of artificial neural networks.
Figure 3. Inputs and outputs in ANNs applied to wind energy and wind speed prediction.
Figure 4. Inputs and outputs in ANNs applied to solar energy systems.
Figure 5. Inputs and outputs in ANNs applied to wave prediction.
Figure 6. Number of annual publications for each type of research.
Figure 7. Number of publications by country.
Figure 8. Network structures used in each study.
Figure 9. Journals with higher productivity in ANNs for wind power and speed prediction.
Figure 10. Journals with higher productivity in ANNs for solar energy prediction.
Figure 11. Journals with higher productivity in ANNs for GHG prediction.
Figure 12. Trend in the use of activation functions.
Table 1. Uses of artificial neural networks for wind power and speed prediction. An illustrative code sketch of a representative configuration is given after the table.
Columns: Authors and Year | ANN Type and Structure | Journal | Country/Region | I/O Setting (Input; Output) | Activation Function | Notes.
1[61]Multilayer Perceptron (MLP)
3-4-1
Renewable EnergyMuppandal,
India
Wind speed (Ws), relative humidity (RH),
generation hours
Energy output of wind farmslogsig
(hidden layer)
purelin
(output layer)
Trained by BP
algorithm
Input data
normalized to
[0, 1]
2[62]ANN
4-X-2
Renewable EnergyTurkeyLongitude (lon), latitude (lat), altitude (A),
measurement height
Ws, related powerlogsig
(hidden and output layer)
Trained by BP
algorithm
Input and Output data normalized to [0, 1]
3[63]MLP
5-10-5-1
Renewable EnergyTurkeyWs, month (M)Wslogsig
(hidden layer)
purelin
(output layer)
Resilient propagation (RP)
algorithm was adopted
4[64]Radial Basis Function (RBF)
1-7-2
Renewable and
Sustainable Energy
Reviews
IranWsProportional and integral (PI) gains-Use Gaussian function for hidden layer
Gravitational search algorithm (GSA) is adopted
5[65]MLP
3-(2-100)-24
Renewable EnergyMedina city, Saudi ArabiaMean daily WsWs
prediction of the next day
tansig
(hidden
layer)
purelin
(output
layer)
Input data
normalized to
[0, 1]
Trained by Levenberg–Marquardt (LM) BP algorithm
Compared with and
outperforms
support vector machine (SVM)
SVM used Gaussian kernel
2000 days used for training and 728 days used for
testing
6[66]MLP
6-7-5-1
MLP
4-7-5-1
Renewable and
Sustainable Energy
Reviews
Alberta,
Canada
Wind power (Wp)
WP1 (t − 1),
WP1 (t − 2),
WP1 (t − 3),
WP1 (t − 4),
WP1 (t − 5),
WP1 (t − 6)
Short-term forecasting of the Wp time seriestansig
(hidden layer)
purelin
(output layer)
Input data
normalized to
[−1, 1]
Imperialist
competitive
algorithm (ICA), GA, and particle swarm
optimization (PSO) are
employed for training the neural
network
1200 data used for training and 168 data used for
testing
7[67]ANN
2-(16-32)-(16-32)-1
Renewable EnergyCoquimbo, ChileWs, wind
direction (Wd)
Turbine power-ADAM algorithm is adopted
103,308 data used for training and 52,560 data used for testing
8[68]RBF
2-3-1
MLP
2-4-1
ADALINE
2-4-1
Applied EnergyNorth Dakota, USAMean hourly WsForecast value of next hourly average Ws-Trained by LM algorithm
5000 data used for training and 120 data used for
testing
9[69]MLP
5-5-3
Renewable EnergyGuadeloupean archipelago, French West IndiesWs, 30 min moving average speedWp (t + kt)tansig
(hidden layer)
purelin
(output layer)
Bayesian regularized (BR)
10[70]ANN
7-20-1
Renewable EnergyChinaActual Ws, WpWs-Trained by BP algorithm
11[71]MLP
6-7-1
Renewable EnergyAlbacete, SpainWsp1, Wsp2, temperature (T) Tp2, solar cycle1, solar cycle2, Wdp1Ws forecast (48 h later)-LM algorithm is adopted
12[72]ANN
3-3-1
ANN
3-2-X
ANN
3-1
ANN
2-1
Renewable EnergyOaxaca, MéxicoPrevious
values of hourly Ws
Current value of Ws-550 data used for training and 194 data used for
testing
13[73]ANN
-
Renewable EnergyBasque
Country, Spain
Ws data in the last 3 hWs in 1 hsigmoid
(output layer)
Trained by BP
algorithm
14[74]MLP
X-8-X
Renewable EnergyRostamabad, IranStandard
deviation,
average, slope
Ws (k + l), …, Ws (k + 2), Ws (k + 1)-Trained by BP
algorithm
672 patterns used for training
15[75]MLP
5-3-3-1
Communications in
Nonlinear Science and Numerical Simulation
ItalyWs, RH, generation hours, T,
maintenance hours
Total wind energytansig (first hidden layer)
sigmoid
(second hidden layer)
purelin
(output layer)
Trained by BP
algorithm
16[76]MLP
6-25-1
Renewable EnergyHimachal
Pradesh, India
Average temperature (TAVG),
maximum temperature (Tmax),
minimum
temperature (Tmin), air
pressure (Pair), solar
irradiance (G), A
Average daily Ws
for 11 H.P. locations
-Trained by LM algorithm
Scaled conjugate gradient (SCG)
algorithm is adopted
Input and target data are
normalized to
[−1, 1]
60% data used for training, 20% used for testing, and 20% used for
validation
17[77]MLP
4-15-15-1
Applied
Energy
Nigerialat, lon, A, MMean monthly Wstansig
(hidden layers)
purelin
(output layer)
SCG and LM algorithms are adopted
Input and target data normalized to [−1, 1]
18[78]MLP
14-15-1
WSEAS Transactions on SystemsPortugalAverage hourly values of WsAverage hourly Ws-Trained by BP algorithm
87.75% patterns used for training, 9.75% used for validation, and 2.5% used for
testing
19[79]MLP
5-6-6-6-2
-CyprusM, mean monthly
values of Ws at two levels
(2 and 7 m)
Mean monthly
values of Ws of a third station
tansig
(hidden layer)
logsig
(output layer)
Trained by BP algorithm
90% patterns used for training and 10% patterns used for testing
20[80]MLP
9-10-1
Energy
Conversion and Management
Marmara,
Turkey
9 stations WsWs-Trained by BP
algorithm
21[81]MLP
4-8-1
Theoretical and Applied ClimatologyTabriz,
Azerbaijan,
Iran
Pair, air
temperature (Tair), RH,
precipitation
Monthly Wslogsig
(hidden layer)
purelin
(output layer)
Trained by LM
algorithm
Input and output data normalized to [0, 1]
75% of data used for training and 25% used for
testing
22[82]MLP
31-63-31
Knowledge-Based
Systems
Minqin, ChinaHistorical daily average Ws during March
previous year
Daily average Ws
during March target year
tansig
(hidden layer)
logsig
(output layer)
Trained by BP
algorithm
23[83]MLP
X-25-1
2014 4th IEEE International Conference on
Information
Science and Technology
Colorado, USAT, RH, Wd, wind gust, pressure (P), historical WsWstansig
(hidden layer)
Trained by BP with momentum
1000 input/output pairs used for training and 200 input/output pairs used for testing
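Most of the models in Table 1 are small multilayer perceptrons trained by backpropagation on inputs scaled to [0, 1] or [−1, 1]. The following minimal sketch reproduces that pattern in Python for a 3-4-1 network with a sigmoid ("logsig") hidden layer and a linear ("purelin") output. It is not the implementation of any cited study: the synthetic data, the feature names, and the use of scikit-learn's L-BFGS solver in place of the MATLAB backpropagation/Levenberg–Marquardt routines are illustrative assumptions.

```python
# Minimal sketch of a 3-4-1 MLP of the kind summarized in Table 1.
# Synthetic data and feature names (wind speed, relative humidity,
# generation hours) are illustrative assumptions, not the original datasets.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform([2.0, 20.0, 0.0], [25.0, 95.0, 24.0], size=(1000, 3))  # Ws, RH, hours
y = 0.5 * X[:, 0] ** 3 * X[:, 2] / 24.0 + rng.normal(0, 50, 1000)      # toy energy output

# Normalize inputs to [0, 1], as most Table 1 studies report.
scaler = MinMaxScaler(feature_range=(0, 1))
X_scaled = scaler.fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, test_size=0.25, random_state=0)

# One hidden layer with 4 sigmoid ("logsig") units and a linear ("purelin") output.
# L-BFGS stands in here for the BP/LM training used in the reviewed MATLAB models.
model = MLPRegressor(hidden_layer_sizes=(4,), activation='logistic',
                     solver='lbfgs', max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.3f}")
```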
Table 2. Uses of artificial neural networks for solar energy prediction. A short code sketch of the transfer functions named in the table is given after it.
Columns: Authors and Year | ANN Type and Structure | Journal | Country/Region | I/O Setting (Input; Output) | Activation Function | Notes.
1[89]RBF
6-11-24
RBF
6-15-24
Solar EnergyHuazhong, ChinaG (t + 1), Ws (t + 1), Tair (t + 1), RH (t + 1), t, power (Pw) (t)Pw1 (t + 1), Pw2 (t + 1), …, Pw24 (t + 1)-k-fold (validation)
Input and output data
normalized to [0, 1]
2[90]MLP
2-3-1
Renewable EnergyJaen, SpainG, module cell
temperature
(TC)
G, ambient temperature (Ta)-Trained by LM BP
algorithm
3[91]MLP
3-3-1
MLP
4-3-1
EnergyCorsica Island, France
Bastia
Ajaccio
RH, sunshine duration (S), nebulosity (Y)
Y, S, P,
differential pressure (DGP)
Global
radiation
(GR)
tansig (hidden layer)
purelin (output layer)
Trained by LM algorithm
Input data normalized to
[−1, 1]
80% data used for training, 10% for validation, and 10% used for testing
4[92]MLP
8-3-1
Solar EnergyAjaccio,
Corsica Island, France
Clearness index (KT) KTt−1, KTt−2, KTt−3, KTt−4, KTt−5, KTt−6, KTt−7, KTt−8Daily global solar
radiation (GSR)
purelin (output layer)Trained by LM algorithm
Use Gaussian function for hidden layer
Input data normalized to [0, 1]
80% data used for training, 10% for validation, and 10% used for testing
5[93]MLP
3-11-17-24
Solar EnergyTrieste, ItalyG, Tair, hour or day (t)G1 (t + 1), G2 (t + 1), …, G24 (t + 1)-Trained by LM BP
Algorithm k-fold validation
Input and output data
normalized to [−1, 1]
6[94]RBFN
(2-3-4)-
(4-5-7)-1
MLP
(2-3-4)-
(2-3-5)-1
EnergyAl-Medina, Saudi ArabiaTair, S, RH, tDaily global solar
radiation
(GD)
-1460 data used for training and 365 data used for testing
7[95]MLP
6-5-1
Applied
Energy
Turkeylat, lon, A, M, S, TGlogsig (hidden layer)Trained by BP algorithm
SCG, Polak–Ribière
conjugate gradient (CGP), and LM algorithms are adopted
Input and output data normalized to [−1, 1]
8[96]MLP
3-20-1
Renewable and
Sustainable Energy Reviews
Moroccolon, lat, AMean annual and monthly G-Trained by BP algorithm
Input and output data
normalized to [0, 1]
9[97]MLP
2-36-1
MLP
3-20-1
Energy Sources, Part A: Recovery, Utilization, and Environmental
Effects
Abha, Saudi ArabiaTair, RH, hour or day (t)Diffuse solar radiation
(DSR)
logsig (hidden layer)Trained by BP algorithm
1462 days used for training and 250 days used for testing
10[98]MLP
5-8-1
Expert
Systems with Applications
Anatolia,
Turkey
lat, lon, A, S, average cloudinessGtansig (hidden layer)
purelin (output layer)
Trained by BP algorithm
11[99]MLP
7-5-1
Applied
Energy
Nigerialat, lon, A, M, S, T, RHGtansig (hidden layer)
purelin (output layer)
SCG and LM algorithms are adopted
Input data normalized to [−1, 1]
11,700 datasets used for training and 5850 datasets used for validation and
testing
12[100]MLP
4-X-1
International Journal of PhotoenergyMalaysialat, lon, day or hour (t), SKTlogsig (hidden layer)Trained by BP algorithm
13[101]MLP
4-4-1
International Journal of Computer ApplicationsIndialat, lon, S, AGtansig (hidden layer)
purelin (output layer)
LM algorithm is adopted
14[102]MLP
5-40-1
EnergyEgyptGSR, like long-wave atmospheric emission, Tair, RH, PDiffuse
fraction (KD)
sigmoid (output layer)Trained by BP algorithm
15[103]MLP
7-15-1
Solar EnergyJaen, Spaint (day), t (hour), KT, hourly clearness index (kt) kt−1, kt−2, kt−3, SSolar
radiation maps
-Trained by BP algorithm (with momentum and
random presentations)
Input data normalized to [0, 1]
16[104]MLP
6-X-1
Solar EnergyHelwan, EgyptWd, Ws, Ta, RH, cloudiness, water vaporGsigmoid (output layer)Trained by LM BP
algorithm
Input data normalized to [0, 1]
17[105]MLP
2-X-1
MLP
3-X-1
MLP
3-X-X-1
Solar EnergyAthalassa,
Cyprus
S, theoretical sunshine
duration (S0d), M, Tmax
GDtansig (hidden layer)Trained by BP algorithm
90% data used for training and 10% used for testing
18[106]MLP
7-9-1
Renewable EnergyIndialat, lon, A, M, S,
rainfall
ratio, RH
KTtansig (hidden layer)
purelin (output layer)
Trained by BP algorithm
19[107]MLP
2-5-1
Energy PolicyChinaKt, S (%)Monthly mean daily KDtansig (hidden layer)
purelin (output layer)
Trained by BP algorithm
TRAINLM algorithm is adopted
Input and output data normalized to [0, 1]
20[108]MLP
6-15-1
Solar EnergyUgandaS, Tmax, Total Cloud Cover (TCC), lat, lon, AMonthly
average daily GSR on a horizontal surface
tansig (hidden layer)
purelin (output layer)
Trained by LM BP
algorithm
Input data normalized to [−1, 1]
21[109]MLP
6-6-1
Applied
Energy
Turkeylat, lon, A, M, DSR, mean beam radiationGlogsig (hidden layer)
purelin (output layer)
SCG and RP algorithms are adopted
22[110]GRNN
6-1.0-1
EnergyTurkeylat, lon, A, surface emissivity (ε4), surface emissivity (ε5), land surface
temperature
G--
23[111]MLP
7-4-1
Energy
Conversion and Management
IranTmax, Tmin, RH, VP,
total precipitation, Ws, S
GSRlogsig (hidden layer)
purelin (output layer)
Trained by BP algorithm
65 months used for training and 7 months used for testing
24[112]ANN
6-6-1
Applied EnergyTurkeylat, lon, A, M, S, TGlogsig (hidden layer)SCG, CGP, and LM
algorithms are adopted
Trained by BP algorithm
Input and output data normalized to [−1, 1]
25[113]MLP
3-6-1
Renewable EnergyKhuzestan, IranTmax, Tmin, extra-terrestrial
radiation (Ra)
GSRlogsig (hidden layer)Trained by LM BP
algorithm
Input data normalized to [0, 1]
70% data used for training and 30% patterns used for testing
26[114]MLP
5-3-1
Energy
Procedia
Bechar,
Algeria
M, t (day),
t (hour), T, RH
GSRtansig (hidden layer)
purelin (output layer)
Trained by LM BP
algorithm
81% data used for training and 19% used for testing
27[115]MLP
9-11-1
Renewable and
Sustainable Energy
Reviews
Republic of
Indonesia
T, RH, S, Ws,
precipitation, lon, lat, A, M
GSR-Trained by BP algorithm
28[116]RBF
4-50-2
Energy Sources, Part A: Recovery, Utilization, and Environmental EffectsSaudi ArabiaTa, RH, GSR, tDSR, direct normal
radiation (DNR)
-Use Gaussian function for hidden layer
1460 values used for
training and 365 values used for testing
29[117]MLP
8-15-1
Renewable EnergySultanate of OmanLocation (L), M, P, T, VP, RH, WS, SGR-Trained by BP algorithm
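The activation functions reported throughout Tables 1–4 follow MATLAB Neural Network Toolbox naming: logsig (logistic sigmoid), tansig (hyperbolic tangent), and purelin (linear). As a clarifying sketch, the forward pass of a generic single-hidden-layer network of the kind listed in Table 2 (for instance, the 6-5-1 structure of [95]) can be written in a few lines of NumPy; the weights below are random placeholders, since the trained parameters are not reported in the reviewed papers.

```python
# Forward pass of a generic I-H-O multilayer perceptron using the
# transfer functions named in Tables 1-4 (MATLAB toolbox conventions).
# Weights are random placeholders; real models would be fitted by
# backpropagation (often Levenberg-Marquardt in the reviewed studies).
import numpy as np

def logsig(x):            # logistic sigmoid, maps to (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tansig(x):            # hyperbolic tangent sigmoid, maps to (-1, 1)
    return np.tanh(x)

def purelin(x):           # linear (identity) output layer
    return x

def mlp_forward(x, W1, b1, W2, b2, hidden=tansig, output=purelin):
    """One hidden layer: y = output(W2 @ hidden(W1 @ x + b1) + b2)."""
    return output(W2 @ hidden(W1 @ x + b1) + b2)

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 6, 5, 1          # e.g. the 6-5-1 structure of ref. [95]
W1, b1 = rng.normal(size=(n_hidden, n_in)), rng.normal(size=n_hidden)
W2, b2 = rng.normal(size=(n_out, n_hidden)), rng.normal(size=n_out)

# Inputs scaled to [-1, 1], as several Table 2 studies report.
x = rng.uniform(-1, 1, size=n_in)
print(mlp_forward(x, W1, b1, W2, b2, hidden=logsig))
```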
Table 3. Uses of artificial neural networks for wave height prediction. An illustrative code sketch of the lagged-input setup used by several entries is given after the table.
Columns: Authors and Year | ANN Type and Structure | Journal | Country/Region | I/O Setting (Input; Output) | Activation Function | Notes.
1[119]ANN
28-15-4
28-9-4
28-4-4
28-7-4
Journal of Atmospheric and Oceanic TechnologyGulf of Maine, Gulf of Alaska, Gulf of Mexico7 days of significant H 6, 12, 18, 24 h forecastlogsig
(hidden and output layer)
Input data normalized to [0, 1]
Conjugate gradient
algorithm with Fletcher–Reeves is adopted
2[120]MLP
3-5-5-2
Ocean
Engineering
Bombay, IndiaDeep water wave height (Ho), wave energy
period (Te)
Breaking wave height (Hb), water depth at the time of breaking (db) sigmoid
(output layer)
Trained by BP
algorithm
Input and output data normalized to [0, 1]
3[121]MLP
48-97-24
Ocean
Engineering
Ireland48 h history wave
parameters
H and zero-up-
crossing peak wave period (Tp) over hourly intervals from 1 h to 24 h
logsig
(hidden layer)
purelin
(output layer)
Trained by resilient BP algorithm
4[122]MLP
6-5-1
Proceedings of the
Institution of Civil
Engineers-Maritime Engineering
Anzali, IranH, TpEnergy flux (Fe) over horizon of
1 to 12 h
sigmoid
(output layer)
Conjugate gradient
algorithm is adopted
80% data used for training and 20% used for testing
5[123]MLP
2-4-3
MLP
4-4-4
Ocean
Engineering
Karwar, IndiaWs3-hourly values of H and average cross-period-Trained by BP
algorithm
80% data used for training and 20% data used for testing
6[124]Deep Neural Network (DNN)
6-64-32-32-1
Ocean
Engineering
Pacific and
Atlantic coasts and the Gulf of Mexico
H, Te, Fe, weighted average
period, Tp, Ws, Wd
Fe, Te, H-SCG BP algorithm is adopted
Input data normalized to [0, 1]
75% data used for training and 25% data used for testing
7[125]MLP
3-300-300-2
Ocean
Engineering
Lake Michigan, United States of AmericaWind field, db, ice
coverage
H, TeReLU
(hidden
layer)
Stochastic gradient-based algorithm is adopted
80% data used for training and 20% data used for testing
8[126]MLP
1-x-1
Marine StructuresGoa, IndiaHFesigmoid
(output layer)
Trained by BP cascade correlation algorithms
80% patterns used for training and 20%
patterns used for
testing
9[127]MLP
6-5-1
Ocean
Engineering
Persian GulfHt, Ht−1, Ht−2, Ut·cos(Φt − θ), Ut−1·cos(Φt−1 − θ), Ut−2·cos(Φt−2 − θ)
H for the next 3, 6, 12, 24 hsigmoid
(output layer)
Conjugate gradient and LM algorithms are adopted
80% data used for training and 20% data used for testing
10[128]MLP
3-4-4-1
Applied
Soft
Computing
SpainH, Te, θmFetansig
(hidden layer)
purelin
(output layer)
Trained by BP algorithm
67% data used for training and 33% data used for testing
11[129]MLP
3-3-1
Ocean
Engineering
Lake Superior, USAWs, weather station
index (W)
Hsigmoid
(hidden and output layer)
Trained by BP algorithm
Input and output data normalized to [−1, 1]
Compared with SVM, Bayesian networks, and adaptive neuro-fuzzy inference
system (ANFIS)
345 patterns used for training and 54 patterns used for testing
12[130]MLP
5-2-1
Renewable EnergyBrazilWind shear velocity (U) U1, U2, Un, Y (t − 1),
Y (t − i)
Wave energy potentialtansig
(hidden layer)
purelin
(output layer)
Trained by LM BP
algorithm
90% data used for training and 10% data used for testing
13[131]MLP
X-15-1
Applied Ocean
Research
Canary Islands, SpainH, TpPredict Fetansig
(hidden layer)
purelin
(output layer)
Gradient descent with momentum and BP
algorithm are adopted
89% data used for training and 11% data used for testing
Input and output data normalized to [−1, 1]
14[132]MLP
4-4-1
Applied Ocean
Research
IndiaH values of the preceding 3, 6, 12, and 24th hourH subsequent 3, 6, 12 and 24th hour-Trained by LM BP
algorithm
60% data used for training and 40% data used for testing
15[133]RBF
21-13-1
MLP
21-9-1
Marine StructuresIndiaH(1–21)H(SW3)-Use Gaussian function for hidden layer
BP, SCG, conjugate gradient Powell–Beale (CGB), Broyden–Fletcher–Goldfarb (BFG), and LM algorithms are adopted
80% data used for training and 20% data used for testing
16[134]MLP
8-4-1
MLP
2-2-1
Ocean
Engineering
TaiwanSignificant wave height (H1/3),
highest one-tenth wave height (H1/10), highest wave height (Hmax), mean wave height (Hmean)
(stations A and B)
H1/3
(station C)
sigmoid
(output layer)
Trained by BP
algorithm
Input data normalized to [0, 1]
17[135]MLP
2-5-1
Marine StructuresYanam, IndiaHt, Ht−1Ht+1-Trained by BP
algorithm
Conjugate gradient and cascade
correlation algorithms are adopted
80% data used for training and 20% data used for testing
18[136]ANN
9-1-1
ANN
4-1-1
ANN
9-8-1
ANN
9-1-1
Applied Ocean ResearchRatnagiri, Pondicherry,
Gopalpur, Kollam, India
t − 24,
t − 21,
t − 18,
t − 15,
t − 12, t − 9,
t − 6, t − 3
t + 24
(24 h ahead
predicted
error)
logsig
(hidden layer)
purelin
(output layer)
Trained by LM algorithm
Input data normalized to [0, 1]
70% data used for training and 15% used for validation and
testing
19[137]MLP
4-9-3
MLP
4-7-1
MLP
2-5-1
MLP
4-8-1
Applied Ocean ResearchLake Ontario, Canada/USAWs, Wd, fetch length, wind durationH, Tp, wave direction (Θ)
tansig/
sigmoid
(hidden layer)
purelin
(output layer)
Trained by BP
algorithm
10-fold cross-
validation used
Input data normalized to [0, 1]
611 data used for
training and 326 data used for testing
20[138]MLP
1-3-1
Ocean
Engineering
Lake Superior, Canada/USAWsHsigmoid (transfer function)Compared with and
outperforms the M5′ model tree
4045 data used for training and 3259 data used for testing
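Several of the wave models in Table 3 forecast the significant wave height from its own recent history (for example, [135] maps Ht and Ht−1 to Ht+1), with roughly 80% of the record used for training and 20% for testing. A minimal sketch of that lagged-input setup follows; the sinusoidal "wave record", the network size, and the solver are illustrative assumptions rather than a reproduction of any cited model.

```python
# Sketch of a lagged-input wave-height predictor in the spirit of
# Table 3 entries such as [135] (inputs H_t, H_{t-1} -> output H_{t+1}).
# The sinusoidal "wave record" is synthetic and purely illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
t = np.arange(3000)
H = 1.5 + 0.8 * np.sin(2 * np.pi * t / 240) + rng.normal(0, 0.1, t.size)  # toy Hs series

def make_lagged(series, n_lags=2):
    """Build X = [H_t, H_{t-1}, ...] and y = H_{t+1} from a 1-D series."""
    X = np.column_stack([series[n_lags - 1 - k : len(series) - 1 - k]
                         for k in range(n_lags)])
    y = series[n_lags:]
    return X, y

X, y = make_lagged(H, n_lags=2)
# Chronological 80/20 split, as reported by many Table 3 studies.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

model = MLPRegressor(hidden_layer_sizes=(5,), activation='logistic',
                     solver='lbfgs', max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print(f"Test R^2: {model.score(X_te, y_te):.3f}")
```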
Table 4. Uses of artificial neural networks for GHG prediction. An illustrative code sketch of the typical preprocessing and evaluation workflow is given after the table.
Columns: Authors and Year | ANN Type and Structure | Journal | Country/Region | I/O Setting (Input; Output) | Activation Function | Notes.
1[141]MLP
12-8-2
Agricultural SystemsIranMachinery, human labor, diesel fuel, pesticide,
nitrogen (N), phosphate (P2O5),
potassium (K2O),
farmyard
manure (FYM), water for irrigation, electricity, seed, farm size
Energy, GHG emissiontansig (hidden layer)Trained by BP algorithm
60% data used for training, 25% data used for cross-validation, and 15% data used for testing
2[142]MLP
9-4-1
Environmental PollutionTexas,
USA
Dummy
variable, ozone (O3) level at 9:00 am,
carbon dioxide (CO2),
nitric oxide (NO), nitrogen
dioxide (NO2),
oxide of
nitrogen (NOX), Ws, Wd, Tmax
Daily
maximum O3 level
tansig (hidden layer)Trained by BP algorithm
3[143]MLP
6-20-20-2
Atmospheric EnvironmentLondon,
UK
Low cloud amount (LOW), base of lowest cloud (BASE), UKMO,
visibility (VIS), Td, VP, Ws
NOx, NO2tansig (hidden layer)
identity (output layer)
SCG algorithm is adopted
Input and output data
normalized to [−1, 1]
4[144]MLP
3-x-1
Atmospheric EnvironmentSantiago, ChileO3,t, Tt+1, TtO3,t+1sigmoid (output layer)Direct algorithm or
series-parallel was used
5[145]MLP
10-25-1
MLP
10-11-1
Neural
Computing & Applications
Ravenna,
Italy
Ws (t), Wd (t), G (t), G (t − 1), G (t − 2),
G (t − 3),
SO2 (t),
SO2 (t − 1), SO2 (t − 2), SO2 (t − 3)
Sulphur
dioxide
(SO2) (t + 1)
tansig (output layer)
logsig (output layer)
BR
SCG algorithm is adopted
20% data used for testing
6[146]Elman NN
4-13-3
Environmental Modelling & SoftwareDelhi,
India
Ws, T, RH, WdPredict SO2 concentrationsigmoid (hidden layer)
purelin (output layer)
Trained by LM algorithm
7[147]MLP
5-X-1
MLP
9-X-1
Environmental Modelling & SoftwareBilbao, SpainWs, Wd, T, RH, P, G, thermal
gradient, O3, NO2, number of vehicles, occupation percentage, velocity sin (2πt/24), cos (2πt/24), sin (2πt/7),
cos (2πt/7), NO2 (t + k),
O3 (t + k)
O3 (t + k)
NO2 (t + k)
(k = 1, …, 8)
tansig (hidden layer)
purelin (output layer)
SCG algorithm is adopted
85% data used for training and 15% data used for validation and testing
8[148]ANN
13-25-1
Advances in Environmental ResearchKuwaitNon-methane hydrocarbons, carbon
monoxide (CO), methane (CH4), CO2, SO2, NO, NO2, T, RH, suspended dust, solar energy, Wd, Ws
O3
concentration
logsig (hidden
layer)
Trained by BP algorithm with momentum
Input data normalized to [0, 1]
90% data used for training and 10% data used for testing
9[149]GRNN
(7-13)-154-1
Science of The Total EnvironmentEU-27Gross domestic product (GDP), gross inland energy consumption (GIEC),
incineration of wood…
Annual
(particle
matter)
PM10 emission
-GA was used
Input data normalized per capita
84% data used for training and 16% data used for validation
10[150]MLP
7-4-1
Atmospheric EnvironmentBelgiumPM10, boundary layer height (BLH), Ws, T, cloud, Wd, tDaily average PM10 of the next day-Trained by BP algorithm
11[151]MLP
4-5-10-1
EnergyEuropeIntake pressure, ṁ, fuel consumption, engine powerRaw
emissions NOx
sigmoid (hidden layers)
purelin (output layer)
Trained by BP algorithm
Input data normalized to [−1, 1]
70% data used for training and 30% data used for testing
12[152]MLP
4-13-5
Applied EnergyTamil Nadu,
India
Pre-injection timing (PrlT), main injection timing (MIT), post-injection timing (PIT), test fuelsCO, CO2,
unburned
hydrocarbon (UBHC), NO, smoke
sigmoid (hidden layer)Trained by LM BP
algorithm
Input data normalized to [−1, 1]
70% data used for
training, 15% data used for validation, and 15% data used for testing
13[153]MLP
3-(13-7)-1
Applied Energy-Injection pressure, engine speed, throttle position (TP)NOx, CO2, SO2logsig (hidden layer)Trained by BP algorithm
SCG, CGP, and LM
algorithms are adopted
Input and output data normalized to [0, 1]
14[154]MLP
2-20-9
Applied EnergyIranEngine speed, ethanol
gasoline blend
Brake power, torque, brake-specific fuel consumption (BSFC), brake
thermal
efficiency (BTh),
volumetric efficiency (ηv) CO, CO2,
hydrocarbons (HC), NOx
sigmoid (hidden layer)
purelin (output layer)
Trained by BP algorithm
70% data used for
training and 30% data used for testing
15[155]MLP
4-15-5
Applied
Thermal
Engineering
-Lower heating value (LHV), engine torque, engine speed, air inlet
temperature
BSFC, BTh, CO, HC, exhaust gas temperature (EGT)sigmoid (hidden layer)Trained by LM BP
algorithm
70% data used for
training and 30% data used for testing
16[156]MLP
4-22-3
Applied Energy-Load, blend %, compression ratio, injection timingNOx, smoke, UBHCsigmoid (hidden layer)
purelin (output layer)
Trained by BP algorithm
Input data normalized to [−1, 1]
70% data used for
training, 15% data used for validation, and 15% data used for testing
17[157]MLP
4-(7-60)-1
Applied Soft ComputingIranEngine speed, intake air
temperature, mass fuel, brake power
NOxlogsig (hidden and output layer)CGP, CGB, GDM, GD, and LM algorithms are adopted
Output data normalized to [0, 1]
90% patterns used for training and 10% patterns used for testing
18[158]MLP
6-14-1
Proceedings of the 28th
International
Symposium on Forecasting
ItalyOil, solid fuel, electricity, natural gas, population, GDPCO2logsig (output layer)Trained by BP algorithm
Input normalized to [0, 1]
19[159]MLP
6-9-1
Journal of Cleaner
Production
TurkeyYear, coal,
liquid fuels,
natural gas,
renewable energy and wastes, total electricity
production
GHG
emissions
-Compared with SVM
85% data used for
training and 15% data used for testing
20[160]MLP
4-20-1
AGRIS on-line Papers in
Economics and Informatics
Apulia, ItalyCO2 (t − 1), CO2 (t − 2), CO2 (t − 3),
ma(CO2 (t − 3), CO2 (t − 2), CO2 (t − 1))
CO2 (t)sigmoid (output layer)Trained by LM algorithm
Input data normalized to [0, 1]
21[161]MLP
5-40-30-1
Water-lat, lon, reservoir age, mean depth, surface areaCH4tansig (hidden layer)
purelin (output layer)
Trained by LM BP
algorithm
Input data normalized to [−1, 1]
22[162]MLP
36-36-1
Science of The Total
Environment
Seoul, South
Korea
Concentrations at 14:00,
meteorological conditions at 14:00,
variation
velocity between 13:00 and 14:00, 08:00 and 14:00, 11:00 and 14:00, O3 concentrations at 08:00, 11:00 and 13:00
O3
concentration at 15:00
sigmoid (hidden layer)
purelin (output layer)
-
23[163]MLP
16-8-1
Transportation Research Part D: Transport and EnvironmentGuangzhou, ChinaTraffic volume, t (hour), t (day), T, P, Ws, Wd, G, rainfall, RH, concentration 1st hour before, 2nd, 3rd, distance to road center line, street direction, street aspect ratioCO, NO2, PM10, O3sigmoid (output layer)Trained by BP algorithm
Input data normalized to [0, 1]
495 groups used for training, 24 groups used for evaluation, and 42 groups used for testing
24[164]MLP
6-3-1
Applied
Sciences
MalaysiaCar numbers, heavy vehicle numbers, S/M, T, Ws, digital surface model (DSM)Daily traffic CO
emissions
tansig (hidden layer)Correlation-based
feature selection (CFS) algorithm is adopted
70% data used for training and 30% data used for testing
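A recurring pattern in Table 4 is to scale the inputs to [−1, 1] or [0, 1], train a small MLP on 70–85% of the data, and report errors on the held-out remainder. The sketch below illustrates that preprocessing and evaluation workflow for a multi-output emissions model; the engine-style features and targets, the data themselves, and the solver choice are illustrative assumptions, not the datasets or tool chains of the cited studies.

```python
# Sketch of the preprocessing/evaluation pattern reported across Table 4:
# scale inputs to [-1, 1], train an MLP, evaluate on a held-out split.
# Feature/target names are illustrative (engine speed, load -> NOx, CO).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
X = rng.uniform([800, 10], [4000, 100], size=(600, 2))          # speed (rpm), load (%)
Y = np.column_stack([                                           # toy NOx and CO targets
    0.002 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 0.5, 600),
    5.0 - 0.0005 * X[:, 0] + rng.normal(0, 0.2, 600),
])

scaler = MinMaxScaler(feature_range=(-1, 1))                    # as in, e.g., [151,152,156]
X_scaled = scaler.fit_transform(X)
X_tr, X_te, Y_tr, Y_te = train_test_split(X_scaled, Y, test_size=0.3, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(10,), activation='tanh',
                     solver='lbfgs', max_iter=5000, random_state=0)
model.fit(X_tr, Y_tr)
print("MAE per output:", mean_absolute_error(Y_te, model.predict(X_te),
                                             multioutput='raw_values'))
```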