Article

Deep Learning LSTM Recurrent Neural Network Model for Prediction of Electric Vehicle Charging Demand

by Jaikumar Shanmuganathan 1, Aruldoss Albert Victoire 1,*, Gobu Balraj 2 and Amalraj Victoire 3
1 Department of Electrical & Electronics Engineering, Anna University Regional Campus Coimbatore, Coimbatore 641047, India
2 Assistant Engineer, Tamilnadu Generation and Distribution Corporation Ltd., Tamilnadu 641012, India
3 Department of Computer Applications, Sri Manakula Vinayagar Engineering College, Pondicherry 605107, India
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(16), 10207; https://doi.org/10.3390/su141610207
Submission received: 12 July 2022 / Revised: 7 August 2022 / Accepted: 12 August 2022 / Published: 17 August 2022
(This article belongs to the Topic Advanced Electric Vehicle Technology)

Abstract

The rapid growth and penetration of electric vehicles has made them a major component of smart transport systems, helping to decrease the greenhouse gas emissions that pollute the environment. With the increased volume of electric vehicles (EVs) in the past few years, meeting their charging demand has also become an immediate requirement. Predicting the demand for electric vehicle charging is therefore of key importance, as it minimizes the burden on the electric grid and also reduces charging costs. In this research study, a novel deep learning (DL)-based long short-term memory (LSTM) recurrent neural network predictor model is developed to forecast electric vehicle charging demand. The parameters of the new deep long short-term memory (DLSTM) neural predictor are tuned to their optimal values using the classic arithmetic optimization algorithm (AOA), and the input time series data are decomposed using empirical mode decomposition (EMD) so as to preserve their features. The novel EMD–AOA–DLSTM neural predictor modeled in this study overcomes the vanishing and exploding gradients of basic recurrent neural learning and is tested for its superiority on the EV charging dataset of Georgia Tech, Atlanta, USA. In simulation, a best prediction accuracy of 97.14% is attained, with a mean absolute error of 0.1083, a root mean square error of 2.0628 × 10−5 and a mean square error of 4.25516 × 10−10. The results prove the efficacy of the prediction metrics computed with the novel deep learning LSTM neural predictor for the considered dataset in comparison with previous techniques from existing works.

1. Introduction

In the past few years, the focus of the automobile sector has been on electric vehicles, to combat changing climatic conditions across the globe and to decrease gas emissions to the greatest possible extent. With the rapid growth of EV technology, it has become a key sector for employment, the economy and the power sector. Fundamentally, EVs operate on electric motors, replacing the internal combustion (IC) engines that employ gaseous fuels. These electric vehicles are powered with electricity from off-vehicle resources or carry batteries or solar panels to charge themselves. Numerous variants of EVs exist, including plug-in EVs, airborne EVs, seaborne EVs, on-off road electric vehicles, range-extended electric vehicles and so on. The most commonly developed are the plug-in electric vehicles, which fall into two categories: battery-powered EVs and plug-in hybrid EVs. Plug-in hybrid electric vehicles are those in which the batteries are charged by plugging into an external power source or with an on-board module. Pure electric vehicles are battery EVs that employ the chemical energy stored in rechargeable batteries and do not possess an internal combustion engine.
Electric vehicles offer varied merits: reduction of greenhouse emissions and of the health hazards of air pollution, decreased requirement for diesel or petroleum, no energy consumption at standstill, increased tank-to-wheel efficiency, minimized vehicle vibration, less noise production, no need for gearboxes for torque conversion, simple mechanical design, high power output over the full speed range, and so on. Under this scenario, the rapid supply of millions of electric vehicles all over the globe has pushed their utilization rate to the highest extent possible. Globally, in the year 2021, 6.8 million battery vehicles were in use and around three million battery electric vehicles were newly manufactured. In India, the EV industry showed an increase of 168% in the year 2021, with 329,190 vehicles sold compared to 122,607 units sold in the year 2020. Figure 1 provides a comparison of the EVs sold in major countries, with China in the lead followed by the USA.
At this juncture, with the increase in population and millions of electric vehicles across the globe, the charging of these electric vehicles is of major concern. Charging an EV requires a direct current (DC) supply to the vehicle battery and, as electric power distribution is alternating current (AC) in nature, a converter is essential to supply DC power to the battery. Table 1 presents the generic power ratings and charging modes of an electric vehicle. Conductive charging of an EV can be carried out in either DC or AC mode.
In the case of AC charging, an on-board charger receives the AC power and converts it into DC. With DC charging, on the other hand, the power is converted externally and the DC power is supplied directly to the battery without the need for an on-board charger. Thus, the battery must be charged as and when required for effective operation and running of the electric vehicle. With the increased requirement for electric power and the growing installed base of EVs, predicting the electric vehicle charging demand is needed, as it gives the utility and the users knowledge of the charging needs with respect to the distance travelled and the time taken. EV charging demand prediction enables consumers to plan the distance to be travelled and to locate nearby charging stations when the battery gets drained.
Fuels are not combusted in electric vehicles, and hence there is no exhaust gas, which confirms their eco-friendly nature. As these vehicles operate on electricity, they can run on renewable sources of energy rather than the gaseous fuels burnt in traditional vehicles. Compared to the price of petrol and diesel, the electricity price is low, and battery recharging is cost-effective when solar power is employed at homes and by industries. Electric vehicles require little maintenance, as there is minimal wear and tear of auto parts compared to conventional vehicles, and maintenance is simpler than for combustion engines. Governments have initiated several incentives to make the public adopt electric vehicles as part of go-green technology. On the other hand, the limitations of electric vehicles include a high initial cost, making them less affordable than traditional vehicles, and the limited number of charging stations across the travelling zone. More time is required for recharging, unlike filling petrol or diesel, which is completed in a few minutes. The driving range of EVs is also limited, making them less suited for long-distance travel than traditional combustion vehicles.
Considering the required demand for EV charging, the main aim of this paper is to design and develop a predictive model for forecasting the charging demand of electric vehicles; this will facilitate maintaining the balance between the distance travelled, the travel time, the time to be taken for charging and the cost incurred.
The remainder of the paper is organized as follows: Section 2 presents detailed related works carried out in this area in the previous literature and provides the motivation for the research study. Section 3 elucidates the development of the proposed predictive model and the datasets employed in this study. Simulation results and the discussions made with respect to the attained solution set are provided in Section 4. Section 5 discusses the comparative analysis made based on the results, and the conclusions are presented in Section 6.

2. Related Works and Motivations

Numerous related works have been carried out in the past few years for predicting the charging demand of electric vehicles, including non-linear programming approaches and machine learning based prediction models. This section presents a detailed review of the literature on previous works carried out for forecasting the charging requirements of electric vehicles.
Wang et al. (2014) predicted the state of charge of the energy storage in hybrid electric vehicles and used Bayesian extreme learning machine for the prediction process [1]. Grubwinkler and Lienkamp (2015) presented the application of machine learning algorithms for an accurate estimation of the energy consumption of electric vehicles [2]. Majidpour et al. (2015) proposed an algorithm based on cell phones for the prediction of energy consumption at electric vehicle charging stations at the University of California [3]. Chen et al. (2016) presented a multimode switched logic control strategy, targeting fuel economy improvement and forecasting process for the plug-in hybrid electric vehicle team for a particular route [4]. Li et al. (2017) modeled a forecasting method that is based on machine learning for predicting the capacity of the charging stations [5]. Foiadelli et al. (2018) extracted statistical features from the electric vehicles and employed supervised learning technique for predicting energy consumed by EVs [6]. Fukushima et al. (2018) proposed a transfer learning approach, a variant of machine learning that constructs prediction models using other sufficient data on EV models [7].
Liu et al. (2019) integrated short-term predictions into a hybrid electric vehicle energy management strategy with the potential to improve its energy efficiency [8]. Mao et al. (2019) proposed forecasting models for the schedulable capacity and energy demand of electric vehicles through the parallel gradient boosting decision tree algorithm [9]. Saputra et al. (2019) proposed novel approaches using state-of-the-art machine learning techniques, aiming at predicting energy demand for electric vehicles [10]. McBee et al. (2020) developed a long-term forecasting approach by combining all attributes required to predict the energy demand of EV penetration [11]. Zhang et al. (2020) presented prediction-based optimal energy management of electric vehicles using an extreme learning machine algorithm, which also provides driver torque demand prediction [12]. Huang et al. (2020) performed forecasting of electric vehicle charging loads using machine learning methods [13]. Sun et al. (2020) proposed an EV charging behavior prediction scheme based on hybrid artificial intelligence to identify targeted EVs [14].
Khan et al. (2021) presented a network model ‘DB-Net’ incorporating a dilated convolutional neural network (DCNN) with bidirectional long short-term memory for forecasting power consumption in EVs [15]. Deb et al. (2021) developed machine learning approaches in combination with Bayesian optimization for prediction analysis of the state of charge of plug-in electric vehicles [16]. Pan et al. (2021) modeled a fuzzy logic control strategy based on driving condition prediction of electric vehicles, optimized using the grey wolf optimizer algorithm [17]. Quan et al. (2021) proposed model predictive control in which the total power demand was forecasted via a Markov speed predictor and imported into the energy management system response prediction model to improve the control performance [18]. Schmid et al. (2021) presented an energy management strategy for parallel plug-in hybrid electric vehicles based on Pontryagin’s minimum principle [19]. Lin et al. (2021) developed an ensemble learning velocity prediction-based energy management strategy considering the driving pattern adaptive reference state of charge for plug-in EVs [20]. Xin et al. (2021) employed a radial basis function neural network as the predictor to obtain the short-term future velocity for fuel cell hybrid electric vehicles [21].
Thorgeirsson et al. (2021) demonstrated the performance advantage of probabilistic prediction models over deterministic prediction models for energy demand prediction of electric vehicles [22]. Shahriar et al. (2021) proposed the usage of historical charging data in conjunction with weather, traffic, and events data to predict EV session duration and energy consumption using popular machine learning algorithms [23]. Cadete et al. (2021) studied long short-term memory and autoregressive and moving average models to predict charging loads with temporal profiles from three EV charging stations [24]. Liu et al. (2021) modeled a driving condition prediction model based on a BP neural network for parallel hybrid electric vehicles [25]. Zhao et al. (2021) presented a novel data-driven framework for large-scale charging energy predictions by individually controlling the strongly linear and weakly nonlinear contributions of EVs [26]. Lin et al. (2021) modeled a novel velocity prediction method using prediction error of back propagation neural network (BPNN)-based method for forecasting charging demand of EV [27]. Malek et al. (2021) introduced a speed forecasting method based on the multi-variate long short-term memory (LSTM) model for EVs [28].
Basso et al. (2021) presented the time-dependent Electric Vehicle Routing Problem with Chance-Constraints and partial recharging using probabilistic Bayesian machine learning, also predicting the expected energy consumption [29]. Aguilar-Dominguez et al. (2021) proposed a machine learning (ML) model to predict the availability of an electric vehicle (EV) and its charging demand [30]. Lin et al. (2021) proposed an online correction predictive energy management strategy using a fuzzy neural network for EVs [31]. Zeng et al. (2021) proposed an optimization-oriented adaptive equivalent consumption minimization strategy based on demand power prediction for electric vehicles [32]. Al-Gabalawy (2021) developed deep reinforcement learning that decreases the convergence time for predicting charging demand in electric vehicles and provides reliable backup power for the grid [33]. Ye et al. (2021) studied intelligent network connectivity technology to obtain forward traffic state data, employed a deep learning algorithm to model vehicle speed prediction and validated it with a plug-in hybrid vehicle model [34]. Petkevicius et al. (2021) proposed deep learning models built from electric vehicle tracking data for predicting EV energy use [35]. A few researchers worked on velocity predictions of electric vehicles using machine learning algorithms, and thereby carried out effective energy management [36,37,38,39,40,41,42].
Sheik Mohammed et al. (2022) proposed algorithms to schedule EV charging based on the availability of solar PV power to minimize the total charging costs [43]. Asensio et al. (2022) predicted the power demand profile based on an autoregressive (AR) model for electric vehicles and a Kalman Filter scheme [44]. Liu et al. (2022) modeled a fast-charging demand prediction model based on the intelligent sensing system of dynamic electric vehicles [45]. Shi et al. (2022) proposed a deep auto-encoded extreme learning machine to attain better prediction accuracy and model complexity for predicting charging load of EVs [46]. Akbar et al. (2022) used machine learning to develop a reliable state of health prediction model for batteries of electric vehicles [47]. Wang et al. (2022) developed a generalized regression neural network (GRNN) to predict future velocity, and thereby charging demand need of electric vehicles [48]. Malik et al. (2022) developed a hybrid model combining empirical mode decomposition (EMD) and neural network (NN) for multi-step ahead load forecasting for virtual power plants with application in EVs [49].
Yan et al. (2022) proposed an artificial intelligence model predictive control framework for the energy management system (EMS) of the series hybrid electric vehicle [50]. Shen et al. (2022) proposed a hybrid deterministic-stochastic methodology utilizing the route information, the driver’s characteristics, and the traffic flow’s uncertainties for predicting the EV’s future velocity profile and energy consumption [51]. Eddine and Shen (2022) proposed temporal encoder-decoder-LSTM concatenated with temporal LSTM (T-LSTM-Ori-TimeFeatures) for addressing the issue of charging demand prediction in electric vehicles [52]. Eagon et al. (2022) proposed a novel approach using two recurrent neural networks (RNNs) for predicting the remaining range of battery of electric vehicles [53]. Wang and Abdallah (2022) modeled a federated learning for qualified local model selection algorithm and semi-decentralized robust network of electric vehicles (NoEV) integration system for power management and prediction of electric vehicles [54]. Table 2 provides an overview of the related works carried out in this specific area.
In connection with the detailed literature review of the different prediction models employed for forecasting the charging demand of plug-in electric vehicles and battery electric vehicles, each of the prediction techniques has its own merits and demerits. With the penetration of millions of electric vehicles across the globe, there is always a requirement for enhancing the charging mechanism adopted. Considering the review made, the various limitations observed in the existing prediction models for this application are listed below:
- Presence of dissimilarity measures in the algorithms [1,9,13,15]
- Lack of mapping of the required feature parameters [2,3,4,5,6,7]
- Certain techniques require more data pertaining to the movement and tracking of electric vehicles to perform the prediction [3,8,10,11,12]
- Invariant data results in poor prediction results [18,19,20,21,22,23,24]
- Occurrence of over-fitting and under-fitting issues [27,28,29,30,31,32,33]
- Heterogeneous data results in non-linear problems, and it is difficult to obtain primary data to model the problem [14,16,17,25]
- Lower prediction accuracy for new EVs than for popular EVs using the same prediction model [36,37,38,39,40,41,42]
- Requirement to analyze fuel economy and drivability, or else higher error variations [44]
- Different time scales for charging elapse more prediction time [34,35,43]
- Lack of frequent data sharing between the charging stations and charging station providers [26]
- Existence of non-linearity with time-dependent data [45]
- Premature and delayed convergence of a few machine learning algorithms [9,47]
- Presence of local and global optima without attaining the saturation limit [10,11,12,13,14,46]
- High mobility and low reliability of electric vehicles [47]
- Environmental factors and occupant behavior affect the performance of existing prediction models [48]
- Difficulty in analyzing long-term energy consumption prediction [49]
- Dependency on the level of state-of-charge [50]
- Implication of driving capacity on maintaining the charging capacity of electric vehicles [51]
- Sparse charging infrastructure and non-linear data [27,36,52]
- Poor descriptive ability of linear networks for complex environments [53]
- High rate of public charging demand [41,47]
- Certain predictive models are limited to short-term prediction [54]
- Difficulty of algorithms in handling temporal profiles [42]

Aim and Objectives of the Research Study

The motivation for this research study stems from the above limitations; to overcome them, it is required to design and develop a better predictive model that forecasts the charging demand of electric vehicles as accurately as possible, based on the considered input parameters and the features derived from them. The importance of an accurate predictive model for electric vehicle charging demand is that it will alert users and drivers to take precautions when charging has to be done. A suitable predictive model for forecasting the charging demand of electric vehicles will help in maintaining the balance between the distance travelled, the travel time, the time to be taken for charging and the cost incurred.
The predictor model is designed to forecast the charging demand of the many electric vehicles that will be charged in a particular sector. For example, with respect to the considered datasets, if 50 vehicles have to be charged during the morning peak hours, the required demand will be high compared to that of a mid-afternoon hour when only five vehicles are charged. So, once the charging demand is analyzed and forecasted per sector and time zone, the charging stations can cater to the need more effectively. If more than 100 EVs pass along a lane, the increased demand and its forecast will help in installing charging stations so that the waiting time for charging the EVs is reduced. Based on this, the objectives of the research study include:
- To model a novel deep learning based recurrent neural network that employs an auto-encoder and decoder for handling the non-linear electric vehicle charging data.
- To employ empirical mode decomposition (EMD) to decompose the data and attain the temporal features using the intrinsic frequency components.
- To apply the arithmetic optimization algorithm (AOA) to find the optimal weights and bias values of the designed deep learning neural model.
- To design the structure of the deep long short-term memory (DLSTM) neural network for performing the prediction with the proposed training and testing algorithms.
- To test and validate the EMD–AOA–DLSTM on the electric vehicle charging dataset of Georgia Tech, Atlanta, USA.
- To ensure that, based on the prediction made, the waiting time for charging EVs is reduced over a 24-h period.

3. Methods and Materials

The design and development of the novel deep long short-term memory neural network for predicting the charging demand of electric vehicles is presented in this section. An overview of the empirical mode decomposition technique and the arithmetic optimization algorithm is also given. The proposed EMD–AOA–DLSTM predictor model and its complete working flow are also detailed in this section of the paper.

3.1. Empirical Mode Decomposition

The raw time series data obtained directly from the plant (Georgia Tech, Atlanta, GA, USA) are decomposed into a set of sub-series using empirical mode decomposition; these sub-series can be detected, individually predicted and finally reconstructed to attain the overall forecast demand value. The original electric vehicle charging data are represented as,
$S(t) = \sum_{j=1}^{m} X_j(t) + D_m(t)$    (1)
In Equation (1), $X_j(t)$ for $j = 1, 2, \ldots, m$ specifies the intrinsic mode functions (IMFs) of the various decompositions and $D_m(t)$ indicates the residue derived after the specified number of IMFs is extracted. For performing EMD, a suitable IMF should be defined such that the number of extrema and the number of zero crossings are equal or differ at most by one, and at any specific point, the average value of the envelope indicated by the local maxima and minima should be ‘0’ [55,56]. The steps to perform EMD for the electric vehicle charging time-series data are as follows:
Step 1: Locate all the extrema (both local minima and maxima) of the series [S(t)].
Step 2: Generate the upper envelope [Supp(t)] by connecting all the local maxima by a cubic spline and generate the lower envelope [Slow(t)] by connecting all the local minima.
Step 3: Evaluate the average value of the envelope [A(t)] using the upper and lower envelopes obtained from step 2.
$A(t) = \dfrac{S_{upp}(t) + S_{low}(t)}{2}$    (2)
Step 4: Extract the information from the original signal and average signal.
$Y(t) = S(t) - A(t)$    (3)
Step 5: Test for Y(t) to be an intrinsic mode function.
- If [Y(t)] is an IMF, then set X(t) = Y(t) and replace [S(t)] with the residual [D(t) = S(t) − X(t)].
- If [Y(t)] is not an IMF, replace [S(t)] with [Y(t)].
Repeat steps 2–4, until the stopping condition gets satisfied. The stopping condition is defined to be,
$\sum_{t=1}^{n} \dfrac{\left| Y_{k-1}(t) - Y_k(t) \right|^2}{Y_{k-1}^2(t)} \le \mu, \quad k = 1, 2, \ldots, m; \; t = 1, 2, \ldots, n$    (4)
In Equation (4), ‘n’ represents the signal length, ‘μ’ is the stopping parameter, ranging from 0.2 to 0.3, and ‘k’ indicates the number of iterative cycles.
Step 6: Carry out steps 1–5 until all the intrinsic mode functions are determined. A minimal sketch of this sifting procedure is given below.
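As an illustration of Steps 1–6, the following Python sketch performs the sifting procedure under stated assumptions: NumPy and SciPy are available, the stopping threshold μ, the maximum number of IMFs and the helper names (sift_once, emd) are illustrative choices and not the exact routine used in this study.

```python
# Minimal empirical mode decomposition (EMD) sketch following Steps 1-6 above.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema


def sift_once(s):
    """One sifting pass: subtract the mean of the cubic-spline envelopes (Steps 1-4)."""
    t = np.arange(len(s))
    maxima = argrelextrema(s, np.greater)[0]
    minima = argrelextrema(s, np.less)[0]
    if len(maxima) < 2 or len(minima) < 2:       # too few extrema -> treat as residue
        return None
    upper = CubicSpline(maxima, s[maxima])(t)    # upper envelope S_upp(t)
    lower = CubicSpline(minima, s[minima])(t)    # lower envelope S_low(t)
    a = (upper + lower) / 2.0                    # envelope mean A(t), Eq. (2)
    return s - a                                 # candidate IMF Y(t), Eq. (3)


def emd(signal, mu=0.25, max_imfs=8, max_sift=50):
    """Decompose `signal` into IMFs plus a residue, as in Eq. (1)."""
    residue = np.asarray(signal, dtype=float).copy()
    imfs = []
    for _ in range(max_imfs):
        y_prev = residue.copy()
        for _ in range(max_sift):
            y = sift_once(y_prev)
            if y is None:
                return imfs, residue
            # Stopping condition of Eq. (4): normalized squared difference below mu
            sd = np.sum(((y_prev - y) ** 2) / (y_prev ** 2 + 1e-12))
            y_prev = y
            if sd < mu:
                break
        imfs.append(y_prev)                      # store X_j(t)
        residue = residue - y_prev               # D(t) = S(t) - X(t), Step 5
    return imfs, residue
```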

3.2. Arithmetic Optimization Algorithm—Revisited

The arithmetic optimization algorithm, developed by Abualigah et al. (2021), operates with the distribution behavior of the fundamental arithmetic operations multiplication (M), division (D), subtraction (S) and addition (A) and attempts to find optimal solutions covering a broad range of the search space [57,58,59]. Figure 2 presents the hierarchy of operators used in the AOA process flow.
The AOA technique operates in four phases: inspiration, initialization, exploration and exploitation. The algorithmic steps adopted to attain the optimal solution employing the AOA approach are as follows:
Step 1: Inspiration—The algorithm is inspired by the operation of the simple arithmetic operators for determining the best value, subject to a certain criterion, from a wide range of candidate solutions. The inspiration is based on the applicability of arithmetic operators to finding solutions to arithmetic problems. The hierarchy of operations adopted is division, multiplication, subtraction and addition, and the dominance decreases from division to addition.
Step 2: Initialization—The candidate solution is defined to be,
$Y(t) = \begin{bmatrix} y_{11} & \cdots & y_{1i} & \cdots & y_{1,n-1} & y_{1n} \\ y_{21} & \cdots & y_{2i} & \cdots & y_{2,n-1} & y_{2n} \\ \vdots & & \vdots & & \vdots & \vdots \\ y_{m-1,1} & \cdots & y_{m-1,i} & \cdots & y_{m-1,n-1} & y_{m-1,n} \\ y_{m1} & \cdots & y_{mi} & \cdots & y_{m,n-1} & y_{mn} \end{bmatrix}$    (5)
Compute the math optimizer acceleration (moa) coefficient,
$moa(Iter_{current}) = \alpha_{\min} + Iter_{current} \times \left( \dfrac{\alpha_{\max} - \alpha_{\min}}{Iter_{\max}} \right)$    (6)
In Equation (6), αmax and αmin specify the maximum and minimum values of the accelerated function, ‘Itercurrent’ indicates the current iteration and ‘Itermax’ specifies the maximum number of iterations.
Step 3: Exploration—The exploratory operators of the AOA approach are the division (D) operator and the multiplication (M) operator. The exploration mechanism identifies the near-optimal solution that shall be obtained after numerous iterations. The exploration operators D and M operate to support the exploitation stage through effective communication. Their high dispersion does not allow these exploration operators to approach the optimal solution easily.
The division search strategy and multiplication search strategy perform a position update in this phase and evolve with the following equation,
If $r_1 > moa$, then

$y_{ij}(Iter_{current}+1) = \begin{cases} y_{j\_Best} \div (mop + \varepsilon) \times \big( (UB_j - LB_j) \times \lambda + LB_j \big), & r_2 < 0.5 \\ y_{j\_Best} \times mop \times \big( (UB_j - LB_j) \times \lambda + LB_j \big), & \text{otherwise} \end{cases}$    (7)
In Equation (7), r1 and r2 are small random numbers; the division operator performs when r2 < 0.5, and the multiplication operator does not perform until the ‘D’ operator completes the current operation. The best solution evaluated so far is ‘yj_Best’, ‘λ’ represents the control parameter for adjusting the search mechanism, ‘ε’ is a small integer number, LBj and UBj specify the lower and upper bounds of the present position, and ‘mop’ is the math optimizer probability given by,
$mop(Iter_{current}) = 1 - \dfrac{Iter_{current}^{1/\beta}}{Iter_{\max}^{1/\beta}}$    (8)
where, ‘β’ specifies the sensitivity parameter defining the exploration accuracy.
Step 4: Exploitation—The exploitation is carried out by the subtraction (S) and addition (A) operators, which move through the search space and obtain high-density solutions. Due to their low dispersion, these two operators are more capable of nearing the best solution point than the high-dispersion operators D and M.
The addition search strategy and subtraction search strategy evolve the position of the best near-optimal solution by moving through deep dense regions, and the update equation is given by,
If $r_1 < moa(Iter_{current})$, then

$y_{ij}(Iter_{current}+1) = \begin{cases} y_{j\_Best} - mop \times \big( (UB_j - LB_j) \times \lambda + LB_j \big), & r_3 < 0.5 \\ y_{j\_Best} + mop \times \big( (UB_j - LB_j) \times \lambda + LB_j \big), & \text{otherwise} \end{cases}$    (9)
The operators S and A help the algorithm overcome the occurrence of local minima, and this exploitation mechanism facilitates the exploration mechanism in attaining the optimal solution by maintaining the diversity of the candidate solutions. The stochastic parameter ‘λ’ is chosen suitably to maintain exploration from the first to the last iteration. The hierarchical order of the arithmetic operators D, M, S and A estimates the position of the near-optimal solution, and this overcomes optimal stagnation towards the last iterations. A minimal sketch of these update rules follows.
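The following Python sketch implements the AOA update rules of Equations (6)–(9); the objective function, bounds, population size and the control parameters (αmin, αmax, β, λ, ε) are assumed values for illustration and not the exact settings used to tune the DLSTM weights in this study.

```python
# Minimal sketch of the arithmetic optimization algorithm (AOA) update rules.
import numpy as np


def aoa(objective, lb, ub, pop_size=30, max_iter=200,
        alpha_min=0.2, alpha_max=1.0, beta=5.0, lambda_=0.5, eps=1e-12):
    dim = len(lb)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    pop = lb + np.random.rand(pop_size, dim) * (ub - lb)       # Eq. (5): candidates
    fitness = np.array([objective(y) for y in pop])
    best = pop[fitness.argmin()].copy()

    for it in range(1, max_iter + 1):
        # Math optimizer acceleration, Eq. (6), and probability, Eq. (8)
        moa = alpha_min + it * (alpha_max - alpha_min) / max_iter
        mop = 1.0 - (it ** (1.0 / beta)) / (max_iter ** (1.0 / beta))
        scale = (ub - lb) * lambda_ + lb

        for i in range(pop_size):
            for j in range(dim):
                r1, r2, r3 = np.random.rand(3)
                if r1 > moa:                                   # exploration, Eq. (7)
                    if r2 < 0.5:
                        pop[i, j] = best[j] / (mop + eps) * scale[j]
                    else:
                        pop[i, j] = best[j] * mop * scale[j]
                else:                                          # exploitation, Eq. (9)
                    if r3 < 0.5:
                        pop[i, j] = best[j] - mop * scale[j]
                    else:
                        pop[i, j] = best[j] + mop * scale[j]
        pop = np.clip(pop, lb, ub)

        fitness = np.array([objective(y) for y in pop])
        if fitness.min() < objective(best):
            best = pop[fitness.argmin()].copy()
    return best


# Usage: minimize the sphere function over [-10, 10]^5
# best = aoa(lambda y: np.sum(y ** 2), lb=[-10] * 5, ub=[10] * 5)
```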

3.3. LSTM Recurrent Neural Model

Long short-term memory is a recurrent neural network model with an additional memory component included. The long-term component categorizes the memory and the short-term component categorizes the data, and the LSTM model assigns weights in such a way that it has the capability to include new data, forget data, or produce the output based on the earlier stored information of the data samples. LSTM models are best suited to remembering and retaining input data for a long time duration and to making the necessary operations on the memory (read, write and delete) [60,61,62].
The LSTM configures memory in the form of gated cells, and each cell decides whether to store or delete data from the network model. The decision is made by the gated cell based on the weight coefficients and the change of weights during progressive training. Information with higher significance is retained in the LSTM memory during the training process and the rest is deleted, moving towards a better predicted value. Figure 3 presents the basic internal structure of the LSTM cell. The internal structure of the LSTM is designed with three gated cell memories: an input gate for identifying and permitting new inputs into the model, a forget gate for deleting irrelevant information, and an output gate for determining the final output corresponding to the current state.
The LSTM network performs back-propagation based gradient descent learning and overcomes the occurrence of vanishing and exploding gradients by maintaining steeper gradients, with minimized training time and higher prediction accuracy. Figure 4 illustrates the architecture of the LSTM neural network model.
The LSTM expands the memory module, and these units construct the recurrent neural network model. In the LSTM neural network, the data are classified as the short-term component and the memory cells form the long-term component. The needs for a recurrent LSTM in this paper include:
- Overcoming saturation of the training model and enabling convergence
- Maintaining balanced weights and bias at the time of training
- Choosing a suitable activation function to evaluate the network output
- Overcoming unwarranted termination of the training process
- Possessing a better slope value so that the gradient enables an efficient training mechanism
- Increasing the memory cells and classifying the data, thereby formulating a better training and testing process
- Preventing the designed recurrent neural network model from instability occurrences.
The algorithmic steps of the training of the LSTM network are as given below.
Step 1: The network sets the initial weights and other learning parameters. The sigmoidal function of the network determines the data that have to be carried forward from the gated cells and the data that should be deleted in the specific time period. The function is computed from the current input ‘xt’ and the previous state ‘zt−1’ and is defined by,
$g_{gt} = \alpha \left( W_{gt}[z_{t-1}, x_t] + W_{ogt} \right)$    (10)
In Equation (10), ‘ggt’ specifies the forget gate, ‘α’ is the learning rate metric, ‘Wgt’ represents the weights of the model and ‘Wogt’ indicates the bias of the neural model.
Step 2: In this step, memory units are added to the current state, and the tangential and sigmoidal activation functions operate to add them. Whether data are passed (0 or 1) is decided by the sigmoidal function, and the weighting of the data to be passed through is done by the tangential function. The operations are indicated by the following equations,
$k_{gt} = \alpha \times \left( W_{it}[z_{t-1}, x_t] + W_{oit} \right)$    (11)

$Y_{gt} = \tanh \left( W_{ct}[z_{t-1}, x_t] + W_{oct} \right)$    (12)
where kgt indicates the input gate and Ygt assigns weights to the data through the tangential function.
Step 3: The memory cell state from which the output has to be attained is decided in this step. The sigmoidal layer activates to find the output and the part of the memory cell that will compute the output. Then, the corresponding cell states pass through the tangential layer to obtain values between −1 and +1, and the final output from the LSTM neural model is computed to be,
$R_{gt} = \alpha \times \left( W_{ot}[z_{t-1}, x_t] + W_{oot} \right), \quad z_t = R_{gt} \times \tanh(Y_{gt})$    (13)
In Equation (13), ‘Rgt’ represents the output gate, which presents the output from the memory cells, and ‘zt’ specifies the current state from which the output is computed. A sketch of one such gated forward step is given below.
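For illustration, the following NumPy sketch performs one forward step of such a gated cell, mirroring Equations (10)–(13); the hidden size, the weight shapes and the use of a sigmoid for the gating nonlinearity ‘α’ are assumptions made for this example only.

```python
# A single LSTM-cell forward step written out with NumPy: forget gate g_gt,
# input gate k_gt, candidate/cell state Y_gt and output gate R_gt.
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def lstm_step(x_t, z_prev, c_prev, W, b):
    """One time step. Each W[.] maps the concatenated [z_(t-1), x_t] to one gate."""
    concat = np.concatenate([z_prev, x_t])            # [z_(t-1), x_t]
    g_gt = sigmoid(W["g"] @ concat + b["g"])          # forget gate, Eq. (10)
    k_gt = sigmoid(W["i"] @ concat + b["i"])          # input gate,  Eq. (11)
    cand = np.tanh(W["c"] @ concat + b["c"])          # candidate memory, Eq. (12)
    c_t = g_gt * c_prev + k_gt * cand                 # updated cell state Y_gt
    R_gt = sigmoid(W["o"] @ concat + b["o"])          # output gate, Eq. (13)
    z_t = R_gt * np.tanh(c_t)                         # current state z_t
    return z_t, c_t


# Example with hidden size h = 4 and input size d = 5 (charging time, energy,
# GHG savings, gasoline savings and cost, as in the dataset of Section 3.5):
h, d = 4, 5
rng = np.random.default_rng(0)
W = {k: rng.normal(scale=0.1, size=(h, h + d)) for k in "gico"}
b = {k: np.zeros(h) for k in "gico"}
z, c = np.zeros(h), np.zeros(h)
z, c = lstm_step(rng.normal(size=d), z, c, W, b)
```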

3.4. Proposed EMD–AOA–Deep LSTM Recurrent Neural Predictor

With this background on the LSTM neural model, this section of the research study develops the novel deep long short-term memory (DLSTM) neural network; the EMD and AOA approaches are employed for time-series EV data decomposition and neural parameter optimization, respectively. A combined EMD–AOA–DLSTM version is modeled as a predictor for forecasting the electric vehicle charging demand using the considered datasets, and thereby this research study forecasts the charging demand for any electric vehicle. In the proposed DLSTM model, deep and dense layers are stacked to form the deep learning structure, which constitutes the crucial predictor model. Figure 5 presents the DLSTM neural network developed in this research study for EV charging demand forecasting.
This research study attempts to predict the charging demand of electric vehicles; the charging time of the electric vehicles is important, and here, the output of the convolutional layer would exceed the input. This is avoided by padding the input data with zeros, so that the outputs of the respective convolutional layers remain identical in size and support deep layer training. The convolutional layer of the proposed DLSTM neural model extracts the significant time series features from the sub-series signals decomposed by EMD and passes the extracted EV charging features to the max-pooling layer of the predictor model. The DLSTM model proposed in this research study is composed of convolutional layers, a pooling layer, a dense layer, an LSTM layer, a dropout layer and, finally, a soft-max layer for presenting the output of the predictor model. In the new DLSTM predictor model, the convolutional operation is carried out for the input-to-state transition and for the state-to-state transition. The modified equations for the new deep learning based DLSTM model are derived as,
$k_{gt} = \alpha \times \big( W_{it} \circ x_t + U_{it} \circ z_{t-1} + H_{it} * Y_{gt-1} + W_{oit} \big)$

$g_{gt} = \alpha \times \big( W_{gt} \circ x_t + U_{gt} \circ z_{t-1} + H_{gt} * Y_{gt-1} + W_{ogt} \big)$

$Y_{gt} = g_{gt} * Y_{gt-1} + k_{gt} * \tanh \big( W_{ct} \circ x_t + U_{ct} \circ z_{t-1} + W_{oct} \big)$

$R_{gt} = \alpha \times \big( W_{ot} \circ x_t + U_{ot} \circ z_{t-1} + H_{ot} * Y_{gt} + W_{oot} \big)$

$z_t = R_{gt} * \tanh(Y_{gt})$    (14)
In Equation (14), ‘∘’ represents the convolutional operator and ‘*’ indicates the element-wise operator. The corresponding gates of the DLSTM model are defined by the state variables at time t, kgt, ggt and Rgt, which combine with the cell output.
The structure of the DLSTM model is designed with four convolutional layers with 30 neurons and a kernel size of four, and the ReLU (rectified linear unit) is built onto these convolutional layers for non-linear transformations. The output of the convolutional layer is a matrix of U50×30, as there are 30 convolutional filters, and the output matrix of the convolutional layer contains the weights of one filter. At the end of the fourth convolutional layer, a single-dimension max pooling layer (pool size = 2) attains the output U25×30. An LSTM operational layer with 70 neurons follows the max pooling layer, with a 30% recurrent dropout probability, and a vector of U1×70 is computed. Finally, a fully connected layer with 70 neurons and linear activation is formed, and the final soft-max layer acts as the predictor. The EMD–AOA based DLSTM model evaluates the mean square error value along with the recurrent dropout, which helps circumvent over-fitting occurrences. For the modeled novel DLSTM, its encoder activation function ‘Gencode’ and the labeled sample data points ‘Xdata’ formulate the encoded matrix as,
$En_{vector} = G_{encode}(X_{data})$    (15)
From the DLSTM, the reconstructed time-series data output is given by,
$O_{reconstruct} = G_{decode}(En_{vector})$    (16)
The designed deep learning layers with auto-encoder and decoder adapts to minimize the error criterion and achieve better prediction metrics during reconstruction operation. The loss function of the new DLSTM is given by,
$g(loss\_function) = \dfrac{1}{N} \sum_{j=1}^{N} \mathcal{L}\left( X_{data}, G_{decode}\left( G_{encode}(X_{data}) \right) \right)$    (17)
During the deep learning process, the presence of non-linearity is evaluated using,
$G_{encode}(X) = g_{f\_encode}(W_0 + W_x), \quad G_{decode}(X) = g_{f\_decode}(W_0 + W_x^T)$    (18)
In Equation (18), ‘gf_encode’ and ‘gf_decode’ specify the encoder and decoder activation functions of the deep learning predictor model, ‘W0’ represents the bias element and the weight matrices are ‘Wx’ and ‘WxT’. The error during the deep training process is evaluated using,
$Error_{DLSTM}(X_{data}, O_{reconstruct}) = \left\| X_{data} - O_{reconstruct} \right\|^2$    (19)
For all the deep LSTM layers, the encoder vectors are evaluated using,
$En_{vector1} = G_{encode1}(X_{data}), \quad En_{vector2} = G_{encode2}(X_{data}), \quad En_{vector3} = G_{encode3}(X_{data}), \quad \ldots, \quad En_{vectorn} = G_{encoden}(X_{data})$    (20)
The final predicted output from the DLSTM neural model is,
$Y_{predicted\_DLout} = G_{encode\_N+1}\left( En_{vectorn} \right)$    (21)
In Equation (21), ‘Gencode_N+1’ represents the trained values at the LSTM output layer, and the new weights based on the gradients are evaluated as,
$W_{new\_en} = W_{old\_en} + \alpha \times \dfrac{\partial Error_{DLSTM}}{\partial W_{new\_en}}, \quad W_{new\_de} = W_{old\_de} + \alpha \times \dfrac{\partial Error_{DLSTM}}{\partial W_{new\_de}}$    (22)
Figure 6 illustrates the complete process flow of the proposed EMD–AOA–DLSTM neural predictor model. The AOA approach tunes the weight and bias components to provide optimized initial values for the deep learning LSTM training. The above steps are repeated for the proposed DLSTM predictor model until the error value reaches the lowest possible value. With respect to the evaluated predicted output and the original data, the mean square error metric is evaluated with,
$MSE_{prediction} = \dfrac{1}{Iter_{\max}} \sum_{i=1}^{Iter_{\max}} \left( Y_{DL\_predicted} - Y_{Original\_data} \right)^2$    (23)
Finally, the weights are updated during training, and the predicted output corresponds to the point at which the minimized MSE prediction value is attained. A sketch of the stacked convolutional and LSTM structure described above is given below.
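As an illustration of the stacked structure described above (four convolutional layers with 30 filters of kernel size four, one-dimensional max pooling, an LSTM layer with 70 units and 30% recurrent dropout, followed by dense layers), the following Keras sketch builds an equivalent network. The sequence length of 50, the "same" padding choice, the optimizer and the linear single-output head used in place of the soft-max stage are assumptions made for a regression setting; the paper itself constructs the network in the MATLAB deep network designer.

```python
# Minimal Keras sketch of the DLSTM layer stack described in Section 3.4.
from tensorflow.keras import layers, models

seq_len, n_features = 50, 5          # 5 inputs: time, energy, GHG, gasoline, cost

model = models.Sequential([
    layers.Input(shape=(seq_len, n_features)),
    # Zero padding ("same") keeps each convolution output the same length
    layers.Conv1D(30, kernel_size=4, padding="same", activation="relu"),
    layers.Conv1D(30, kernel_size=4, padding="same", activation="relu"),
    layers.Conv1D(30, kernel_size=4, padding="same", activation="relu"),
    layers.Conv1D(30, kernel_size=4, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),                 # U(50x30) -> U(25x30)
    layers.LSTM(70, recurrent_dropout=0.3),           # U(1x70) sequence summary
    layers.Dense(70, activation="linear"),            # fully connected layer
    layers.Dense(1),                                  # predicted charging energy (kWh)
])

model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
```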

3.5. EV Charging Datasets

The dataset employed for testing and validating the proposed DLSTM predictor model pertains to the usage of electric vehicles within the campus of Georgia Tech, Atlanta, USA; the vehicles were charged at the conference center parking station, and around 150 vehicles were plying around the campus [63]. The average driving distance of the vehicles is 31 km. Figure 7 provides the histogram plot of the duration of charging and a probability distribution curve. Table 3 provides a sample of the electric vehicle charging dataset used in this research study. From the dataset, the input variables include the charging time (hh:mm:ss), energy (kWh), greenhouse gas (GHG) savings (kg), gasoline savings (gallons) and cost incurred (USD). The output variable corresponds to the predicted charging energy demand (kWh). The proposed EMD–AOA–DLSTM model is designed and simulated to operate on this electric vehicle dataset for predicting its charging demand. A hypothetical sketch of preparing such records is shown below.
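The following pandas sketch is purely illustrative: the file name and column headers are assumptions based on the description of Table 3 (the actual charging station export may label the fields differently), and the one-step-ahead target is only one possible forecasting setup.

```python
# Hypothetical preparation of the charging session records described above.
import pandas as pd

df = pd.read_csv("ev_charging_sessions.csv")          # assumed file name

# Convert the hh:mm:ss charging duration into fractional hours
df["charging_hours"] = (
    pd.to_timedelta(df["Charging Time (hh:mm:ss)"]).dt.total_seconds() / 3600.0
)

# Five input variables described above; the energy of the next session is used
# here as an illustrative one-step-ahead forecasting target.
features = df[["charging_hours", "Energy (kWh)", "GHG Savings (kg)",
               "Gasoline Savings (gallons)", "Fee (USD)"]]
target = df["Energy (kWh)"].shift(-1)
```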

4. Results and Discussions

The developed novel EMD–AOA–DLSTM predictor model is tested for its superiority and effectiveness in predicting the electric vehicle charging energy demand for the charging stations at Georgia Tech, Atlanta, USA. The simulation process is carried out in the MATLAB R2021a environment on an Intel dual core i5 processor with 8 GB of physical memory. Empirical mode decomposition is applied to the original EV charging time-series data, the residual and other IMFs are extracted, and the decomposed sub-series data form the input for the deep LSTM neural network model. The basic AOA algorithm is invoked on completion of the first trial run of the prediction algorithm; subsequently, the weight coefficients and bias entities of the DLSTM neural predictor are tuned to their optimal values and the deep learning algorithm performs its training. Based on the data decomposition of the sub-series using EMD and the tuned optimal coefficients computed from the AOA technique, the deep learning based long short-term memory network attempts to locate the best possible forecast value for the EV charging energy demand metric. Table 4 lists the parametric values used during the training process of the proposed EMD–AOA–DLSTM neural predictor.
The proposed EMD–AOA–DLSTM predictor is tested for its superiority based on the following performance metrics: mean absolute error (MAE), mean square error (MSE), root mean square error (RMSE) and prediction accuracy (Apre), evaluated with the following equations,
$MAE = \dfrac{1}{N} \sum_{q=1}^{N} \left| Y_{predicted\_q} - Y_{actual\_q} \right|$

$MSE = \dfrac{1}{N} \sum_{q=1}^{N} \left( Y_{predicted\_q} - Y_{actual\_q} \right)^2$

$RMSE = \sqrt{ \dfrac{1}{N} \sum_{q=1}^{N} \left( Y_{predicted\_q} - Y_{actual\_q} \right)^2 }$

$A_{pre} = \dfrac{1}{N} \sum_{q=1}^{N} Y_q, \quad Y_q = \begin{cases} 1, & \text{if } \left( Y_{predicted\_q+1} - Y_{actual\_q} \right) \left( Y_{actual\_q+1} - Y_{actual\_q} \right) > 0 \\ 0, & \text{otherwise} \end{cases}$    (24)
In Equation (24), ‘N’ represents the total number of data samples, ‘Yactual’ represents the original EV charging station data and ‘Ypredicted’ is the predicted output obtained using the proposed predictor model; a sketch of these metric computations is given after this paragraph. Figure 8 shows the EMD decomposed sub-series output of the considered EV data samples; these sub-series data are presented as input to the DLSTM model. Figure 9 presents the design of the proposed DLSTM predictor model in the deep network designer of the MATLAB environment, and the simulation results are attained by training the predictive model created.
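The following NumPy sketch computes the four metrics of Equation (24); the directional definition of Apre follows the case condition above, counting a sample as correct when the predicted and actual one-step changes share the same sign.

```python
# Evaluation metrics of Eq. (24) computed with NumPy.
import numpy as np


def prediction_metrics(y_actual, y_predicted):
    y_actual = np.asarray(y_actual, float)
    y_predicted = np.asarray(y_predicted, float)
    err = y_predicted - y_actual
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    # Directional accuracy over consecutive samples
    pred_change = y_predicted[1:] - y_actual[:-1]
    true_change = y_actual[1:] - y_actual[:-1]
    a_pre = np.mean((pred_change * true_change) > 0)
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "Apre": a_pre}
```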
The decomposed sub-signals of the EV charging datasets are presented to the deep learning based long short-term memory neural network model. The deep LSTM is designed in the network designer with an input layer of five neurons (charging time, energy, GHG savings, gasoline, and fee), nine deep dense layers and one output layer with a single output neuron for the prediction of charging energy. At the initial iteration, the weights and bias coefficients are set to small random values, and during the iterative learning process they are tuned to their optimal values using the AOA optimization process. The weights and bias form the population, the hierarchy of D, M, S and A is adopted along with the moa-based position update to attain better optimal solutions, and subsequently deep learning progresses. Reaching the minimal mean square error value is the convergence point of the proposed predictive algorithm for the considered EV charging plant datasets.
The simulation process is carried out for the EV charging station datasets, the predicted EV charging energy (kWh) is computed using the decomposed sub-series data from EMD, and the values of MAE, MSE, RMSE and Apre are evaluated and listed in Table 5. Figure 10 shows the plots of the predicted charging energy level against the actual charging energy level of the EV charging station. It is clear from Figure 10 that the predicted EV charging energy is on par with the original charging energy level for the EV charging station. Figure 11 depicts the convergence curve attained during the deep learning process of the proposed predictor model. At the time of training, convergence was obtained at the 251st epoch with an MSE of 4.25516 × 10−10, and for testing and validation the evaluated MSE values are 5.96333 × 10−10 and 5.5317 × 10−10, respectively. The prediction accuracy attained at the convergence of the DLSTM predictor model is 97.14%, with a minimal MSE and an MAE of 0.1083.
Figure 12 presents the variation in the gradient value, the momentum factor (Mu) and the validation fail checks carried out up to the convergence point at the 251st epoch. The plot confirms that a minimal gradient value is obtained, proving the efficacy of the developed predictor model. The regression plot at the time of prediction confirms that the regression coefficient R = 1 for training, testing and validation, proving the validity and applicability of the proposed EMD–AOA–DLSTM predictor model. This figure has three metrics for a better understanding of the proposed EMD–AOA–DLSTM predictor model. The gradient during the training process ranges between 0 and 10−10. This indicates that the proposed predictor model improves its learning phase only when there is a small change in its weights and bias, which is well supported by the plot shown in Figure 11.
Similarly, in the second part of Figure 12, the momentum factor (Mu) is plotted up to the convergence point at the 251st epoch. It may be observed from this plot that as the number of training epochs increases, the momentum factor starts decreasing, establishing that the learning process of the proposed predictor model is intact. The third part of Figure 12 is the validation check plot, which confirms that the proposed predictor model is not trapped in a local minimum. The all-zero plot justifies that the network parameters adopted in this proposed predictor model are optimally chosen. Thus, with the evidence of all three plots shown in Figure 12, the proposed EMD–AOA–DLSTM predictor model demonstrates itself as a robust predictor model.
Figure 13 shows the regression plots obtained during the progressive training of the deep learning predictor. A good regression plot is one where the predicted output value is close to the target output value, giving a regression value of R = 1. Thus, the proposed EMD–AOA–DLSTM predictor model again establishes itself as a robust predictor model.
The gradient value, the momentum factor (Mu), the MSE and the performance during the training process are presented in Figure 14.
The MSE values computed with respect to the number of iterations elapsed during the training and testing processes are provided in Table 6 for the EV charging station datasets. Convergence occurred with an MSE value of 4.25516 × 10−10 at the 251st epoch for the training process, and during the testing process the MSE was 5.96333 × 10−10 at the same epoch. Table 7 provides a sample of the predicted EV charging demand values against the actual EV charging demand values at the Georgia Tech charging outlet. The predicted values prove to be on par and nearly equal to the actual EV charging energy (kWh) for the considered charging station dataset. At the 251st epoch, the model reached convergence and attained the minimal MSE value.
Employing the developed novel EMD–AOA–DLSTM predictor model, better prediction accuracy with minimized error values has been attained. The superiority of the predictor lies in its ability to carry out the prediction process based on the memory states of the recurrent LSTM model. The new DLSTM model retains information from the past and present and, considering the memory states, predicts the future value.
The deep learning procedure incorporated with the LSTM neural model extracts the significant features from the data through the convolutional layers, and processing with sigmoidal functions takes place at the fully connected dense layers. This enables the proposed predictor to forecast the charging energy level for future demand based on the previous history of electric vehicles and their charging energy levels. Furthermore, by applying the classic arithmetic optimization algorithm, the optimal weight and bias coefficients are tuned, which helps the DLSTM predictor overcome over-fitting occurrences. A 4-fold cross validation is carried out on the EV datasets for training, testing and validation with the EMD–AOA–DLSTM predictor model, and the results are computed during the simulation process. A minimal sketch of such a 4-fold split is given below.
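The following sketch shows a 4-fold cross-validation split of the kind mentioned above, using scikit-learn; the feature matrix, targets and the naive mean predictor are placeholders included only so the example runs end to end, and in the study the folds would instead be used to train and evaluate the EMD–AOA–DLSTM predictor.

```python
# Minimal sketch of a 4-fold cross-validation split over charging records.
import numpy as np
from sklearn.model_selection import KFold

X = np.random.rand(200, 5)      # placeholder feature matrix (5 inputs per sample)
y = np.random.rand(200)         # placeholder charging energy targets (kWh)

fold_mse = []
for train_idx, test_idx in KFold(n_splits=4, shuffle=True, random_state=0).split(X):
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    # model.fit(X_train, y_train) would go here; a naive mean predictor is used
    # purely so the sketch executes.
    y_pred = np.full_like(y_test, y_train.mean())
    fold_mse.append(np.mean((y_pred - y_test) ** 2))

print("MSE per fold:", fold_mse)
```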

5. Comparative Analysis

The novel predictor neural model proposed and simulated in this research study performed the prediction of the charging energy demand of electric vehicles [64,65,66]. The charging demand was predicted with respect to the charging time taken, the charging energy in kWh, the greenhouse emission savings, the gasoline savings and the cost incurred. With respect to the EV charging station considered in this research study, the EMD–AOA–DLSTM model resulted in better prediction accuracy and minimal mean square error during both the training and testing processes.
Table 8 provides a comparative analysis of the developed EMD–AOA based deep LSTM predictor against earlier prediction techniques from existing works [1,10,13,16,22,24,27,50,52]. The same Georgia Tech EV datasets were presented as input to all the comparison models using their respective codes on github.com, and their comparison metrics (MSE, training efficiency, testing efficiency, computational time and prediction accuracy) were evaluated. It is clear from Table 8 that the novel EMD–AOA–DLSTM model, with an MSE of 4.25516 × 10−10 and a prediction accuracy of 97.14%, has proved its superiority over the previous techniques from earlier works. The proposed model incurred a reduced computational time of 7.4 s compared with the Bayesian ELM model, which incurred 16.14 s [1]. The average training and testing efficiency of the proposed predictor was 98.62% and 98.03%, better than the other compared models, proving the efficacy of the deep learning mechanism. The EMD based sub-series decomposition and the presence of AOA to attain optimal training parameters have enabled the proposed DLSTM model to achieve a predicted charging energy demand on par with the actual EV charging energy level.
The Taylor diagram is another option to graphically summarize the proximity between the trained and tested data. The similarity between the two datasets is quantified in terms of their correlation, their root mean square error and the standard deviations. The Taylor diagram is widely used to understand the performance of various complex models used for prediction. Any model which has relatively high correlation and low RMSE will be marked as OBS in the X-axis of the Taylor diagram. The RMSE calculated using Equation (24) and the correlation value are used to arrive at the observed (OBS) value. The standard deviation of the test data can be related to the radial distance from the origin of the plot.
Here, amongst the ten well established models tabulated in Table 8, all the prediction models, including the one proposed in this paper, are plotted in the Taylor diagram shown in Figure 15. The position of the colored dots establishes the closeness of the models’ efficiency in predicting the test data using the trained data. The OBS value as per the diagram is 0.81.
It may be inferred from Figure 15 that any model which has high correlation and low RMSE will be declared a successful prediction model. Accordingly, both the Bayesian ELM model (violet dot) and the probabilistic prediction model (brown triangle) have high RMSE but low correlation. The federated learning (red triangle) model has a better RMSE and correlation, followed by the ensemble learning model (yellow dot), the LSTM model (green dot) and the Bayesian optimization (green triangle). As far as the deep inference framework (black dot) and back propagation (blue triangle) are concerned, both have performed very poorly in establishing both correlation and RMSE. The deep learning (blue dot) model performed reasonably. Finally, the proposed EMD–AOA–DLSTM predictor (red dot) demonstrates its superiority based on the performance metrics by achieving relatively high correlation and low RMSE, as depicted in the Taylor diagram.
Another widely adopted statistical plot to establish the comparison of prediction errors of the various prediction models is shown in Figure 16, where the normalized RMSE is plotted for all the prediction models listed in Table 8, including the proposed EMD–AOA–DLSTM prediction model. The box plots comprise five components to portray the error statistics, namely the three quartiles (lower, median and upper) and the minimum and maximum error values. Accordingly, the rectangular box conveys the range within which the prediction error of each model lies, the black lines show the median absolute error and the + signs convey the prediction error outliers. For clarity in the figure, the different predictor models are abbreviated as follows: EAD: EMD–AOA–DLSTM; FEL: federated learning approach; ENS: ensemble learning; BOP: Bayesian optimization with ML; LST: LSTM model; BP: back propagation model; DPI: deep inference framework; DPL: deep learning model; BEL: Bayesian ELM neural model.
From the box plot it is clear that the proposed EMD–AOA–DLSTM predictor has the narrowest error range; on the contrary, BP has the widest error range. In addition, the outliers and median error for the proposed model are small compared to the other predictor models. The DPL and DPI models produced almost the same error statistics. Similarly, BOP and ENS performed very much the same. BEL, LST, FEL and BP are very poor in reducing errors as well as in predicting the test data.
As the proposed deep LSTM neural model has a random initialization of weights, it is highly important to statistically validate the developed predictor model. In this study, two statistical parameters, the coefficient of determination and the correlation coefficient, are evaluated for the proposed neural model. When the values of both of these coefficients are near 1, the statistical validity of the proposed neural network model is confirmed. Table 9 shows the evaluated values of the statistical parameters. From the table, it is clear that both of these values are close to 1, proving that the proposed model is statistically valid.

6. Conclusions

In this research study, a novel EMD–AOA–DLSTM predictor model has been developed and applied to forecast the electric vehicle charging demand for the considered EV datasets. The proposed predictor combines the merits of empirical mode decomposition, which decomposes the signal into sub-series without loss of information; the arithmetic optimization algorithm, with its well-balanced exploration and exploitation phases; the LSTM recurrent model, whose memory states retain past information; and deep learning, which increases the depth of the architecture layers and the intensity of training to make the prediction more accurate. Simulations carried out with the proposed predictor on the EV datasets prove its superiority over the other existing prediction models for the same datasets. The training and testing efficiencies of the modeled predictor were evaluated to be 98.62% and 98.03%, respectively, which is better than the other state-of-the-art techniques, and the prediction accuracy of 97.14% with a very small MSE in the order of 10−10 proves its effectiveness. Hence, the EMD–AOA-based DLSTM predictor has confirmed its effectiveness, with better prediction accuracy and minimized error values, in forecasting electric vehicle charging demand.

Author Contributions

Conceptualization, J.S. and A.A.V.; methodology, J.S. and A.A.V.; software, G.B. and A.V.; validation, J.S., G.B. and A.A.V.; resources, A.V.; writing—original draft preparation, J.S. and A.A.V.; writing—review and editing, J.S. and A.A.V.; supervision, J.S. and A.A.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, Q.; Sun, Y.; Huang, Y. SOC Prediction of HES in HEV based on bayesian extreme learning machine. ICIC Express Lett. Part B Appl. Int. J. Res. Surv. 2014, 5, 1735–1740. [Google Scholar]
  2. Grubwinkler, S.; Lienkamp, M. Energy prediction for EVS using support vector regression methods. In Intelligent Systems; Springer: Cham, Switzerland, 2014; pp. 769–780. [Google Scholar]
  3. Majidpour, M.; Qiu, C.; Chu, P.; Gadh, R.; Pota, H.R. Fast prediction for sparse time series: Demand forecast of EV charging stations for cell phone applications. IEEE Trans. Ind. Inform. 2014, 11, 242–250. [Google Scholar] [CrossRef]
  4. Chen, Z.; Li, L.; Yan, B.; Yang, C.; Martinez, C.M.; Cao, D. Multimode energy management for plug-in hybrid electric buses based on driving cycles prediction. IEEE Trans. Intell. Transp. Syst. 2016, 17, 2811–2821. [Google Scholar] [CrossRef]
  5. Almaghrebi, A.; Aljuheshi, F.; Rafaie, M.; James, K.; Alahmad, M. Data-driven charging demand prediction at public charging stations using supervised machine learning regression methods. Energies 2020, 13, 4231. [Google Scholar] [CrossRef]
  6. Zhang, Z.; Zou, Y.; Zhou, T.; Zhang, X.; Xu, Z. Energy Consumption Prediction of Electric Vehicles Based on Digital Twin Technology. World Electr. Veh. J. 2021, 12, 160. [Google Scholar] [CrossRef]
  7. Fukushima, A.; Yano, T.; Imahara, S.; Aisu, H.; Shimokawa, Y.; Shibata, Y. Prediction of energy consumption for new electric vehicle models by machine learning. IET Intell. Transp. Syst. 2018, 12, 1174–1180. [Google Scholar] [CrossRef]
  8. Liu, K.; Asher, Z.; Gong, X.; Huang, M.; Kolmanovsky, I. Vehicle Velocity Prediction and Energy Management Strategy Part 1: Deterministic and Stochastic Vehicle Velocity Prediction Using Machine Learning; SAE Technical Paper; SAE International: Warrendale, PA, USA, 2019; pp. 1–8. [Google Scholar]
  9. Mao, M.; Zhang, S.; Chang, L.; Hatziargyriou, N.D. Schedulable capacity forecasting for electric vehicles based on big data analysis. J. Mod. Power Syst. Clean Energy 2019, 7, 1651–1662. [Google Scholar] [CrossRef]
  10. Hannah Jessie Rani, R.; Aruldoss Albert Victoire, T. A hybrid Elman recurrent neural network, group search optimization, and refined VMD-based framework for multi-step ahead electricity price forecasting. Soft Comput. 2019, 23, 8413–8434. [Google Scholar] [CrossRef]
  11. McBee, K.D.; Bukofzer, D.; Chong, J.; Bhullar, S. Forecasting long-term electric vehicle energy demand in a specific geographic region. In Proceedings of the IEEE Power & Energy Society General Meeting (PESGM), Montreal, QC, Canada, 3–6 August 2020; pp. 1–5. [Google Scholar]
  12. Zhang, J.; Xu, F.; Zhang, Y.; Shen, T. ELM-based driver torque demand prediction and real-time optimal energy management strategy for HEVs. Neural Comput. Appl. 2020, 32, 14411–14429. [Google Scholar] [CrossRef]
  13. Huang, X.; Wu, D.; Boulet, B. Ensemble learning for charging load forecasting of electric vehicle charging stations. In Proceedings of the IEEE Electric Power and Energy Conference (EPEC), Edmonton, AB, Canada, 18 January 2020; pp. 1–5. [Google Scholar]
  14. Sun, D.; Ou, Q.; Yao, X.; Gao, S.; Wang, Z.; Ma, W.; Li, W. Integrated human-machine intelligence for EV charging prediction in 5G smart grid. EURASIP J. Wirel. Commun. Netw. 2020, 2020, 139. [Google Scholar] [CrossRef]
  15. Khan, N.; Haq, I.U.; Khan, S.U.; Rho, S.; Lee, M.Y.; Baik, S.W. DB-Net: A novel dilated CNN based multi-step forecasting model for power consumption in integrated local energy systems. Int. J. Electr. Power Energy Syst. 2021, 133, 107023. [Google Scholar] [CrossRef]
  16. Deb, S.; Goswami, A.K.; Chetri, R.L.; Roy, R. Bayesian optimization based machine learning approaches for prediction of plug-in electric vehicle state-of-charge. Int. J. Emerg. Electr. Power Syst. 2021, 22, 753–764. [Google Scholar] [CrossRef]
  17. Pan, C.; Tao, Y.; Liu, Q.; He, Z.; Liang, J.; Zhou, W.; Wang, L. Grey wolf fuzzy optimal energy management for electric vehicles based on driving condition prediction. J. Energy Storage 2021, 44, 103398. [Google Scholar] [CrossRef]
  18. Quan, S.; Wang, Y.X.; Xiao, X.; He, H.; Sun, F. Real-time energy management for fuel cell electric vehicle using speed prediction-based model predictive control considering performance degradation. Appl. Energy 2021, 304, 117845. [Google Scholar] [CrossRef]
  19. Schmid, R.; Buerger, J.; Bajcinca, N. Energy management strategy for plug-in-hybrid electric vehicles based on predictive PMP. IEEE Trans. Control Syst. Technol. 2021, 29, 2548–2560. [Google Scholar] [CrossRef]
  20. Lin, X.; Wu, J.; Wei, Y. An ensemble learning velocity prediction-based energy management strategy for a plug-in hybrid electric vehicle considering driving pattern adaptive reference SOC. Energy 2021, 234, 121308. [Google Scholar] [CrossRef]
  21. Xin, W.; Zheng, W.; Qin, J.; Wei, S.; Ji, C. Energy management of fuel cell vehicles based on model prediction control using radial basis functions. J. Sens. 2021, 24, 25–37. [Google Scholar] [CrossRef]
  22. Thorgeirsson, A.T.; Scheubner, S.; Fünfgeld, S.; Gauterin, F. Probabilistic prediction of energy demand and driving range for electric vehicles with federated learning. IEEE Open J. Veh. Technol. 2021, 2, 151–161. [Google Scholar] [CrossRef]
  23. Shahriar, S.; Al-Ali, A.R.; Osman, A.H.; Dhou, S.; Nijim, M. Prediction of EV charging behavior using machine learning. IEEE Access 2021, 9, 111576–111586. [Google Scholar] [CrossRef]
  24. Cadete, E.; Ding, C.; Xie, M.; Ahmed, S.; Jin, Y.F. Prediction of electric vehicles charging load using long short-term memory model. In Tran-SET 2021; American Society of Civil Engineers: Reston, VA, USA, 2021; pp. 52–58. [Google Scholar]
  25. Liu, Q.; Dong, S.; Yang, Z.; Xu, F.; Chen, H. Energy management strategy of hybrid electric vehicles based on driving condition prediction. IFAC-Pap. Online 2021, 54, 265–270. [Google Scholar] [CrossRef]
  26. Zhao, Y.; Wang, Z.; Shen, Z.J.; Sun, F. Data-driven framework for large-scale prediction of charging energy in electric vehicles. Appl. Energy 2021, 282, 116175. [Google Scholar] [CrossRef]
  27. Lin, X.; Wang, Z.; Wu, J. Energy management strategy based on velocity prediction using back propagation neural network for a plug-in fuel cell electric vehicle. Int. J. Energy Res. 2021, 45, 2629–2643. [Google Scholar] [CrossRef]
  28. Malek, Y.N.; Najib, M.; Bakhouya, M.; Essaaidi, M. Multivariate deep learning approach for electric vehicle speed forecasting. Big Data Min. Anal. 2021, 4, 56–64. [Google Scholar] [CrossRef]
  29. Basso, R.; Kulcsár, B.; Sanchez-Diaz, I. Electric vehicle routing problem with machine learning for energy prediction. Transp. Res. Part B Methodol. 2021, 145, 24–55. [Google Scholar] [CrossRef]
  30. Aguilar-Dominguez, D.; Ejeh, J.; Dunbar, A.D.; Brown, S.F. Machine learning approach for electric vehicle availability forecast to provide vehicle-to-home services. Energy Rep. 2021, 7, 71–80. [Google Scholar] [CrossRef]
  31. Lin, X.; Zeng, S.; Li, X. Online correction predictive energy management strategy using the Q-learning based swarm optimization with fuzzy neural network. Energy 2021, 223, 120071. [Google Scholar] [CrossRef]
  32. Zeng, T.; Zhang, C.; Zhang, Y.; Deng, C.; Hao, D.; Zhu, Z.; Ran, H.; Cao, D. Optimization-oriented adaptive equivalent consumption minimization strategy based on short-term demand power prediction for fuel cell hybrid vehicle. Energy 2021, 227, 120305. [Google Scholar] [CrossRef]
  33. Al-Gabalawy, M. Reinforcement learning for the optimization of electric vehicle virtual power plants. Int. Trans. Electr. Energy Syst. 2021, 31, e12951. [Google Scholar] [CrossRef]
  34. Ye, M.; Chen, J.; Li, X.; Ma, K.; Liu, Y. Energy Management Strategy of a Hybrid Power System Based on V2X Vehicle Speed Prediction. Sensors 2021, 21, 5370. [Google Scholar] [CrossRef]
  35. Chinnadurrai, C.L.; Aruldoss Albert Victoire, T. Dynamic economic emission dispatch considering wind uncertainty using non-dominated sorting crisscross optimization. IEEE Access 2020, 8, 94678–94696. [Google Scholar] [CrossRef]
  36. Liu, Y.; Li, J.; Gao, J.; Lei, Z.; Zhang, Y.; Chen, Z. Prediction of vehicle driving conditions with incorporation of stochastic forecasting and machine learning and a case study in energy management of plug-in hybrid electric vehicles. Mech. Syst. Signal Process. 2021, 158, 107765. [Google Scholar] [CrossRef]
  37. Pokharel, S.; Sah, P.; Ganta, D. Improved Prediction of Total Energy Consumption and Feature Analysis in Electric Vehicles Using Machine Learning and Shapley Additive Explanations Method. World Electr. Veh. J. 2021, 12, 94. [Google Scholar] [CrossRef]
  38. Rabinowitz, A.; Araghi, F.M.; Gaikwad, T.; Asher, Z.D.; Bradley, T.H. Development and evaluation of velocity predictive optimal energy management strategies in intelligent and connected hybrid electric vehicles. Energies 2021, 14, 5713. [Google Scholar] [CrossRef]
  39. Zhou, H.; Zhou, Y.; Hu, J.; Yang, G.; Xie, D.; Xue, Y.; Nordström, L. LSTM-based energy management for electric vehicle charging in commercial-building prosumers. J. Mod. Power Syst. Clean Energy 2021, 9, 1205–1216. [Google Scholar] [CrossRef]
  40. Wu, C.; Jiang, S.; Gao, S.; Liu, Y.; Han, H. Charging demand forecasting of electric vehicles considering uncertainties in a microgrid. Energy 2022, 247, 123475. [Google Scholar] [CrossRef]
  41. Chen, Z.; Liu, Y.; Zhang, Y.; Lei, Z.; Chen, Z.; Li, G. A neural network-based ECMS for optimized energy management of plug-in hybrid electric vehicles. Energy 2022, 243, 122727. [Google Scholar] [CrossRef]
  42. Powell, S.; Cezar, G.V.; Rajagopal, R. Scalable probabilistic estimates of electric vehicle charging given observed driver behavior. Appl. Energy 2022, 309, 118382. [Google Scholar] [CrossRef]
  43. Titus, F.; Thanikanti, S.B.; Deb, S.; Kumar, N.M. Charge Scheduling Optimization of Plug-In Electric Vehicle in a PV Powered Grid-Connected Charging Station Based on Day-Ahead Solar Energy Forecasting in Australia. Sustainability 2022, 14, 3498. [Google Scholar]
  44. Asensio, E.M.; Magallán, G.A.; Pérez, L.; De Angelo, C.H. Short-term power demand prediction for energy management of an electric vehicle based on batteries and ultracapacitors. Energy 2022, 247, 123430. [Google Scholar] [CrossRef]
  45. Liu, Y.; Liu, W.; Gao, S.; Wang, Y.; Shi, Q. Fast charging demand forecasting based on the intelligent sensing system of dynamic vehicle under EVs-traffic-distribution coupling. Energy Rep. 2022, 8, 1218–1226. [Google Scholar] [CrossRef]
  46. Shi, J.; Liu, N.; Huang, Y.; Ma, L. An Edge Computing-oriented Net Power Forecasting for PV-assisted Charging Station: Model Complexity and Forecasting Accuracy Trade-off. Appl. Energy 2022, 310, 118456. [Google Scholar] [CrossRef]
  47. Akbar, K.; Zou, Y.; Awais, Q.; Baig, M.J.; Jamil, M. A Machine Learning-Based Robust State of Health (SOH) Prediction Model for Electric Vehicle Batteries. Electronics 2022, 11, 1216. [Google Scholar] [CrossRef]
  48. Wang, W.; Guo, X.; Yang, C.; Zhang, Y.; Zhao, Y.; Huang, D.; Xiang, C. A multi-objective optimization energy management strategy for power split HEV based on velocity prediction. Energy 2022, 238, 121714. [Google Scholar] [CrossRef]
  49. Malik, H.; Alotaibi, M.A.; Almutairi, A. A new hybrid model combining EMD and neural network for multi-step ahead load forecasting. J. Intell. Fuzzy Syst. 2022, 42, 1099–1114. [Google Scholar] [CrossRef]
  50. Yan, Q.D.; Chen, X.Q.; Jian, H.C.; Wei, W.; Wang, H. Design of a deep inference framework for required power forecasting and predictive control on a hybrid electric mining truck. Energy 2022, 238, 121960. [Google Scholar] [CrossRef]
  51. Shen, H.; Wang, Z.; Zhou, X.; Lamantia, M.; Yang, K.; Chen, P.; Wang, J. Electric Vehicle Velocity and Energy Consumption Predictions Using Transformer and Markov-Chain Monte Carlo. IEEE Trans. Transp. Electrif. 2022, 8, 3836–3847. [Google Scholar] [CrossRef]
  52. Eddine, M.D.; Shen, Y. A deep learning-based approach for predicting the demand of electric vehicle charge. J. Supercomput. 2022, 78, 14072–14095. [Google Scholar] [CrossRef]
  53. Eagon, M.J.; Kindem, D.K.; Panneer Selvam, H.; Northrop, W.F. Neural Network-Based Electric Vehicle Range Prediction for Smart Charging Optimization. J. Dyn. Syst. Meas. Control 2022, 144, 011110. [Google Scholar] [CrossRef]
  54. Wang, Z.; Abdallah, A.B. A Robust Multi-Stage Power Consumption Prediction Method in a Semi-Decentralized Network of Electric Vehicles. IEEE Access 2022, 10, 37082–37096. [Google Scholar] [CrossRef]
  55. Huang, N.E.; Shen, Z.; Long, S.R.; Wu, M.C.; Shih, H.H.; Zheng, Q.; Yen, N.C.; Tung, C.C.; Liu, H.H. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 1998, 454, 903–995. [Google Scholar] [CrossRef]
  56. Rilling, G.; Flandrin, P.; Goncalves, P. On empirical mode decomposition and its algorithms. In Proceedings of the IEEE-EURASIP Workshop on Nonlinear Signal and Image Processing, Grado, Italy, 8–11 June 2003; Volume 3, pp. 8–11. [Google Scholar]
  57. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  58. Agushaka, J.O.; Ezugwu, A.E. Advanced arithmetic optimization algorithm for solving mechanical engineering design problems. PLoS ONE 2021, 16, e0255703. [Google Scholar] [CrossRef] [PubMed]
  59. Kaveh, A.; Hamedani, K.B. Improved arithmetic optimization algorithm and its application to discrete structural optimization. Structures 2022, 35, 748–764. [Google Scholar] [CrossRef]
  60. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A review of recurrent neural networks: LSTM cells and network architectures. Neural Comput. 2019, 31, 1235–1270. [Google Scholar] [CrossRef]
  61. Udaiyakumar, S.; Aruldoss Albert Victoire, T. Week ahead electricity price forecasting using artificial bee colony optimized extreme learning machine with wavelet decomposition. Teh. Vjesn. 2021, 28, 556–567. [Google Scholar]
  62. Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Phys. D Nonlinear Phenom. 2020, 404, 132306. [Google Scholar] [CrossRef]
  63. Campus Electric Vehicle Charging Stations Behavior. Available online: www.kaggle.com/datasets/claytonmiller/campus-electric-vehicle-charging-stations-behavior/metadata (accessed on 16 February 2022).
  64. Macioszek, E. Electric vehicles-problems and issues. In Scientific and Technical Conference Transport Systems Theory and Practice; Springer: Cham, Switzerland, 2019; pp. 169–183. [Google Scholar]
  65. Wang, R.; Xing, Q.; Chen, Z.; Zhang, Z.; Liu, B. Modeling and Analysis of Electric Vehicle User Behavior Based on Full Data Chain Driven. Sustainability 2022, 14, 8600. [Google Scholar] [CrossRef]
  66. Macioszek, E. E-mobility infrastructure in the Górnośląsko-Zagłębiowska Metropolis, Poland, and potential for development. In Proceedings of the 5th World Congress on New Technologies (NewTech’19), Lisbon, Portugal, 18–20 August 2019; p. 108. [Google Scholar]
Figure 1. Statistics of EVs across countries (Courtesy—statista.com (accessed on 16 February 2022)).
Figure 2. Hierarchy of process flow of AOA technique.
Figure 3. LSTM neural internal gated structure.
Figure 4. Architecture of LSTM neural network predictor.
Figure 5. Proposed architecture of DLSTM neural predictor model.
Figure 6. Flow diagram of the combined EMD–AOA–DLSTM predictor model.
Figure 7. Histogram of charging duration and distribution plot.
Figure 8. Original EV data decomposition using EMD.
Figure 9. Design of proposed DLSTM neural model in Deep Network Designer.
Figure 10. Plot of actual and predicted EV charging energy value.
Figure 11. Convergence plot for the proposed predictor model during DL training.
Figure 12. Variation of gradients during deep learning process.
Figure 13. Regression plot of the new predictor model.
Figure 14. Training and testing process of DLSTM model.
Figure 15. Taylor diagram for the prediction of Georgia Tech EV charging datasets.
Figure 16. Normalized RMSE statistics for Georgia Tech EV Charging datasets.
Table 1. EV Charging and Power rating.
Charging Mode | Power Rating (P) | Supply
Normal Power Charging | Less than or equal to 7 kW | DC and AC
Normal Power Charging | 7 kW to 22 kW | DC and AC
High Power Charging | 22 kW to 50 kW | Only DC Supply
High Power Charging | 50 kW to 200 kW | Only DC Supply
Table 2. Related works.
Author | Method Adopted | Challenges
Mao et al. (2019) [9] | Parallel gradient boosting decision tree algorithm | Delayed scheduling capacity and energy demand
Saputra et al. (2019) [10] | Variants of machine learning techniques | Only for short-term prediction
Zhang et al. (2020) [12] | Extreme learning machine algorithm | Stuck with algorithmic stagnation issues
Sun et al. (2020) [14] | Hybrid artificial intelligence techniques | Accuracy was not guaranteed due to rule base employed
Thorgeirsson et al. (2021) [22] | Probabilistic prediction models | Invariant time frame of analysis
Shahriar et al. (2021) [23] | Hybrid machine learning algorithms | Longer session duration of prediction with higher error
Cadete et al. (2021) [24] | Autoregressive and moving average models | Higher error variations with minimized accuracy
Liu et al. (2021) [25] | Back propagation neural network | Over-fitting and under-fitting problems
Zeng et al. (2021) [32] | Optimization-oriented adaptive training predictor | Difficulty in data sharing
Ye et al. (2021) [34] | Deep learning algorithm | Increased layer complexity and delayed convergence
Asensio et al. (2022) [44] | Kalman filter scheme and autoregressive models | Difficulty in handling temporal files
Liu et al. (2022) [45] | Intelligent sensing system | Difficulty in handling non-linear time dependent data
Shi et al. (2022) [46] | Deep auto-encoded extreme learning machine | Difficulty in analysing the long-term energy consumption prediction
Table 3. Sample datasets of Electric Vehicle Charging.
Charging Time (hh:mm:ss) | Energy (kWh) | GHG Savings (kg) | Gasoline Savings (Gallons) | Cost Incurred (USD)
01:11:50 | 6.249 | 2.625 | 0.784 | 1.02
00:58:15 | 4.352 | 1.828 | 0.546 | 0.83
01:11:24 | 4.341 | 1.823 | 0.545 | 1.02
03:19:30 | 7.857 | 3.3 | 0.986 | 4.12
01:58:14 | 6.075 | 2.551 | 0.762 | 1.68
02:23:58 | 7.758 | 3.258 | 0.974 | 2.04
01:37:25 | 9.55 | 4.011 | 1.199 | 1.39
03:24:24 | 11.275 | 4.735 | 1.415 | 2.9
01:01:50 | 6.061 | 2.546 | 0.761 | 0.88
01:23:29 | 4.019 | 1.688 | 0.504 | 1.19
02:29:45 | 9.085 | 3.816 | 1.14 | 2.12
01:54:49 | 5.153 | 2.164 | 0.647 | 1.78
06:20:20 | 19.28 | 8.098 | 2.42 | 6.17
04:47:36 | 12.334 | 5.18 | 1.548 | 4.73
03:12:16 | 9.975 | 4.19 | 1.252 | 2.73
03:10:19 | 12.815 | 5.382 | 1.608 | 2.7
03:27:05 | 13.826 | 5.807 | 1.735 | 2.94
04:37:06 | 12.408 | 5.211 | 1.557 | 4.25
04:48:07 | 18.272 | 7.674 | 2.293 | 5.56
03:57:31 | 9.594 | 4.03 | 1.204 | 4.52
04:43:32 | 12.967 | 5.446 | 1.627 | 4.5
02:06:29 | 4.789 | 2.011 | 0.601 | 1.96
06:11:21 | 19.524 | 8.2 | 2.45 | 8.96
04:45:17 | 15.359 | 6.451 | 1.927 | 4.05
05:07:46 | 12.336 | 5.181 | 1.548 | 4.36
04:18:45 | 14.575 | 6.122 | 1.829 | 3.67
00:01:34 | 0.081 | 0.034 | 0.01 | 0
Table 4. Parameters and their values of EMD–AOA–DLSTM predictor.
Parameter | Value
Number of deep layers | 11
Number of IMFs | 4
Input neurons | 5
Convolutional layer neuron nodes | 30
Pooling layer nodes | 30
LSTM layer neurons | 70
Learning rate metric | 0.01
Number of populations | 40
Convergence criterion | 10−6
Recurrent LSTM memory states | 8
Dropout probability | 30%
Output neurons | 1
Activation | Sigmoidal function
Max iterations | Till the convergence criterion
No. of trial runs | 32
β | 5
λ | 0.5
Table 5. Performance metrics with EMD–AOA–DLSTM predictor model.
EV Dataset | MAE | MSE | RMSE | Apre
Georgia Tech EV charging station, USA | 0.1083 | 4.25516 × 10−10 | 2.0628 × 10−5 | 0.9714
Table 6. Evaluated MSE value over Epochs.
Epochs | Training Mean Square Error | Testing Mean Square Error
10 | 4.2516 | 5.7164
50 | 1.0291 | 3.9917
100 | 0.2098 | 0.6310
150 | 3.0816 × 10−3 | 7.5518 × 10−3
200 | 7.1892 × 10−5 | 9.9421 × 10−5
250 | 2.1163 × 10−7 | 5.2219 × 10−7
251 | 4.25516 × 10−10 | 5.96333 × 10−10
Table 7. Sample predicted output using EMD–AOA–DLSTM predictor (Georgia Tech EV Charging Station Dataset).
Actual Charging Energy Output (kWh) | Predicted Charging Energy Output (kWh) | Actual Charging Energy Output (kWh) | Predicted Charging Energy Output (kWh)
13.575 | 13.500 | 9.903 | 9.900
5.952 | 6.014 | 0.976 | 0.971
19.58 | 19.726 | 7.053 | 7.048
5.617 | 5.229 | 5.552 | 5.488
18.795 | 18.903 | 1.179 | 1.184
4.302 | 4.007 | 13.137 | 13.130
10.818 | 10.800 | 18.278 | 18.275
6.32 | 6.296 | 2.126 | 2.130
19.485 | 19.501 | 6.939 | 6.900
9.132 | 9.127 | 10.812 | 10.810
12.507 | 12.500 | 9.039 | 9.010
10.314 | 10.310 | 8.025 | 8.022
7.8 | 7.779 | 9.208 | 9.201
12.269 | 12.261 | 0.891 | 0.890
19.737 | 19.730 | 6.952 | 6.950
13.902 | 13.900 | 11.996 | 11.990
10.033 | 10.000 | 16.789 | 16.790
Table 8. Comparisons with the state-of-the-art techniques (Georgia Tech EV Charging Datasets).
Prediction Technique Adopted | MSE Error | Training Efficiency % Mean | Testing Efficiency % Mean | Computational Time (s) | Prediction Accuracy %
Bayesian ELM neural model [1] | 5.1304 | 83.26 | 82.77 | 16.14 | 87.09
Federated learning approach [10] | 2.3478 | 85.19 | 84.76 | 15.83 | 89.66
Ensemble learning [13] | 0.3367 | 89.03 | 88.64 | 14.36 | 90.01
Bayesian optimization with ML [16] | 0.1649 | 89.48 | 88.91 | 14.87 | 90.73
Probabilistic prediction [22] | 0.0021496 | 91.45 | 90.67 | 15.04 | 93.26
LSTM model [24] | 0.00487632 | 93.51 | 92.48 | 11.60 | 95.03
Back propagation model [27] | 0.0002148 | 93.64 | 93.37 | 10.48 | 95.67
Deep inference framework [50] | 8.11476 × 10−4 | 94.18 | 94.02 | 9.84 | 96.48
Deep learning model [52] | 6.13857 × 10−6 | 96.44 | 94.27 | 9.15 | 96.91
Proposed EMD–AOA–DLSTM neural predictor | 4.25516 × 10−10 | 98.62 | 98.03 | 7.4 | 97.14
Table 9. Statistical validation of proposed neural model.
Proposed Neural Model | Correlation Coefficient | Coefficient of Determination
Deep LSTM neural model | 0.9967 | 0.9993
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
