Review

Efficient Management of Energy Consumption of Electric Vehicles Using Machine Learning—A Systematic and Comprehensive Survey

by Marouane Adnane 1, Ahmed Khoumsi 1 and João Pedro F. Trovão 1,2,3,*
1 e-TESC Laboratory, University of Sherbrooke, Sherbrooke, QC J1K 2R1, Canada
2 Department of Electrical Engineering, Polytechnic Institute of Coimbra, Coimbra Institute of Engineering, 3030-199 Coimbra, Portugal
3 Department of Electrical and Computer Engineering, INESC Coimbra, University of Coimbra, Polo II, 3030-290 Coimbra, Portugal
* Author to whom correspondence should be addressed.
Energies 2023, 16(13), 4897; https://doi.org/10.3390/en16134897
Submission received: 16 May 2023 / Revised: 10 June 2023 / Accepted: 13 June 2023 / Published: 23 June 2023
(This article belongs to the Section E: Electric Vehicles)

Abstract
Electric vehicles are growing in popularity as a form of transportation but are still underused for several reasons, such as their relatively low range and the high costs associated with manufacturing and maintaining batteries. Many studies using several approaches have been conducted on electric vehicles. Among all the subjects studied, we are interested here in the use of machine learning to efficiently manage the energy consumption of electric vehicles, in order to develop intelligent electric vehicles that make quick, unprogrammed decisions based on observed data while minimizing electricity consumption. Our interest is motivated by the satisfactory results obtained with machine learning in many fields and by its increasing, but still insufficient, use for efficiently managing the energy consumption of electric vehicles. From this standpoint, we have built this comprehensive survey covering a broad variety of scientific papers published in the field over the last few years. Based on the findings, we identify the current trends and outline future perspectives.

1. Introduction

1.1. Objective and Motivation

Transportation is an important sector of human life. People use vehicles in their daily lives for the convenience of reaching a destination by car or public transport, and the number of vehicles in circulation is growing faster than the human population [1].
In the last few years, electric vehicles (EVs) have become more popular, and several car manufacturers now have EVs on the market. The EV has gained popularity due to its ecological impact and energy source flexibility. However, EVs still represent a small share of the global vehicle fleet: in 2020, they accounted for 4.2% of new vehicle sales worldwide. This suggests that many drivers have not yet taken the plunge and do not yet have confidence in EVs, asking themselves: how far can we drive before having to recharge the EV?
Overall, the worldwide future of EVs is dependent on a number of factors, some of which are as follows [2]:
  • The initial cost of the vehicles, which is mainly determined by battery costs and other industrial inputs, as well as company profits.
  • The number of charging stations available, the distance between them, and the time it takes to charge.
  • The driving range, speed, and acceleration.
  • Maintenance expenses, particularly in terms of battery durability.
A significant number of studies have been analyzed, providing a broad overview of EVs across different challenges and contexts, often with no clear methodological approach [3]. In fact, Vidal et al. [4] present an overview of battery state estimation techniques using machine learning (ML) algorithms, such as feedforward neural networks, radial basis functions, etc. The authors aim to provide readers with a comprehensive understanding of the ML landscape for battery state of charge and state of health estimation in terms of datasets, features, test conditions, battery types, and evaluation measures. The authors of [5] provide a survey of 15 previously published review papers on privacy protection using ML for EVs between 2016 and 2021. They also highlight various application scenarios in EVs for securing the information exchanged between EVs and vehicle-to-everything networks. Comparisons between approaches are made in terms of ML algorithms, datasets, evaluation measures, and privacy frameworks. Other studies reviewed a specific topic in EVs, such as battery EV technology [6,7] and EVs within smart grid systems [8], by collecting information that is unsystematically interpreted with subjective summaries of findings. Among all the subjects studied, the use of ML to efficiently manage the energy consumption of EVs (ECEV) is becoming fundamental for the development of truly intelligent EVs that make quick unprogrammed decisions based on observed data and ensure minimum electricity consumption. ML-based methods for (efficient) ECEV require particular attention for two reasons. First, they aim at processing large and complex data to extract knowledge and make decisions with minimal human intervention. Second, they facilitate the expression of the relationship between the output variables (i.e., those that quantify ECEV) and the input variables (i.e., features that influence ECEV, such as speed, road type, and weather) [9].
The satisfactory results obtained using ML in many fields, and its increasing but still insufficient use to efficiently manage ECEV, motivated us to write this state of the art of ML-based studies of ECEV and to encourage critical thinking by highlighting the primary research avenues, approaches, and gaps in the field.

1.2. Contributions

According to the literature, there is a lack of in-depth analysis of ML-based studies of ECEV, and no systematic map or literature review of ML-based studies of ECEV has been published so far. For this reason, we have built this state of the art of ML-based studies of ECEV, which, to the best of our knowledge, is the most comprehensive survey covering a rigorously selected set of studies. A set of 156 studies is systematically selected among many papers, then analyzed and classified with respect to several elements, such as the algorithms used, the datasets and data preparation techniques, the features used to make predictions, the research types, the most frequently used evaluation measures and cross-validation methods, the software tools used, the road types considered, and the training architectures adopted. This classification and analysis can help researchers define research projects related to applying ML to the study of ECEV.

1.3. Brief Introduction to Machine Learning

In the last decade, ML algorithms have been used to assess their ability to improve ECEV compared to statistical algorithms. The principle of ML is to learn patterns in data to make predictions, recognize new patterns, or assign the data to different classes. ML algorithms have been used in many research topics, such as computer vision for autonomous vehicles [10], software engineering [11], and smart cities [12]. ML algorithms have two significant advantages: the capacity to model the complex set of relationships between the output to be predicted and the input, and the capacity to learn from historical data [9]. By combining several domains such as artificial intelligence, statistics, linear algebra (vector spaces), and optimization, ML tends to solve a number of problems by designing various algorithms that “vary in their goals, in the available training datasets, and in the learning strategies and representation of data” [9]. The three main categories of ML are [9]: supervised, unsupervised, and reinforcement learning.
Supervised Learning (SL): Its principle is to build a prediction model (or predictor) from training data in which each instance is specified by an (input, output) pair. The obtained predictor can then be used to determine the outputs of new instances whose outputs are unknown [9]. Classification and regression are the common prediction tasks supported by this category [9]. SL is called classification when the domain of outputs is finite, and hence discrete. In this case, each output value corresponds to a class (or category), and classification consists of predicting the class of an input [9]. The generated classification model may be represented by many structures, such as classification rules or mathematical formulae [13]. On the other hand, SL is called regression when the domain of outputs (and inputs) is continuous, and hence infinite [9]. Many algorithms initially developed for classification have been adapted for regression.
Unsupervised Learning (UL): In this category, data that are given to learning algorithms during the learning process are composed of input values, and no notion of output is used [9]. That is, there are no labels to predict. The main objective of UL is to discover natural structure in the input data [9]. UL algorithms can organize the data at hand in different ways.
Reinforcement Learning (RL): It is based on the Markov decision process [14] and consists of an agent and an environment that interact as follows: the agent applies an action to the environment, and the environment reacts by modifying the agent’s state and giving the agent a reward. This new state and reward are used as inputs by the agent, which deduces the next action (its output), and so on. To learn which actions to take, the agent alternates between two phases: (1) exploration, where the agent tries several actions and observes the reaction of the environment, and (2) exploitation, where the agent adapts its behavior to find the strategy that tends to maximize the total reward received from the environment [14].
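As an illustration of the exploration/exploitation loop described above, the following minimal sketch implements tabular Q-learning with an ε-greedy policy. The environment (states, transitions, and rewards) is a toy placeholder invented for this example, not a vehicle model.

```python
import numpy as np

# Minimal tabular Q-learning sketch: epsilon-greedy exploration/exploitation
# over a toy environment with random transitions and an arbitrary reward shape.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))           # action-value table
alpha, gamma, epsilon = 0.1, 0.9, 0.2         # learning rate, discount, exploration rate

def step(state, action):
    """Toy environment: returns (next_state, reward). Placeholder dynamics."""
    next_state = int(rng.integers(n_states))
    reward = -abs(action - state % n_actions)  # arbitrary reward for illustration
    return next_state, reward

state = 0
for _ in range(10_000):
    # Exploration with probability epsilon; exploitation (greedy action) otherwise.
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Q-learning update rule.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.round(Q, 2))
```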
The paper is organized as follows: In Section 2, we present the methodology used to build our survey. The results of the analysis of the selected papers are presented and discussed in Section 3. Finally, Section 4 presents the current trends, future perspectives, and the validity of this survey.

2. Methodology

This section presents the methodology adopted to carry out an up-to-date literature review of ML-based studies of ECEV using a Systematic Mapping Study (SMS). SMS [15,16,17] is a well-structured methodology whose basic aim is to determine, classify, and analyze the most relevant studies on a specific topic of interest. The resulting literature review is addressed to readers interested in studying ECEV, with a focus on the use of ML, as ML has shown impressive results in solving complex problems in many fields. The systematic mapping process can be represented as three phases, which are explained below.

2.1. Formulate a Set of Mapping Questions

The first phase aims to formulate the mapping questions (MQs), which are the questions that the literature review to be built is supposed to answer. The main objective of this phase is to identify and clearly formulate a set of adequate MQs. This phase is the first step towards building a literature review in a systematic and rigorous way. It is a non-automatable human task whose input is the studied topic (i.e., studying ECEV using ML) and whose output (the MQs) is the input of the other two phases.
After analysis of several surveys and a thorough reflection, we have identified six MQs. In the following, each MQ is specified as an interrogative sentence and then justified in a short paragraph.
MQ1: It is related to datasets and is composed of two questions: (a) What datasets are used? (b) How are datasets prepared?
Classifying the publications based on the types of datasets and their preparation helps understand how these elements influence the quality of the predictions of ML-based models.
MQ2: What features (input variables) are used?
Classifying the publications based on the features used helps understand how the feature choice influences the quality of the predictions of ML-based models.
MQ3: What types of roads are the most investigated?
Classifying the publications based on road types helps identify the results, problems, and challenges encountered depending on the type of roads when using ML to study ECEV. This MQ can be generically reworded as: What elements of the environment are the most investigated?
MQ4: It is composed of two questions: (a) What ML-based algorithms are used and how are they tuned? (b) What software tools and programming languages are used?
The objective is to exhibit the algorithms used to generate ML-based prediction models and investigate whether parameter tuning is used in these algorithms. This helps classify the algorithms used and highlight their trends in the field from the perspective of researchers and practitioners. Additionally, it helps understand the overall behavior of ML models by studying the use of parameter tuning methods. Moreover, this question aims to identify the software tools most used for ML-based studies of ECEV, which highlights the tool kits on which research has focused for the implementation and simulation of newly designed ML-based prediction models.
MQ5: What evaluation measures and validation methods are used?
The aim is to identify the most frequently used measures to assess the quality of predictions obtained from ML prediction models. The aim is also to identify methods to assess the model’s ability to predict new data that were not used to build the prediction model.
MQ6: What resolution approaches and architectures are used?
The intention is to highlight the solutions and training architectures proposed in the selected studies. Answering this MQ enables assessing the maturity level of research in this field.

2.2. Identify a Relatively Reduced Set of Relevant Studies

The second phase consists of selecting the most relevant papers to be studied in this literature review. The papers are selected based on the studied topic and the MQs using a systematic mapping process. More precisely, we proceed in three steps to identify a set of relevant papers to be investigated, as described below.

2.2.1. Step 1: Construct and Use a Search String to Select Papers in Digital Libraries

In this step, we first extract keywords from the MQs and complete them with synonyms and alternative spellings. The Boolean AND is used to combine the keywords, and OR to combine the synonyms. The result of the combination is the following search string, where * refers to any sequence of characters:
(artificial intelligence OR machine learning OR deep learning OR AI OR ML OR DL) AND (prediction OR assessment OR evaluation OR forecast* OR measurement) AND (algorithm OR model OR technique OR tool OR method OR approach) AND (speed OR velocity OR input OR features) AND (electric* vehicle OR electric* car OR EV) AND (energy consummation OR energy optimization OR energy minimization).
The obtained search string is then used to automatically search for papers in six electronic databases: IEEE Xplore, Science Direct, Springer Link, Scopus, ACM Digital Library, and Google Scholar.
The automatic search is performed for the period from 2000 to 2022. The search string is formulated according to the characteristics of each database, and Mendeley and Publish or Perish tools are used to manage the automatic search results.
This automatic search in the 6 digital libraries returns a huge set of 135,028 papers. The obtained list of studies is ordered as follows: the studies that best match the search string appear first, and relevance decreases further down the list. The numbers of studies obtained from each database and at each step are shown in Figure 1.

2.2.2. Step 2: Filter the Result of Step 1 Based on Titles and Abstracts of Papers

In this step, we filter the huge set of 135,028 papers returned by step 1 in order to keep a relatively reduced set of the most relevant papers. Recall that in step 1, all papers that respect the search string are returned. In step 2, we keep only the first k papers of the list whose title and/or abstract respect the search string. That is, k is such that the (k + 1)th paper is the first paper in the list whose title and/or abstract does not respect the search string. We proceed this way because the list returned by step 1 is ordered from the most relevant to the least relevant papers. Therefore, step 2 can be performed manually without excessive effort although the list is very large. This step returns 215 candidate papers, which are the first 215 papers of the list returned by step 1.

2.2.3. Step 3: Filter the Result of Step 2 Using Inclusion and Exclusion Criteria

This step, which is performed manually, consists of applying a set of inclusion criteria (ICs) and exclusion criteria (ECs) to the candidate studies obtained from step 2 in order to select the most relevant ones. In general, ICs and ECs should be built in such a way as to guarantee that they can be interpreted in a systematic manner and that they correctly classify studies [15]. Some ICs and ECs can be generic. For each study, a filter is applied to determine whether the study should be included or excluded. For each study, we proceed in order as follows:
  • We apply some generic ECs, i.e., those used in most studies in the literature [18], to exclude studies such as: (1) short studies without any contribution, (2) studies which consist of conference/editorial summaries, guidelines, or secondary studies (review, survey, etc.), (3) studies not written in English, (4) studies whose full text is not available, (5) studies that are duplicates of other studies, and (6) books and grey literature. Each study that meets one of these generic ECs is excluded.
  • Among the remaining papers (i.e., those not excluded in the above point), we exclude the studies of EVs which ignore energy consumption.
  • Among the remaining papers (i.e., those not excluded in the above two points), we only keep the papers that meet at least one of the following ICs: (1) studies that use ML and artificial intelligence algorithms for ECEV, (2) studies working on the minimization of ECEV, and (3) studies that predict speed profile.
A set of 156 relevant studies [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174] results from step 3, which will be used to carry out the state of the art of ML-based studies of ECEV.

2.3. Analysis and Discussion of the Mapping Questions Results

Phase 3 consists of analyzing the 156 relevant papers selected in the second phase. The analysis consists of responding to each MQ derived from the first phase. For this purpose, a spreadsheet form is filled for each paper to gather the information required to respond to every MQ. Then, we synthesize and analyze the obtained information across all papers to answer each MQ, as presented in the following six sections, Section 3.2, Section 3.3, Section 3.4, Section 3.5, Section 3.6 and Section 3.7, respectively. Note that Table A1 in Appendix A shows the list of acronyms used in this study.

3. Content Analysis and Discussion

3.1. Preliminary Analysis of the Selected Studies

This section presents a preliminary analysis of the selected studies in ML-based ECEV. By analyzing these studies, we found that interest in ML-based ECEV studies started in 2008. Most of them were published in the last 4 years, with production growing at an accelerated rate since 2016. The selected ML-based ECEV studies combine two subjects: EVs and computer science. Thus, we found a diversity in the identified sources. Consequently, researchers can use diverse sources that focus on these two fields for future publications.
Moreover, the selected ML-based ECEV studies are distributed between two research types [175]: evaluation research (analysis) and solution proposal (design). Most of these papers (67%) are evaluation research studies that are undertaken to show how the ML-based algorithms used to evaluate ECEV are implemented in practice and determine the benefits and drawbacks of various existing implementations [175]. The rest of the papers (33%) are studies to design a new method or an improvement of an existing method, by showing the applicability of the proposed solution and the possible benefits using simple examples [175]. This result shows that the main concern of researchers in the EV domain is to analyze the ECEV models, while relatively few studies propose and design new approaches to enhance ECEV based on ML.

3.2. Datasets and Their Preparations

3.2.1. What Datasets Are Used?

To empirically validate and evaluate the effect of datasets on minimizing ECEV, a set of 54 different datasets is identified in the selected ML-based ECEV studies. Figure 2 presents the evolution over the years of the most investigated datasets in the selected papers. Driving cycles (e.g., the urban dynamometer driving schedule [132], the highway fuel economy test [163], etc.) are the most adopted datasets, used in 70 studies. They cover a variety of regions using different vehicle categories over various road types [141] and contribute to developing the powertrain for vehicles [129,163]. We also observe that driving cycles have shown the most marked growth since 2017. The slowdown in 2022 is because only three months of the year are considered. Next come datasets based on the Global Positioning System (GPS), which are used in 36 studies. They help to measure and determine the position of the vehicle during a trip [32]. In third position, datasets based on the Internet of Things (IoT) are used in 24 studies. They are a way for vehicles to communicate with each other through vehicle-to-vehicle networks, and with the road infrastructure through vehicle-to-infrastructure communication networks, in order to estimate and predict the vehicle speed under various conditions [172]. Then follow big data-based datasets, which are used in 12 studies and whose main advantage is to accelerate the learning curve: the more data an ML model receives, the more it learns and the more accurate it becomes. The remaining studies use other datasets such as the performance measurement system (PeMS) and the next-generation simulation (NGSIM) datasets, in 3 studies each. The distribution of datasets in the selected ML-based ECEV studies is shown in Table 1.
Analyzing vehicle energy consumption from multi-modal sensor data, including GPS, IMU, steering wheel, and wheel speed, presents significant challenges, particularly in achieving time and space synchronization, which is crucial for ML applications. To make such data exploitable from an ML perspective, it is essential to address these challenges by exploring several research directions. One such direction is improving vehicle localization using on-board sensors and incorporating vehicle lateral velocity estimation [176]. This integration of data sources can provide more accurate spatial information, enabling better modeling and prediction of energy consumption patterns. Additionally, estimating IMU yaw misalignment by fusing information from automotive onboard sensors with an adaptive Kalman filter can enhance the accuracy of ML models in capturing vehicle dynamics [177].
Table 1. Datasets used in two or more of the selected studies (MQ1-a).
Datasets | References | N° of Studies
Driving cycles | [20,21,22,23,24,25,27,28,29,30,31,33,34,35,36,37,38,39,44,45,49,53,55,57,60,65,66,67,69,70,73,77,78,85,88,89,90,91,94,97,98,102,104,105,113,119,121,129,130,132,133,134,135,136,137,139,141,145,146,149,153,157,160,161,163,165,168,169,173,174] | 70
GPS-based datasets | [21,26,32,34,42,50,51,52,58,63,64,68,72,78,79,81,82,83,87,93,95,97,111,124,125,126,127,131,138,145,147,149,159,166,167,170] | 36
IoT-based datasets | [44,47,56,64,81,85,95,106,107,108,110,117,122,125,126,128,134,136,138,140,144,149,155,172] | 24
Big data-based datasets | [19,41,43,54,75,96,102,107,109,145,147,170] | 12
PeMS | [93,112,116] | 3
NGSIM | [108,142,150] | 3
Some of the datasets are available online for researchers, such as: MnDOT IRIS (http://data.dot.state.mn.us, (accessed on 20 March 2021)), a dataset of real-world driving to assess driver workload (https://www.hcilab.org/research/hcilab-driving-dataset, (accessed on 22 March 2021)), a driving dataset for EVs powered by intelligently managed supercapacitor-battery systems (http://www.chargecar.org/data, (accessed on 20 March 2021)), the Peachtree Street dataset (https://www.webpages.uidaho.edu/ngsim/ATL/ATL.htm, (accessed on 21 March 2021)), and a dataset of the Tesla Model S collected from the open-source German website Spritmonitor (https://www.spritmonitor.de/, (accessed on 20 March 2021)) using web scraping, a simple technique for extracting data and information (images, tables, etc.) from any website available on the World Wide Web. The collected datasets are ready to use without the need for complex coding to convert HTML data into Excel, XML, CSV, or JSON formats.
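The following sketch illustrates the web-scraping idea mentioned above: fetch an HTML page, extract the rows of a table, and export them to CSV. The URL and the page structure are hypothetical placeholders; a real scrape of a site such as Spritmonitor would need its actual markup and must respect its terms of use.

```python
import csv
import requests
from bs4 import BeautifulSoup

# Hedged web-scraping sketch: download a page, parse the first HTML table,
# and save its rows to CSV. The URL below is a placeholder, not a real dataset.
url = "https://example.com/ev-consumption-table"
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

rows = []
table = soup.find("table")                      # assumes one data table on the page
if table is not None:
    for tr in table.find_all("tr"):
        cells = [td.get_text(strip=True) for td in tr.find_all(["th", "td"])]
        if cells:
            rows.append(cells)

with open("scraped_consumption.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
print(f"saved {len(rows)} rows")
```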
Other datasets are private and not accessible to the public, such as the data provided by Google Maps, due to the privacy of vehicle information. The identified datasets have different natures. For instance:
  • Traffic flow data [127,153], which is a series of data points, indexed in time order, gathered from different sources and applications such as Nokia Here [19] and Google maps [54].
  • Weather data [54] gathered from different Application Programming Interfaces (API) to collect weather-related data such as temperature, sunset, speed, and direction.
  • Sensors data (i.e., cameras, radar) [106,108,149].
  • Geographic data retrieved from geographic information systems (GIS) [107], GPS, and OpenStreetMap (OSM) [83].
In ML, the priority is not to choose the best algorithm, but to collect enough data. Indeed, data are crucial for testing, validating, and monitoring ML algorithms, with the objective of building a predictive model. However, even when sufficient, real-world data are often incomplete and inconsistent and hence must be supplemented and improved; otherwise, ML models will fail. Moreover, the low availability of data is still a major gap.

3.2.2. How Are Datasets Prepared?

For ML algorithms to be properly trained and provide the expected results, the data used must be clean, accurate, and complete. Thus, it is also necessary to understand the data on which ML operates. A classification of data preprocessing tasks is proposed in [178]: data integration, data transformation, data reduction, and data cleaning. Figure 3 shows the distribution of these preprocessing tasks in the selected studies over the years. The total of the percentages in Figure 3 exceeds 100 because some publications study more than one data preprocessing task. As shown in this figure, data preprocessing tasks were little investigated until 2017, and their use has been steadily increasing since 2018, mostly data integration. The slowdown in 2022 is because only three months of the year are considered. Table 2 shows each of the four categories of preprocessing tasks used in the selected studies. The studies not shown in Table 2 do not report any data preprocessing task.
Data integration tasks are the most widely employed, in 67% (105 papers) of the selected studies. The objective is to merge data from many sources into a unified data store. The motivation is that separate data sources are insufficient on their own, while their combination generates information that can be sufficient for ML. For instance, Foiadelli et al. [54] combine the data of six separate databases of various routes from six different regions under different weather conditions. Another way of combining multiple datasets is used in [125], where two datasets are used separately: several prediction models are trained on the first dataset, and the best performing model is then re-trained on the second dataset. However, using data integration to merge these two datasets would give access to a larger amount of training data and help increase accuracy. There are several approaches to address the issue of insufficient data, which constitutes a key obstacle to deploying ML models. Data augmentation is a helpful approach for designing an ML model since it allows researchers to increase the size of the learning data without having to acquire new data [125]. Fusion of multi-source datasets, including historical velocity data and video or image information, can also significantly enhance the training and performance of ML models in optimizing energy consumption [179]. By combining different data sources, the model can capture a more comprehensive understanding of the underlying patterns and relationships within the energy system.
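As a minimal illustration of the data-integration step, the sketch below merges two hypothetical sources (GPS trip summaries and weather records) into one training table keyed on a trip identifier; all column names and values are invented.

```python
import pandas as pd

# Data-integration sketch: join trip-level GPS summaries with weather records
# on a shared trip identifier to obtain a single table for ML training.
gps = pd.DataFrame({
    "trip_id": [1, 2, 3],
    "mean_speed_kmh": [34.2, 61.5, 27.8],
    "distance_km": [5.1, 18.3, 3.9],
})
weather = pd.DataFrame({
    "trip_id": [1, 2, 3],
    "temperature_c": [21.0, 4.5, 30.2],
    "humidity_pct": [55, 80, 40],
})

merged = gps.merge(weather, on="trip_id", how="inner")   # unified dataset
print(merged)
```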
Concerning data transformation, Figure 3 shows that it is used in 29% (46 papers) of the selected studies. Most of these studies use feature scaling (also known as data normalization) with techniques such as min–max scaling. Feature scaling has various advantages [164]: (1) it prevents features with higher numeric values from dominating those with lower numeric values, and (2) it avoids numerical complications throughout the calculation. It is recommended that each attribute be linearly scaled to the range [−1, +1] or [0, +1]. Moreover, some ML algorithms are sensitive to feature scaling since they exploit distances or similarities between data samples. The ML algorithms that require feature scaling are mostly artificial neural networks (ANN), support vector machines (SVM), and k-nearest neighbors (KNN) [180,181]. On the other hand, the majority of studies ignore the importance of feature scaling and miss the opportunity for better training that would increase the performance of ML models. Only four studies [121,136,141,145] do not mention which techniques were used for normalization. Most of the selected studies, however, do not use data transformation at all, even though it is essential for an accurate dataset, since it can improve the convergence rate and eliminate the negative influence of one factor over another (i.e., give features equal chances) [160].
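The following sketch shows min–max scaling of two illustrative features to the recommended [0, 1] range with scikit-learn; the feature values are invented, and the fitted scaler would be reused on new samples at inference time.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Min-max feature scaling sketch for distance-based learners (ANN, SVM, KNN).
# Columns: speed in km/h and battery temperature in degrees Celsius (illustrative).
X = np.array([[30.0, 15.0],
              [90.0, 25.0],
              [120.0, 35.0]])

scaler = MinMaxScaler(feature_range=(0.0, 1.0))
X_scaled = scaler.fit_transform(X)          # each column mapped to [0, 1]
print(X_scaled)
# At inference time, reuse the fitted scaler: scaler.transform(new_samples)
```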
Concerning data reduction, Figure 3 shows that it is used in 18% (29 papers) of the selected studies. Of these studies, [83,94,121] apply feature selection using different types of methods, [95,117,120] use feature extraction, and only one study [115] uses the Pearson correlation coefficient.
Concerning data cleaning, Figure 3 shows that it is used in only 12% (19 papers) of the selected studies. The objective is to remove incorrect, corrupted, duplicate, outlier, or incomplete information from a dataset. This result raises the question of how studies ensure good quality of information. Data cleaning is especially relevant for studies that deal with data from different sources and manipulate IoT or big data, where the transmitted information is exposed to many noise sources, such as the absence or failure of a sensor over a certain period of time, the weather, data stability, etc. [116,120]. As shown in [182,183], an interesting process that helps minimize the impact of noise and bias in raw vehicle data and provides more reliable estimates of the true state variables of the vehicle’s motion is the Kalman filter. Based on this process, the authors of [182] proposed an autonomous vehicle sideslip angle estimation algorithm based on consensus and vehicle kinematics/dynamics synthesis. The results confirm that the proposed method is accurate in various automated driving conditions.
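As a hedged illustration of this cleaning step, the sketch below applies a minimal one-dimensional Kalman filter (random-walk model) to a synthetic noisy speed signal; the noise levels and the signal itself are assumptions, and published estimators such as the sideslip-angle observer of [182] use much richer vehicle models.

```python
import numpy as np

# 1-D Kalman filter sketch: smooth a noisy vehicle-speed signal under a
# simple random-walk model. Signal and noise parameters are illustrative.
rng = np.random.default_rng(1)
true_speed = np.linspace(0, 20, 200)                # m/s, slowly accelerating
measured = true_speed + rng.normal(0, 1.5, 200)     # noisy sensor readings

q, r = 0.05, 1.5 ** 2        # assumed process and measurement noise variances
x, p = measured[0], 1.0      # state estimate and its variance
filtered = []
for z in measured:
    p += q                               # predict (random-walk model)
    k = p / (p + r)                      # Kalman gain
    x += k * (z - x)                     # update with measurement z
    p *= (1 - k)
    filtered.append(x)

print(f"raw RMSE:      {np.sqrt(np.mean((measured - true_speed) ** 2)):.3f}")
print(f"filtered RMSE: {np.sqrt(np.mean((np.array(filtered) - true_speed) ** 2)):.3f}")
```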
Furthermore, for classification problems, data may be unbalanced, i.e., the dataset contains a minority class. In such a case, the model may try to fit the majority class and then provide a biased prediction, which may result in a false sense of accuracy (misleading accuracy). However, only one study [152] deals with the data-balancing task, using a proposed anchor (baseline)-based strategy. The authors of [152] find that resolving the unbalanced distribution of the data achieves high performance. Moreover, to deal with unbalanced datasets, many class-rebalancing techniques are used in the literature in other fields, such as over-sampling, under-sampling, and the synthetic minority oversampling technique (SMOTE) [184,185].
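The sketch below shows class rebalancing with SMOTE using the imbalanced-learn package on a synthetic two-feature dataset with a 90/10 class split; the data and split are assumptions for illustration only.

```python
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE   # requires the imbalanced-learn package

# SMOTE rebalancing sketch: synthesize minority-class samples so both classes
# have equal counts before training a classifier.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
y = np.array([0] * 180 + [1] * 20)          # heavily imbalanced labels

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("before:", Counter(y))                # {0: 180, 1: 20}
print("after: ", Counter(y_res))            # {0: 180, 1: 180}
```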

3.3. Features Adopted to Build the Models

The aim here is to identify the features (i.e., context variables) used to improve ECEV. The various factors that influence ECEV should be controlled and taken into consideration. In order to organize knowledge and facilitate the review results, it is more suitable to group all available features into categories according to their similarity and shared characteristics. Thus, all features are classified in a way that helps in predicting the energy consumption of an EV when travelling on a road. Five categories of context variables are identified, which implies a five-dimensional vector space. These context variables are the main contributors to the energy consumed [99,114]:
  • Vehicle context refers to vehicle characteristics that have a significant impact on energy consumption, such as battery state of health, battery state of charge, battery capacity, motor maximum power, vehicle mass, etc.
  • Weather context contains weather indicators, such as temperature and humidity, that influence the amount of energy consumed.
  • Traffic context quantifies the state of traffic on a road segment within a given time frame, such as traffic flow information, distance occupancy, time band, weekday, etc.
  • Road segment context refers to the properties of the road segment that impact energy consumption in a direct or indirect way, such as traffic lights, speed limit, number of lanes, etc.
  • Driver profile context refers to the driver’s habits, physical condition, and personality. It also refers to the driver’s attitude (e.g., more or less aggressive driving).
Figure 4 presents the evolution over the years of the context variables (features) studied in the selected papers. The figure shows that traffic context is the most widely used (85%), followed in order by the other contexts: road (71%), vehicle (56%), weather (24%), and driver profile (17%). The total of the percentages in Figure 4 exceeds 100 because some publications study more than one context. Moreover, the figure shows that contexts were little investigated until 2016 and their use has been steadily increasing since 2017. Traffic and road segment contexts arouse the most interest, followed by vehicle context. The slowdown in 2022 is because only three months of the year are considered. Table 3 shows which selected studies use each of the five contexts. The studies not shown in Table 3 do not provide any information about the context type.
In the selected studies, speed stands out as a principal factor for optimizing the energy consumption of the vehicle. It can be exploited in two ways. First, as a feature used as input to an ML algorithm: the speed profile along the selected route is an important factor, since it influences the required propulsion energy and the travel time, which determine the necessary energy. Other features can be derived from the speed, such as acceleration, deceleration, average speed, speed standard deviation, average acceleration, and average deceleration [87], which help to train ML algorithms to optimize energy consumption. Second, speed can be exploited as the output of the predictive ML model: the speed is then the final result to be predicted, and controlling the vehicle speed accordingly leads to better energy consumption. The mapping between the features and the vehicle speed is highly nonlinear and complex. Moreover, the challenge is to choose the minimum information available alongside speed, taking into consideration all the provided features, to obtain an accurate predictive model. However, researchers still do not have a clear view of the exact and sufficient set of features that should be fed to ML algorithms with the objective of minimizing energy consumption.
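As a small illustration of deriving extra features from a raw speed trace, the sketch below computes acceleration and a few aggregate statistics with pandas; the 1 Hz speed values are invented.

```python
import pandas as pd

# Feature-derivation sketch: compute acceleration and aggregate statistics
# from a raw (illustrative) 1 Hz speed trace, to be used as ML inputs.
trace = pd.DataFrame({"speed_ms": [0.0, 1.2, 2.8, 4.1, 4.0, 3.2, 1.5, 0.0]})
trace["accel_ms2"] = trace["speed_ms"].diff().fillna(0.0)   # per-second acceleration

features = {
    "avg_speed": trace["speed_ms"].mean(),
    "speed_std": trace["speed_ms"].std(),
    "avg_accel": trace.loc[trace["accel_ms2"] > 0, "accel_ms2"].mean(),
    "avg_decel": trace.loc[trace["accel_ms2"] < 0, "accel_ms2"].mean(),
}
print(features)
```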
Thus, several studies [96,111] build their ML models with more than ten features as inputs, without prioritizing the features and keeping only the most essential ones. The authors of [96] propose an ML-based method for predicting ECEV under real-world driving circumstances. The test results show that a root mean square error (RMSE) of 0.159 kWh and a mean absolute percentage error (MAPE) of 12.68% are obtained under real-world driving circumstances. Moreover, the selection of the inputs becomes more important when the number of features is very large [9]. Globally, not all features may be needed to create a predictive model; only those that are important and have a direct influence on the output should be used. On top of that, selecting the right features enables ML algorithms to train faster, reduces the complexity of the model, increases its accuracy, and reduces overfitting. To this end, the relevant features to use as inputs should be identified using feature selection methods and feature engineering. If possible, it is therefore relevant to find an optimal choice of the features.
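The following sketch illustrates one possible feature selection step: a random forest ranks the inputs and only the more informative half is kept before training the final predictor. The synthetic regression data stand in for real trip and consumption features.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel

# Feature-selection sketch: keep only features whose random-forest importance
# is above the median, reducing the input dimensionality before modeling.
X, y = make_regression(n_samples=500, n_features=12, n_informative=4, random_state=0)

selector = SelectFromModel(
    RandomForestRegressor(n_estimators=200, random_state=0),
    threshold="median",
).fit(X, y)
X_reduced = selector.transform(X)

print("kept feature indices:", np.where(selector.get_support())[0])
print("shape before/after: ", X.shape, X_reduced.shape)
```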

3.4. Types of Roads Investigated

MQ3 is about the types of roads that have been studied in ECEV. From our observations, energy optimization using ML can be studied at three different levels of routes:
  • Highway: provides high travel speeds and typically involves long trips (e.g., a few hours, a few hundred km).
  • Rural: offers high mobility but a low degree of access and typically involves medium-length trips (e.g., some minutes, a few km).
  • Urban (local): offers a high degree of access but lower mobility and typically involves short trips (e.g., a few hundred seconds, a few km).
Figure 5 presents the evolution over the years of the use of the three categories of road types in the selected papers. The figure shows that 81% (127 papers) of the selected studies use “urban” roads to test ML models, followed by “highway” with 52% (82 papers), and “rural” with 33% (52 papers). The total of the percentages in Figure 5 exceeds 100 because some publications study more than one road type. Moreover, the figure shows that few studies investigated road types until 2016, and from 2017 there has been continuously growing interest in studying road types. Urban roads attracted the most marked interest in 2021. The slowdown in 2022 is because only three months of the year are considered. Table 4 presents the three categories of roads used in the selected studies. The studies not shown in Table 4 do not mention the road type used in their experiments.
Distinguishing between highway, urban, and rural consumption is justified by the fact that each road type requires a distinct way of driving [186]. In the city, trips are shorter: home to work, driving the kids to school, going to the supermarket or the mall, etc. These trips involve repeated stops at traffic lights, stop signs, pedestrian crossings, school crosswalks, and shops, as well as road incidents and traffic jams. Even when moving, driving is performed in a transient mode, with accelerating, decelerating, and braking. On the highway, by contrast, longer journeys without regular stops are the pattern, which means vehicles run more often at a steady speed. Rural roads are less investigated due to the lack of infrastructure in rural areas compared to urban areas. Furthermore, the effects of the road type on rolling resistance, and therefore on energy consumption, are not well established yet. The authors of [187] demonstrate that the energy consumption throughout a driving cycle is directly affected by the accelerating stages, and an EV consumes less energy when driven more slowly [188].

3.5. ML Algorithms, Configuration and Software Tools Used

3.5.1. ML-Based Algorithms Used and How They Are Tuned

Many algorithms have been used in the literature to generate models that help in minimizing ECEV. The aim of this section is to identify those algorithms that are ML-based and determine how they are tuned. The identified algorithms are presented in Table A2 (in Appendix A), while Figure 6 shows the distribution of categories of the algorithms at three levels. Purely statistical algorithms are also considered in Figure 6 and Table A2, because they are used in studies that also use ML algorithms and compare the two categories (ML, statistical). The total percentage in each of the three levels of Figure 6 exceeds 100%, because some publications study more than one category of that level.
Statistical algorithms adopt a predefined form of function that relates the independent variables to the dependent variables. Only 17% (27 out of 156 studies) use statistical algorithms, among which the Markov chain (MC) is used in 12 empirical studies. Because MC generally does not use many inputs, it does not produce strong prediction results compared to ML models [125].
In [125,127], ML and statistical algorithms are considered and compared for speed prediction of hybrid EVs. In general, ML algorithms provide more accurate predictions, while statistical algorithms can provide more certain information about the prediction results. The authors of [127] demonstrate that the MLP performs efficiently as a prediction method compared to other algorithms such as the auto-regressive (AR) model. The authors of [125] find that the long short-term memory (LSTM) deep neural network shows the best performance among auto-regressive moving average (ARMA), nonlinear auto-regressive eXternal model (NARX), LSTM, MC, and conditional linear Gaussian models. Compared to ML and deep learning (DL), the auto-regressive integrated moving average (ARIMA) requires less historical data to make predictions with high accuracy and low complexity [93]. However, it does not give good results for vehicle speed prediction [125]. Statistical algorithms are easy to understand and fast to learn; however, they are considered black-box solutions and highly dependent on the dataset [93,116].
The use of ML algorithms in the 156 selected studies is motivated by the difficulty of handling complex data and the ability of ML to perform better in various fields such as face recognition [189], e-health [190], and software engineering [11]. Let us consider the three categories of ML algorithms [9] (SL, UL, RL) introduced in Section 1.3.
Supervised learning (SL): Of the 156 selected studies, 106 (68%) use SL algorithms. Three main types of SL algorithms are identified: individual ML algorithms, ensemble ML algorithms, and DL algorithms.
Individual ML algorithms (IMLs) are the most widely used, appearing in 86 studies (81% of 106), due to their high performance in several ECEV studies [54,114,115,127]. Among these 86 studies, ANN algorithms are the most used, with 58 studies. ANN algorithms have gained great attention since they are exploited to design energy management strategies (EMSs) based on ML [141]. The ANN is the approach that most closely resembles the behavior of a human neuron, since it is inspired by the way the human brain processes information [91]. Generally, the selection of input and output variables for an ANN significantly affects the network performance and its utilization/generalization [87]. For instance, Vatanparvar et al. [86] implemented a driving behavior model using an ANN based on historical information about drivers’ reactions and route speed from Google Maps. The results show that the proposed ANN model achieves a higher accuracy if the future vehicle speeds are known; in that case, the controller can save up to 82% of the maximum energy and can improve the battery lifetime. Other types of ANN are used in various studies, such as the back propagation neural network (BPNN) and the MLP, in 12 and 10 studies, respectively. The main reasons for using different types of ANN are their ability to deal with large amounts of data and their high-performing results [144]. However, with all these benefits, ANN algorithms are of a “black box” nature, i.e., it is difficult for academics and specialists to understand why ANN algorithms produce the outcomes they do, since the numeric values produced by the ANN are difficult to trace back [87]. After the ANN, the second most widely used individual ML algorithm is the SVM, in 27 studies. The SVM is a well-known ML algorithm used for both classification and regression problems [95]. Its performance has been analyzed in different contexts and proven to be more sensitive to hourly fluctuations, giving better performance [95]. Sixteen studies use decision trees (DT) and fourteen other studies use k-nearest neighbors (KNN).
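As a minimal sketch of the ANN family discussed above, the code below trains a scikit-learn multi-layer perceptron regressor on synthetic data standing in for trip features and an energy-consumption target; the architecture and data are assumptions, not the setup of any cited study.

```python
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# ANN (MLP) regression sketch: scale inputs, train a small network, report RMSE.
X, y = make_regression(n_samples=1000, n_features=8, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    MinMaxScaler(),                                  # ANNs benefit from scaled inputs
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"test RMSE: {rmse:.2f}")
```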
Ensemble ML algorithms (EMLs) are SL algorithms that combine at least two base models in order to obtain more accurate models. The shift towards using ensemble algorithms has emerged recently, since no individual learning algorithm can be considered the best and most accurate in all applications. Thus, combining the results of the individual models generated from training samples helps in reducing the error rate [86]. Only 25 studies (23% of 106) use ensemble algorithms, where random forest (RF) is the most used algorithm, in eight studies, followed by the gradient boosting decision tree (GBDT) in six studies; adaptive boosting (AdaBoost), bagging, and extreme gradient boosting (XGBoost) in three studies each; and the light gradient boosting machine (LightGBM) and stacked generalization in one study each. As is well known, different EML algorithms may have different biases and make different assumptions about the dataset in the training phase. A potential solution is to combine multiple EML algorithms to create a more robust and accurate prediction model. This solution can help increase the stability and robustness of the prediction model and handle complex relationships. Since it aggregates predictions from multiple models, it could also handle noise, outliers, and uncertainties in the data more effectively.
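The sketch below illustrates the combination idea with stacked generalization: a random forest and a gradient boosting model are aggregated by a ridge meta-learner and evaluated by cross-validation; data and hyperparameters are placeholders.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Stacked-generalization sketch: two ensemble base learners, one linear meta-learner.
X, y = make_regression(n_samples=800, n_features=10, noise=10.0, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gbdt", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=Ridge(),        # aggregates the base models' predictions
)
scores = cross_val_score(stack, X, y, cv=5, scoring="neg_root_mean_squared_error")
print(f"stacked RMSE: {-scores.mean():.2f} +/- {scores.std():.2f}")
```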
DL algorithms inherit the benefits of traditional neural networks [119]. They can examine the deeper possible links between historical and future driving data, which is very relevant for predicting short-term vehicle speed [119]. A total of 31 studies (29% of 106) use DL algorithms. LSTM is the most used algorithm in the DL category, in seven studies, followed by recurrent neural networks (RNN) and deep neural networks (DNN) in five studies each. Prediction methods based on DL can add a memory function to their neurons to realize information memory, investigate the characteristics of the data, and learn from multi-level features [119]. For instance, Du et al. [119] propose a DL model using RNN and LSTM algorithms to estimate the future vehicle speed. The results show that LSTM performs well when dealing with the time series produced by vehicle speed data. Xu et al. [88] study game-theoretic energy management with velocity prediction for a hybrid EV combining an engine generator, a battery, and an ultracapacitor. The authors explore and combine the effects of two kinds of DL algorithms on velocity prediction: the RNN is combined with LSTM, which results in a method called RNN-LSTM. The results suggest that RNN-LSTM performs better, with up to a 6.84% decrease in battery power variation and an 8.21% increase in battery usage power compared to the basic game-theoretic strategy without velocity prediction. Moreover, it produces little energy from the engine generator and a higher battery–ultracapacitor average energy difference.
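As a hedged sketch of LSTM-based short-term speed prediction, the code below trains a small Keras LSTM on a synthetic speed trace using a sliding window of 10 past samples; the trace, window length, and network size are assumptions.

```python
import numpy as np
import tensorflow as tf

# LSTM speed-prediction sketch: predict the next speed sample from the last
# 10 samples of a synthetic trace (sinusoid plus noise).
rng = np.random.default_rng(0)
speed = 15 + 5 * np.sin(np.linspace(0, 20, 1000)) + rng.normal(0, 0.3, 1000)

window = 10
X = np.array([speed[i:i + window] for i in range(len(speed) - window)])[..., None]
y = speed[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print("next-step prediction:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```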
Unsupervised learning (UL): Of the 156 selected studies, only 3 (2%) [111,133,156] use UL algorithms. Kretzschmar et al. [111] present an approach using the x-means algorithm to predict the energy loss of an EV in a safe way for urban roads. In fact, the clustering process creates a constantly converging error value. To this end, the authors of [111] create a lightweight and adaptable model that eliminates the need for recurring modelling. Shi et al. [133] propose a new EMS by combining kernel fuzzy C-means clustering (KFCM) and a multi neural network (MNN). The results reveal that the proposed EMS is a quasi-optimal control strategy and achieves good learning ability for dynamic planning. Zhang et al. [156] develop a new EMS using the expectation maximization (EM) clustering algorithm and SVM to train the used dataset. The authors of [156] use cloud computing and the internet of vehicles to solve the conflicts between fuel efficiency optimization and battery state of health preservation, and between global optimality and real-time capability. The results show that the proposed EMS can minimize, in real time, the battery life loss and the total cost of fuel.
Reinforcement learning (RL): Of the 156 selected studies, 51 (33%) use RL algorithms. Han et al. [65] propose a new model using double deep Q-learning (DDQL) to optimize the fuel consumption and compare it to a conventional deep Q-learning-based model. The results show that the suggested method outperforms the conventional deep Q-learning-based strategy and reaches 93.2% of the dynamic programming benchmark. The main benefit of the DDQL algorithm is its good performance in battery state of charge retention with different initial values. He et al. [76] aim to control the driving behavior and reduce the energy consumption by managing the change in acceleration pedal travel at each step of an acceleration process. The authors of [76] train and compare two RL algorithms, Q-learning (QL) and deep Q-learning (DQL). The results suggest that DQL is more stable and may be able to provide a more stable control approach suited for practical use. Xu et al. [105] present an RL-based EMS for a hybrid EV using the QL algorithm. The authors of [105] split the torque between the engine and the electric machine by integrating QL in the EMS. The results suggest that the QL-based EMS can reduce the fuel consumption of the EV, and that increasing the number of states improves fuel economy. Through these studies, the usefulness of RL, which mainly relies on the QL algorithm and its derivatives (deep Q-learning, double Q-learning, etc.), can be understood: it takes into account that the environment changes regularly and learns by assuming that the system is moving optimally.
Figure 7 shows the evolution over the years of the use of ML-based algorithms in the selected papers. ML-based algorithms were little studied until 2016 (19 studies), and from 2017 there has been continuously growing interest in using the different categories of ML-based algorithms. The use of various ML-based algorithms is motivated by the desire to obtain high-performance prediction models. Individual algorithms have always attracted the greatest interest. The decrease in 2022 is because only three months of the year are considered.
Table 5 presents the main advantages and challenges of the different categories of ML algorithms in the ECEV field.
In summary, the state of the art has revealed that electric energy storage systems have very complex properties which, in some studies, might influence the consumption models used [111]. Thus, choosing the best ML algorithms is highly important due to their direct influence on ECEV. This choice may be supported by identifying the strengths and weaknesses of each ML algorithm used for ECEV in terms of complexity, computation time, interpretability, and scalability, which has not been conducted in previous studies. For example, DL algorithms can handle noisy data and outliers to some extent due to their ability to learn complex relationships [9]. This advantage has been demonstrated in other applications, such as detecting tassels in RGB UAV imagery, which leverages the power of DL to achieve accurate object detection [193]. Thus, it could be interesting to combine the insights gained from the above work with the analysis of traffic context to explore the relationship between traffic conditions and energy consumption in electric vehicles. Moreover, the performance of ML algorithms is highly influenced by their hyperparameters; thus, optimizing them is required. Of the selected studies, only 22 use optimization methods: 10 studies use grid search (GS) [43,51,52,71,83,106,141,158,164,165], 5 studies use the genetic algorithm [96,100,115,140,169], 4 studies [36,160,165,173] use particle swarm optimization (PSO), and 2 studies use random search [132,146]. Other optimization methods are less used, such as Bayesian optimization [132], fine-tuning [138], least mean squared (LMS) optimization [162], the lightning search algorithm, backtracking search optimization, and the gravitational search algorithm [160], each one being used in only one study. GS is an optimization method based on searching and evaluating all the possible combinations of parameter values of a given technique; the best combination is retained when fitting on a dataset [141,164]. The genetic algorithm is an optimization and search method that simulates an evolutionary process in order to minimize or maximize a target function [96].
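The sketch below illustrates hyperparameter tuning by grid search with scikit-learn's GridSearchCV on a support vector regressor; the parameter grid and synthetic data are illustrative assumptions.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# Grid-search sketch: evaluate every combination of the parameter grid by
# cross-validation and keep the best one.
X, y = make_regression(n_samples=400, n_features=6, noise=5.0, random_state=0)

grid = GridSearchCV(
    SVR(),
    param_grid={"C": [1, 10, 100], "gamma": ["scale", 0.01, 0.1]},
    cv=5,
    scoring="neg_root_mean_squared_error",
)
grid.fit(X, y)
print("best parameters:", grid.best_params_)
print(f"best CV RMSE:    {-grid.best_score_:.2f}")
```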

3.5.2. Software Tools and Programming Languages

Techniques and tools used to predict EV energy consumption are computationally intensive and require software tools for their use. Figure 8 shows the evolution over the years of the most used software tools and programming languages in the selected papers, while Table 6 presents the tools used in each study. We note that some studies do not report the tool used and are therefore not presented in Figure 8 and Table 6. The most used tool is Matlab®, in 63 of the selected studies (40%). Developed by MathWorks (https://www.mathworks.com/products/matlab.html, (accessed on 13 February 2021)), Matlab® is a simple-to-use platform for developing algorithms and models and analyzing datasets. The second most used tool is Weka (https://www.cs.waikato.ac.nz/ml/weka/, (accessed on 14 February 2021)), used in only two studies, which provides several ML algorithms for data mining, covering tasks such as classification, regression, clustering, association rules for SL, data preparation, and data visualization. From the results, there seem to be few support tools for ECEV algorithms. In general, these tools are limited in their use since they do not offer many choices for the possible parameters of ML algorithms, which could limit the appropriate usage of ML algorithms in the industry. We note that, contrary to research studies (see Figure 8), programming languages are used more than Matlab and Weka in the industry since they offer more choices for constructing ML models. From Figure 8, 18% of the 156 selected studies use software prototypes developed in Python, since it is simple, consistent, and supported by open-source ML libraries such as TensorFlow, Keras, and scikit-learn; two studies use C++, one study uses C#, and one study uses the R language.
We can also notice from Figure 8 that, until 2016, few studies (only 11) indicated the software or language used. Since 2017, Matlab and Python have seen by far the largest and fastest-growing use. The decrease in 2022 is because only three months of the year are considered.
Other tools are used for simulations to accelerate the vehicle configurations and model vehicle powertrains, such as the Powertrain Systems Analysis Toolkit (PSAT) in [90,91,94]. Traffic simulation software is used to simulate complex vehicle interactions and handle large networks, such as the microscopic traffic simulator SUMO in [122,128,143,172] and the traffic simulation software VISSIM in [117]. To analyze and simulate network communication, the Vector CANalyzer software tool is used in [104,130].

3.6. Evaluation Measures and Validation Methods

To determine how accurate the ML algorithms are, a set of 32 evaluation measures is used in the studies to evaluate the ML algorithms’ performances. Figure 9 presents the 6 evaluation measures most used over the years, where the total of percentages exceeds 100 because some publications use more than 1 evaluation measure, while Table 7 presents which selected studies use each of the 6 evaluation measures. Note that Table 7 includes only the evaluation measures that are used in more than five studies. The root mean square error (RMSE), which is a measure of precision based on sample errors [32], is the most frequently used, with 41% (64 studies). RMSE is followed by the mean absolute error (MAE) with 22% (35 studies), the mean square error (MSE) with 16% (25 studies), the mean absolute percentage error (MAPE) with 15% (23 studies), the coefficient of determination (R²) with 5% (12 studies), and the correlation coefficient (R) with 5% (8 studies). Additionally, as can be seen from Figure 9, evaluation measures were little studied until 2016, and from 2017 there has been continuously growing interest in studying evaluation measures, mostly RMSE. The decrease in 2022 is because only three months of the year are considered.
The same error indicator underlies MAE and MAPE, and likewise RMSE and MSE share the same underlying indicator [196]. However, most studies use RMSE and MAPE rather than MSE and MAE, respectively, to reduce the amount of calculation [95,107]. A lower value of MAE, MAPE, RMSE, and MSE implies higher accuracy of the associated model [160,172], whereas a higher value of R and R² is considered desirable [170]. Other evaluation measures are less used, such as accuracy and max absolute relative error (MaxARE), each being used in only two studies. From the analysis of the results, to select the most appropriate evaluation measure for each context, we may consider several elements such as the objective of each ML algorithm (whether it is a regression problem, a classification problem, etc.) and the characteristics of the datasets, especially for the classification task, where the dataset can be either balanced or unbalanced.
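For reference, the standard definitions of these measures are given below, with $y_i$ the observed value, $\hat{y}_i$ the predicted value, $\bar{y}$ the mean of the observed values, and $n$ the number of samples:

```latex
\begin{align*}
\mathrm{MAE}  &= \frac{1}{n}\sum_{i=1}^{n} \lvert y_i - \hat{y}_i \rvert, &
\mathrm{MAPE} &= \frac{100\%}{n}\sum_{i=1}^{n} \left\lvert \frac{y_i - \hat{y}_i}{y_i} \right\rvert, \\
\mathrm{MSE}  &= \frac{1}{n}\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^{2}, &
\mathrm{RMSE} &= \sqrt{\mathrm{MSE}}, \\
R^{2} &= 1 - \frac{\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^{2}}{\sum_{i=1}^{n} \left( y_i - \bar{y} \right)^{2}}.
\end{align*}
```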
Concerning the validation methods, 25 of the selected studies use them to assess the prediction accuracy of ECEV models. As can be seen in Figure 10, the generic cross-validation method is the most frequently used, in 15 studies [36,47,48,49,64,71,102,106,109,111,115,120,121,160,164], followed by 10-fold cross-validation in 10 studies [41,43,51,52,83,103,141,147,165,170], 5-fold cross-validation in 6 studies [91,94,108,152,158,167], and 3-fold cross-validation in 3 studies [92,93,152]. Less studied cross-validation methods are not shown in Figure 10, such as 1-2-4-fold cross-validation [152], 6-fold cross-validation [137], 8-fold cross-validation [146], 9-fold cross-validation [122], and leave-one-out cross-validation [83], each appearing in only one study. Cross-validation methods are used for the estimation of the generalization error [94]. Cross-validation has the advantage of yielding more robust results: it makes full use of the data, i.e., all available data are used both for training and testing. Hence, the diversity and adequacy of the evaluation are achieved [43].
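As an illustration of k-fold cross-validation in this setting, the following sketch evaluates a regression model with 10 folds using Scikit-learn; the synthetic data and the choice of SVR are assumptions for illustration only.

```python
# Minimal sketch of 10-fold cross-validation for an ECEV-style regression model.
# The synthetic dataset and the SVR model are illustrative assumptions.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 4))                               # hypothetical trip features
y = X @ np.array([2.0, -1.0, 0.5, 3.0]) + rng.normal(0, 0.1, 500)  # synthetic energy target

cv = KFold(n_splits=10, shuffle=True, random_state=1)
scores = cross_val_score(SVR(kernel="rbf"), X, y,
                         scoring="neg_root_mean_squared_error", cv=cv)
print("RMSE per fold:", np.round(-scores, 3))
print(f"Mean RMSE over folds: {-scores.mean():.3f}")
```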

3.7. Resolution Approaches and Training Architectures

The aim is to determine and assess the resolution approaches and training architectures used in the selected studies. Figure 11 shows the evolution over the years of the two most used resolution approaches in the selected papers, ML-based speed profile prediction and ML-based EMS, while Table 8 shows the resolution approaches used in each of the 156 selected studies. A total of 32% of the 156 selected studies (50 papers) use ML to develop an EMS (ML-based EMS), while 22% of the selected studies (35 papers) address energy management for EVs by using ML to predict the speed profile, which is the factor that most influences EV energy consumption. ML-based speed profile prediction has been studied since 2008, but by few papers until 2016. From 2017, both approaches are used. We note a decrease in the use of ML-based speed profile prediction since 2020, compared to ML-based EMS, whose use increases in 2021. This increase can be explained by the interest of ML-based ECEV researchers in not only predicting the speed profile, but also integrating the predicted speed into an EMS to obtain the energy consumed. The decrease in 2022 is because only the first three months of the year are considered. The remaining 46% of studies use ML to optimize energy through different aspects, such as state of charge prediction [154] or travel-time prediction [164], without presenting a well-defined energy management strategy.
Speed prediction is the topic where researchers most applied ML algorithms to EVs until 2019. For instance, Chen et al. [98] propose an EMS for hybrid EVs that structures the energy management algorithm into two levels for optimal energy distribution. They propose and evaluate a neural network model based on radial basis functions over six driving cycle datasets using the RMSE evaluation measure. The results demonstrate that the proposed EMS may enhance fuel efficiency and increase the accuracy of speed prediction in hybrid EVs. Liu et al. [125] apply several deterministic and stochastic approaches to hybrid EV speed prediction; the results suggest that LSTM performs best in terms of the MAE measure. An improved study published later by the same authors [126] integrates the best model presented in [125] into their EMS to increase fuel economy and minimize energy consumption for hybrid EVs. Most ML-based speed profile studies address a regression time series problem, i.e., they predict the speed in a continuous domain, hence among an infinite number of possible values. A complementary study that may be promising is to investigate the use of ML algorithms for classification problems for fully EVs, where the objective is to predict the category (or class) of the speed in a finite (hence discrete) domain. Recently, researchers began to investigate this issue. For instance, the authors of [141] propose a driving mode predictor (DMP)-based EMS which automatically computes, in real time, the power to draw from the battery and the supercapacitor, as the two energy sources of the EV, based on the speed history. They develop an ML-based method to design a DMP that predicts the driving mode in real time considering four classes: low, medium, high, and extra-high. The obtained DMP is integrated into an EMS and validated. The results show that the proposed DMP-based EMS achieves up to a 39% reduction in current fluctuation and an 86% decrease in peak current during a critical period over a real-world driving cycle. Thus, the proposed DMP-based EMS is a good choice to extend battery lifespan for EVs.
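To illustrate the ML-based speed profile prediction approach discussed above, the following sketch trains a small LSTM network to predict the next speed sample from a sliding window of past samples. The synthetic driving-cycle-like signal, window length, and network size are assumptions for illustration only, not a reproduction of any cited model.

```python
# Illustrative LSTM speed-profile predictor: given the last 10 speed samples,
# predict the next one. Data, window length, and network size are assumptions.
import numpy as np
import tensorflow as tf

# Synthetic driving-cycle-like speed signal (m/s)
t = np.arange(2000)
speed = 15 + 8 * np.sin(2 * np.pi * t / 300) + np.random.default_rng(2).normal(0, 0.5, t.size)

window = 10
X = np.array([speed[i:i + window] for i in range(speed.size - window)])[..., None]
y = speed[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

next_speed = float(model.predict(X[-1:], verbose=0)[0, 0])
print(f"Predicted next speed sample: {next_speed:.2f} m/s")
```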
Since 2020, there has been a considerable rise in research interest in ML-based EMS to improve the performance of EMSs for EVs. The approach consists of combining ML and optimization methods by applying learning procedures in which the predictions and the energy consumption are jointly optimized. The authors of [29] introduce an ML-based EMS for EVs: the TD3 algorithm is used to create an intelligent EMS, integrated with a heuristic rule-based local controller to avoid excessive torque allocations to the electric motors while considering battery charging characteristics. In [37], the authors propose an ML-based EMS with a hierarchical structure for the DQL algorithm to obtain the optimal energy management solution; the experimental findings suggest that the proposed ML-based EMS consumes less energy than existing RL models. The authors of [27] propose a self-supervised reinforcement learning-based EMS for hybrid EVs based on a DQL algorithm. The results show that the proposed RL-based EMS achieves faster training convergence and lower fuel consumption compared to the traditional approach; moreover, the authors found that the fuel economy of the proposed model may reach a global optimum under their new driving cycle.
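The reinforcement-learning EMSs above all revolve around learning a value function over states and power-split actions. The following is a highly simplified, hedged sketch of a tabular Q-learning update for a power-split decision; the discretized states, the toy reward, and the random transitions are assumptions for illustration, not the algorithms of the cited works (which use deep networks such as DQL, DDPG, or TD3).

```python
# Schematic tabular Q-learning update for an EMS-like power-split problem.
# States (discretized SoC, power demand), actions (battery share of the demand),
# reward, and transitions are simplified, illustrative assumptions.
import numpy as np

n_soc, n_demand = 10, 10                 # discretized battery SoC and power demand levels
actions = np.linspace(0.0, 1.0, 5)       # share of the demand taken from the battery
Q = np.zeros((n_soc, n_demand, actions.size))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(3)

def step(soc, demand, a):
    """Toy environment: the reward penalizes power drawn from the secondary source
    and SoC excursions; the next state is a random placeholder."""
    reward = -(1 - actions[a]) * demand - 0.1 * abs(soc - n_soc // 2)
    return rng.integers(n_soc), rng.integers(n_demand), reward

soc, demand = n_soc // 2, 0
for _ in range(10000):
    # epsilon-greedy action selection
    a = rng.integers(actions.size) if rng.random() < eps else int(np.argmax(Q[soc, demand]))
    next_soc, next_demand, r = step(soc, demand, a)
    # Q-learning temporal-difference update
    Q[soc, demand, a] += alpha * (r + gamma * Q[next_soc, next_demand].max() - Q[soc, demand, a])
    soc, demand = next_soc, next_demand

print("Greedy battery share per (SoC, demand) state:")
print(actions[np.argmax(Q, axis=2)])
```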
Traditionally, ML is performed offline, which is termed offline learning. Figure 12 presents the distribution of offline and online architectures over the years in the selected ML-based ECEV studies, and Table 9 lists which selected studies use offline and online training architectures. A total of 26% of the selected studies use offline ML algorithms to minimize battery degradation and energy consumption, especially for hybrid vehicles, while only two studies (3%) use online learning [106,142]. The remaining 71% of studies do not indicate which training architecture they use. Moreover, the offline architecture has been used since 2008, but its use has only grown since 2018. The online architecture has been used since 2019, by a few studies (one per year). We cannot draw any conclusion from the decrease in 2022, as only the first three months of the year are considered.
In offline learning, the prediction model is constructed once, during the training phase, and cannot be updated with new data. Therefore, over time, when the model becomes out of date and no longer performs as it should, it becomes necessary to re-train it with more or newer data and then update the system with the new model. The authors of [94] present a vehicle power controller based on an offline ML algorithm that builds a knowledge base of optimal control settings from 11 driving cycles; the proposed model performs within 0.15% of the optimal dynamic programming solution. The authors of [107] propose an offline approach based on vehicle speed prediction using statistical and ML algorithms; the results suggest that BPNN performs better than SVM, with an accuracy of about 95%. Jiang and Fei [122] present an offline two-level approach for motorway vehicle speed prediction using two algorithms, NN and Hidden Markov. At the first level, the proposed model is trained using historical traffic data and vehicle speed information. At the second level, the Hidden Markov algorithm predicts the speed using the traffic predictions estimated by the NNs at the first level.
However, predictors based on offline ML algorithms have several drawbacks [106,142]: (1) their performance depends heavily on the initial dataset used for training, and the predictive model remains fixed even when the driving conditions of the EV change; (2) they produce a single, unchangeable model built from all the available data; and (3) their performance under any new driving conditions must be reassessed and tested. Moreover, offline ML models cannot adapt to changing driving situations and novel conditions because they lack an updating mechanism, which results in poor performance [106,142].
In light of the aforementioned research gaps, and for more adaptability to novel driving conditions and environments, an online ML model can be a relevant solution. Instead of designing the predictor once and for all from the initially available data (as in offline ML), online ML consists of regularly updating the predictor with the new data collected from the vehicle. Intuitively, the more the vehicle is used, the better the predictor understands the driving conditions and the better it can adapt to its evolving environment.
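A minimal sketch of this online-learning idea is given below, using incremental updates of a linear model with Scikit-learn's partial_fit; the streaming batches and features are illustrative assumptions.

```python
# Minimal sketch of online learning: the predictor is updated incrementally as
# new driving data arrive, instead of being trained once offline.
# The streaming data and features are illustrative assumptions.
import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(learning_rate="constant", eta0=0.01, random_state=0)
rng = np.random.default_rng(4)

for trip in range(100):                        # e.g., one update per completed trip
    X_new = rng.uniform(0, 1, size=(32, 4))    # new trip features from the vehicle
    y_new = X_new @ np.array([1.5, -0.5, 2.0, 0.3]) + rng.normal(0, 0.05, 32)
    model.partial_fit(X_new, y_new)            # incremental update, no full re-training

print("Current model coefficients:", np.round(model.coef_, 2))
```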

4. Current Trends, Future Perspectives, and Validity

4.1. Current Trends

A set of 156 ML-based studies of ECEV has been rigorously selected and analyzed using an SMS. The search was conducted in six digital libraries. The analysis of the 156 papers answers a set of six MQs, whose answers reveal the following trends (discussed in detail in Section 3):
  • Several datasets are used in the literature, especially traffic flow data, weather data, sensor data, and geographic data. For data preprocessing, four main tasks are identified, where data integration is the most widely used data preprocessing technique, followed by data transformation.
  • The features used as inputs for ML-based algorithms are classified into five categories of context variables. The most widely used context is that of traffic, followed by road segment context, then vehicle context, weather context, and driver profile context.
  • Most of the selected studies use “urban” roads to experiment with ML models, followed by “highway” and “rural”.
  • ML algorithms are used more than statistical ones for ECEV. Most ML algorithms use SL (68%), while RL and UL are used in only 33% and 2% of studies, respectively. Regarding software tools, MATLAB® is the most widely used tool for ECEV; however, some studies use software prototypes developed in Python and based on open-source libraries of ML algorithms.
  • RMSE is the most frequently used evaluation measure, followed by MAE, and MAPE.
  • Many of the selected studies do not present a well-defined EMS, and few studies use an ML-based EMS or ML-based speed profile prediction. Most of the selected studies deal with hybrid EVs, while few deal with fully EVs. Only five studies use an online architecture for ML, 26% use an offline architecture, and the remaining studies do not mention the architecture used.
In our answers to the six MQs in Section 3, we have illustrated in nine figures the evolution over the years of various elements. Table 10 presents the most used elements over the years based on our answers to the MQs. The nine columns of Table 10 synthesize information from the nine figures, respectively. The nine columns (each identified by the corresponding MQ and figure) show, respectively, the elements most used or studied over the years for the subjects of the following list:
  • Datasets investigated (MQ1-a, Figure 2);
  • Data preprocessing tasks investigated (MQ1-b, Figure 3);
  • Context variables investigated (MQ2, Figure 4);
  • Road types investigated (MQ3, Figure 5);
  • ML-based algorithms investigated (MQ4-a, Figure 7);
  • Software tools and programming languages used (MQ4-b, Figure 8);
  • Evaluation measures investigated (MQ5, Figure 9);
  • Resolution approaches investigated (MQ6-a, Figure 11);
  • Training architectures investigated (MQ6-b, Figure 12).

4.2. Future Perspectives

The way this review analyzes the papers, by answering relevant MQs, helps identify the main gaps in the literature of a specific area, thereby making the available evidence more accessible to decision-makers and researchers. From our own perspective, after analyzing the 156 selected papers, we suggest exploring the following avenues:
  • Search for appropriate datasets (driving cycles, real data, etc.): researchers should focus on expanding the available datasets by incorporating real-time data from connected vehicles and advanced sensors.
  • Reliable and high-quality data are crucial for ML analysis. The quality of the data used directly affects the performance and accuracy of ML models. So, improving the dataset using data preprocessing techniques such as data transformation, data augmentation, data cleaning, and data integration is very important and should enhance the accuracy and relevance of ML models.
  • Explore the integration of multiple context variables, such as traffic, road segment, vehicle, weather, and driver profile information, to create comprehensive ML models and to select the right features to quickly generate a prediction model with a good accuracy and reduced complexity and overfitting.
  • Train the proposed models in different road types to generate results that hold for various categories of routes.
  • Explore and evaluate advanced ML algorithms, including DL and EML algorithms, and compare them with other ML algorithms to ensure selecting the appropriate models with the best performance.
  • Integrate ML algorithms with optimization techniques to tune their parameters and generate more accurate training models with low complexity and computation time, and high interpretability and scalability (a minimal sketch is given at the end of this subsection).
  • Develop and select suitable and robust evaluation measures that capture specific ECEV requirements and challenges, and validation methods based on the problem and the type of the dataset to avoid biased results.
  • Define the problem to be solved as a classification problem, so that the result to be predicted belongs to a finite set.
  • Use online ML algorithms to regularly update the prediction models by considering continuously the new data collected from the vehicle.
By considering these future perspectives, researchers can contribute to the advancement of ML-based ECEV studies and develop more accurate and efficient energy management strategies for electric and hybrid vehicles.
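Regarding the integration of ML with optimization techniques for parameter tuning mentioned in the list above, the following hedged sketch uses a cross-validated grid search over a gradient boosting regressor; the search space and synthetic data are assumptions for illustration only.

```python
# Hedged sketch: hyperparameter optimization of an ML model via grid search with
# cross-validation. Search space, model choice, and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, KFold

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(400, 5))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.05, 400)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [2, 3, 4],
    "learning_rate": [0.05, 0.1],
}
search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid,
    scoring="neg_root_mean_squared_error",
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
)
search.fit(X, y)
print("Best parameters:", search.best_params_)
print(f"Best cross-validated RMSE: {-search.best_score_:.3f}")
```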

4.3. Validity of the Survey

Below, we present two threats we have identified to the validity of this survey, along with our suggested mitigation solutions.
Construct threat: this concerns the risk of missing relevant studies. To mitigate this risk, we scanned the bibliographical references of each selected study and applied several inclusion and exclusion criteria.
Internal threat: this concerns the risk of bias in analyzing the relevant papers. To mitigate this risk, the data were extracted and analyzed carefully and rigorously.

Author Contributions

M.A.: investigation, methodology, validation, writing—original draft. A.K.: conceptualization, validation, funding acquisition, supervision, writing—review and editing. J.P.F.T.: resource, conceptualization, validation, funding acquisition, supervision, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by Grant 950-230672 from Canada Research Chairs Program, in part by RGPIN-2017-05924 from the Natural Sciences and Engineering Research Council of Canada, and also in part by FCT-Portuguese Foundation for Science and Technology project UIDB/00308/2020.

Data Availability Statement

The bibliographic data associated with this paper are available online, in bib, ris and xml formats. The three corresponding files can be downloaded from the following link: https://github.com/bewiv/DB_EMEC_EV_UML.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. List of acronyms.
AdaBoost: Adaptive Boosting
ANN: Artificial Neural Network
API: Application Programming Interfaces
AR: Auto-Regressive
ARIMA: Auto-Regressive Integrated Moving Average
ARMA: Auto-Regressive Moving Average
ARX: Auto-Regressive with Exogenous
BP-LSTM: Back Propagation-Long Short-Term Memory
BPNN: Back Propagation Neural Network
CART: Classification and Regression Trees
CNN: Convolutional Neural Network
DCNN: Deep Convolutional Neural Network
DDPG: Deep Deterministic Policy Gradient
DEGWO: Differential Evolution and Grey Wolf Optimizer
DL: Deep Learning
EC: Exclusion Criteria
ECEV: Energy Consumption of Electric Vehicle
ELM: Extreme Learning Machine
EM: Expectation Maximum
EML: Ensemble ML Algorithm
EMS: Energy Management Strategy
EV: Electric Vehicle
Fully EV: Fully Electric Vehicle
GBDT: Gradient Boosting Decision Tree
GIS: Geographic Information System
GP: Gaussian Processes
GPS: Global Positioning Systems
GRU: Gated Recurrent Unit
GS: Grid Search
HMP: Historic Mean Prediction
Hybrid EV: Hybrid Electric Vehicle
IC: Inclusion Criteria
IM: Interpolation Method
IML: Individual ML Algorithm
IoT: Internet of Things
IRIS: Intelligent Roadway Information System
KF: Kalman Filter
KLMS: Kernel Least Mean Squares
KNN: K-Nearest Neighbors
KRLS: Kernel Recursive Least Squares
LDA: Linear Discriminant Analysis
LightGBM: Light Gradient Boosting Machine
LMS: Least Mean Squares
LSSVM: Least Square SVM
LSTM: Long Short-Term Memory
MAE: Mean Absolute Error
MAPE: Mean Absolute Percentage Error
MaxARE: Max Absolute Relative Error
MC: Markov Chain
MeanARE: Mean Absolute Relative Error
MedAPE: Median Absolute Percentage Error
ML: Machine Learning
MLP: Multi-Layer Perceptron
MQ: Mapping Question
MSE: Mean Square Error
NARX: Nonlinear Auto-Regressive eXternal
NEDC: New European Driving Cycle
NGSIM: Next-Generation Simulation
OSM: OpenStreetMap
PCA/PCR: Principal Component Analysis/Regression
PeMS: Performance Measurement System
PSAT: Powertrain Systems Analysis Toolkit
PSO: Particle Swarm Optimization
R: Correlation Coefficient
R2: Coefficient of Determination
RBM: Restricted Boltzmann Machine
RF: Random Forest
RL: Reinforcement Learning
RLS: Recursive Least Squares
RME: Relative Mean Error
RMSE: Root Mean Square Error
RNN: Recurrent Neural Networks
RNN-LSTM: Recurrent Neural Network LSTM
RW: Random Walk
SL: Supervised Learning
SMOTE: Synthetic Minority Oversampling Technique
SMS: Systematic Mapping Study
SoC: Battery State of Charge
SoH: Battery State of Health
ST: Space Time
SVM: Support Vector Machines
UDDS: Urban Dynamometer Driving Schedule
UL: Unsupervised Learning
V2I: Vehicle-to-Infrastructure
V2V: Vehicle-to-Vehicle
WLTC: Worldwide Harmonized Light Duty Driving Test Cycle
XGBoost: eXtreme Gradient Boosting
Table A2. Algorithms used in the selected studies (MQ4).
Supervised Learning (SL):
  Deep Learning (DL), 31 studies:
    Recurrent Neural Networks (RNN):
      RNN [35,84,85,119,129,151], 6 studies
      Fine-Grained RNN [68], 1 study
      Gated Recurrent Unit (GRU) [35,151], 2 studies
      Long Short-Term Memory (LSTM), 21 studies:
        LSTM [26,35,44,47,56,59,62,63,68,72,75,88,95,119,125,126,136,151]
        Bidirectional LSTM Network [35,68,117,151]
        Back Propagation-LSTM (BP-LSTM) [95,96,116]
        Recurrent Neural Network LSTM (RNN-LSTM) [88,117]
        Convolutional Neural Network LSTM (CNN-LSTM) [44]
    Convolutional Neural Network (CNN) [116,151], 2 studies
    Deep CNN (DCNN) [136], 1 study
    Restricted Boltzmann Machine (RBM) and SVR (RBM-SVR) [110], 1 study
    Deep Neural Network (DNN) [21,44,47,51,75,124,138,146], 10 studies
    DL Networks: Deep Belief Network-Based Stacked Autoencoder [124], 1 study
    Deep Restricted Boltzmann Machines (DBM) and Sequence Pattern Predicting Capability of Bidirectional LSTM (BLSTM) (DBMBLSTM) [128], 1 study
    DL Combined with the Median Filter Preprocessing (DLM8L) [116], 1 study
  Ensemble, 25 studies:
    Random Forest (RF) [44,47,48,49,51,71,78,86,102,137,138,147,152,154,170], 16 studies
    Extra Trees (ET) [44], 1 study
    Gradient Boosting Decision Tree (GBDT) [36,102,120,145,152,154,167], 7 studies
    Adaptive Boosting (AdaBoost) [68,102,137,138,171], 5 studies
    eXtreme Gradient Boosting (XGBoost) [41,43,51,52,138,152], 7 studies
    Bagging [44,68,137,152,171], 5 studies
    Light Gradient Boosting Machine (LightGBM) [41,78,152], 3 studies
    Voting Ensemble [51,71], 2 studies
    Stacked Generalization [51,147], 2 studies
  Individual, 86 studies:
    Linear Regression [44,48,64,68,78,102,103,131,138,152,171], 11 studies
    Multiple Linear Regression [41,44,52,87,109,120,140,145,151], 9 studies
    Logistic Regression [141], 1 study
    Naïve Bayes (NB) [57,68,141], 3 studies
    Bayesian Network [143], 1 study
    Artificial Neural Network (ANN), 58 studies:
      ANN [35,38,41,46,50,85,87,90,91,93,94,101,106,109,122,130,133,144,151,152,164,169,171]
      Radial Basis Function (RBF) Neural Networks (RBF-NN) [20,89,98,108,128,129,145]
      RBF-NN with Wavelet Transform [98]
      Adaptive Neuro Fuzzy Inference System (ANFIS) [19,118,162]
      Multilayer feed forward NN (MFFNN) [99,118,121]
      Back Propagation NN (BPNN) [95,96,100,104,107,115,117,128,129,134,142,159]
      Multilayer Perceptron (MLP) [54,71,84,108,112,127,138,141,145,181]
      NARX NN [97]
      Stacked Autoencoder (SAEs) [116]
      ST-ResNE [151]
      Feed Forward NN [86]
      Extreme Learning Machine (ELM) [155]
      Recurrent Nonlinear Autoregressive with Exogenous Inputs NN (RNARX-NN) [160]
    Support Vector Machines (SVM), 27 studies:
      SVM [47,49,51,57,68,71,78,83,84,95,103,107,110,115,116,137,138,141,145,151,156,158,164,171]
      Least Square SVM (LSSVM) [100,165]
    Decision Trees (DT) [44,47,48,54,57,68,78,102,109,120,130,137,141,145,147,158], 16 studies
    K-Nearest Neighbors (KNN) [44,48,57,78,102,109,113,116,137,141,147,148,158], 14 studies
    Principal Component Analysis/Regression (PCA/PCR) [109], 1 study
    Least Mean Squares (LMS), 1 study:
      LMS [106]
      Kernel LMS (KLMS) [106]
      Fixed Budget Quantized KLMS (QKLMS-FB) [106]
    Recursive Least Squares (RLS), 1 study:
      RLS [106]
      Kernel RLS (KRLS) [106]
      RLS Tracker (KRLS-T) [106]
    Fuzzy Logic (FL), 6 studies:
      FL [32,113,123,142]
      Fuzzy Inference System (Mamdami) [131]
      Interval Type-2 Fuzzy System (IT2FS) [153]
    Differential Evolution and Grey Wolf Optimizer (DEGWO) [110], 1 study
    Linear Discriminant Analysis (LDA) [57,137,141], 3 studies
Unsupervised Learning (UL), 4 studies:
  X-means [111], 1 study
  K-means [49], 1 study
  Kernel Fuzzy C-Means [133], 1 study
  Expectation Maximum (EM) [156], 1 study
Reinforcement Learning (RL), 51 studies:
  Multi-Step Markov [113], 1 study
  Deep Q-learning (DQL) [25,27,29,31,34,37,40,42,45,53,60,65,74,76,77,80,132,139,149,150,166], 21 studies
  Double Deep Q-learning (DDQL) [29,30,59,61,65,69], 6 studies
  Duel Deep Q-learning [29], 1 study
  Q-Learning [22,23,24,28,33,39,67,70,76,79,80,105,113,135,157,161,163,166,168,173], 20 studies
  Deep Deterministic Policy Gradient (DDPG) [27,29,55,58,62,74,80,81,82,139,174], 11 studies
  TD3 [29,59,73,80], 4 studies
  Trust Region Policy Optimization (TRPO) [59], 1 study
  Distributed Proximal Policy Optimization (DPPO) [60], 1 study
  Markov-Chain Monte Carlo [63,79], 2 studies
  Queue-Dyna [66], 1 study
  The Asynchronous Advantage Actor Critic (A3C) [60], 1 study
Statistical, 27 studies:
  Markov [20,32,33,34,62,104,122,123,125,128,129,142,157,159,169], 15 studies
  Auto-Regressive (AR) [123,127], 2 studies
  AR with Exogenous (ARX) [142], 1 study
  Non-Linear ARX (NARX) [86,97,125], 3 studies
  Space Time (ST) [84], 1 study
  ARIMA [84,116,125,164], 4 studies
  VAR [84], 1 study
  Gaussian Processes (GP) [108,125,171], 3 studies
  Kalman Filter (KF) [118], 1 study
  Game Theory [88], 1 study
  Historic Mean Prediction (HMP) [164], 1 study
  Interpolation Method (IM) [145], 1 study
  Random Walk (RW) [172], 1 study

References

  1. Gärling, A.; Thøgersen, J. Marketing of Electric Vehicles. Bus. Strategy Environ. 2001, 10, 53–65. [Google Scholar] [CrossRef]
  2. Chen, J.; Todd, J.; Clogston, C.T. Creating the Clean Energy Economy; International Economic Development Council: Washington, DC, USA, 2013. [Google Scholar]
  3. Skouras, T.; Gkonis, P.; Ilias, C.; Trakadas, P.; Tsampasis, E.; Zahariadis, T. Electrical Vehicles: Current State of the Art, Future Challenges, and Perspectives. Clean Technol. 2019, 2, 1–16. [Google Scholar] [CrossRef] [Green Version]
  4. Vidal, C.; Malysz, P.; Kollmeyer, P.; Emadi, A. Machine Learning Applied to Electrified Vehicle Battery State of Charge and State of Health Estimation: State-of-the-Art. IEEE Access 2020, 8, 52796–52814. [Google Scholar] [CrossRef]
  5. Sani, A.R.; Hassan, M.U.; Chen, J. Privacy Preserving Machine Learning for Electric Vehicles: A Survey. arXiv 2022, arXiv:2205.08462. [Google Scholar]
  6. Mahmoudzadeh Andwari, A.; Pesiridis, A.; Rajoo, S.; Martinez-Botas, R.; Esfahanian, V. A Review of Battery Electric Vehicle Technology and Readiness Levels. Renew. Sustain. Energy Rev. 2017, 78, 414–430. [Google Scholar] [CrossRef]
  7. Hannan, M.A.; Lipu, M.S.H.; Hussain, A.; Mohamed, A. A Review of Lithium-Ion Battery State of Charge Estimation and Management System in Electric Vehicle Applications: Challenges and Recommendations. Renew. Sustain. Energy Rev. 2017, 78, 834–854. [Google Scholar] [CrossRef]
  8. Shaukat, N.; Khan, B.; Ali, S.M.; Mehmood, C.A.; Khan, J.; Farid, U.; Majid, M.; Anwar, S.M.; Jawad, M.; Ullah, Z. A Survey on Electric Vehicle Transportation Within Smart Grid System. Renew. Sustain. Energy Rev. 2018, 81, 1329–1349. [Google Scholar] [CrossRef]
  9. Kantardzic, M. Data Mining: Concepts, Models, Methods, and Algorithms; Wiley-IEEE Press: New York, NY, USA, 2011. [Google Scholar]
  10. Janai, J.; Güney, F.; Behl, A.; Geiger, A. Computer Vision for Autonomous Vehicles; Foundations and Trends in Computer Graphics and Vision; Now Publishers Inc.: Hanover, MA, USA, 2020; Volume 12, pp. 1–308. [Google Scholar]
  11. Zhang, D.; Tsai, J.J.P.P. Machine Learning and Software Engineering. Softw. Qual. J. 2003, 11, 87–119. [Google Scholar] [CrossRef]
  12. Mohammadi, M.; Al-Fuqaha, A. Enabling Cognitive Smart Cities Using Big Data and Machine Learning: Approaches and Challenges. IEEE Commun. Mag. 2018, 56, 94–101. [Google Scholar] [CrossRef] [Green Version]
  13. Quinlan, J.R. Reinforcement Learning: An Introduction; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1993. [Google Scholar]
  14. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction. MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
  15. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Keele University and Durham University Joint Report; Keele University: Keele, UK; Durham University: Durham, UK, 2007. [Google Scholar]
  16. Petersen, K.; Feldt, R.; Mujtaba, S.; Mattsson, M. Systematic Mapping Studies in Software Engineering. In Proceedings of the 12th International Conference on Evaluation and Assessment in Software Engineering, Bari, Italy, 26–27 June 2008; BCS Learning & Development Ltd.: Swindon, UK, 2008; pp. 68–77. [Google Scholar]
  17. Petersen, K.; Vakkalanka, S.; Kuzniarz, L. Guidelines for Conducting Systematic Mapping Studies in Software Engineering: An Update. Inf. Softw. Technol. 2015, 64, 1–18. [Google Scholar] [CrossRef]
  18. Achimugu, P.; Selamat, A.; Ibrahim, R.; Mahrin, M.N.R. A systematic literature review of software requirements prioritization research. Inf. Softw. Technol. 2014, 56, 568–585. [Google Scholar] [CrossRef]
  19. Cheng, Z.; Chow, M.Y.; Jung, D.; Jeon, J. A Big Data Based Deep Learning Approach for Vehicle Speed Prediction. In Proceedings of the IEEE International Symposium on Industrial Electronics, Edinburgh, UK, 19–21 June 2017; Institute of Electrical and Electronics Engineers Inc.: Interlaken, Switzerland, 2017; pp. 389–394. [Google Scholar]
  20. Zhang, Y.; Chu, L.; Ou, Y.; Guo, C.; Liu, Y.; Tang, X. A Cyber-Physical System-Based Velocity-Profile Prediction Method and Case Study of Application in Plug-In Hybrid Electric Vehicle. IEEE Trans. Cybern. 2019, 51, 40–51. [Google Scholar] [CrossRef]
  21. Yan, M.; Li, M.; He, H.; Peng, J. Deep Learning for Vehicle Speed Prediction. Energy Procedia 2018, 152, 618–623. [Google Scholar] [CrossRef]
  22. Lee, H.; Song, C.; Kim, N.; Cha, S.W. Comparative Analysis of Energy Management Strategies for HEV: Dynamic Programming and Reinforcement Learning. IEEE Access 2020, 8, 67112–67123. [Google Scholar] [CrossRef]
  23. Xiong, R.; Cao, J.; Yu, Q. Reinforcement learning-based real-time power management for hybrid energy storage system in the plug-in hybrid electric vehicle. Appl. Energy 2018, 211, 538–548. [Google Scholar] [CrossRef]
  24. Chen, Z.; Hu, H.; Wu, Y.; Xiao, R.; Shen, J.; Liu, Y. Energy Management for a Power-Split Plug-in Hybrid Electric Vehicle Based on Reinforcement Learning. Appl. Sci. 2018, 8, 2494. [Google Scholar] [CrossRef] [Green Version]
  25. Li, W.; Cui, H.; Nemeth, T.; Jansen, J.; Ünlübayir, C.; Wei, Z.; Zhang, L.; Wang, Z.; Ruan, J.; Dai, H.; et al. Deep reinforcement learning-based energy management of hybrid battery systems in electric vehicles. J. Energy Storage 2021, 36, 102355. [Google Scholar] [CrossRef]
  26. Petkevicius, L.; Saltenis, S.; Civilis, A.; Torp, K. Probabilistic Deep Learning for Electric-Vehicle Energy-Use Prediction. In Proceedings of the 17th International Symposium on Spatial and Temporal Databases (SSTD ’21), New York, NY, USA, 23–25 August 2021; ACM International Conference Proceeding Series. pp. 85–95. [Google Scholar]
  27. Qi, C.; Zhu, Y.; Song, C.; Cao, J.; Xiao, F.; Zhang, X.; Xu, Z.; Song, S. Self-supervised reinforcement learning-based energy management for a hybrid electric vehicle. J. Power Sources 2021, 514, 230584. [Google Scholar] [CrossRef]
  28. Lee, H.; Kim, K.; Kim, N.; Cha, S.W. Energy efficient speed planning of electric vehicles for car-following scenario using model-based reinforcement learning. Appl. Energy 2022, 313, 118460. [Google Scholar] [CrossRef]
  29. Zhou, J.; Xue, S.; Xue, Y.; Liao, Y.; Liu, J.; Zhao, W. A novel energy management strategy of hybrid electric vehicle via an improved TD3 deep reinforcement learning. Energy 2021, 224, 120118. [Google Scholar] [CrossRef]
  30. Du, G.; Zou, Y.; Zhang, X.; Guo, L.; Guo, N. Energy management for a hybrid electric vehicle based on prioritized deep reinforcement learning framework. Energy 2022, 241, 122523. [Google Scholar] [CrossRef]
  31. Tang, X.; Zhou, H.; Wang, F.; Wang, W.; Lin, X. Longevity-conscious energy management strategy of fuel cell hybrid electric Vehicle Based on deep reinforcement learning. Energy 2022, 238, 121593. [Google Scholar] [CrossRef]
  32. Shin, J.; Kim, S.; Sunwoo, M.; Han, M. Ego-Vehicle Speed Prediction Using Fuzzy Markov Chain with Speed Constraints. In Proceedings of the IEEE Intelligent Vehicles Symposium, Paris, France, 9–12 June 2019; Institute of Electrical and Electronics Engineers Inc.: Interlaken, Switzerland, 2019; pp. 2106–2112. [Google Scholar]
  33. Yang, N.; Han, L.; Xiang, C.; Liu, H.; Li, X. An indirect reinforcement learning based real-time energy management strategy via high-order Markov Chain model for a hybrid electric vehicle. Energy 2021, 236, 121337. [Google Scholar] [CrossRef]
  34. Aljohani, T.M.; Ebrahim, A.; Mohammed, O. Real-Time metadata-driven routing optimization for electric vehicle energy consumption minimization using deep reinforcement learning and Markov chain model. Electr. Power Syst. Res. 2021, 192, 106962. [Google Scholar] [CrossRef]
  35. George, D.; Sivraj, P. Driving Range Estimation of Electric Vehicles using Deep Learning. In Proceedings of the 2nd International Conference on Electronics and Sustainable Communication Systems, ICESC 2021, Coimbatore, India, 4–6 August 2021; pp. 358–365. [Google Scholar]
  36. Bansal, S.; Dey, S.; Khanra, M. Energy storage sizing in plug-in Electric Vehicles: Driving cycle uncertainty effect analysis and machine learning based sizing framework. J. Energy Storage 2021, 41, 102864. [Google Scholar] [CrossRef]
  37. Qi, C.; Zhu, Y.; Song, C.; Yan, G.; Xiao, F.; Wang, D.; Zhang, X.; Cao, J.; Song, S. Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle. Energy 2022, 238, 121703. [Google Scholar] [CrossRef]
  38. Chen, Z.Z.; Liu, Y.; Zhang, Y.; Lei, Z.; Chen, Z.Z.; Li, G. A neural network-based ECMS for optimized energy management of plug-in hybrid electric vehicles. Energy 2022, 243, 122727. [Google Scholar] [CrossRef]
  39. Zhang, Y.; Ma, R.; Zhao, D.; Huangfu, Y.; Liu, W. A Novel Energy Management Strategy based on Dual Reward Function Q-learning for Fuel Cell Hybried Electric Vehicle. IEEE Trans. Ind. Electron. 2021, 69, 1537–1547. [Google Scholar] [CrossRef]
  40. Lin, Y.; McPhee, J.; Azad, N.L. Co-Optimization of On-Ramp Merging and Plug-In Hybrid Electric Vehicle Power Split Using Deep Reinforcement Learning. IEEE Trans. Veh. Technol. 2022, 71, 6958–6968. [Google Scholar] [CrossRef]
  41. Ullah, I.; Liu, K.; Yamamoto, T.; Al Mamlook, R.E.; Jamal, A. A comparative performance of machine learning algorithm to predict electric vehicles energy consumption: A path towards sustainability. Energy Environ. 2021, 33, 1583–1612. [Google Scholar] [CrossRef]
  42. Xiong, S.; Zhang, Y.; Wu, C.; Chen, Z.; Peng, J.; Zhang, M. Energy management strategy of intelligent plug-in split hybrid electric vehicle based on deep reinforcement learning with optimized path planning algorithm. Proc. Inst. Mech. Eng. Part D J. Automob. Eng. 2021, 235, 3287–3298. [Google Scholar] [CrossRef]
  43. Zhang, J.; Wang, Z.; Liu, P.; Zhang, Z. Energy Consumption Analysis and Prediction of Electric Vehicles Based on Real-World Driving Data. Appl. Energy 2020, 275, 115408. [Google Scholar] [CrossRef]
  44. Rabinowitz, A.; Araghi, F.M.; Gaikwad, T.; Asher, Z.D.; Bradley, T.H. Development and Evaluation of Velocity Predictive Optimal Energy Management Strategies in Intelligent and Connected Hybrid Electric Vehicles. Energies 2021, 14, 5713. [Google Scholar] [CrossRef]
  45. Tang, X.; Chen, J.; Pu, H.; Liu, T.; Khajepour, A. Double Deep Reinforcement Learning-Based Energy Management for a Parallel Hybrid Electric Vehicle with Engine Start-Stop Strategy. IEEE Trans. Transp. Electrif. 2022, 8, 1376–1388. [Google Scholar] [CrossRef]
  46. Smith, T.; Garcia, J.; Washington, G. Electric Vehicle Charging via Machine-Learning Pattern Recognition. J. Energy Eng. 2021, 147, 04021035. [Google Scholar] [CrossRef]
  47. Shibl, M.; Ismail, L.; Massoud, A. Electric Vehicles Charging Management Using Machine Learning Considering Fast Charging and Vehicle-to-Grid Operation. Energies 2021, 14, 6199. [Google Scholar] [CrossRef]
  48. Tran, M.K.; Panchal, S.; Chauhan, V.; Brahmbhatt, N.; Mevawalla, A.; Fraser, R.; Fowler, M. Python-based scikit-learn machine learning models for thermal and electrical performance prediction of high-capacity lithium-ion battery. Int. J. Energy Res. 2021, 46, 786–794. [Google Scholar] [CrossRef]
  49. Hou, Z.; Guo, J.; Xing, J.; Guo, C.; Zhang, Y. Machine learning and whale optimization algorithm based design of energy management strategy for plug-in hybrid electric vehicle. IET Intell. Transp. Syst. 2021, 15, 1076–1091. [Google Scholar] [CrossRef]
  50. Shao, Y.; Zheng, Y.; Sun, Z. Machine Learning Enabled Traffic Prediction for Speed Optimization of Connected and Autonomous Electric Vehicles. In Proceedings of the American Control Conference, New Orleans, LA, USA, 25–28 May 2021; Institute of Electrical and Electronics Engineers Inc.: Interlaken, Switzerland, 2021; pp. 172–177. [Google Scholar]
  51. Shahriar, S.; Al-Ali, A.R.; Osman, A.H.; Dhou, S.; Nijim, M. Prediction of EV charging behavior using machine learning. IEEE Access 2021, 9, 111576–111586. [Google Scholar] [CrossRef]
  52. Pokharel, S.; Sah, P.; Ganta, D. Improved Prediction of Total Energy Consumption and Feature Analysis in Electric Vehicles Using Machine Learning and Shapley Additive Explanations Method. World Electr. Veh. J. 2021, 12, 94. [Google Scholar] [CrossRef]
  53. Lee, W.; Jeoung, H.; Park, D.; Kim, T.; Lee, H.; Kim, N. A Real-Time Intelligent Energy Management Strategy for Hybrid Electric Vehicles Using Reinforcement Learning. IEEE Access 2021, 9, 72759–72768. [Google Scholar] [CrossRef]
  54. Foiadelli, F.; Longo, M.; Miraftabzadeh, S. Energy Consumption Prediction of Electric Vehicles Based on Big Data Approach. In Proceedings of the 2018 IEEE International Conference on Environment and Electrical Engineering and 2018 IEEE Industrial and Commercial Power Systems Europe, EEEIC/I and CPS Europe, Palermo, Italy, 12–15 June 2018. [Google Scholar]
  55. Hu, B.; Li, J. An adaptive hierarchical energy management strategy for hybrid electric vehicles combining heuristic domain knowledge and data-driven deep reinforcement learning. IEEE Trans. Transp. Electrif. 2021, 8, 3275–3288. [Google Scholar] [CrossRef]
  56. Eichenlaub, T.; Rinderknecht, S. Anticipatory Longitudinal Vehicle Control using a LSTM Prediction Model. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, ITSC, Indianapolis, IN, USA, 19–22 September 2021; Institute of Electrical and Electronics Engineers Inc.: Interlaken, Switzerland, 2021; pp. 447–452. [Google Scholar]
  57. Kwon, H. Control Map Generation Strategy for Hybrid Electric Vehicles Based on Machine Learning with Energy Optimization. IEEE Access 2022, 10, 15163–15174. [Google Scholar] [CrossRef]
  58. Rafiei, M.; Boudjadar, J.; Griffiths, M.P.; Khooban, M.H. Deep Learning-Based Energy Management of an All-Electric City Bus with Wireless Power Transfer. IEEE Access 2021, 9, 43981–43990. [Google Scholar] [CrossRef]
  59. Yan, L.; Chen, X.; Zhou, J.; Chen, Y.; Wen, J. Deep Reinforcement Learning for Continuous Electric Vehicles Charging Control with Dynamic User Behaviors. IEEE Trans. Smart Grid 2021, 12, 5124–5134. [Google Scholar] [CrossRef]
  60. Tang, X.; Chen, J.; Liu, T.; Qin, Y.; Cao, D. Distributed Deep Reinforcement Learning-Based Energy and Emission Management Strategy for Hybrid Electric Vehicles. IEEE Trans. Veh. Technol. 2021, 70, 9922–9934. [Google Scholar] [CrossRef]
  61. Meng, X.; Li, Q.; Zhang, G.; Wang, X.; Chen, W. Double Q-learning-based energy management strategy for overall energy consumption optimization of fuel cell/battery vehicle. In Proceedings of the IEEE Transportation Electrification Conference and Expo (ITEC), Chicago, IL, USA, 21–25 June 2022; Institute of Electrical and Electronics Engineers Inc.: Interlaken, Switzerland, 2021; pp. 1–6. [Google Scholar]
  62. Sichen, L.; Weihao, H.; Di, C.; Tomislav, D.; Qi, H.; Zhe, C.; Frede, B. Electric Vehicle Charging Management Based on Deep Reinforcement Learning. J. Mod. Power Syst. Clean Energy 2021, 10, 719–730. [Google Scholar]
  63. Shen, H.; Wang, Z.; Zhou, X.; Lamantia, M.; Yang, K.; Chen, P.; Wang, J. Electric Vehicle Velocity and Energy Consumption Predictions Using Transformer and Markov-Chain Monte Carlo. IEEE Trans. Transp. Electrif. 2022, 8, 3836–3847. [Google Scholar] [CrossRef]
  64. Sagoian, A.; Varga, B.O.; Solodushkin, S. Energy Consumption Prediction of Electric Vehicle Air Conditioning System Using Artificial Intelligence. In Proceedings of the Ural Symposium on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT), Yekaterinburg, Russia, 13–14 May 2021; Institute of Electrical and Electronics Engineers Inc.: Interlaken, Switzerland, 2021; pp. 379–382. [Google Scholar]
  65. Han, X.; He, H.; Wu, J.; Peng, J.; Li, Y. Energy Management Based on Reinforcement Learning with Double Deep Q-Learning for a Hybrid Electric Tracked Vehicle. Appl. Energy 2019, 254, 113708. [Google Scholar] [CrossRef]
  66. Yang, N.; Han, L.; Xiang, C.; Liu, H.; Hou, X. Energy Management for a Hybrid Electric Vehicle Based on Blended Reinforcement Learning with Backward Focusing and Prioritized Sweeping. IEEE Trans. Veh. Technol. 2021, 70, 3136–3148. [Google Scholar] [CrossRef]
  67. Lee, H.; Cha, S.W. Energy Management Strategy of Fuel Cell Electric Vehicles Using Model-Based Reinforcement Learning with Data-Driven Model Update. IEEE Access 2021, 9, 59244–59254. [Google Scholar] [CrossRef]
  68. Hua, Y.; Sevegnani, M.; Yi, D.; Birnie, A.; Mcaslan, S. Fine-grained RNN with Transfer Learning for Energy Consumption Estimation on EVs. IEEE Trans. Ind. Inform. 2021, 18, 8182–8190. [Google Scholar] [CrossRef]
  69. Du, G.; Zou, Y.; Zhang, X.; Guo, L.; Guo, N. Heuristic Energy Management Strategy of Hybrid Electric Vehicle Based on Deep Reinforcement Learning with Accelerated Gradient Optimization. IEEE Trans. Transp. Electrif. 2021, 7, 2194–2208. [Google Scholar] [CrossRef]
  70. Lin, X.; Zhou, K.; Mo, L.; Li, H. Intelligent Energy Management Strategy Based on an Improved Reinforcement Learning Algorithm With Exploration Factor for a Plug-in PHEV. IEEE Trans. Intell. Transp. Syst. 2021, 23, 8725–8735. [Google Scholar] [CrossRef]
  71. Renata, D.A.; Fauziah, K.; Aji, P.; Larasati, A.; Halidah, H.; Tasurun, D.P.; Astriani, Y. Riza: Modeling of Electric Vehicle Charging Energy Consumption using Machine Learning. In Proceedings of the International Conference on Advanced Computer Science Information Systems, ICACSIS, Depok, Indonesia, 23–25 October 2021; Institute of Electrical and Electronics Engineers Inc.: Interlaken, Switzerland, 2021. [Google Scholar]
  72. Malek, Y.N.; Najib, M.; Bakhouya, M.; Essaaidi, M. Multivariate deep learning approach for electric vehicle speed forecasting. Big Data Min. Anal. 2021, 4, 56–64. [Google Scholar] [CrossRef]
  73. Zhang, H.; Liu, S.; Lei, N.; Fan, Q.; Li, S.E.; Wang, Z. Learningbased supervisory control of dual mode engine-based hybrid electric vehicle with reliance on multivariate trip information. Energy Convers. Manag. 2022, 257, 115450. [Google Scholar] [CrossRef]
  74. Shi, J.; Xu, B.; Shen, Y.; Wu, J. Energy management strategy for battery/supercapacitor hybrid electric city bus based on driving pattern recognition. Energy 2022, 243, 122752. [Google Scholar] [CrossRef]
  75. Kim, D.; Shim, H.; Eo, J. A Machine Learning Method for EV Range Prediction with Updates on Route Information and Traffic Conditions. Proc. AAAI Conf. Artif. Intell. 2021, 36, 12545–12551. [Google Scholar] [CrossRef]
  76. He, H.; Cao, J.; Cui, X. Energy Optimization of Electric Vehicle’s Acceleration Process Based on Reinforcement Learning. J. Clean. Prod. 2020, 248, 119302. [Google Scholar] [CrossRef]
  77. Qi, C.; Song, C.; Xiao, F.; Song, S. Generalization ability of hybrid electric vehicle energy management strategy based on reinforcement learning method. Energy 2022, 250, 123826. [Google Scholar] [CrossRef]
  78. Liu, Y.; Zhang, Q.; Lyu, C.; Liu, Z. Modelling the energy consumption of electric vehicles under uncertain and small data conditions. Transp. Res. Part A Policy Pract. 2021, 154, 313–328. [Google Scholar] [CrossRef]
  79. Basso, R.; Kulcsár, B.; Sanchez-Diaz, I.; Qu, X. Dynamic stochastic electric vehicle routing with safe reinforcement learning. Transp. Res. Part E Logist. Transp. Rev. 2022, 157, 102496. [Google Scholar] [CrossRef]
  80. Wei, H.; Zhang, N.; Liang, J.; Ai, Q.; Zhao, W.; Huang, T.; Zhang, Y. Deep reinforcement learning based direct torque control strategy for distributed drive electric vehicles considering active safety and energy saving performance. Energy 2022, 238, 121725. [Google Scholar] [CrossRef]
  81. He, W.; Huang, Y. Real-time energy optimization of hybrid electric vehicle in connected environment based on deep reinforcement learning. IFAC-PapersOnLine 2021, 54, 176–181. [Google Scholar] [CrossRef]
  82. Li, J.; Wu, X.; Hu, S.; Fan, J. A deep reinforcement learning based energy management strategy for hybrid electric vehicles in connected traffic environment. IFAC-PapersOnLine 2021, 54, 150–156. [Google Scholar] [CrossRef]
  83. Grubwinkler, S.; Lienkamp, M. Energy Prediction for Evs Using Support Vector Regression Methods. Adv. Intell. Syst. Comput. 2015, 323, 769–780. [Google Scholar]
  84. Yang, X.; Zou, Y.; Tang, J.; Liang, J.; Ijaz, M. Evaluation of Short-Term Freeway Speed Prediction Based on Periodic Analysis Using Statistical Models and Machine Learning Models. J. Adv. Transp. 2020, 2020, 9628957. [Google Scholar] [CrossRef] [Green Version]
  85. Sung Koo, K.; Govindarasu, M.; Tian, J. Event Prediction Algorithm Using Neural Networks for The Power Management System of Electric Vehicles. Appl. Soft Comput. J. 2019, 84, 105709. [Google Scholar]
  86. Vatanparvar, K.; Faezi, S.; Burago, I.; Levorato, M.; Al Faruque, M.A. Extended Range Electric Vehicle with Driving Behavior Estimation in Energy Management. IEEE Trans. Smart Grid 2019, 10, 2959–2968. [Google Scholar] [CrossRef]
  87. De Cauwer, C.; Verbeke, W.; Coosemans, T.; Faid, S.; Van Mierlo, J. A Data-Driven Method for Energy Consumption Prediction and Energy-Efficient Routing of Electric Vehicles in Real-World Conditions. Energies 2017, 10, 608. [Google Scholar] [CrossRef] [Green Version]
  88. Xu, J.; Alsabbagh, A.; Yan, D.; Ma, C. Game-theoretic Energy Management with Velocity Prediction in Hybrid Electric Vehicle. In Proceedings of the IEEE International Symposium on Industrial Electronics, Vancouver, BC, Canada, 12–14 June 2019; Institute of Electrical and Electronics Engineers Inc.: Interlaken, Switzerland, 2019; pp. 1084–1089. [Google Scholar]
  89. Sun, H.; Li, J.; Sun, C. Improved Real-Time Velocity Prediction by Considering Preceding Vehicle Dynamics. In Proceedings of the 2019 IEEE Vehicle Power and Propulsion Conference (VPPC), Hanoi, Vietnam, 14–17 October 2019. [Google Scholar]
  90. Murphey, Y.L.; Park, J.; Abul Masrur, M. Intelligent Energy Management in a Low Cost Hybrid Electric Vehicle Power System. In Proceedings of the IEEE Vehicular Technology Conference, Las Vegas, NV, USA, 2–5 September 2013. [Google Scholar]
  91. Murphey, Y.L.; Park, J.; Chen, Z.; Kuang, M.L.; Masrur, M.A.; Phillips, A.M. Intelligent Hybrid Vehicle Power Control Part I: Machine Learning of Optimal Vehicle Power. IEEE Trans. Veh. Technol. 2012, 61, 3519–3530. [Google Scholar] [CrossRef]
  92. Park, J.; Murphey, Y.L.; Kristinsson, J.; McGee, R.; Kuang, M.; Phillips, T. Intelligent Speed Profile Prediction on Urban Traffic Networks with Machine Learning. In Proceedings of the International Joint Conference on Neural Networks, Dallas, TX, USA, 4–9 August 2013. [Google Scholar]
  93. Park, J.; Murphey, Y.L.; McGee, R.; Kristinsson, J.G.; Kuang, M.L.; Phillips, A.M. Intelligent Trip Modeling for The Prediction of an Origin-Destination Traveling Speed Profile. IEEE Trans. Intell. Transp. Syst. 2014, 15, 1039–1053. [Google Scholar] [CrossRef]
  94. Park, J.; Chen, Z.; Kiliaris, L.; Kuang, M.L.; Masrur, M.A.; Phillips, A.M.; Murphey, Y.L. Intelligent Vehicle Power Control Based on Machine Learning of Optimal Control Parameters and Prediction of Road Type and Traffic Congestion. IEEE Trans. Veh. Technol. 2009, 58, 4741–4756. [Google Scholar] [CrossRef]
  95. Yufang, L.; Mingnuo, C.; Wanzhong, Z. Investigating Long-Term Vehicle Speed Prediction Based on BP-LSTM Algorithms. IET Intell. Transp. Syst. 2019, 13, 1281–1290. [Google Scholar] [CrossRef]
  96. Li, Y.; Ren, C.; Zhao, H.; Chen, G. Investigating Long-Term Vehicle Speed Prediction Based on GA-BP Algorithms and The Road-Traffic Environment. Sci. China Inf. Sci. 2020, 63, 190205. [Google Scholar] [CrossRef]
  97. Baker, D.; Asher, Z.; Bradley, T. Investigation of Vehicle Speed Prediction from Neural Network Fit of Real World Driving Data for Improved Engine On/Off Control of the EcoCAR3 Hybrid Camaro; SAE Technical Papers; SAE International: Warrendale, PA, USA, 2017. [Google Scholar]
  98. Chen, Z.; Guo, N.; Shen, J.; Xiao, R.; Dong, P. A Hierarchical Energy Management Strategy for Power-Split Plug-in Hybrid Electric Vehicles Considering Velocity Prediction. IEEE Access 2018, 6, 33261–33274. [Google Scholar] [CrossRef]
  99. Masikos, M.; Demestichas, K.; Adamopoulou, E.; Theologou, M. Machine-Learning Methodology for Energy Efficient Routing. IET Intell. Transp. Syst. 2014, 8, 255–265. [Google Scholar] [CrossRef]
  100. Zeng, T.; Zhang, C.; Hu, M.; Chen, Y.; Yuan, C.; Chen, J.; Zhou, A. Modelling and predicting energy consumption of a range extender fuel cell hybrid vehicle. Energy 2018, 165, 187–197. [Google Scholar] [CrossRef]
  101. Zhang, L.; Zhang, J.; Liu, D. Neural Network based Vehicle Speed Prediction for Specific Urban Driving. In Proceedings of the 2018 Chinese Automation Congress, CAC, Xi’an, China, 30 November–2 December 2018; pp. 1798–1803. [Google Scholar]
  102. Delnevo, G.; Di Lena, P.; Mirri, S.; Prandi, C.; Salomoni, P. On combining Big Data and machine learning to support eco-driving behaviours. J. Big Data 2019, 6, 64. [Google Scholar] [CrossRef] [Green Version]
  103. Bolovinou, A.; Bakas, I.; Amditis, A.; Mastrandrea, F.; Vinciotti, W. Online Prediction of an Electric Vehicle Remaining Range Based on Regression Analysis. In Proceedings of the 2014 IEEE International Electric Vehicle Conference, IEVC, Florence, Italy, 17–19 December 2014. [Google Scholar]
  104. Shen, P.; Zhao, Z.; Zhan, X.; Li, J.; Guo, Q. Optimal Energy Management Strategy for a Plug-In Hybrid Electric Commercial Vehicle Based on Velocity Prediction. Energy 2018, 155, 838–852. [Google Scholar] [CrossRef]
  105. Xu, B.; Rathod, D.; Zhang, D.; Yebi, A.; Zhang, X.; Li, X.; Filipi, Z. Parametric Study on Reinforcement Learning Optimized Energy Management Strategy for a Hybrid Electric Vehicle. Appl. Energy 2020, 259, 114200. [Google Scholar] [CrossRef]
  106. Rhode, S.; Van Vaerenbergh, S.; Pfriem, M. Power Prediction for Electric Vehicles Using Online Machine Learning. Eng. Appl. Artif. Intell. 2020, 87, 103278. [Google Scholar] [CrossRef]
  107. Yufang, L.; Jun, Z.; Chen, R.; Xiaoding, L. Prediction of Vehicle Energy Consumption on a Planned Route Based on Speed Features Forecasting. IET Intell. Transp. Syst. 2020, 14, 511–522. [Google Scholar] [CrossRef]
  108. Ye, F.; Hao, P.; Qi, X.; Wu, G.; Boriboonsomsin, K.; Barth, M.J. Prediction-Based Eco-Approach and Departure at Signalized Intersections with Speed Forecasting on Preceding Vehicles. IEEE Trans. Intell. Transp. Syst. 2019, 20, 1378–1389. [Google Scholar] [CrossRef] [Green Version]
  109. Zheng, B.; He, P.; Zhao, L.; Li, H. A Hybrid Machine Learning Model For Range Estimation of Electric Vehicles. In Proceedings of the 2016 IEEE Global Communications Conference, GLOBECOM 2016, Washington, DC, USA, 4–8 December 2016. [Google Scholar]
  110. Zhang, X.; Zhang, T.; Zou, Y.; Du, G.; Guo, N. Predictive Eco-Driving Application Considering Real-World Traffic Flow. IEEE Access 2020, 8, 82187–82200. [Google Scholar] [CrossRef]
  111. Kretzschmar, J.; Gebhardt, K.; Theiß, C.; Schau, V. Range Prediction Models for E-Vehicles in Urban Freight Logistics Based on Machine Learning. In International Conference on Data Mining and Big Data; Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2016; Volume 9714, pp. 175–184. [Google Scholar]
  112. Park, J.; Li, D.; Murphey, Y.L.; Kristinsson, J.; McGee, R.; Kuang, M.; Phillips, T. Real Time Vehicle Speed Prediction Using a Neural Network Traffic Model. In Proceedings of the International Joint Conference on Neural Networks, San Jose, CA, USA, 31 July–5 August 2011; pp. 2991–2996. [Google Scholar]
  113. Liu, T.; Hu, X.; Li, S.E.; Cao, D. Reinforcement Learning Optimized Look-Ahead Energy Management of a Parallel Hybrid Electric Vehicle. IEEE/ASME Trans. Mechatron. 2017, 22, 1497–1507. [Google Scholar] [CrossRef]
  114. Masikos, M.; Demestichas, K.; Adamopoulou, E.; Theologou, M. Reliable Vehicular Consumption Prediction Based on Machine Learning. Neural Netw. World 2014, 24, 333–342. [Google Scholar] [CrossRef] [Green Version]
  115. Li, Y.F.; Chen, M.N.; Lu, X.D.; Zhao, W.Z. Research on Optimized GA-SVM Vehicle Speed Prediction Model Based on Driver-Vehicle-Road-Traffic System. Sci. China Technol. Sci. 2018, 61, 782–790. [Google Scholar] [CrossRef]
  116. Liu, Q.; Wang, B.; Zhu, Y. Short-Term Traffic Speed Forecasting Based on Attention Convolutional Neural Network for Arterials. Comput.-Aided Civ. Infrastruct. Eng. 2018, 33, 999–1016. [Google Scholar] [CrossRef]
  117. Han, S.; Zhang, F.; Xi, J.; Ren, Y.; Xu, S. Short-term Vehicle Speed Prediction Based on Convolutional Bidirectional LSTM Networks. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference, ITSC, Auckland, New Zealand, 27–30 October 2019; Institute of Electrical and Electronics Engineers Inc.: Interlaken, Switzerland, 2019; pp. 4055–4060. [Google Scholar]
  118. Barimani, N.; Moshiri, B.; Teshnehlab, M. State Space Modeling and Short-Term Traffic Speed Prediction Using Kalman Filter Based on ANFIS. Int. J. Eng. Technol. 2012, 4, 116–120. [Google Scholar] [CrossRef] [Green Version]
  119. Du, Y.; Cui, N.; Li, H.; Nie, H.; Shi, Y.; Wang, M.; Li, T. The Vehicle’s Velocity Prediction Methods Based on RNN and LSTM Neural Network. In Proceedings of the 32nd Chinese Control and Decision Conference, CCDC, Hefei, China, 22–24 August 2020; Institute of Electrical and Electronics Engineers Inc.: Interlaken, Switzerland, 2020; pp. 99–102. [Google Scholar]
  120. Sun, S.; Zhang, J.; Bi, J.; Wang, Y.; Moghaddam, M.H.Y. A Machine Learning Method for Predicting Driving Range of Battery Electric Vehicles. J. Adv. Transp. 2019, 2019, 4109148. [Google Scholar] [CrossRef]
  121. Yao, J.; Moawad, A. Vehicle Energy Consumption Estimation Using Large Scale Simulations and Machine Learning Methods. Transp. Res. Part C Emerg. Technol. 2019, 101, 276–296. [Google Scholar] [CrossRef]
  122. Jiang, B.; Fei, Y. Vehicle Speed Prediction by Two-Level Data Driven Models in Vehicular Networks. IEEE Trans. Intell. Transp. Syst. 2017, 18, 1793–1801. [Google Scholar] [CrossRef]
  123. Jing, J.; Filev, D.; Kurt, A.; Ozatay, E.; Michelini, J.; Ozguner, U. Vehicle Speed Prediction Using a Cooperative Method of Fuzzy Markov Model and Auto-Regressive Model. In Proceedings of the IEEE Intelligent Vehicles Symposium, Los Angeles, CA, USA, 11–14 June 2017; Institute of Electrical and Electronics Engineers Inc.: Interlaken, Switzerland, 2017; pp. 881–886. [Google Scholar]
  124. Lemieux, J.; Ma, Y. Vehicle Speed Prediction Using Deep Learning. In Proceedings of the 2015 IEEE Vehicle Power and Propulsion Conference, VPPC, Montreal, QC, Canada, 19–22 October 2015. [Google Scholar]
  125. Liu, K.; Asher, Z.; Gong, X.; Huang, M.; Kolmanovsky, I. Vehicle Velocity Prediction and Energy Management Strategy Part 1: Deterministic and Stochastic Vehicle Velocity Prediction Using Machine Learning; SAE Technical Papers; SAE International: Warrendale, PA, USA, 2019. [Google Scholar]
  126. Gaikwad, T.D.; Asher, Z.D.; Liu, K.; Huang, M.; Kolmanovsky, I. Vehicle Velocity Prediction and Energy Management Strategy Part 2: Integration of Machine Learning Vehicle Velocity Prediction with Optimal Energy Management to Improve Fuel Economy; SAE Technical Papers; SAE International: Warrendale, PA, USA, 2019. [Google Scholar]
  127. Fotouhi, A.; Montazeri, M.; Jannatipour, M. Vehicle’s Velocity Time Series Prediction Using Neural Network. Int. J. Automot. Eng. 2011, 1, 21–28. [Google Scholar]
  128. Pei, J.Z.; Su, Y.X.; Zhang, D.H.; Qi, Y.; Leng, Z.W. Velocity Forecasts Using a Combined Deep Learning Model In Hybrid Electric Vehicles with V2V and V2I Communication. Sci. China Technol. Sci. 2020, 63, 55–64. [Google Scholar] [CrossRef]
  129. Sun, C.; Hu, X.; Moura, S.J.; Sun, F. Velocity Predictors for Predictive Energy Management in Hybrid Electric Vehicles. IEEE Trans. Control Syst. Technol. 2015, 23, 1197–1204. [Google Scholar]
  130. Yavasoglu, H.A.; Tetik, Y.E.; Gokce, K. Implementation of Machine Learning Based Real Time Range Estimation Method Without Destination Knowledge for BEVs. Energy 2019, 172, 1179–1186. [Google Scholar] [CrossRef]
  131. Wang, Y.; Beullens, P.; Liu, H.; Brown, D.; Thornton, T.; Proud, R. A Practical Intelligent Navigation System Based on Travel Speed Prediction. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, Beijing, China, 12–15 October 2008; pp. 470–475. [Google Scholar]
  132. Kong, H.; Yan, J.; Wang, H.; Fan, L. Energy Management Strategy for Electric Vehicles based on Deep Q-Learning using Bayesian Optimization. Neural Comput. Appl. 2020, 32, 14431–14445. [Google Scholar] [CrossRef]
  133. Shi, L.; Zheng, M.; Li, F. The Energy Management Strategy for Parallel Hybrid Electric Vehicles Based on MNN. Multimed. Tools Appl. 2020, 79, 5321–5333. [Google Scholar] [CrossRef]
  134. Zhang, Y.; Guo, C.; Liu, Y.; Ding, F.; Chen, Z.; Hao, W. A Novel Strategy for Power Sources Management in Connected plug-in Hybrid Electric Vehicles Based on Mobile Edge Computation Framework. J. Power Sources 2020, 477, 228650. [Google Scholar] [CrossRef]
  135. Sun, H.; Fu, Z.; Tao, F.; Zhu, L.; Si, P. Data-Driven Reinforcement-Learning-Based Hierarchical Energy Management Strategy for Fuel Cell/Battery/Ultracapacitor Hybrid Electric Vehicles. J. Power Sources 2020, 455, 227964. [Google Scholar] [CrossRef]
  136. Zhang, Z.; He, H.; Guo, J.; Han, R. Velocity Prediction and Profile Optimization Based Real-Time Energy Management Strategy for Plug-in Hybrid Electric Buses. Appl. Energy 2020, 280, 116001. [Google Scholar] [CrossRef]
  137. Harold, C.K.D.; Prakash, S.; Hofman, T. Powertrain Control for Hybrid-Electric Vehicles Using Supervised Machine Learning. Vehicles 2020, 2, 267–286. [Google Scholar] [CrossRef]
  138. Wang, S.; Lu, C.; Liu, C.; Zhou, Y.; Bi, J.; Zhao, X. Understanding the Energy Consumption of Battery Electric Buses in Urban Public Transport Systems. Sustainability 2020, 12, 7. [Google Scholar] [CrossRef]
  139. Lian, R.; Peng, J.; Wu, Y.; Tan, H.; Zhang, H. Rule-Interposing Deep Reinforcement Learning Based Energy Management Strategy for Power-Split Hybrid Electric vehicle. Energy 2020, 197, 117297. [Google Scholar] [CrossRef]
  140. Croce, V.; Raveduto, G.; Verber, M.; Ziu, D. Combining Machine Learning Analysis and Incentive-based Genetic Algorithms to Optimise Energy District Renewable Self-Consumption in Demand-Response Programs. Electronics 2020, 9, 945. [Google Scholar] [CrossRef]
  141. Adnane, M.; Nguyen, B.H.; Khoumsi, A.; Trovao, J.P.F. Driving Mode Predictor-Based Real-Time Energy Management for Dual-Source Electric Vehicle. IEEE Trans. Transp. Electrif. 2021, 7, 1173–1185. [Google Scholar] [CrossRef]
  142. Zhang, Q.; Filev, D.; Szwabowski, S.; Langari, R. A Real-Time Fuzzy Learning Algorithm for Markov Chain and Its Application on Prediction of Vehicle Speed. In Proceedings of the IEEE International Conference on Fuzzy Systems, New Orleans, LA, USA, 23–26 June 2019; Institute of Electrical and Electronics Engineers Inc.: Interlaken, Switzerland, 2019. [Google Scholar]
  143. Basso, R.; Kulcsár, B.; Sanchez-Diaz, I. Electric Vehicle Routing Problem with Machine Learning for Energy Prediction. Transp. Res. Part B Methodol. 2021, 145, 24–55. [Google Scholar] [CrossRef]
  144. Sun, R.; Chen, Y.; Dubey, A.; Pugliese, P. Hybrid electric Buses Fuel Consumption Prediction Based on Real-World Driving data. Transp. Res. Part D Transp. Environ. 2021, 91, 102637. [Google Scholar] [CrossRef]
  145. Abdelaty, H.; Al-Obaidi, A.; Mohamed, M.; Farag, H.E.Z. Machine Learning Prediction Models for Battery-Electric Bus Energy Consumption in Transit. Transp. Res. Part D Transp. Environ. 2021, 96, 102868. [Google Scholar] [CrossRef]
  146. Maino, C.; Misul, D.; Di Mauro, A.; Spessa, E. A Deep Neural Network Based Model for the Prediction of Hybrid Electric Vehicles Carbon Dioxide Emissions. Energy AI 2021, 5, 100073. [Google Scholar] [CrossRef]
  147. Ullah, I.; Liu, K.; Yamamoto, T.; Zahid, M.; Jamal, A. Electric Vehicle Energy Consumption Prediction using Stacked Generalization: An Ensemble Learning Approach. Int. J. Green Energy 2021, 18, 896–909. [Google Scholar] [CrossRef]
  148. Cabani, A.; Zhang, P.; Khemmar, R.; Xu, J. Enhancement of Energy Consumption Estimation for Electric Vehicles by using Machine Learning. IAES Int. J. Artif. Intell. (IJ-AI) 2021, 10, 215. [Google Scholar] [CrossRef]
  149. Wang, Y.; Tan, H.; Wu, Y.; Peng, J. Hybrid Electric Vehicle Energy Management with Computer Vision and Deep Reinforcement Learning. IEEE Trans. Ind. Inform. 2021, 17, 3857–3868. [Google Scholar] [CrossRef]
  150. Wegener, M.; Koch, L.; Eisenbarth, M.; Andert, J. Automated Eco-Driving in Urban Scenarios Using Deep Reinforcement Learning. Transp. Res. Part C Emerg. Technol. 2021, 126, 102967. [Google Scholar] [CrossRef]
  151. Elmi, S.; Tan, K.-L. DeepFEC: Energy Consumption Prediction under Real-World Driving Conditions for Smart Cities. In Proceedings of the Web Conference 2021, Ljubljana, Slovenia, 19–23 April 2021; ACM: New York, NY, USA, 2021; pp. 1880–1890. [Google Scholar]
  152. Zhao, L.; Yao, W.; Wang, Y.; Hu, J. Machine Learning-Based Method for Remaining Range Prediction of Electric Vehicles. IEEE Access 2020, 8, 212423–212441. [Google Scholar] [CrossRef]
  153. Adrienn, D.; Istvan, V. Adaptive Driver Model for Velocity Profile Prediction. WSEAS Trans. Circuits Syst. 2018, 17, 138–145. [Google Scholar]
  154. Deb, S.; Goswami, A.K.; Chetri, R.L.; Roy, R. Prediction of Plug-in Electric Vehicle’s State-of-Charge Using Gradient Boosting Method and Random Forest Method. In Proceedings of the 9th IEEE International Conference on Power Electronics, Drives and Energy Systems, PEDES, Jaipur, India, 16–19 December 2020; Institute of Electrical and Electronics Engineers Inc.: Interlaken, Switzerland, 2021. [Google Scholar]
  155. Zhang, J.; Xu, F.; Zhang, Y.; Shen, T. ELM-Based Driver Torque Demand Prediction and Real-Time Optimal Energy Management Strategy for HEVs. Neural Comput. Appl. 2020, 32, 14411–14429. [Google Scholar] [CrossRef]
  156. Zhang, Y.; Liu, H.; Zhang, Z.; Luo, Y.; Guo, Q.; Liao, S. Cloud Computing-Based Real-Time Global Optimization of Battery Aging and Energy Consumption for Plug-in Hybrid Electric Vehicles. J. Power Sources 2020, 479, 229069. [Google Scholar] [CrossRef]
  157. Chen, Z.; Hu, H.; Wu, Y.; Zhang, Y.; Li, G.; Liu, Y. Stochastic Model Predictive Control for Energy Management of Power-Split Plug-in Hybrid Electric Vehicles Based on Reinforcement Learning. Energy 2020, 211, 118931. [Google Scholar] [CrossRef]
  158. Lin, K.C.; Lin, C.N.; Ying, J.J.C. Construction of Analytical Models for Driving Energy Consumption of Electric Buses Through Machine Learning. Appl. Sci. 2020, 10, 6088. [Google Scholar] [CrossRef]
  159. Zhang, L.; Liu, W.; Qi, B. Combined Prediction for Vehicle Speed with Fixed Route. Chin. J. Mech. Eng. 2020, 33, 60. [Google Scholar] [CrossRef]
  160. Hannan, M.A.; Lipu, M.S.H.; Hussain, A.; Ker, P.J.; Mahlia, T.M.I.; Mansor, M.; Ayob, A.; Saad, M.H.; Dong, Z.Y. Toward Enhanced State of Charge Estimation of Lithium-ion Batteries Using Optimized Machine Learning Techniques. Sci. Rep. 2020, 10, 4687. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  161. Lee, H.; Cha, S.W. Reinforcement Learning Based on Equivalent Consumption Minimization Strategy for Optimal Control of Hybrid Electric Vehicles. IEEE Access 2021, 9, 860–871. [Google Scholar] [CrossRef]
  162. Gao, J.; Chang, T.C.; Yao, R. An Adaptive Intelligent System to Minimize Energy Use for a Parallel Hybrid Electric Vehicle. Microsyst. Technol. 2021, 27, 1483–1496. [Google Scholar] [CrossRef]
  163. Lee, H.; Kang, C.; Park, Y.I.; Kim, N.; Cha, S.W. Online Data-Driven Energy Management of a Hybrid Electric Vehicle Using Model-Based Q-Learning. IEEE Access 2020, 8, 84444–84454. [Google Scholar] [CrossRef]
  164. Yu, B.; Yang, Z.Z.; Wang, J. Bus Travel-Time Prediction Based on Bus Speed. Proc. Inst. Civ. Eng.-Transp. 2010, 163, 3–7. [Google Scholar] [CrossRef]
  165. Zheng, Y.; He, F.; Shen, X.; Jiang, X. Energy Control Strategy of Fuel Cell Hybrid Electric Vehicle Based on Working Conditions Identification by Least Square Support Vector Machine. Energies 2020, 13, 426. [Google Scholar] [CrossRef] [Green Version]
  166. Li, W.; Cui, H.; Nemeth, T.; Jansen, J.; Ünlübayir, C.; Wei, Z.; Feng, X.; Han, X.; Ouyang, M.; Dai, H.; et al. Cloud-Based Health-Conscious Energy Management of Hybrid Battery Systems in Electric Vehicles with Deep Reinforcement Learning. Appl. Energy 2021, 293, 116977. [Google Scholar] [CrossRef]
  167. Ma, X.; Miao, R.; Wu, X.; Liu, X. Examining Influential Factors on the Energy Consumption of Electric and Diesel Buses: A Data-Driven Analysis of Large-Scale Public Transit Network in Beijing. Energy 2021, 216, 119196. [Google Scholar] [CrossRef]
  168. Cao, J.; He, H.; Wei, D. Intelligent SOC-Consumption Allocation of Commercial Plug-in Hybrid Electric Vehicles in Variable Scenario. Appl. Energy 2021, 281, 115942. [Google Scholar] [CrossRef]
  169. Liu, Y.; Li, J.; Gao, J.; Lei, Z.; Zhang, Y.; Chen, Z. Prediction of Vehicle Driving Conditions with Incorporation of Stochastic Forecasting and Machine Learning and a Case Study in Energy Management of Plug-in Hybrid Electric Vehicles. Mech. Syst. Signal Process. 2021, 158, 107765. [Google Scholar] [CrossRef]
  170. Li, P.; Zhang, Y.; Zhang, K.; Jiang, M. The Effects of Dynamic Traffic Conditions, Route Characteristics and Environmental Conditions on Trip-Based Electricity Consumption Prediction of Electric Bus. Energy 2021, 218, 119437. [Google Scholar] [CrossRef]
  171. Chandran, V.; Patil, C.K.; Karthick, A.; Ganeshaperumal, D.; Rahim, R.; Ghosh, A. State of charge Estimation of Lithium-ion battery for Electric Vehicles Using Machine Learning Algorithms. World Electr. Veh. J. 2021, 12, 38. [Google Scholar] [CrossRef]
  172. Kang, L.; Sarker, A.; Shen, H. Velocity Optimization of Pure Electric Vehicles with Traffic Dynamics and Driving Safety Considerations. ACM Trans. Internet Things 2021, 2, 1–24. [Google Scholar] [CrossRef]
  173. Xu, B.; Hu, X.; Tang, X.; Lin, X.; Li, H.; Rathod, D.; Filipi, Z. Ensemble Reinforcement Learning-Based Supervisory Control of Hybrid Electric Vehicle for Fuel Economy Improvement. IEEE Trans. Transp. Electrif. 2020, 6, 717–727. [Google Scholar] [CrossRef]
  174. Zhou, Y.F.; Huang, L.J.; Sun, X.X.; Li, L.H.; Lian, J. A Long-term Energy Management Strategy for Fuel Cell Electric Vehicles Using Reinforcement Learning. Fuel Cells 2020, 20, 753–761. [Google Scholar] [CrossRef]
  175. Wieringa, R.; Maiden, N.; Mead, N.; Rolland, C. Requirements Engineering Paper Classification and Evaluation Criteria: A Proposal and a Discussion. Requir. Eng. 2005, 11, 102–107. [Google Scholar] [CrossRef]
  176. Gao, L.; Xiong, L.; Xia, X.; Lu, Y.; Yu, Z.; Khajepour, A. Improved Vehicle Localization Using On-Board Sensors and Vehicle Lateral Velocity. IEEE Sens. J. 2022, 22, 6818–6831. [Google Scholar] [CrossRef]
  177. Xia, X.; Xiong, L.; Huang, Y.; Lu, Y.; Gao, L.; Xu, N.; Yu, Z. Estimation on IMU yaw misalignment by fusing information of automotive onboard sensors. Mech. Syst. Signal Process. 2022, 162, 107993. [Google Scholar] [CrossRef]
  178. García, S.; Luengo, J.; Herrera, F. Tutorial on Practical Tips of the Most Influential Data Preprocessing Algorithms in Data Mining. Knowl. Based Syst. 2016, 98, 1–29. [Google Scholar] [CrossRef]
  179. Wang, L.; Li, Z.; Fan, Q. Compound Positioning Method for Connected Electric Vehicles Based on Multi-Source Data Fusion. Sustainability 2022, 14, 8323. [Google Scholar] [CrossRef]
  180. Liu, J.; Li, Q.; Chen, W.; Yan, Y.; Wang, X. A Fast Fault Diagnosis Method of the PEMFC System Based on Extreme Learning Machine and Dempster–Shafer Evidence Theory. IEEE Trans. Transp. Electrif. 2019, 5, 271–284. [Google Scholar] [CrossRef]
  181. Gao, Y.; Yang, T.; Bozhko, S.; Wheeler, P.; Dragičević, T. Filter Design and Optimization of Electro-Mechanical Actuation Systems Using Search and Surrogate Algorithms for More-Electric Aircraft Applications. IEEE Trans. Transp. Electrif. 2020, 6, 1434–1447. [Google Scholar] [CrossRef]
  182. Xia, X.; Hashemi, E.; Xiong, L.; Khajepour, A. Autonomous Vehicle Kinematics and Dynamics Synthesis for Sideslip Angle Estimation Based on Consensus Kalman Filter. IEEE Trans. Control. Syst. Technol. 2023, 31, 179–192. [Google Scholar] [CrossRef]
  183. Liu, W.; Xia, X.; Xiong, L.; Lu, Y.; Gao, L.; Yu, Z. Automated Vehicle Sideslip Angle Estimation Considering Signal Measurement Characteristic. IEEE Sens. J. 2021, 21, 21675–21687. [Google Scholar] [CrossRef]
  184. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic Minority Over-Sampling Technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
  185. Yan, Y.; Tan, M.; Xu, Y.; Cao, J.; Ng, M.; Min, H.; Wu, Q. Oversampling for Imbalanced Data via Optimal Transport. Proc. AAAI Conf. Artif. Intell. 2019, 33, 5605–5612. [Google Scholar] [CrossRef] [Green Version]
  186. Hu, K.; Wu, J.; Schwanen, T. Differences In Energy Consumption in Electric Vehicles: An Exploratory Real-World Study In Beijing. J. Adv. Transp. 2017, 2017, 4695975. [Google Scholar] [CrossRef] [Green Version]
  187. Li, L.; Dong, W. Experimental Research on Electric Energy Consumption and Control Method of Electric Vehicle. In Proceedings of the 2017 5th International Conference on Mechatronics, Materials, Chemistry and Computer Engineering (ICMMCCE 2017), Chongqing, China, 24–25 June 2017. [Google Scholar]
  188. Baum, M.; Dibbelt, J.; Pajor, T.; Wagner, D. Energy-Optimal Routes for Electric Vehicles. In Proceedings of the GIS: ACM International Symposium on Advances in Geographic Information Systems, Orlando, FL, USA, 5–8 November 2013; ACM Press: New York, NY, USA, 2013; pp. 54–63. [Google Scholar]
  189. Tripathi, B.K. On The Complex Domain Deep Machine Learning for Face Recognition. Appl. Intell. 2017, 47, 382–396. [Google Scholar] [CrossRef]
  190. Cummins, N.; Ren, Z.; Mallol-Ragolta, A.; Schuller, B. Machine Learning in Digital Health, Recent Trends, and Ongoing Challenges. In Artificial Intelligence in Precision Health; Elsevier: Amsterdam, The Netherlands, 2020; pp. 121–148. [Google Scholar]
  191. Rokach, L.; Schclar, A.; Itach, E. Ensemble Methods for Multi-label Classification. Expert Syst. Appl. 2013, 41, 7507–7523. [Google Scholar] [CrossRef] [Green Version]
192. Adnane, M.; El Haj Tirari, M.; El Fkihi, S.; Oulad Haj Thami, R. Prediction Demand for Classified Ads Using Machine Learning: An Experiment Study. In Proceedings of the 2nd International Conference on Networking, Information Systems & Security, Rabat, Morocco, 27–29 March 2019; ACM International Conference Proceeding Series, Part F1481. [Google Scholar]
  193. Liu, W.; Quijano, K.; Crawford, M.M. YOLOv5-Tassel: Detecting Tassels in RGB UAV Imagery with Improved YOLOv5 Based on Transfer Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 8085–8094. [Google Scholar] [CrossRef]
  194. Barlow, H.B. Unsupervised Learning. Neural Comput. 1989, 1, 295–311. [Google Scholar] [CrossRef]
  195. Kaelbling, L.P.; Littman, M.L.; Moore, A.W. Reinforcement Learning: A Survey. J. Artif. Intell. Res. 1996, 4, 237–285. [Google Scholar] [CrossRef] [Green Version]
  196. Lv, Y.; Duan, Y.; Kang, W.; Li, Z.; Wang, F.Y. Traffic Flow Prediction with Big Data: A Deep Learning Approach. IEEE Trans. Intell. Transp. Syst. 2015, 16, 865–873. [Google Scholar] [CrossRef]
Figure 1. Three-step search of papers.
Figure 2. The most investigated datasets over the years 2008 to 2022 (MQ1-a). * Since only the first quarter of 2022 is considered, the 2022 values are multiplied by 4 to make them comparable with those of the other years.
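The annualization mentioned in this caption (and repeated for Figures 3–5 and 7–12) is the simple scaling
\[ \hat{n}_{2022} = 4 \times n_{2022}^{\mathrm{Q1}}, \]
where \( n_{2022}^{\mathrm{Q1}} \) is the count observed in the first quarter of 2022 and \( \hat{n}_{2022} \) is the full-year estimate plotted in the figures (notation introduced here only for clarity).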
Figure 3. Distribution of data preprocessing tasks in the selected studies over the years 2008 to 2022 (MQ1-b). * Since only the first quarter of 2022 is considered, the 2022 values are multiplied by 4 to make them comparable with those of the other years.
Figure 4. Distribution of the contexts used in the selected studies over the years 2008 to 2022 (MQ2). * Since only the first quarter of 2022 is considered, the 2022 values are multiplied by 4 to make them comparable with those of the other years.
Figure 5. Distribution of road types in the selected studies over the years 2008 to 2022 (MQ3). * Since only the first quarter of 2022 is considered, the 2022 values are multiplied by 4 to make them comparable with those of the other years.
Figure 6. Distribution of categories and subcategories of ML and statistical algorithms (MQ4-a).
Figure 7. Categories of ML-based algorithms investigated over the years 2008 to 2022 (MQ4-a). * Since only the first quarter of 2022 is considered, the 2022 values are multiplied by 4 to make them comparable with those of the other years.
Figure 8. Distribution of tools and programming languages in the selected studies over the years 2008 to 2022 (MQ4-b). * Since only the first quarter of 2022 is considered, the 2022 values are multiplied by 4 to make them comparable with those of the other years.
Figure 9. Distribution of the most used evaluation measures in the selected studies over the years 2008 to 2022 (MQ5). * Since only the first quarter of 2022 is considered, the 2022 values are multiplied by 4 to make them comparable with those of the other years.
Figure 10. Distribution of cross-validation methods in the selected studies (MQ5).
Figure 11. Resolution approaches investigated over the years 2008 to 2022 (MQ6). * Since only the first quarter of 2022 is considered, the 2022 values are multiplied by 4 to make them comparable with those of the other years.
Figure 12. Training architectures investigated over the years 2008 to 2022 (MQ6). * Since only the first quarter of 2022 is considered, the 2022 values are multiplied by 4 to make them comparable with those of the other years.
Table 2. Selected studies using each of the four categories of preprocessing tasks (MQ1-b).
Preprocessing Tasks | References | N° of Studies
Data integration[19,20,21,22,24,25,26,27,28,30,31,32,33,34,35,36,38,41,43,44,48,49,50,51,52,53,54,55,56,57,58,62,63,64,65,68,69,70,71,72,78,79,83,84,85,86,87,88,89,92,93,95,96,98,102,103,104,106,107,108,109,110,111,114,115,116,117,119,120,121,122,124,125,126,127,128,129,131,134,135,136,138,140,142,143,144,145,146,147,149,151,152,153,155,158,161,163,166,167,169,170,171,172,173,174]105
Data transformation [20,21,32,36,41,45,46,47,48,49,51,52,62,63,64,68,69,75,88,92,94,95,103,109,116,120,121,133,136,138,141,144,145,146,147,151,152,156,158,160,162,164,166,169,170,173]46
Data reduction[20,38,48,49,51,52,62,83,85,91,94,95,107,115,116,117,120,121,135,138,141,146,147,149,151,152,158,169,170]29
Data cleaning[20,21,41,48,51,52,54,103,116,121,135,138,141,147,149,151,152,158,170]19
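As a concrete (and deliberately tiny) illustration of the four preprocessing task categories in Table 2, the Python sketch below applies integration, cleaning, transformation, and reduction to a hypothetical trip log. The column names, values, and thresholds are invented for this example and are not taken from any surveyed dataset.

```python
# Illustrative sketch only: the four preprocessing task categories of Table 2
# applied to an invented trip log merged with invented weather records.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

trips = pd.DataFrame({"trip_id": [1, 2, 3, 4],
                      "mean_speed_kmh": [42.0, None, 95.0, 60.0],
                      "energy_kwh": [6.1, 4.8, 15.2, 8.0]})
weather = pd.DataFrame({"trip_id": [1, 2, 3, 4],
                        "temp_c": [21.0, 5.0, -3.0, 30.0]})

# 1) Data integration: merge vehicle and weather sources on a common key
df = trips.merge(weather, on="trip_id")

# 2) Data cleaning: remove records with missing or implausible values
df = df.dropna()
df = df[df["mean_speed_kmh"].between(0, 150)]

# 3) Data transformation: standardize the numeric features
scaled = StandardScaler().fit_transform(df[["mean_speed_kmh", "temp_c"]])

# 4) Data reduction: project onto fewer dimensions (trivial here, shown for form)
reduced = PCA(n_components=1).fit_transform(scaled)
print(reduced.shape)
```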
Table 3. Selected studies using each of the five contexts (MQ2).
Contexts | References | N° of Studies
Traffic context[19,20,21,22,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,41,42,43,44,45,46,47,48,49,50,51,52,54,55,56,57,58,60,62,63,64,65,66,67,68,69,70,71,72,73,75,77,78,79,81,82,83,84,85,86,87,88,89,91,92,93,94,95,96,97,98,99,101,103,104,105,106,107,108,109,110,111,114,115,116,117,118,119,120,122,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,153,155,156,157,158,159,160,163,164,165,166,167,168,169,170,171,172]134
Road segment context[19,20,22,23,24,25,27,28,29,30,31,32,33,34,35,36,37,38,39,41,43,44,45,46,47,48,49,53,54,55,57,60,62,65,66,67,69,70,72,73,75,77,78,84,85,86,87,89,90,94,95,96,97,98,99,101,105,107,108,111,114,115,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,143,144,145,147,148,149,151,152,153,155,156,157,158,159,160,161,163,164,165,166,167,168,169,170,171,172,173,174]111
Vehicle context[20,22,23,24,25,27,28,29,30,31,33,34,35,36,37,38,39,41,43,44,45,46,47,48,49,53,54,55,57,60,62,66,67,69,70,72,73,74,75,76,77,78,79,81,82,83,90,91,93,96,99,100,102,105,111,113,114,120,121,122,129,130,134,137,138,139,140,143,144,145,147,149,151,152,154,155,158,160,161,162,165,166,167,168,170,171,173,174]88
Weather context[19,20,23,26,35,41,43,48,51,52,54,64,68,71,75,83,87,99,103,105,111,114,120,125,126,130,137,138,145,147,149,151,152,166,170,171,174]37
Driver profile context[20,26,35,41,54,59,64,86,93,96,99,103,105,109,111,114,122,130,137,143,145,151,152,153,155,170]26
Table 4. Selected studies using each of the three categories of roads (MQ3).
Table 5. Advantages and challenges of ML-based ECEV algorithms (MQ4-a).

IML [9,131,171]
Advantages:
- IMLs are the most used category in the literature and are simple to apply.
- IMLs have lower time complexity than the other categories of algorithms.
Challenges:
- The size of the dataset affects model performance: the predictive ability and effectiveness of an IML decrease as dimensionality increases.
- IMLs perform poorly on imbalanced datasets, where labels occur with very different probabilities, and on multi-label classification tasks.

EML [102,138,152,191,192]
Advantages:
- EMLs can outperform IMLs on massive datasets by combining two or more ML algorithms (e.g., IML algorithms) into a single predictive model.
- EMLs offer higher predictive accuracy for multi-label classification tasks and can reduce variance and bias through bagging and boosting, respectively (see the ensemble sketch after this table).
Challenges:
- EMLs are not always better: a model with high variance overfits, while high bias leads to underfitting.
- Finding a good balance between variance and bias is difficult; a poor choice of parameters can yield lower predictive accuracy than an IML.

DL [44,119,193]
Advantages:
- DL algorithms work effectively with large and complicated datasets.
- DL models adapt to new problems thanks to their flexibility.
- DL handles noisy and complex data, particularly multi-source datasets, since the added spatial dimension of the analysis lets the model capture the physical context and environmental factors that influence energy consumption.
- DL techniques such as feature concatenation let the model integrate and extract relevant information from different sources, improving its ability to capture complex dependencies (see the multi-source sketch after this table).
Challenges:
- DL algorithms require a vast amount of data in order to outperform other ML algorithms.
- Because of the complexity of neural networks, training and parameter tuning are exceedingly costly.

UL [111,133,156,194]
Advantages:
- UL algorithms are less complex than the other categories; no extensive data preparation is needed before training.
- Unlabeled data are often simpler to obtain.
Challenges:
- The predictions generated by UL are not always useful to the system, because no labels are available to confirm their utility, and human intervention is required to evaluate model performance.

RL [65,76,132,195]
Advantages:
- RL algorithms are self-evaluating: they can correct errors made during training, so the chance of the same error recurring is extremely low.
- They cope well with complex situations (see the Q-learning sketch after this table).
Challenges:
- RL algorithms require substantial computational infrastructure to train in a reasonable time.
- Relevant benchmarks still need to be developed to compare models and determine the effectiveness of RL.
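To make the bagging/boosting distinction in the EML row concrete, the following minimal Python sketch contrasts the two on a synthetic trip-level energy-consumption regression task. It is illustrative only: the features, the synthetic target, and the hyperparameters are invented for this example and are not taken from any of the surveyed studies.

```python
# Illustrative sketch only: synthetic data, invented trip-level features.
import numpy as np
from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features: mean speed (km/h), trip distance (km), ambient temperature (°C)
X = rng.uniform(low=[10, 1, -20], high=[110, 80, 35], size=(2000, 3))
# Synthetic energy consumption (kWh): distance-dominated, with speed and temperature effects
y = (0.15 * X[:, 1] + 0.002 * X[:, 0] * X[:, 1]
     + 0.05 * np.abs(X[:, 2] - 20) + rng.normal(0, 0.5, 2000))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Bagging mainly reduces variance; boosting mainly reduces bias.
models = {
    "bagging": BaggingRegressor(n_estimators=50, random_state=0),
    "boosting": GradientBoostingRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: RMSE = {rmse:.2f} kWh")
```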
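The feature-concatenation point in the DL row can likewise be sketched with a small two-branch network that merges a window of vehicle signals with static trip/weather context before regressing energy use. The input shapes, layer sizes, and signal names are arbitrary assumptions for illustration, not an architecture reported in the surveyed papers.

```python
# Illustrative sketch only: arbitrary shapes and layer sizes.
import numpy as np
from tensorflow.keras import Model, layers

vehicle_in = layers.Input(shape=(30, 4), name="vehicle_signals")  # 30 time steps x 4 CAN signals
context_in = layers.Input(shape=(5,), name="trip_context")        # e.g., distance, temperature, slope

x_temporal = layers.LSTM(32)(vehicle_in)                     # temporal branch
x_static = layers.Dense(16, activation="relu")(context_in)   # static branch
merged = layers.Concatenate()([x_temporal, x_static])        # feature concatenation of the two sources
hidden = layers.Dense(32, activation="relu")(merged)
energy_kwh = layers.Dense(1, name="energy_kwh")(hidden)

model = Model(inputs=[vehicle_in, context_in], outputs=energy_kwh)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Random arrays stand in for real multi-source data, purely to show the training call.
x_v = np.random.rand(256, 30, 4).astype("float32")
x_c = np.random.rand(256, 5).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit([x_v, x_c], y, epochs=2, batch_size=32, verbose=0)
```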
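Finally, the RL row's idea of learning an energy management policy by trial and error can be reduced to a toy tabular Q-learning loop. The state/action discretization, reward shaping, and dynamics below are entirely made up to keep the sketch self-contained; the EMS formulations in the surveyed works are far richer.

```python
# Illustrative sketch only: tabular Q-learning on a toy two-source power-split task.
# States = discretized battery SoC; actions = share of the power demand taken by the battery.
import numpy as np

rng = np.random.default_rng(0)
n_soc, actions = 10, np.array([0.0, 0.5, 1.0])   # battery share of the power demand
Q = np.zeros((n_soc, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(soc, a_share, demand):
    """Toy dynamics: battery discharge lowers SoC; the reward penalizes the (assumed)
    higher losses of the secondary source and fully depleting the battery."""
    new_soc = int(np.clip(soc - a_share * demand, 0, n_soc - 1))
    reward = -(1.0 - a_share) * demand * 1.5 - (0.5 if new_soc == 0 else 0.0)
    return new_soc, reward

for episode in range(2000):
    soc = n_soc - 1
    for _ in range(50):                            # 50 power requests per episode
        demand = rng.integers(0, 3)                # discretized power demand
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[soc]))
        new_soc, r = step(soc, actions[a], demand)
        Q[soc, a] += alpha * (r + gamma * Q[new_soc].max() - Q[soc, a])
        soc = new_soc

print("Greedy battery share per SoC bin:", actions[np.argmax(Q, axis=1)])
```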
Table 6. Tools and programming languages used in the selected studies (MQ4-b).
Tools/Prog. Lang. | Ref. | N° of Studies
Matlab[19,22,23,24,25,28,35,36,38,47,57,58,62,63,66,67,68,73,76,79,80,81,82,86,89,97,98,100,101,102,105,108,110,113,115,121,122,123,124,125,126,130,133,134,135,136,141,143,145,150,153,155,156,157,160,161,162,164,165,169,171,172,174]63
Weka[83,103]2
CANalyzer[104,130]2
VISSIM[50,117]2
SUMO[40,56,72,122,128,143,172]7
PSAT[90,91,94]3
Python[26,30,34,35,48,51,52,54,55,59,62,65,71,81,82,84,88,116,122,125,126,132,136,140,141,147,148,152,158]29
C++[99,124]2
C#[131]1
R language[167]1
Table 7. Selected studies using each of the evaluation measures (MQ5).
Evaluation Measures | Ref. | N° of Studies
RMSE[19,20,21,26,32,33,41,43,50,52,56,63,68,70,71,72,84,87,88,89,93,95,96,98,100,101,103,104,107,108,109,110,112,115,116,117,118,119,120,121,122,123,124,127,128,129,136,142,145,146,147,148,151,152,153,154,155,157,159,160,169,170,171,172] 64
MAE[19,26,33,35,41,44,51,52,64,68,71,72,84,86,88,92,95,96,107,109,113,116,119,120,123,125,126,140,147,151,152,153,154,160,166] 35
MAPE[26,43,64,78,83,84,88,93,95,109,110,114,116,143,144,147,151,152,154,160,164,167,170] 23
MSE[26,35,57,59,64,68,80,87,90,91,95,97,113,118,121,130,133,134,138,140,147,153,160,165,171] 25
R[32,38,54,87,99,100,115,154]8
R²[35,41,48,51,52,71,100,145,146,147,169,170]12
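For reference, the evaluation measures in Table 7 are commonly defined as follows (standard definitions; individual studies may differ in details such as percentage scaling), with \(y_i\) the observed values, \(\hat{y}_i\) the predictions, \(\bar{y}\) the mean of the observed values, and \(n\) the number of samples:
\[ \mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i-\hat{y}_i)^2, \qquad \mathrm{RMSE} = \sqrt{\mathrm{MSE}}, \qquad \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\lvert y_i-\hat{y}_i\rvert, \]
\[ \mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n}\left\lvert\frac{y_i-\hat{y}_i}{y_i}\right\rvert, \qquad R^2 = 1-\frac{\sum_{i=1}^{n}(y_i-\hat{y}_i)^2}{\sum_{i=1}^{n}(y_i-\bar{y})^2}, \]
and R denotes the (Pearson) correlation coefficient between the observed and predicted values.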
Table 8. Resolution approaches used in each of the selected studies (MQ6).
Resolution Approaches | References | N° of Studies
ML-based EMS[23,24,25,27,29,30,31,33,35,36,37,39,40,42,45,49,53,55,58,60,61,65,66,67,69,70,73,74,77,88,90,91,98,100,104,105,113,126,129,132,133,135,136,139,141,146,149,156,166,174] 50
ML-based speed profile prediction[19,20,21,26,28,32,44,50,56,63,72,84,89,92,93,95,96,97,101,112,115,116,117,118,119,122,123,124,125,127,128,131,142,153,159] 35
Other ML-based approachesOther studies in [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174]71
Table 9. Selected studies using offline and online training architectures (MQ6).
Training Architecture | References | N° of Studies
Offline[23,32,35,38,45,47,49,51,55,57,59,73,79,80,81,82,83,86,88,91,94,103,107,109,110,116,122,128,131,133,134,136,141,149,156,157,159,161,162,169]40
Online[31,33,74,106,142]5
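The offline/online split of Table 9 and Figure 12 can be illustrated with a minimal Python sketch: an offline model fitted once on the full historical dataset versus an online model updated incrementally as new batches of driving data arrive. The data and model choices are arbitrary and serve only to contrast the two training architectures.

```python
# Illustrative sketch only: synthetic data, arbitrary linear models.
import numpy as np
from sklearn.linear_model import LinearRegression, SGDRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 1000)

# Offline: the whole historical dataset is available before deployment.
offline_model = LinearRegression().fit(X, y)

# Online: the model is updated incrementally as new driving data arrive.
online_model = SGDRegressor(learning_rate="constant", eta0=0.01)
for X_batch, y_batch in zip(np.array_split(X, 20), np.array_split(y, 20)):
    online_model.partial_fit(X_batch, y_batch)   # one mini-batch per "trip"

print(offline_model.coef_, online_model.coef_)
```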
Table 10. Elements most used over the years, based on our answers to the MQs for the 156 selected papers.
Years | MQ1-a (Figure 2) | MQ1-b (Figure 3) | MQ2 (Figure 4) | MQ3 (Figure 5) | MQ4-a (Figure 7) | MQ4-b (Figure 8) | MQ5 (Figure 9) | MQ6-a (Figure 11) | MQ6-b (Figure 12)
2008 | GPS | Data int. | Traffic, Road s. | Urban, Rural, Highw. | Indiv. | C# | - | ML-SPD | Offline
2009 | Driv. c. | Data tr., Data red. | - | - | - | PSAT | - | - | -
2010 | GPS | - | Traffic | Urban | - | Matlab | MAPE | - | -
2011 | PeMS | Data int. | - | Urban, Highw. | - | PSAT | RMSE | ML-SPD | -
2012 | Driv. c. | Data red. | - | Highw. | - | - | MSE | ML-EMS, ML-SPD | Offline
2013 | - | Data tr., Data int. | Traffic, Road s., Vehicle | Urban | - | C++, Weka | MAE, MSE | - | -
2014 | GPS, PeMS | Data int. | Traffic, Driver pr. | - | - | Matlab, C++, Weka | MAPE | ML-SPD | Offline
2015 | GPS | - | Traffic | Highw. | - | - | R, RMSE | ML-EMS, ML-SPD | -
2016 | Driv. c., Bigdata | - | Traffic, Driver pr. | Urban | - | Matlab | RMSE, MAE, MAPE | - | -
2017 | Driv. c. | - | Road s. | - | - | - | RMSE | ML-SPD | -
2018 | Driv. c., GPS | - | Traffic | - | - | - | - | ML-EMS | -
2019 | Driv. c. | - | - | - | - | - | - | ML-SPD | -
2020 | - | - | Road s. | - | - | - | - | ML-EMS | -
2021 | - | - | Traffic | - | - | - | - | - | -
2022 * | - | - | - | - | Reinf. | - | MSE | - | -
* For 2022, only the first three months of the year are considered.