Article

A High-Precision Car-Following Model with Automatic Parameter Optimization and Cross-Dataset Adaptability

1
School of Mechanical Engineering, Guangxi University, Nanning 530004, China
2
Guangxi Key Laboratory for International Join for China-ASEAN Comprehensive Transportation, Nanning University, Nanning 541699, China
*
Authors to whom correspondence should be addressed.
World Electr. Veh. J. 2023, 14(12), 341; https://doi.org/10.3390/wevj14120341
Submission received: 8 November 2023 / Revised: 1 December 2023 / Accepted: 5 December 2023 / Published: 7 December 2023

Abstract

Although network hyperparameters have a significant impact on deep learning car-following models, they have received relatively little research attention. Therefore, this study proposes a car-following model that combines particle swarm optimization (PSO) and gated recurrent unit (GRU) networks. The PSO-GRU car-following model is trained and tested using data from a natural driving database. The results demonstrate that compared to the intelligent driver model (IDM) and the GRU car-following model, the PSO-GRU car-following model reduces the mean squared error (MSE) of the following-vehicle speed simulation by 88.36% and 72.92%, respectively, and reduces the mean absolute percentage error (MAPE) by 64.81% and 50.14%, respectively, indicating higher prediction accuracy. Dataset 3 from the drone video trajectory database of Southeast University and NGSIM's I-80 dataset are used to study the car-following model's cross-dataset adaptability, that is, to verify its transferability. Compared to the GRU car-following model, the PSO-GRU car-following model reduces the standard deviation of the test results by 60.64% and 32.89%, respectively, highlighting its more robust prediction stability and better transferability. Verifying a car-following model's ability to reproduce the stop-and-go phenomenon evaluates its transferability more comprehensively. In platoon simulation tests, the PSO-GRU car-following model outperforms the GRU car-following model in reproducing the stop-and-go phenomenon, demonstrating its superior transferability. Therefore, the proposed PSO-GRU car-following model has higher prediction accuracy and cross-dataset adaptability than other car-following models.

1. Introduction

Car-following behavior is a fundamental study in microscopic traffic flow theory and is a crucial component of autonomous driving technology [1]. Car-following models with high predictive accuracy and excellent transferability play a critical role in studying traffic flow characteristics, enhancing driving safety, and improving traffic efficiency. Numerous scholars and researchers have developed various car-following models to more accurately describe drivers’ car-following behavior in traffic flow [2].
Existing car-following models can be categorized into mathematical models and data-driven models [3]. Mathematical car-following models are rigorously defined, with most parameters holding precise physical meanings. However, integrating diversified factors into these models increases their complexity, making parameter calibration challenging and introducing significant errors into the results. This complexity makes it difficult to accurately describe driving behavior [4]. With advancements in data collection technology and the development of machine learning, data-driven car-following models have been developed [5]. These models exhibit strong capabilities in fitting nonlinear data, automatically capturing the features of trajectory data, and exploring the underlying patterns of car-following behavior, thus achieving model optimization and improvement. Similarly, as an essential branch of machine learning, deep learning has been applied to study car-following behavior. Wei and Liu used the self-learning support vector regression method to build a car-following model. They found the 'neutral line' phenomenon caused by the intensity difference between acceleration and deceleration [6]. Zhou et al. proposed a car-following model based on recurrent neural networks (RNNs) and demonstrated its superiority over the intelligent driver model in predicting traffic oscillation [7]. Two variants of the RNN, namely long short-term memory (LSTM) networks and GRUs, have also been applied to car-following behavior modeling. Wang et al. proposed a car-following model based on GRUs, which integrates the driver memory effect and more input variables, and its simulation accuracy is higher than that of the RNN and IDM models [8]. Huang et al. proposed a car-following model based on LSTM and studied the impact of asymmetric driving behavior on traffic flow. The results showed that the model can effectively simulate traffic flow characteristics and perform better than other models [9].
Additionally, Wang et al. studied the hysteresis phenomenon of stop-and-go waves. The findings reveal that only the car-following model with a long memory, utilizing LSTM, can accurately simulate the hysteresis phenomenon, highlighting the applicability of LSTM in car-following behavior modeling [10]. Wu et al. used deep learning to mimic human drivers' memory, attention, and prediction mechanisms, establishing a car-following model based on MAP. The results demonstrate that the model can generate a time–space diagram similar to actual traffic [11]. Lin et al. adopted an LSTM model structure with scheduled sampling and one-way interconnection, which reduced the temporal and spatial error propagation in the car-following model [12]. Ma et al. proposed a car-following model based on multi-sequence pairs and multi-sequences, which realized multi-step prediction. This model is superior to IDM and LSTM models in reproducing trajectories and capturing heterogeneous driving behaviors. It can produce different hysteresis levels and improve simulation accuracy and the stability of traffic flow [13]. Mo et al. proposed a car-following model based on a neural network that combines the strengths of physical and deep learning models. The model proves capable of predicting acceleration, accurately determining model parameters, and exhibiting excellent performance [14]. Qu et al. established a car-following model based on CNN-BiLSTM-Attention for trajectory prediction, showing high prediction accuracy [15]. Naing et al. introduced a new JTPG-LSTM neural network capable of capturing precise and safe driving behaviors, verified in a dynamic data-driven simulation system [16]. Lu et al. established a car-following model using an improved sequence-to-sequence deep learning framework, considering the kinematic information of multiple leading vehicles. This enhancement improves the model's ability to learn heterogeneous driving behaviors and reshape traffic oscillation [17]. Qin et al. proposed a new car-following model combining CNN and LSTM, which can predict the speed of the following vehicle more accurately [18].
Deep learning-based car-following models have achieved significant progress. However, these methods are mainly limited by their reliance on experience, extensive experimentation, and the manual setting of network hyperparameters. Such parameter settings are highly arbitrary and may degrade the model's predictive performance. In addition, research on the transferability of car-following models needs to be improved. Transferability means that a model trained on one dataset can perform well on another without retraining [19]. Road types, traffic rules, and cultural backgrounds can influence driving behavior, leading to varied car-following behaviors in different spatial locations or countries. Directly applying a model trained on trajectory data from one road segment to trajectory data from other segments or another country may introduce significant errors, limiting the model's practical application. Therefore, it is essential to verify whether the model can perform well on entirely new road segments. This verification process is crucial for providing a reliable theoretical reference for the decision-making of intelligent driving vehicles and for reducing the costs associated with model development.
Therefore, this paper proposes a car-following model utilizing the particle swarm optimization (PSO) algorithm to optimize the hyperparameters of the GRU. The objective is to further enhance the GRU’s understanding of human driving behavior in traffic flow, enabling its adaptation to different datasets and ensuring high spatial and regional transferability. The main contributions of this paper are as follows: firstly, the PSO-GRU car-following model in this paper combines the PSO algorithm with the GRU and makes full use of the advantages of both. By optimizing the number of hidden layer neurons, the dropout rate, and the batch size of the GRU, the shortcomings of manually determining the hyperparameters of the GRU model are overcome, and the workload is reduced. Secondly, the experimental results show that the PSO-GRU car-following model has a higher prediction accuracy than the IDM and GRU car-following models, and the mean squared error (MSE) for following vehicle speed simulation is reduced by 88.36% and 72.92%, respectively, and the mean absolute percentage error (MAPE) is reduced by 64.81% and 50.14%, respectively. Finally, the spatial and regional transferability of the GRU car-following model and the PSO-GRU car-following model are compared and verified using dataset 3 of the UAV video trajectory database of Southeast University and the I-80 dataset of NGSIM. The experimental results show that the transferability of the PSO-GRU car-following model is better than that of the GRU car-following model.
The rest of this paper is as follows: Section 2 discusses the data preparation and the principles of the PSO-GRU car-following model; Section 3 presents the results and discussion; Section 4 summarizes the findings of this study.

2. Data and Methodology

2.1. Data Preparation

The experimental data in this paper primarily originate from dataset 1 and dataset 3 of the UAV video trajectory database of Southeast University, along with the I-80 dataset of NGSIM. Dataset 1 and dataset 3 were acquired by the UTE (Ubiquitous Traffic Eyes) team using a 4K high-definition camera drone to capture aerial footage of traffic flows on various roads in Nanjing, China. The team used algorithms to extract high-resolution vehicle trajectory data from the videos [20]. The datasets contain information such as the position coordinates, speed, acceleration, spacing, headway, and lane of each vehicle. NGSIM's I-80 dataset contains traffic congestion data for several hundred meters of the I-80 freeway in the United States, including trajectory data developed by Cambridge Systematics, and the dataset is continuously maintained and updated [21]. Data were collected from vehicles traveling in lanes 2 to 5 between 5:00 p.m. and 5:15 p.m. Following the selection criteria below, 400 car-following periods of observational data were filtered from the three datasets.
(1) The leading vehicle and the following vehicle are regarded as a unit.
(2) Each car-following unit stays in the same lane during the car-following process, with no overtaking or lane-changing behavior.
(3) Each car-following period lasts at least 15 s, and the longitudinal gap does not exceed 120 m.
Due to noise in the obtained data, the Savitzky–Golay filtering method was applied to smooth the trajectory data. The data comparison is illustrated in Figure 1.
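As an illustration, smoothing a noisy speed trace with a Savitzky–Golay filter might look like the following sketch. The window length, polynomial order, sampling rate, and synthetic signal are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.signal import savgol_filter

# Hypothetical noisy speed trace sampled at 10 Hz (0.1 s intervals).
t = np.arange(0, 15, 0.1)
true_speed = 10 + 2 * np.sin(0.5 * t)
rng = np.random.default_rng(0)
noisy_speed = true_speed + rng.normal(0, 0.3, t.size)

# Savitzky-Golay filter: 21-sample window (2.1 s), 3rd-order polynomial.
# These parameters are assumptions for illustration only.
smooth_speed = savgol_filter(noisy_speed, window_length=21, polyorder=3)

# The filter fits a local polynomial, so it reduces noise while
# preserving the shape of the underlying speed profile.
residual_noisy = np.std(noisy_speed - true_speed)
residual_smooth = np.std(smooth_speed - true_speed)
```

In practice, the same filter would be applied to each vehicle's position, speed, and acceleration series before segmenting the car-following periods.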

2.2. Particle Swarm Optimization Algorithm

Based on their research on the regularities of bird flock behavior, the scientists Kennedy and Eberhart modified Heppner's avian model and jointly proposed the PSO algorithm [22]. The PSO algorithm has a global search ability and can better solve non-convex and high-dimensional problems; it is therefore applied here to the hyperparameter optimization of neural networks. It treats the search space of the problem to be solved as the movement space of a flock of birds, where each bird is considered a potential solution (called a particle) to the optimization problem. The fundamental idea behind the PSO algorithm is to find the optimal solution through collaboration and information sharing among individuals in the swarm. Each particle continuously explores the solution space with a certain velocity, and other particles in the swarm influence its search behavior to varying degrees. At the same time, each particle keeps track of the best position it has experienced so far. Under the interaction of the two, each particle is guided by its personal best position ($p_{ij}$) and the swarm's global best position ($p_{gi}$). In each iteration, every particle updates its position and velocity in the solution space by tracking these two best positions. The following formulas govern the velocity and position updates:
$V_{ij}(k+1) = w V_{ij}(k) + c_1 r_1(k)\left[ p_{ij}(k) - X_{ij}(k) \right] + c_2 r_2(k)\left[ p_{gi}(k) - X_{ij}(k) \right]$ (1)
$X_{ij}(k+1) = X_{ij}(k) + V_{ij}(k+1)$ (2)
where $c_1$ and $c_2$ are learning factors adjusting the maximum step size towards the personal best and the global best; $r_1$ and $r_2$ are random numbers in the range [0, 1]; $w$ is the inertia weight; and $k$ is the current generation of evolution.
To balance the global and local search capabilities and improve the convergence performance of the PSO algorithm, the linearly decreasing inertia weight strategy proposed by Shi Yuhui et al. is adopted [23]. The expression is as follows:
$w = w_{\max} - \dfrac{w_{\max} - w_{\min}}{k_{\max}} \, k$ (3)
where $k_{\max}$ is the maximum number of evolutionary generations, and $w_{\max}$ and $w_{\min}$ are the maximum and minimum inertia weights, respectively.
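A minimal sketch of the velocity and position updates in Formulas (1)–(3), assuming particles stored as NumPy arrays of shape (particles, dimensions); the toy objective and all settings below are illustrative:

```python
import numpy as np

def pso_step(X, V, p_best, g_best, k, k_max,
             w_max=0.9, w_min=0.4, c1=1.5, c2=1.5, rng=None):
    """One PSO iteration following Formulas (1)-(3)."""
    rng = rng if rng is not None else np.random.default_rng()
    w = w_max - (w_max - w_min) / k_max * k              # Formula (3)
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V_new = (w * V + c1 * r1 * (p_best - X)
             + c2 * r2 * (g_best - X))                   # Formula (1)
    return X + V_new, V_new                              # Formula (2)

# Toy usage: minimize f(x) = x1^2 + x2^2 with 5 particles, 50 iterations.
rng = np.random.default_rng(1)
X = rng.uniform(-5, 5, (5, 2))
V = np.zeros_like(X)
p_best, p_val = X.copy(), (X ** 2).sum(axis=1)
g_best = p_best[p_val.argmin()].copy()
for k in range(50):
    X, V = pso_step(X, V, p_best, g_best, k, 50, rng=rng)
    val = (X ** 2).sum(axis=1)
    improved = val < p_val
    p_best[improved], p_val[improved] = X[improved], val[improved]
    g_best = p_best[p_val.argmin()].copy()
```

The decreasing inertia weight makes early iterations explore broadly and later iterations refine around the swarm's best position.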

2.3. Gated Recurrent Unit Neural Network

The GRU is a particular type of RNN and a practical variant of LSTM. GRUs address the long-term dependency, vanishing gradient, and exploding gradient problems that arise during backpropagation in RNNs [24]. GRUs merge the forget gate and input gate from LSTM into a single update gate ($z_t$), remove the output gate found in LSTM, and introduce a reset gate ($r_t$). Additionally, the GRU eliminates the need for a separate memory cell for linear self-updates; instead, it directly uses gate controls for linear self-updates within the hidden unit. Compared to LSTM, the GRU structure is more straightforward, featuring fewer network parameters. This simplicity facilitates quicker convergence and accelerated training, thereby significantly enhancing training efficiency. The internal structure of the GRU is illustrated in Figure 2.
In Figure 2, $x_t$ represents the input information at the current time; $h_{t-1}$ and $h_t$ are the hidden state of the previous time step and the hidden state transmitted to the next time step, respectively. The hidden state acts as the memory of the neural network, recording the transmission of historical information. $r_t$ and $z_t$ are the GRU's reset gate and update gate, respectively. The sigmoid function $\sigma$ maps data to values in the interval [0, 1], and the tanh function maps data to values in [−1, 1].
Both the update gate and the reset gate receive the previous moment’s hidden state and the current moment’s input information as input. The reset gate combines the memory of the last moment with the new input information, while the update gate controls the degree to which the knowledge of the previous hidden state is passed to the current hidden state. Their formulas are as follows [25]:
$r_t = \sigma\left( W_r \cdot [h_{t-1}, x_t] \right)$ (4)
$z_t = \sigma\left( W_z \cdot [h_{t-1}, x_t] \right)$ (5)
The candidate hidden state $\tilde{h}_t$ and the hidden state $h_t$ are updated as follows:
$\tilde{h}_t = \tanh\left( W_h \cdot [r_t \odot h_{t-1}, x_t] \right)$ (6)
$h_t = z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t$ (7)
When the value of $r_t$ is larger, more of the previous information is retained and combined with the new input. Conversely, when the value is lower, the previous information is less relevant to the prediction, and more of the previous hidden state should be discarded; when $r_t$ is close to 0, the content of the previous moment is ignored entirely. The closer the value of $z_t$ is to 1, the more of the previous hidden state is retained, which helps capture the long-term dependencies of time series data.
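The gate computations in Formulas (4)–(7) can be sketched as a single NumPy forward step. Bias terms are omitted for brevity, and the dimensions and random weights are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, W_r, W_z, W_h):
    """One GRU step following Formulas (4)-(7); biases omitted."""
    concat = np.concatenate([h_prev, x_t])
    r_t = sigmoid(W_r @ concat)                  # reset gate, Formula (4)
    z_t = sigmoid(W_z @ concat)                  # update gate, Formula (5)
    h_cand = np.tanh(W_h @ np.concatenate([r_t * h_prev, x_t]))  # (6)
    return z_t * h_prev + (1 - z_t) * h_cand     # Formula (7)

# Toy dimensions: 3 inputs (gap, relative speed, own speed), 4 hidden units.
rng = np.random.default_rng(0)
n_in, n_h = 3, 4
W_r, W_z, W_h = (rng.normal(0, 0.1, (n_h, n_h + n_in)) for _ in range(3))
h = np.zeros(n_h)
for x_t in rng.normal(0, 1, (10, n_in)):   # run a 10-step input sequence
    h = gru_cell(x_t, h, W_r, W_z, W_h)
```

Because $h_t$ is a convex combination of the previous hidden state and a tanh-bounded candidate, the hidden state stays in [−1, 1] throughout the sequence.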

2.4. Input and Output Variables

In real-life car following, the driver of the following vehicle is often affected by the stimulus of the leading vehicle over the preceding period and adjusts his or her speed according to the leading vehicle's state. A reasonable choice of the model's input and output variables can effectively improve its prediction accuracy. In existing data-driven car-following models, the most commonly used input variables are the gap between the leading and following vehicles ($\Delta x$), the relative speed ($\Delta v$), and the speed of the following vehicle ($v_f$) [13]. Therefore, the input variables of the model are set as $\Delta x$, $\Delta v$, and $v_f$, and the output variable is the following vehicle's speed $v_f(t + (P - N)\Delta T)$ at time $t + (P - N)\Delta T$, as expressed in Formula (8):
$v_f(t + (P - N)\Delta T) = f\big( v_f(t), \Delta v_f(t), \Delta x_f(t),\ v_f(t - \Delta T), \Delta v_f(t - \Delta T), \Delta x_f(t - \Delta T),\ \ldots,\ v_f(t - N\Delta T), \Delta v_f(t - N\Delta T), \Delta x_f(t - N\Delta T) \big)$ (8)
where P denotes the prediction time step, and N represents the memory time step.
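One possible way to assemble the input windows and targets of Formula (8) from trajectory arrays; the array layout and helper name are assumptions for illustration:

```python
import numpy as np

def make_samples(v_f, dv, dx, N, P, dT=1):
    """Build (input, target) pairs per Formula (8).

    v_f, dv, dx : equal-length 1-D arrays, one value per time step of dT.
    N : memory time steps; P : prediction time steps (P > N assumed).
    Each input stacks [v_f, dv, dx] over times t - N*dT ... t; the
    target is v_f at t + (P - N)*dT.
    """
    X, y = [], []
    T = len(v_f)
    for t in range(N * dT, T - (P - N) * dT):
        window = [(v_f[t - i * dT], dv[t - i * dT], dx[t - i * dT])
                  for i in range(N + 1)]        # i = 0 is the current step
        X.append(np.array(window))
        y.append(v_f[t + (P - N) * dT])
    return np.array(X), np.array(y)

# Toy trajectory: 20 steps of linearly increasing speed, constant gap rate.
v = np.arange(20.0)
X, y = make_samples(v, np.zeros(20), v + 1.0, N=3, P=4)
```

With `N=3` and `P=4`, each sample holds four time steps of the three input variables, and the target is the speed one step ahead of the window's end.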

2.5. PSO-GRU Car-Following Model

This fusion model uses particle swarm optimization to improve the predictive ability of the GRU car-following model. The structure of the GRU car-following model is shown in Figure 3, including the input layer, the hidden layer (GRU), the fully connected layer, and the output layer. Different hyperparameters have a significant influence on the performance of the GRU car-following model. These parameters are generally selected based on experience, as finding the globally optimal parameter set is challenging and lacks sufficient empirical experimental support [26,27,28]. Due to computational cost, it is often infeasible to perform a systematic grid search for the best parameters. Therefore, we introduce the PSO-GRU car-following model to determine the most suitable model structure for the data. Based on previous research [26,29], three hyperparameters (the number of neurons in the hidden layer, the dropout rate, and the batch size) were selected as the optimization objects of PSO. According to the characteristics of the screened car-following trajectory data, the search range of the number of hidden layer neurons is set to [0, 256], the dropout rate to [0, 0.2], and the batch size to [4, 32]. The fitness value of a particle is the mean absolute percentage error between the predicted and observed speeds of the following vehicle, computed with the following formula.
$fit = \dfrac{1}{M} \sum_{i=1}^{M} \dfrac{\left| v_f^{sim}(i) - v_f^{obs}(i) \right|}{v_f^{obs}(i)} \times 100\%$ (9)
where $M$ represents the number of samples, and $v_f^{sim}(i)$ and $v_f^{obs}(i)$ are the $i$-th simulated and observed values of the following vehicle's speed, respectively.
The flow chart of the PSO-GRU car-following model is shown in Figure 4. The specific steps of the PSO-GRU car-following model are as follows:
Step 1: To improve the convergence speed and accuracy of the model, the model’s input variables are normalized. Then, the normalized data is divided into the training set and the test set.
Step 2: Hyperparameter optimization of the GRU car-following model.
(1) Initialize the particle swarm. We set the population size to 3 and maximum iterations to 20. The particle dimension is set to 3, with learning factors c 1 and c 2 set to 1.5. Inertia weights w max and w min are set to 0.9 and 0.4, respectively. Then, the population individuals are initialized to generate random particles.
(2) The random particles generated by (1) are assigned to the GRU car-following model, and the training set and the PSO algorithm are used to train the GRU model.
(3) Initialize individual optimal position p and the optimal value, as well as global optimal position g and the optimal value. Calculate the fitness value of each particle using Formula (9). The individual and global extremum are compared and updated for each particle.
(4) According to Formulas (1) and (2), the velocities $V_{ij}(k)$ and positions $X_{ij}(k)$ of the particles are updated, and the boundary conditions are handled.
(5) Determine whether the termination condition is met. If it is, the optimization result $X_{best}$ is output; otherwise, return to sub-step (3) of Step 2.
Step 3: Use the optimal hyperparameters output in Step 2 to build the GRU car-following model and perform model compilation training.
Step 4: Input the test set into the trained model to predict the following vehicle's speed, update the following vehicle's state according to the predicted speed, and calculate the MSE and the MAPE.
For the model comparison test, the sensitivity experiment was conducted with different parameter settings in the GRU car-following model, leading to the determination of the final parameter settings. The configurations for both the PSO-GRU car-following model and the GRU car-following model can be found in Table 1.
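The optimization loop of Steps 1–4 can be sketched as below. Training a real GRU for every fitness evaluation is expensive, so the sketch substitutes a toy surrogate with a known minimum for the "train GRU, return MAPE" step; the surrogate, its target values, and the slightly raised lower bound on neurons are all assumptions for illustration:

```python
import numpy as np

# Search bounds from the paper: hidden neurons [0, 256] (lower bound
# raised to 1 here), dropout [0, 0.2], batch size [4, 32].
LOW = np.array([1.0, 0.0, 4.0])
HIGH = np.array([256.0, 0.2, 32.0])

def fitness(params):
    """Stand-in for 'train a GRU with these hyperparameters and return
    its MAPE (Formula (9))'. This toy surrogate simply has a known
    minimum so the loop can be demonstrated end to end."""
    target = np.array([128.0, 0.1, 16.0])        # hypothetical optimum
    return float(np.sum(((params - target) / (HIGH - LOW)) ** 2))

def pso_optimize(n_particles=3, k_max=20, c1=1.5, c2=1.5,
                 w_max=0.9, w_min=0.4, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(LOW, HIGH, (n_particles, 3))  # Step 2(1): init swarm
    V = np.zeros_like(X)
    p_best = X.copy()
    p_val = np.array([fitness(x) for x in X])     # Step 2(2)-(3)
    g_best = p_best[p_val.argmin()].copy()
    for k in range(k_max):
        w = w_max - (w_max - w_min) / k_max * k   # Formula (3)
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (p_best - X) + c2 * r2 * (g_best - X)
        X = np.clip(X + V, LOW, HIGH)             # Step 2(4): boundaries
        val = np.array([fitness(x) for x in X])
        improved = val < p_val
        p_best[improved], p_val[improved] = X[improved], val[improved]
        g_best = p_best[p_val.argmin()].copy()    # Step 2(5) loop
    return g_best

best = pso_optimize()  # Step 3 would rebuild the GRU with these values
```

In the actual model, `fitness` would normalize the data, build and train a GRU with the candidate hyperparameters, and evaluate Formula (9) on a validation split.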

2.6. Metrics of the Model Performance

This article evaluates the model’s predictive performance using the MSE and the MAPE, with the calculation formulas for both indicators as follows:
$F_{MSE}(y_f) = \dfrac{1}{n} \sum_{j=1}^{n} \left[ y_f^{sim}(j) - y_f^{obs}(j) \right]^2$ (10)
$F_{MAPE}(y_f) = \dfrac{1}{n} \sum_{j=1}^{n} \dfrac{\left| y_f^{sim}(j) - y_f^{obs}(j) \right|}{y_f^{obs}(j)} \times 100\%$ (11)
where $n$ represents the number of samples, and $y_f^{sim}$ and $y_f^{obs}$ represent the predicted and observed values for the following vehicle, respectively.
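The two metrics in Formulas (10) and (11) translate directly into code (a sketch; observed speeds are assumed nonzero for the MAPE):

```python
import numpy as np

def mse(y_sim, y_obs):
    """Mean squared error, Formula (10)."""
    y_sim, y_obs = np.asarray(y_sim, float), np.asarray(y_obs, float)
    return float(np.mean((y_sim - y_obs) ** 2))

def mape(y_sim, y_obs):
    """Mean absolute percentage error in %, Formula (11).
    Observed values must be nonzero; stopped vehicles (v = 0) would
    need special handling in practice."""
    y_sim, y_obs = np.asarray(y_sim, float), np.asarray(y_obs, float)
    return float(np.mean(np.abs(y_sim - y_obs) / np.abs(y_obs)) * 100)
```

Both are computed over all predicted speed samples of the test set when comparing the IDM, GRU, and PSO-GRU models.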

3. Results and Discussion

3.1. Comparison of Prediction Accuracy

The PSO-GRU car-following model proposed in this paper is compared to the IDM and GRU car-following models to verify its advantages. One hundred seventy-five car-following periods (70%) extracted from the natural driving database were used for model training, and the remaining seventy-five car-following periods were used to test the trained model. The hyperparameters obtained by the PSO-GRU optimization are shown in Table 2. Simultaneously, the simulation parameters of the IDM model are calibrated using the 75 car-following segments to minimize the MSE of the following vehicle's speed. Because different traffic conditions and driver characteristics yield different IDM calibration results, the calibrated parameters are given as intervals rather than fixed values, as shown in Table 3. Table 4 shows the statistical results of the MSE and the MAPE between the simulated and observed values of the following vehicle's speed for the three models. The experimental results show that for dataset 1, compared with the IDM model and the GRU model, the MSE of the PSO-GRU car-following model is reduced by 88.36% and 72.92%, respectively, and the MAPE is reduced by 64.81% and 50.14%, respectively. For dataset 3, the MSE decreased by 91.60% and 50.23%, respectively, and the MAPE decreased by 37.43% and 31.48%, respectively. Therefore, the PSO-GRU model performs best; its error distribution range and mean value are the smallest and significantly lower than those of the other two models. To show the simulation results of the different models more intuitively, a car-following period is randomly selected from the test data of dataset 1, with leading vehicle 719 and following vehicle 727. The simulation results are shown in Figure 5. The diagram shows that all three models can keep up with the changing trend of the observed data.
However, the simulation results of the PSO-GRU car-following model are closer to the observed data, accurately capturing the movement of speed changes. This highlights the importance of neural network hyperparameters for the GRU car-following model to understand human driving behavior. In summary, it shows that the hyperparameters obtained by PSO adaptive optimization can make the GRU network more effectively learn the behavior characteristics of human drivers and improve the model’s prediction accuracy. Therefore, the PSO-GRU car-following model is superior to the IDM and single GRU car-following models.
The higher the model's prediction accuracy, the stronger its ability to reproduce traffic characteristics. In car following, the driver must not only follow the leading vehicle but also avoid collision. These two processes are interrelated and interact with each other, so the car-following process forms a system with slight oscillations as the driver adjusts the gap and the speed difference. The ability to reproduce traffic characteristics is crucial for verifying model performance, so it is also necessary to ascertain the proposed model's ability to reproduce the oscillation phenomenon [7]. To evaluate the prediction accuracy of the three models more comprehensively, Figure 6 shows the oscillating spirals of the simulation results for leading vehicle 719 and following vehicle 727 of the natural driving database. The PSO-GRU model produces oscillations closer to the observed data.

3.2. Transferability Verification of the Car-Following Model

In current research on car-following behavior modeling, most studies have not verified the model's transferability [30]. The parameter calibration of mathematical car-following models is complicated and cumbersome, and these models cannot adapt to complex traffic conditions; furthermore, they fall short of fully capturing complex driver behavior [4]. Therefore, the IDM, as a classic representative of mathematical car-following models, lacks transferability, and this study only verifies the transferability of the GRU car-following model and the PSO-GRU car-following model. Three datasets collected from two freeways in Nanjing, China, and the I-80 freeway in California, United States, are used to verify the transferability of the car-following model. The relevant information on the three datasets is shown in Table 5. Because dataset 1 has a unique traffic condition, i.e., a transition stage from non-congestion to congestion, and its average speed is at the middle level of the three datasets, the model trained on dataset 1 is well suited for studying transferability and for representing the car-following model's performance.
Dataset 3 and NGSIM's I-80 dataset are used to test the trained model to verify spatial and regional transferability. One hundred car-following periods were randomly selected from dataset 3, and one hundred fifty car-following periods from NGSIM's I-80 dataset were used for testing. Figure 7 shows the error distributions of the PSO-GRU and GRU models' velocity prediction results on the different test datasets. The statistical results of the error distribution are shown in Table 6. The first column of Table 6 represents the error interval, and the second to fourth columns represent the proportion of samples in each error interval relative to the total test samples. The results show that the error values of the two car-following models are mainly concentrated near 0. According to Figure 7a, on dataset 3, compared with the GRU car-following model, the test error distribution of the PSO-GRU car-following model is narrow, with a range of about −0.4 to 0.4, and most of the errors are distributed between −0.15 and 0.15. According to the test results, the standard deviations of the PSO-GRU car-following model and the GRU car-following model are 0.098 and 0.249, respectively; the former is 60.64% lower than the latter. The results show that the PSO-GRU car-following model has a higher prediction accuracy and a more stable prediction performance in the test of spatial transferability. As displayed in Figure 7b, the error distribution range of both car-following models on NGSIM's I-80 dataset is wider, with most errors distributed between −0.5 and 0.5. However, the frequency of errors near 0 for the PSO-GRU car-following model is significantly higher than for the GRU car-following model.
According to the test results, the standard deviations of the PSO-GRU car-following model and the GRU car-following model on NGSIM’s I-80 dataset are calculated to be 0.459 and 0.684, respectively, and the former is 32.89% lower than the latter. The results show that the performance of the two car-following models is affected when using the vehicle trajectory data of highways in another country for regional transferability tests. However, compared with the GRU car-following model, the PSO-GRU car-following model still has a higher prediction accuracy and a more stable prediction performance in the trial. In summary, the PSO-GRU car-following model shows a more stable prediction performance in different datasets, indicating that it can better reproduce the driver’s expected car-following behavior, and it is superior to the GRU car-following model in terms of transferability.
When the traffic flow density is high, the vehicle spacing is small, and the speed of any vehicle in the platoon is restricted by the vehicle ahead. The driver has to accelerate and decelerate frequently to ensure driving safety, forming the stop-and-go phenomenon. This phenomenon often occurs on congested highways. Therefore, a comprehensive evaluation of transferability requires verifying the ability of car-following models to reproduce stop-and-go waves. To further explore the ability of the GRU car-following model and the PSO-GRU car-following model to reproduce the stop-and-go phenomenon in real traffic, this paper extracts a platoon exhibiting stop-and-go waves from dataset 3 and NGSIM's I-80 dataset for simulation. Figure 8 and Figure 9 compare the actual and simulated trajectories. In the platoon simulation of the stop-and-go phenomenon, the trajectory error of the GRU car-following model is greater than that of the PSO-GRU car-following model, as shown in Figure 8 and Figure 9. The results show that, compared with the GRU car-following model, the PSO-GRU car-following model has a stronger ability to capture and learn the car-following characteristics between platoon vehicles and can better simulate the stop-and-go phenomenon. Therefore, the transferability of the PSO-GRU car-following model is better than that of the GRU car-following model.

4. Conclusions

This paper proposes a car-following model that combines the PSO algorithm and the GRU network to improve prediction accuracy and cross-dataset adaptability. The model combines the advantages of PSO and the GRU. By using PSO to optimize the hyperparameters of the GRU network, it not only improves the neural network's ability to learn the characteristics of vehicle driving data but also uncovers the underlying patterns of car-following behavior, so that the PSO-GRU car-following model achieves higher prediction accuracy and better cross-dataset adaptability than the other two car-following models. This study only considers the influence of a few input variables on car-following behavior. Future research can explore more driving features as model inputs to improve the overall performance of the model. Furthermore, how to combine fuel economy with deep learning car-following models remains an urgent issue that needs to be addressed; more effort and experimental research are needed in the future.

Author Contributions

Conceptualization, P.Q. and S.B.; methodology, S.B.; software, S.L.; validation, P.Q., S.B. and X.L.; resources, P.Q.; data curation, S.B. and F.W.; writing—original draft preparation, S.B.; project administration, P.Q. and Y.P.; funding acquisition, P.Q. and Y.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Guangxi Science and Technology Major Special Fund (Task Book No. Guike AA23062001) and the Guangxi Science and Technology Base and Talent Project for Guangxi Science and Technology Plan Project: Construction of Guangxi Transportation New Technology Transfer Center Platform (No. Guike AD23026029).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Comparison of smoothed and unsmoothed data for speed.
Figure 2. The structure of GRU.
Figure 3. GRU car-following model structure.
Figure 4. The process of PSO-GRU car-following model.
Figure 5. Comparison of simulation results of three car-following models: (a) speed simulation comparison; (b) comparison of vehicle spacing simulation.
Figure 6. Δv−Gap oscillating spiral.
Figure 7. Histogram of errors in predicted relative speed: (a) dataset 3; (b) I-80 dataset.
Figure 8. Comparison of two models in dataset 3 to produce the stop-and-go phenomenon.
Figure 9. Comparison of two models in NGSIM's I-80 dataset to produce the stop-and-go phenomenon.
Table 1. Comparison of parameter settings between the GRU car-following model and PSO-GRU car-following model.

Parameter                            GRU    PSO-GRU
Memory time step                     5      5
Prediction time step                 10     10
Epochs                               20     20
Number of neurons in the GRU layer   100    Adaptive optimization
Batch size                           32     Adaptive optimization
Dropout rate                         0.2    Adaptive optimization
Table 2. Hyperparameters of PSO-GRU car-following model.

Parameter                            Dataset 1   Dataset 3
Number of neurons in the GRU layer   215         81
Batch size                           5           6
Dropout rate                         0.013       0.163
Table 3. Parameters of IDM.

Parameter                     Value
Time headway (s)              0.5~1.4
Maximum acceleration (m/s²)   1~2.5
Desired deceleration (m/s²)   1~3
Desired speed (m/s)           7~25
Table 4. Statistical results of speed simulation error.

Dataset     Model     MSE min   MSE mean   MSE max   MAPE min (%)   MAPE mean (%)   MAPE max (%)
Dataset 1   IDM       0.0876    0.4562     2.2450    2.11           5.03            8.23
            GRU       0.0245    0.1961     0.8342    2.09           3.55            6.42
            PSO-GRU   0.0072    0.0531     0.1356    1.22           1.77            3.38
Dataset 3   IDM       0.0629    0.3846     1.453     1.35           5.53            9.14
            GRU       0.0247    0.0649     0.1218    2.52           5.05            8.69
            PSO-GRU   0.0158    0.0323     0.0526    2.02           3.46            5.58
Table 5. Related information of three datasets.

                              Dataset 1                                    Dataset 3           I-80 Dataset
Collection place              Nanjing, China                               Nanjing, China      California
Number of lanes               Two-way ten lanes                            Two-way ten lanes   Two-way twelve lanes
Traffic flow state            Transition from non-congested to congested   Congested           Congested
Collection period             7:00–7:05                                    8:10–8:19           17:00–17:15
Flow (veh/h)                  14,696                                       7081                7124
Average speed (m/s)           9.08                                         4.79                22.34
Maximum speed limit (km/h)    80                                           80                  104
Table 6. The interval distribution ratio of transferability test error of the GRU model and PSO-GRU model.

Error Interval    Dataset 3: GRU   Dataset 3: PSO-GRU   I-80: GRU   I-80: PSO-GRU
[−0.15, 0.15]     51.60            87.81                21.10       30.55
[−0.30, 0.30]     82.02            99.39                38.87       54.79
[−0.50, 0.50]     95.19            99.99                57.51       76.84
[−1.00, 1.00]     99.43            100.00               86.41       96.95

Values are the percentage of test errors falling within each interval.

Share and Cite

MDPI and ACS Style

Qin, P.; Bin, S.; Pang, Y.; Li, X.; Wu, F.; Liu, S. A High-Precision Car-Following Model with Automatic Parameter Optimization and Cross-Dataset Adaptability. World Electr. Veh. J. 2023, 14, 341. https://doi.org/10.3390/wevj14120341
