Article

Wheel Odometry with Deep Learning-Based Error Prediction Model for Vehicle Localization

State Key Laboratory of Automotive Simulation and Control, Jilin University, Changchun 130025, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(9), 5588; https://doi.org/10.3390/app13095588
Submission received: 7 April 2023 / Revised: 25 April 2023 / Accepted: 28 April 2023 / Published: 30 April 2023
(This article belongs to the Special Issue Autonomous Vehicles: Technology and Application)

Abstract

Wheel odometry is a simple and low-cost localization technique that can be used for localization in GNSS-deprived environments; however, its measurement accuracy is affected by many factors, such as wheel slip, wear, and tire pressure changes, resulting in unpredictable and variable errors, which in turn affect positioning performance. To improve the localization performance of wheel odometry, this study developed a wheel odometry error prediction model based on a transformer neural network to learn the measurement uncertainty of wheel odometry and accurately predict the odometry error. Driving condition characteristics including features describing road types, road conditions, and vehicle driving operations were considered, and models both with and without driving condition characteristics were compared and analyzed. Tests were performed on a public dataset and an experimental vehicle. The experimental results demonstrate that the proposed model can predict the odometry error with higher accuracy, stability, and reliability than the LSTM and WhONet models under multiple challenging and longer GNSS outage driving conditions. At the same time, the transformer model’s overall performance can be improved in longer GNSS outage driving conditions by considering the driving condition characteristics. Tests on the experimental vehicle demonstrate the model’s generalization capability and the improved positioning performance of dead reckoning when using the proposed model. This study explored the possibility of applying a transformer model to wheel odometry and provides a new solution for using deep learning in localization.

1. Introduction

Autonomous driving has become a broad research topic in recent years, and it has great potential to enhance driving safety and improve transport efficiency [1]. An autonomous driving system includes many modules, such as decision, planning, and control modules [2], which require accurate knowledge of the vehicle’s position to perform correct driving decisions and actions [3]. For example, an error of a few decimeters may cause the vehicle to position itself in the wrong lane, leading to incorrect driving decisions, and thus to traffic accidents. Therefore, self-driving cars require robust localization systems with decimeter-level or even centimeter-level accuracy [4].
Global navigation satellite systems (GNSS) [5] are the most commonly used method for vehicle localization. A GNSS is suitable for open areas but is less reliable in obscured environments with structures such as tall buildings and viaducts due to signal occlusion, multipath errors, and other factors. A GNSS can reach centimeter-level accuracy in open areas with real-time kinematic (RTK) [6] technology; however, the effect of multipath errors on localization estimates persists, so GNSS alone cannot provide consistent and reliable localization accuracy. A GNSS is therefore usually integrated with an IMU [7] to form a localization system. IMUs consist of accelerometers and gyroscopes, which measure vehicle acceleration and angular velocity. This information can be further used to calculate the vehicle position relative to its initial position in a process called dead reckoning. However, IMU measurements are affected by multiple noise sources; these errors are amplified during the successive integrations of acceleration to displacement and of angular velocity to attitude angle, growing exponentially and accumulating over time.
Wheel odometry measures the number of pulses per unit of time at each wheel through wheel encoders to calculate wheel speed and travel distance. Because anti-lock brake systems (ABS) are required equipment, wheel encoders are already mounted on vehicles, making wheel odometry a universally available localization technology. Compared with an IMU's accelerometers, wheel odometry requires fewer integration steps to determine the vehicle position, reducing the errors introduced during integration and providing a better positioning solution. Wheel odometry was initially widely used in mobile robotics and was then gradually applied to vehicles for localization tasks. Thrun, S. et al. [8] mitigated the poor estimation of vehicle attitude and IMU bias that occurs in common GNSS/IMU fusion algorithms when GNSS signals are weak or unavailable by incorporating wheel odometry. Funk, N. et al. [9] complemented visual–inertial odometry with wheel odometry for automotive applications. In addition, wheel odometry can serve as the primary localization algorithm in some special cases, such as low-light conditions, low-speed driving, and parking scenes [10,11].
The accuracy of wheel odometry-based localization is affected by many factors; on the one hand, it is affected by model parameters, such as tire diameter. Although these can be calibrated during installation, they can change dynamically when the vehicle is driven due to certain factors, such as wheel wear and load changes, which can affect the odometry output. On the other hand, the measurement accuracy of odometry is affected by wheel slip. In addition, unevenness on the road surface, such as bumps and potholes, interferes with the odometry results. Therefore, the output errors of wheel odometry have high uncertainty, and accurately predicting the wheel odometry error is key to improving wheel odometry-based localization accuracy.
Fazekas, M. et al. proposed an off-line iterative estimation algorithm in [12] and an online estimation method in [13], in which a Kalman filter and the least squares algorithm are performed in an iterative loop to estimate the wheel circumference of autonomous vehicles and improve wheel odometry's accuracy. They further estimated the wheel circumferences recursively with a nonlinear least squares method to improve the wheel speed estimation in [14]. The study in [15] calibrated the parameters of the wheel odometry model with Gauss–Newton regression and a Kalman filter. Welte, A. et al. [16] presented a method that accurately calibrates the model parameters by using a Rauch–Tung–Striebel smoothing scheme, which yields state estimates close to the ground truth. In summary, these studies used traditional state estimation methods to improve wheel odometry's accuracy. However, such methods rely on precise parameter adjustment and lack adaptability to varied driving conditions. Moreover, the wheels are affected by so many factors, from road conditions to their own parameters, that it is difficult to obtain an accurate model under their combined action using traditional state estimation methods. Deep learning is a data-driven approach that can learn complex nonlinear properties and uncertainty from data. Researchers have proposed deep learning-based techniques to discover the error drift characteristics of IMUs over time for better GPS/INS combined navigation solutions [17,18,19,20]. Several deep learning techniques have also been applied to wheel odometry: an LSTM model was proposed to learn the uncertainty of wheel odometry in [21], and a WhONet model was proposed in [22], which further improved the performance compared to the LSTM model.
The Transformer model [23] emerged in the field of natural language processing (NLP), where it was first used for machine translation tasks and achieved state-of-the-art (SOTA) results. A Transformer can fully utilize GPU resources for parallel computation to accelerate training. Compared with traditional sequence models such as the RNN and LSTM, Transformer models do not rely on past hidden states to capture the dependence on previous information; instead, they learn the associations among all time steps of a sequence as a whole, avoiding the risk of losing past information and reducing the performance degradation caused by long-term dependence.
This study aims to explore a new low-cost localization method that combines wheel odometry and deep learning techniques. The main contributions of this paper are as follows:
  • A transformer-based error prediction model is developed to learn the measurement uncertainty of wheel odometry, and the end-to-end correspondence from the wheel speed to the travel distance error is established. Tests on a publicly available dataset and an experimental vehicle are performed to verify that the proposed model can accurately predict the output error of wheel odometry and improve the accuracy, stability, and reliability compared to LSTM and WhONet under various driving conditions. In addition, the model trained using the public dataset is transferred to the experimental vehicle to verify the model’s generalization capability and the improved positioning performance of dead reckoning by combining the model.
  • Driving condition characteristics, including features describing road types, road conditions, and vehicle driving operations, are considered. Models with different inputs are developed and compared, one with and the other without driving condition characteristics. The tests demonstrate that the model that considers driving characteristics has better adaptability to longer GNSS outage driving conditions.
The rest of the paper is organized as follows: Section 2 describes the dead reckoning method based on wheel odometry; Section 3 presents the Transformer model and the selected input features; Section 4 defines the dataset and metrics used to train the model and evaluate its performance, as well as the model parameters, in addition to showing the process of model parameter tuning; Section 5 demonstrates the results based on a publicly available dataset and when the model is applied to real vehicles, followed by a relevant discussion; finally, the entire paper is concluded in Section 6.

2. Dead Reckoning Based on Wheel Odometry

The dead reckoning (DR) system relies on odometry, which measures the distance traveled by the vehicle, and the angular velocity obtained from the chassis or a gyroscope in IMUs, to estimate the relative position forward from the current position.
Wheel odometry uses wheel encoders to track the number of revolutions each wheel has made, from which each wheel's measured angular velocity is derived; the distance traveled by each wheel per unit of time is then calculated by combining the wheel radius. According to wheel odometry, the vehicle's measured travel distance from moment $t_1$ to moment $t$ can be expressed as:

$$x_{mea} = \int_{t_1}^{t} \frac{\left(\omega_{rl} + \omega_{rr}\right) r}{2}\, dt \qquad (1)$$

where $\omega_{rl}$ and $\omega_{rr}$ are the measured left and right rear wheel angular velocities, respectively, and $r$ is the nominal wheel radius.
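As a concrete illustration of Equation (1), the sketch below numerically integrates the mean rear-wheel angular velocity over a window of encoder samples; the function and variable names and the default 0.1 s sampling interval are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def measured_distance(omega_rl, omega_rr, r, dt=0.1):
    """Discrete form of Equation (1): travel distance measured by wheel odometry.

    omega_rl, omega_rr: arrays of measured left/right rear wheel angular velocities (rad/s),
    sampled every dt seconds (10 Hz in the IO-VNB dataset); r: nominal wheel radius (m).
    """
    v = (np.asarray(omega_rl) + np.asarray(omega_rr)) * r / 2.0  # mean rear-wheel speed (m/s)
    return float(np.sum(v) * dt)  # rectangular numerical integration over the interval
```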
Many factors influence the measurement accuracy of wheel odometry. First, the measurement of wheel angular velocity contains errors; second, factors such as vehicle load changes alter the wheel radius dynamically; third, effects such as wheel slip and road potholes cause the wheel rolling distance to differ from the distance the vehicle actually travels.
The vehicle's real travel distance from moment $t_1$ to moment $t$ can be expressed as:

$$x_{real} = \frac{1}{2}\int_{t_1}^{t} \left[ \left(\omega_{rl} + \varepsilon_{\omega rl}\right)\left(r + \varepsilon_{rl}\right) + \left(\omega_{rr} + \varepsilon_{\omega rr}\right)\left(r + \varepsilon_{rr}\right) \right] dt + \varepsilon_s \qquad (2)$$

where $\varepsilon_{\omega rl}$ is the error of the actual left rear wheel angular velocity with respect to $\omega_{rl}$, $\varepsilon_{\omega rr}$ is the error of the actual right rear wheel angular velocity with respect to $\omega_{rr}$, $\varepsilon_{rl}$ is the error between $r$ and the real left rear wheel radius during driving, $\varepsilon_{rr}$ is the error between $r$ and the real right rear wheel radius during driving, and $\varepsilon_s$ is the error between the rolling distance and the actual travel distance.
Let $x_{bias}$ be the error of the travel distance:

$$x_{bias} = x_{mea} - x_{real} \qquad (3)$$

where $x_{mea}$ is the measured travel distance.
In practice, the point coordinates measured by RTK-GPS have centimeter-level accuracy and can be regarded as the true values of the localization coordinates. Suppose $p_1, p_2, \ldots, p_k$ are the points recorded from moment $t_1$ to moment $t$; then $x_{real}$ can be expressed as:

$$x_{real} = \sum_{i=1}^{k-1} \left\| p_{i+1} - p_i \right\| \qquad (4)$$

where $\left\| p_{i+1} - p_i \right\|$ is the distance between $p_i$ and $p_{i+1}$ on the Earth's surface, which can be calculated according to the Vincenty formula given in [24].
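A minimal sketch of Equation (4) is shown below; it sums the surface distance between consecutive RTK fixes using geopy's geodesic distance (Karney's method) as a stand-in for the Vincenty formula of [24], and the function name is an assumption.

```python
from geopy.distance import geodesic  # stand-in for the Vincenty formula referenced in [24]

def real_distance(rtk_points):
    """Equation (4): ground-truth travel distance as the sum of surface distances
    between consecutive RTK-GPS fixes p_1 ... p_k, given as (latitude, longitude) tuples."""
    return sum(geodesic(rtk_points[i], rtk_points[i + 1]).meters
               for i in range(len(rtk_points) - 1))
```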
Suppose the vehicle's travel distance in a short time $\Delta t$ is $\Delta s$ and the vehicle's heading angle is $\psi$; the transverse and longitudinal displacements $\Delta x$ and $\Delta y$ are:

$$\Delta x = \Delta s \cos\psi \qquad (5)$$

$$\Delta y = \Delta s \sin\psi \qquad (6)$$

The relative displacement between two moments $t_1$ and $t_2$ can be regarded as the accumulation of relative displacements over very short time intervals $\Delta t$. Therefore, the transverse displacement $x_{12}$ and the longitudinal displacement $y_{12}$ from moment $t_1$ to moment $t_2$ can be expressed as:

$$x_{12} = \sum_{i=1}^{n} \Delta s_i \cos\psi_i \qquad (7)$$

$$y_{12} = \sum_{i=1}^{n} \Delta s_i \sin\psi_i \qquad (8)$$
According to Equations (5)–(8), after obtaining the travel distance between adjacent moments with the wheel odometry, the vehicle’s relative displacement between two moments can be calculated by combining the vehicle’s heading angle. Figure 1 shows the role of the wheel odometry error prediction model in the localization system. With the established model, the travel distance error can be predicted accurately, and the travel distance output by the odometry can be corrected so that a more accurate position of the next moment can be obtained after dead reckoning.
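The sketch below accumulates Equations (5)–(8) and, following Figure 1, optionally subtracts the predicted travel distance error from the odometry output before dead reckoning; function and argument names are illustrative, not the paper's implementation.

```python
import numpy as np

def dead_reckon(x0, y0, delta_s, psi, predicted_bias=None):
    """Accumulate Equations (5)-(8) from the initial position (x0, y0).

    delta_s: per-step travel distances from wheel odometry (m)
    psi: per-step heading angles (rad)
    predicted_bias: optional per-step distance errors from the error prediction
    model; since x_bias = x_mea - x_real, the correction is a subtraction.
    """
    delta_s = np.asarray(delta_s, dtype=float)
    if predicted_bias is not None:
        delta_s = delta_s - np.asarray(predicted_bias)  # corrected odometry output
    x = x0 + np.sum(delta_s * np.cos(psi))  # Equation (7)
    y = y0 + np.sum(delta_s * np.sin(psi))  # Equation (8)
    return x, y
```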

3. Transformer Model

As the output of wheel odometry is affected by many factors, the measurement error has high uncertainty and keeps changing and accumulating over time. The Transformer is a deep learning model for processing sequences, and this paper studies its use to learn the error characteristics of wheel odometry from time series for more accurate localization. In this section, a Transformer-based error prediction model is established to predict the error of the wheel odometry output.
The Transformer model is based on an encoder/decoder architecture. The original input is converted into a model input containing positional information through embedding and positional encoding. The encoder consists of a stack of N identical layers with two sub-layers each. The first sub-layer is a multi-head self-attention layer used to extract the features attended to by the model's different attention heads. The second sub-layer is a fully connected feed-forward network, which enhances the nonlinear representation ability of the model. Both sub-layers use residual connections and normalization: the residual connection further enhances the fitting ability of the model, and normalization avoids the slow convergence caused by parameters becoming too large or too small. The decoder is likewise composed of a stack of N identical layers. In addition to the two sub-layers found in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the last encoder layer. Like the encoder, the decoder uses residual connections and normalization for each sub-layer. The first self-attention sub-layer of the decoder is masked to ensure that the prediction at position i depends only on the known outputs at positions less than i. The output of the decoder is converted to the required dimension through a linear layer, and the value with the highest probability computed by the softmax layer is the final output of the model.
Considering that different driving conditions may influence the wheel states and affect the odometry output, two different Transformer error prediction models are designed. The first model takes the four wheel speeds as the encoder input, and the second takes the driving condition features together with the four wheel speeds as the encoder input. Both models output the predicted error sequence. For convenience, the Transformer model that does not consider the driving conditions is referred to as Transformer-NDC, and the Transformer model that considers the driving conditions is referred to as Transformer-DC. The structures of the two models are shown in Figure 2.
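A minimal PyTorch sketch of such an error prediction model is shown below, built on torch.nn.Transformer with the layer sizes later listed in Table 3; the input embedding, the learned positional encoding, the regression output head, and all names are assumptions rather than the authors' code. For Transformer-DC, n_features would grow from 4 to 33 by concatenating the 29 driving condition features with the wheel speeds.

```python
import torch
import torch.nn as nn

class WheelOdomErrorTransformer(nn.Module):
    """Sketch of a Transformer mapping a window of wheel speeds to a sequence of
    travel distance errors (Transformer-NDC uses n_features=4)."""

    def __init__(self, n_features=4, d_model=512, nhead=8, num_layers=6,
                 dim_feedforward=32, dropout=0.0, max_len=500):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)                # input embedding
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))  # learned positional encoding (assumption)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            dim_feedforward=dim_feedforward, dropout=dropout, batch_first=True)
        self.head = nn.Linear(d_model, 1)                          # per-step distance error

    def forward(self, src, tgt):
        # src: (batch, window, n_features) encoder input; tgt: (batch, horizon, n_features) decoder input
        src = self.embed(src) + self.pos[:, :src.size(1)]
        tgt = self.embed(tgt) + self.pos[:, :tgt.size(1)]
        out = self.transformer(src, tgt)
        return self.head(out).squeeze(-1)                          # (batch, horizon) predicted errors
```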
Referring to the different driving conditions in the public dataset (IO-VNBD) [25] and to common driving condition features, a total of 29 features describing different driving conditions were selected, as shown in Table 1. Features 1–9 describe different road types, features 10–15 describe different road conditions, and features 16–29 describe various vehicle driving operations. The data subsets are preprocessed according to the selected features: a feature takes the value 1 when the driving condition of the data subset contains it, and 0 otherwise.
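The binary encoding described above can be sketched as follows; the helper name is illustrative, and the feature indices are those of Table 1.

```python
import numpy as np

def driving_condition_vector(active_feature_ids, n_features=29):
    """Binary driving condition encoding for Transformer-DC: element i-1 is 1 if
    the data subset's driving condition contains feature i of Table 1, else 0."""
    v = np.zeros(n_features, dtype=np.float32)
    for i in active_feature_ids:
        v[i - 1] = 1.0
    return v

# Example: a data subset recorded on a wet road (13) in the rain (10) with a roundabout (1).
dc_features = driving_condition_vector({1, 10, 13})
```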

4. Experimental Setup

The model is trained using the PyTorch deep learning framework. The dataset for training and testing is the IO-VNB dataset [25], which records driving data covering a total length of 5700 km and a total driving time of 98 h under multiple driving conditions. The dataset captures multiple signals, such as longitudinal vehicle acceleration, yaw rate, heading angle, GPS coordinates (latitude and longitude), and wheel speeds, from the vehicle ECU at a sampling rate of 10 Hz.

4.1. Training

To facilitate model comparison, the model's training set is consistent with the training set in [22], as shown in Table 2; it comprises about 1590 min of drive time over a total distance of 1165 km. The model training parameters are shown in Table 3. The model uses the Adam optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.98$, and $\epsilon = 10^{-9}$. During training, the learning rate varies according to the following equation:

$$lrate = d_{model}^{-0.5} \cdot \min\left(step\_num^{-0.5},\; step\_num \cdot warmup\_steps^{-1.5}\right) \qquad (9)$$

where $warmup\_steps$ is a constant taking the value of 100. Equation (9) means that the learning rate increases linearly over the first $warmup\_steps$ training steps and then decreases in proportion to the inverse square root of the step number $step\_num$. The loss function used in training is the mean absolute error.
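Equation (9) is the warm-up schedule of [23]; a sketch of how it could be wired into PyTorch with LambdaLR is shown below, where the placeholder module and the base learning rate of 1.0 (so that the lambda returns the actual rate) are assumptions.

```python
import torch

d_model, warmup_steps = 512, 100  # Transformer-NDC values from Table 3 and Section 4.1

def noam_lrate(step):
    """Equation (9); clamp step to 1 so the first call does not divide by zero."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

model = torch.nn.Linear(4, 1)  # placeholder for the error prediction model
optimizer = torch.optim.Adam(model.parameters(), lr=1.0, betas=(0.9, 0.98), eps=1e-9)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=noam_lrate)
loss_fn = torch.nn.L1Loss()  # mean absolute error loss
# Inside the training loop: loss.backward(); optimizer.step(); scheduler.step()
```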

4.2. Testing

Two evaluation metrics, the cumulative root squared error (CRSE) and the cumulative true error (CTE), are listed in [22]. The CRSE accumulates the root squared prediction error of each 1 s interval over the total duration $N_t$. This metric ignores the sign of the estimation error and thus provides a better understanding of the performance of the positioning technique. The CTE measures the signed sum of the prediction errors per 1 s within the total duration $N_t$ and mainly reflects whether the estimated value is higher or lower than the actual value. Compared with the CRSE, the CTE is therefore less indicative when comparing the performance of positioning techniques, so the CRSE is used to evaluate the model performance in this paper. The CRSE is defined as follows:

$$\mathrm{CRSE} = \sum_{t=1}^{N_t} \sqrt{e_{pred}^{2}} \qquad (10)$$

where $t$ is the 1 s sampling period and $e_{pred}$ is the prediction error in that period.
Mean ($\mu$): the average value of the CRSE over all sequences, revealing the average prediction accuracy of the model in each driving condition.

$$\mu = \frac{1}{N_s} \sum_{i=1}^{N_s} \mathrm{CRSE}_i \qquad (11)$$

where $N_s$ is the total number of sequences in each driving condition.
Standard deviation ($\sigma$): this metric indicates the variation in the CRSE across all sequences and demonstrates the stability of the model.

$$\sigma = \sqrt{\frac{\sum_{i=1}^{N_s} \left(\mathrm{CRSE}_i - \mu\right)^2}{N_s}} \qquad (12)$$
Maximum (max): the maximum CRSE value among all sequences evaluated in each driving condition. It indicates the model’s reliability, which means the model cannot be applied to odometry error correction if the maximum CRSE is too large.
Minimum (min): the minimum CRSE value of all sequences evaluated in each driving condition. It indicates the model’s highest possible accuracy.
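A short sketch of these evaluation statistics is given below, computing the CRSE of each test sequence from its per-second prediction errors and then the mean, standard deviation, maximum, and minimum over a driving condition; the CRSE line follows the reconstruction of Equation (10) above, and the function names are assumptions.

```python
import numpy as np

def crse(pred_errors):
    """Equation (10): cumulative root squared error of one test sequence (m),
    where pred_errors holds the per-second prediction errors e_pred."""
    return float(np.sum(np.sqrt(np.square(pred_errors))))

def evaluate_condition(sequences_of_errors):
    """Equations (11)-(12) plus the max and min metrics over all N_s sequences."""
    values = np.array([crse(e) for e in sequences_of_errors])
    return {"max": values.max(), "min": values.min(),
            "mu": values.mean(), "sigma": values.std()}  # np.std uses the 1/N_s form of Equation (12)
```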
The data subsets for performance evaluation are shown in Table 4. The test was first performed on the V-Vw12 data subset, which depicts near-straight-line driving on a highway, to assess the model's performance under relatively easy driving conditions; nevertheless, the highway scenario can still be considered challenging because of the large distance covered per second. Since driving conditions such as wet roads, roundabouts, and hard braking are particularly challenging for localization algorithms [26] and introduce high uncertainty into wheel odometry measurements, the model's performance was also evaluated under these challenging driving conditions. The data subsets used in these tests were divided into test sequences of 10 s each, and the model predicted the travel distance errors of the subsequent 1 s. In addition, the data subsets were split into test sequences of 30, 60, 120, or 180 s, depending on the outage duration being evaluated, to test the model's performance under longer-term GNSS outage driving conditions.
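The exact windowing scheme is not fully specified in the text; one plausible reading, sketched below under that assumption, slides a 10 s input window over the 10 Hz signals and pairs it with the distance errors of the following prediction horizon (1 s here, 30–180 s for the outage tests).

```python
import numpy as np

def make_test_sequences(wheel_speeds, distance_errors, window_s=10, horizon_s=1, rate_hz=10):
    """Split a data subset into (input window, target errors) pairs.

    wheel_speeds: (T, 4) array sampled at rate_hz; distance_errors: (T,) per-sample errors.
    This is one possible interpretation of the splitting described in Section 4.2.
    """
    w, h = window_s * rate_hz, horizon_s * rate_hz
    inputs, targets = [], []
    for start in range(0, len(wheel_speeds) - w - h + 1, h):
        inputs.append(wheel_speeds[start:start + w])
        targets.append(distance_errors[start + w:start + w + h])
    return np.stack(inputs), np.stack(targets)
```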

4.3. Model’s Parameter Adjustment Process

Taking Transformer-NDC and the test on the V-Vfb02d data subset as an example, this section shows the parameter adjustment process. The four rightmost columns of Table 5 list the model's corresponding performance on the four evaluation metrics. The parameter values listed for the baseline model are the final selected values.
The five sections A–E listed in Table 5 test the effect of changing the corresponding parameter on the results. Section A varies h with respect to the baseline: the mean and maximum error increase when h is decreased to 4, indicating reduced accuracy and reliability. When h is increased to 16, the maximum error is slightly reduced, but the number of model parameters grows and the computational efficiency drops, so this value is not chosen. Section B varies N: the maximum error increases when N is decreased to 4 or increased to 8, indicating reduced reliability, and the mean error increases when N is decreased to 4, indicating reduced accuracy. Section C varies dmodel: when dmodel is decreased to 256 or increased to 1024, the maximum error increases significantly, indicating reduced reliability, and the standard deviation increases, showing reduced stability. Section D varies dff: the mean and maximum error increase when dff is decreased to 16, and when dff is increased to 64 there is a large increase in the maximum, mean, and standard deviation of the error, indicating that the reliability, accuracy, and stability of the model decrease significantly. Section E varies Dropout: when Dropout is increased to 0.1 or 0.2, the maximum, mean, and standard deviation of the error increase significantly, again indicating markedly reduced reliability, accuracy, and stability.

5. Results and Discussion

5.1. Tests on the Dataset

In this section, the proposed wheel odometry error prediction model based on Transformer was compared with the LSTM model proposed in [21] and the WhONet model proposed in [22] to evaluate the model performance using the metrics described in Section 4.2.

5.1.1. Tests in Challenging Driving Conditions

Motorway

The test results of the motorway scenario are displayed in Table 6 and Figure 3. It can be seen that the four metrics of Transformer-DC and Transformer-NDC were equal and were lower than those of LSTM and WhONet. Compared with LSTM and WhONet, both Transformer models showed 76% and 52% error reductions in the max metric, 83% and 69% error reductions in the µ metric, and 92% and 85% error reductions in the min metric, respectively, demonstrating better positioning reliability and positioning accuracy. All the models had a low standard deviation below 0.1 m and had good stability in this driving condition. On the whole, all four models show good performance in this driving condition. Compared with the LSTM and WhONet, the proposed model significantly improves the average and maximum prediction accuracy.

Sharp Cornering and Successive Left and Right Turns

Table 7 and Figure 4 illustrate the test results for the driving condition with sharp cornering and successive left and right turns. The four metrics of Transformer-DC and Transformer-NDC on all three data subsets were approximately equal. Compared to LSTM, the two proposed models show considerable performance improvements, specifically up to 76%, 93%, 90%, and 80% on the four metrics. Compared to WhONet, the performances on the max and standard deviation metrics were comparable; the minimum and mean errors were reduced by up to 73% and 68%, respectively. In summary, the proposed models improve the reliability, highest possible accuracy, average accuracy, and standard deviation compared to LSTM, and they improve the highest possible and average accuracy compared to WhONet under this driving condition.

Wet Road

The test results for the wet road scenario are shown in Table 8 and Figure 5. Both Transformer-DC and Transformer-NDC performed approximately the same on the three data subsets. Compared to LSTM, the two proposed models show considerable performance improvements, specifically up to 75%, 90%, 90%, and 85% on the four metrics. Compared to WhONet, although there was only a slight improvement in the four metrics on the V-Vtb8 and V-Vtb11 data subsets, the performances on the max, min, mean, and standard deviation metrics were improved dramatically by 75%, 67%, 84%, and 78%, respectively, on the V-Vtb13 data subset.

Quick Changes in Vehicle’s Acceleration

Table 9 and Figure 6 demonstrate the test results for the driving condition with quick changes in the vehicle’s acceleration. Both Transformer-DC and Transformer-NDC had almost the same performance on the two data subsets. On the V-Vfb02e dataset, compared to LSTM, the two proposed models show 57%, 94%, 86%, and 72% improvements on the four metrics; compared to WhONet, the two proposed models show 55%, 82%, 82%, and 78% improvements on the four metrics. On the V-Vta12 dataset, compared to LSTM, the two proposed models show 54%, 92%, 83%, and 64% improvements on the four metrics; compared to WhONet, the two proposed models show 54%, 64%, 74%, and 59% improvements on the four metrics. In summary, the proposed models significantly improved the reliability, highest possible accuracy, average accuracy, and stability compared to LSTM and WhONet in this driving condition.

Roundabout

Table 10 and Figure 7 list the test results in the roundabout scenario. Both Transformer-DC and Transformer-NDC had approximately the same performance on the two data subsets. The proposed models outperform LSTM. Specifically, the models can enhance the performance by up to 77%, 92%, 91%, and 81% on all four metrics. Compared to WhONet, the proposed models show up to 35%, 79%, 83%, and 73% improvements on all four metrics. In summary, the proposed models significantly improved the reliability, highest possible accuracy, average accuracy, and stability compared to LSTM and WhONet in this driving condition.

Hard Brake

Test results for the hard brake driving condition are displayed in Table 11 and Figure 8. The results show that Transformer-DC and Transformer-NDC exhibited almost identical performance on the two data subsets. The four metrics of the proposed models are reduced by up to 96%, 93%, 97%, and 95%, respectively, compared to LSTM. Compared to WhONet, the two proposed models show up to 84%, 95%, 92%, and 77% improvements in the four metrics. It is thus proven that the accuracy, stability, and reliability of the transformer-based model are significantly improved compared with LSTM and WhONet under this challenging driving condition.
In summary, the proposed model has higher accuracy, stability, and reliability under various challenging driving conditions where it is difficult to measure odometry signals accurately compared with LSTM and WhONet. Meanwhile, the Transformer models with and without the driving condition characteristics can obtain approximately the same performance under these challenging driving conditions by adjusting the parameters.

5.1.2. Tests in Longer GNSS Outage Driving Conditions

Since WhONet has better performance than LSTM, as shown in Table 6, Table 7, Table 8, Table 9, Table 10 and Table 11, in this section, the proposed model was compared further with WhONet to evaluate their performance over four longer-term GNSS outages of 30 s, 60 s, 120 s, and 180 s using the corresponding data subsets in Table 4, and the evaluation results are shown in Table 12, Table 13, Table 14 and Table 15.

30 s GNSS Outage

Test results on the 30 s GNSS outage driving condition are displayed in Table 12 and Figure 9. The results show that Transformer-NDC demonstrated significantly better performance than the WhONet model. Specifically, the errors on the four metrics were reduced by up to 50%, 81%, 66%, and 60%, respectively, demonstrating that the Transformer model has higher accuracy and better stability and reliability than the WhONet model in this driving condition. In addition, the performance of Transformer-DC on the max metric is improved compared to Transformer-NDC, which proves that considering the driving condition characteristics boosts the model’s reliability.
Table 12. Results under the 30 s GNSS outage driving condition. Model error (m), CRSE.
IO-VNB Dataset | Performance Metric | WhONet | Transformer-NDC | Transformer-DC
V-Vtb3 | max | 1.35 | 1.35 | 0.93
V-Vtb3 | min | 0.11 | 0.05 | 0.04
V-Vtb3 | µ | 0.67 | 0.25 | 0.24
V-Vtb3 | σ | 0.20 | 0.17 | 0.14
V-Vfb02a | max | 8.57 | 4.31 | 3.65
V-Vfb02a | min | 0.28 | 0.07 | 0.08
V-Vfb02a | µ | 1.01 | 0.34 | 0.34
V-Vfb02a | σ | 0.83 | 0.33 | 0.44
V-Vfb02b | max | 1.70 | 1.31 | 1.27
V-Vfb02b | min | 0.37 | 0.07 | 0.06
V-Vfb02b | µ | 1.04 | 0.36 | 0.35
V-Vfb02b | σ | 0.30 | 0.22 | 0.22
Figure 9. Test results under the 30 s GNSS outage driving condition. (a) V-Vtb3; (b) V-Vfb02a; (c) V-Vfb02b.

60 s GNSS Outage

Table 13 and Figure 10 illustrate the test results under the 60 s GNSS outage driving condition. Compared with the WhONet model, Transformer-NDC improved the performance enormously: the errors on the four metrics were reduced by up to 25%, 82%, 66%, and 42% on the V-Vfb02a and V-Vfb02b data subsets, and on the V-Vtb3 data subset, although the errors on the max and σ metrics increased slightly, the mean error decreased tremendously by 62%. Additionally, Transformer-DC improved the performance on the four metrics by up to 32%, 82%, 67%, and 23% on the three data subsets compared with the WhONet model, and it reduced the maximum error by up to 33% compared to Transformer-NDC. In summary, the proposed models have higher accuracy and better stability and reliability in error prediction compared to WhONet, and considering the driving condition characteristics can enhance the model's reliability.
Table 13. Results under the 60 s GNSS outage driving condition. Model error (m), CRSE.
IO-VNB Dataset | Performance Metric | WhONet | Transformer-NDC | Transformer-DC
V-Vtb3 | max | 2.29 | 2.54 | 1.71
V-Vtb3 | min | 0.29 | 0.16 | 0.12
V-Vtb3 | µ | 1.31 | 0.50 | 0.47
V-Vtb3 | σ | 0.25 | 0.34 | 0.27
V-Vfb02a | max | 9.62 | 7.24 | 6.55
V-Vfb02a | min | 0.87 | 0.19 | 0.19
V-Vfb02a | µ | 2.02 | 0.68 | 0.68
V-Vfb02a | σ | 1.11 | 0.64 | 0.86
V-Vfb02b | max | 2.94 | 2.41 | 2.40
V-Vfb02b | min | 1.10 | 0.20 | 0.20
V-Vfb02b | µ | 2.08 | 0.71 | 0.69
V-Vfb02b | σ | 0.48 | 0.38 | 0.38
Figure 10. Test results under the 60 s GNSS outage driving condition. (a) V-Vtb3; (b) V-Vfb02a; (c) V-Vfb02b.

120 s GNSS Outage

The test results under the 120 s GNSS outage driving condition are listed in Table 14 and Figure 11. Compared with the WhONet model, except for the slight increment on the σ metric on the V-Vtb3 data subset, both Transformer models improved the performance on the four metrics. Specifically, Transformer-NDC reduced the errors on the four metrics by up to 50%, 81%, 67%, and 35%, respectively, and the Transformer-DC reduced the errors on the four metrics by up to 50%, 84%, 71%, and 35%, respectively, proving that the proposed models have higher accuracy and better stability and reliability compared with the WhONet model in this driving condition. In addition, the values on the four metrics of Transformer-DC are reduced compared to Transformer-NDC, demonstrating that considering the driving condition characteristics improves the model’s accuracy, stability, and reliability.
Table 14. Results under the 120 s GNSS outage driving condition. Model error (m), CRSE.
IO-VNB Dataset | Performance Metric | WhONet | Transformer-NDC | Transformer-DC
V-Vtb3 | max | 4.01 | 3.64 | 2.80
V-Vtb3 | min | 1.75 | 0.34 | 0.30
V-Vtb3 | µ | 2.62 | 0.99 | 0.95
V-Vtb3 | σ | 0.19 | 0.61 | 0.49
V-Vfb02a | max | 11.75 | 7.56 | 7.30
V-Vfb02a | min | 2.05 | 0.49 | 0.38
V-Vfb02a | µ | 4.07 | 1.35 | 1.19
V-Vfb02a | σ | 1.41 | 1.16 | 1.08
V-Vfb02b | max | 5.76 | 2.88 | 2.86
V-Vfb02b | min | 2.52 | 0.48 | 0.41
V-Vfb02b | µ | 4.15 | 1.42 | 1.38
V-Vfb02b | σ | 0.88 | 0.57 | 0.57
Figure 11. Test results under the 120 s GNSS outage driving condition. (a) V-Vtb3; (b) V-Vfb02a; (c) V-Vfb02b.

180 s GNSS Outage

Table 15 and Figure 12 show the test results under the 180 s GNSS outage driving condition. Compared with WhONet, Transformer-NDC improved the performance on the max, min, and µ metrics by up to 51%, 79%, and 66%, respectively, exhibiting that the Transformer model has higher accuracy and better reliability than the WhONet model in this driving condition. By considering the driving condition characteristics, Transformer-DC reduced the error on the four metrics on the V-Vtb3 and V-Vfb02a data subsets and the error on the max and µ metrics on the V-Vfb02b data subset, which proves that considering the driving condition characteristics improves the model’s accuracy, stability, and reliability.
Table 15. Results under the 180 s GNSS outage driving condition. Model error (m), CRSE.
IO-VNB Dataset | Performance Metric | WhONet | Transformer-NDC | Transformer-DC
V-Vtb3 | max | 4.97 | 4.31 | 3.83
V-Vtb3 | min | 2.81 | 0.64 | 0.51
V-Vtb3 | µ | 3.93 | 1.44 | 1.39
V-Vtb3 | σ | 0.21 | 0.77 | 0.66
V-Vfb02a | max | 13.02 | 9.90 | 8.54
V-Vfb02a | min | 3.67 | 0.76 | 0.64
V-Vfb02a | µ | 6.08 | 2.03 | 1.80
V-Vfb02a | σ | 1.55 | 1.68 | 1.62
V-Vfb02b | max | 8.09 | 3.99 | 3.68
V-Vfb02b | min | 4.04 | 0.83 | 0.89
V-Vfb02b | µ | 6.23 | 2.13 | 2.06
V-Vfb02b | σ | 0.77 | 0.72 | 0.73
Figure 12. Test results under the 180 s GNSS outage driving condition. (a) V-Vtb3; (b) V-Vfb02a; (c) V-Vfb02b.
In conclusion, the proposed Transformer models can accurately predict odometry errors under different longer GNSS outage driving conditions and have higher accuracy, stability, and reliability than the WhONet model. Additionally, Transformer-DC, which considers the driving condition characteristics, can better adapt to different driving conditions than Transformer-NDC, and this improves the model’s overall performance.
From the tests in Section 5.1, it can be seen that, whether or not the driving condition characteristics are considered, the proposed Transformer model predicts odometry errors with higher accuracy, stability, and reliability than LSTM and WhONet under various driving conditions, including challenging and longer GNSS outage driving conditions that may cause the wheel odometry output to become unpredictable and to accumulate errors. The results showcase the superiority of the model and its great potential for improving positioning performance. Although the Transformer models with and without the driving condition characteristics perform roughly equally well under challenging driving conditions, the Transformer model's overall performance can be improved under longer GNSS outage driving conditions by considering the driving condition characteristics. It can be inferred that considering the driving condition characteristics improves the reliability and stability of long-term prediction by enhancing the adaptability to various driving conditions, while this effect is less evident in short-term prediction.

5.2. Tests on the Experimental Vehicle

In this section, Transformer-NDC is taken as an example, and the model's generalization capability and the improved performance of wheel odometry-based dead reckoning combined with the model were verified on the experimental vehicle shown in Figure 13. RTK equipment was mounted on the vehicle to provide centimeter-level-accuracy GPS coordinates, which serve as true values for verifying the localization accuracy.
To verify the generalization capability of the model, the model trained on the public dataset was directly applied to the experimental vehicle. The experiment was conducted in a region of Changchun, and the model's performance on the real vehicle data was compared with its performance on the V-Vta8 data subset under similar driving conditions, as shown in Table 16. The maximum and mean CRSE increase somewhat after the model is transferred, while the minimum and standard deviation decrease slightly, so the model retains good prediction performance after the transfer.
The vehicle driving route during the experiment is shown in Figure 14, and the RTK-GPS coordinates, the positioning coordinates obtained by dead reckoning alone, and the positioning coordinates obtained by dead reckoning combined with the odometry error prediction model were recorded. As the enlarged parts of the figure show, the green curve and the blue curve largely overlap. This is because driving along the lane is the dominant driving behavior and the odometry output error mainly affects the longitudinal positioning error of the vehicle, which cannot be directly observed in this figure.
Figure 15 compares the error of the original dead reckoning with the error of the dead reckoning compensated by the error prediction model proposed in this paper. In Figure 15b,c, the error after compensation with the error prediction model is significantly lower than that of the original dead reckoning. In Figure 15a, the error of the original dead reckoning is somewhat lower than the compensated error over the first 1000 points and between the 4500th and 5000th points, which is caused by the uncertainty of the deep learning-based model; however, both errors there are small (less than 0.5 m), and at most other points the compensated error is lower than that of the original dead reckoning. This proves that the proposed error prediction model can effectively compensate for the output error of the wheel odometry in the x and y directions and overall, thus improving the accuracy of dead reckoning and the performance of the positioning system.
Table 17 compares the performance indexes of the original dead reckoning error and the error after error compensation with the model. It can be seen that the maximum value of the positioning error is significantly reduced, and the mean and standard deviation of the error are also reduced after the error is compensated for by the error prediction model, indicating that the proposed error prediction model can effectively predict the odometry error and improve the reliability, stability, and accuracy of odometry positioning.
In summary, the model performance when transferring the model trained on the public dataset to the real vehicle application declined compared to the performance on the dataset, but it is still effective at predicting the odometry error and improving the accuracy and stability of localization. Predictably, if the model is trained using the data collected from the real vehicle and then applied to the real vehicle, the accuracy will be higher, but the workload simultaneously increases. This experiment validates the generalization capability of the model and provides a basis for the application and generalization of the model to multiple vehicle types.

6. Conclusions

In this study, a Transformer-based wheel odometry error prediction model is established to accurately predict the accumulated and changing travel distance errors during vehicle driving so as to improve the positioning performance of wheel odometry. The study and its findings can be summarized as follows:
  • Two different Transformer error prediction models are designed. Both take the four wheel speeds as input; one additionally takes the driving condition features, which include multiple features describing different road types, road conditions, and driving operations, while the other does not.
  • The performance of the proposed model is evaluated and compared with LSTM and WhONet under challenging driving conditions and longer-term GNSS outage driving conditions. The results demonstrate that the proposed model has higher accuracy, stability, and reliability under various challenging driving conditions and longer-term GNSS outage driving conditions. Specifically, the model performs up to 96%, 94%, 97%, and 95% better than LSTM and up to 84%, 95%, 92%, and 78% better than WhONet on the max, min, mean, and standard deviation metrics under various challenging conditions. In addition, Transformer-NDC performs up to 51%, 84%, 71%, and 60% better than WhONet on the max, min, mean, and standard deviation metrics under longer-term GNSS outage driving conditions. Although the Transformer models with and without the driving condition characteristics achieve approximately the same performance under challenging driving conditions, Transformer-DC, which considers the driving condition characteristics, can better adapt to driving conditions with longer-term GNSS outage than Transformer-NDC and improve the model’s overall performance.
  • Tests on an experimental vehicle are conducted to verify the model's generalization capability and the improved dead reckoning positioning performance obtained by combining the model. In particular, a model trained on the public dataset is directly applied to the experimental vehicle, and its performance is compared with that on a data subset with a similar driving condition. The results show that although the model's performance declines slightly, the mean and standard deviation of its errors are only 0.6 m and 0.2 m, which demonstrates that the model is still effective at predicting the odometry error and improving the accuracy and stability of localization.
This study explored the possibility of applying the Transformer model to wheel odometry, providing a new solution for the application of deep learning to localization. Although the test on the experimental vehicle validated the model's generalization capability and provides a basis for applying the model to multiple vehicle types, more work is needed to further enhance the generalization of the Transformer model across vehicle types and to improve the model accuracy after transfer so that the model can be used more efficiently. In addition, techniques to improve the performance of the Transformer model and the use of other deep learning models for localization can be studied in future work.

Author Contributions

Conceptualization, H.D.; data curation, K.H.; investigation, K.H.; methodology, K.H.; project administration, N.X. and K.G.; resources, H.D. and N.X.; software, K.H.; supervision, K.G.; validation, K.H.; writing—original draft, K.H.; writing—review and editing, K.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Jilin Province major science and technology special projects of China (grant number 20220301033GX) and the National Natural Science Foundation of China (grant number U1864206).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yurtsever, E.; Lambert, J.; Carballo, A.; Takeda, K. A survey of autonomous driving: Common practices and emerging technologies. IEEE Access 2020, 8, 58443–58469. [Google Scholar] [CrossRef]
  2. Badue, C.; Guidolini, R.; Carneiro, R.V.; Azevedo, P.; Cardoso, V.B.; Forechi, A.; Jesus, L.; Berriel, R.; Paixao, T.M.; Mutz, F.; et al. Self-driving cars: A survey. Expert Syst. Appl. 2021, 165, 113816. [Google Scholar] [CrossRef]
  3. Viana, K.; Zubizarreta, A.; Diez, M. A Reconfigurable Framework for Vehicle Localization in Urban Areas. Sensors 2022, 22, 2595. [Google Scholar] [CrossRef] [PubMed]
  4. Zhang, Y.; Wang, L.; Jiang, X.; Zeng, Y.; Dai, Y. An efficient LiDAR-based localization method for self-driving cars in dynamic environments. Robotica 2022, 40, 38–55. [Google Scholar] [CrossRef]
  5. Gyagenda, N.; Hatilima, J.V.; Roth, H.; Zhmud, V. A review of GNSS-independent UAV navigation techniques. Robot. Auton. Syst. 2022, 152, 104069. [Google Scholar] [CrossRef]
  6. Zhang, J.; Wen, W.; Huang, F.; Wang, Y.; Chen, X.; Hsu, L.-T. GNSS-RTK Adaptively Integrated with LiDAR/IMU Odometry for Continuously Global Positioning in Urban Canyons. Appl. Sci. 2022, 12, 5193. [Google Scholar] [CrossRef]
  7. Lyu, P.; Bai, S.; Lai, J.; Wang, B.; Sun, X.; Huang, K. Optimal time difference-based TDCP-GPS/IMU navigation using graph optimization. IEEE Trans. Instrum. Meas. 2021, 70, 9514710. [Google Scholar] [CrossRef]
  8. Thrun, S.; Montemerlo, M.; Dahlkamp, H.; Stavens, D.; Aron, A.; Diebel, J.; Fong, P.; Gale, J.; Halpenny, M.; Hoffmann, G.; et al. Stanley: The robot that won the DARPA Grand Challenge. J. Field Robot. 2006, 23, 661–692. [Google Scholar] [CrossRef]
  9. Funk, N.; Alatur, N.; Deuber, R. Autonomous Electric Race Car Design; EVS30 Symposium: Stuttgart, Germany, 2017. [Google Scholar]
  10. Schwesinger, U.; Bürki, M.; Timpner, J.; Rottmann, S.; Wolf, L.; Paz, L.M.; Grimmett, H.; Posner, I.; Newman, P.; Häne, C.; et al. Automated valet parking and charging for e-mobility. In Proceedings of the 2016 IEEE Intelligent Vehicles Symposium (IV), Gotenburg, Sweden, 19–22 June 2016; pp. 157–164. [Google Scholar]
  11. Brunker, A.; Wohlgemuth, T.; Frey, M.; Gauterin, F. Odometry 2.0: A slip-adaptive EIF-based four-wheel-odometry model for parking. IEEE Trans. Intell. Veh. 2018, 4, 114–126. [Google Scholar] [CrossRef]
  12. Fazekas, M.; Németh, B.; Gáspár, P. Iterative parameter identification method of a vehicle odometry model. IFAC-PapersOnLine 2019, 52, 199–204. [Google Scholar] [CrossRef]
  13. Fazekas, M.; Németh, B.; Gáspár, P.; Sename, O. Vehicle odometry model identification considering dynamic load transfers. In Proceedings of the 2020 28th Mediterranean Conference on Control and Automation (MED), Saint-Raphaël, France, 15–18 September 2020; IEEE: Piscatvie, NJ, USA, 2020; pp. 19–24. [Google Scholar]
  14. Fazekas, M.; Gáspár, P.; Németh, B. Velocity Estimation via Wheel Circumference Identification. Period. Polytech. Transp. Eng. 2021, 49, 250–260. [Google Scholar] [CrossRef]
  15. Fazekas, M.; Gáspár, P.; Németh, B. Calibration and improvement of an odometry model with dynamic wheel and lateral dynamics integration. Sensors 2021, 21, 337. [Google Scholar] [CrossRef] [PubMed]
  16. Welte, A.; Xu, P.; Bonnifait, P. Four-wheeled dead-reckoning model calibration using RTS smoothing. In Proceedings of the International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 312–318. [Google Scholar]
  17. Malleswaran, M.; Vaidehi, V.; Manjula, S.; Deborah, S.A. Performance comparison of HONNs and FFNNs in GPS and INS integration for vehicular navigation. In Proceedings of the 2011 International Conference on Recent Trends in Information Technology (ICRTIT), Chennai, India, 3–5 June 2011; pp. 223–228. [Google Scholar]
  18. Malleswaran, M.; Vaidehi, V.; Saravanaselvan, A.; Mohankumar, M. Performance Analysis of Various Artificial Intelligent Neural Networks for GPS/INS Integration. Appl. Artif. Intell. 2013, 27, 367–407. [Google Scholar] [CrossRef]
  19. Dai, H.-F.; Bian, H.-W.; Wang, R.-Y.; Ma, H. An INS/GNSS integrated navigation in GNSS denied environment using recurrent neural network. Def. Technol. 2019, 16, 334–340. [Google Scholar] [CrossRef]
  20. Fang, W.; Jiang, J.; Lu, S.; Gong, Y.; Tao, Y.; Tang, Y.; Yan, P.; Luo, H.; Liu, J. A LSTM Algorithm Estimating Pseudo Measurements for Aiding INS during GNSS Signal Outages. Remote Sens. 2020, 12, 256. [Google Scholar] [CrossRef]
  21. Onyekpe, U.; Palade, V.; Kanarachos, S.; Christopoulos, S.R.G. Learning uncertainties in wheel odometry for vehicular localisation in GNSS deprived environments. In Proceedings of the 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA), Miami, FL, USA, 14–17 December 2020. [Google Scholar]
  22. Onyekpe, U.; Palade, V.; Herath, A.; Kanarachos, S.; Fitzpatrick, M.E. WhONet: Wheel Odometry neural Network for vehicular localisation in GNSS-deprived environments. Eng. Appl. Artif. Intell. 2021, 105, 104421. [Google Scholar] [CrossRef]
  23. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
  24. Onyekpe, U.; Kanarachos, S.; Palade, V.; Christopoulos, S.G. Vehicular Localisation at High and Low Estimation Rates during GNSS Outages: A Deep Learning Approach. In Deep Learning Applications; Wani, V.P.M.A., Khoshgoftaar, T., Eds.; Springer: Singapore, 2020; Volume 2, pp. 229–248. [Google Scholar]
  25. Onyekpe, U.; Palade, V.; Kanarachos, S.; Szkolnik, A. IOVNBD: Inertial and Odometry Benchmark Dataset for Ground Vehicle Positioning. arXiv 2020, arXiv:2005.01701. [Google Scholar]
  26. Onyekpe, U.; Palade, V.; Kanarachos, S. Learning to Localise Automated Vehicles in Challenging Environments Using Inertial Navigation Systems (INS). Appl. Sci. 2021, 11, 1270. [Google Scholar] [CrossRef]
Figure 1. Role of the wheel odometry error prediction model in the localization system.
Figure 2. Two different Transformer models. (a) Transformer model that does not consider the characteristics of driving conditions. (b) Transformer model that considers the characteristics of driving conditions.
Figure 3. Test results for the motorway scenario.
Figure 4. Test results for the driving condition with sharp cornering and successive left and right turns. (a) V-Vw6; (b) V-Vw7; (c) V-Vw8.
Figure 5. Test results for the wet road scenario. (a) V-Vtb8; (b) V-Vtb11; (c) V-Vtb13.
Figure 6. Test results for the driving condition with quick changes in vehicle acceleration. (a) V-Vfb02e; (b) V-Vta12.
Figure 7. Test results for the roundabout scenario. (a) V-Vta11; (b) V-Vfb02d.
Figure 8. Test results for the hard brake driving condition. (a) V-Vw16b; (b) V-Vw17.
Figure 13. Experimental vehicle.
Figure 14. Localization trajectory comparison.
Figure 15. Error of the dead reckoning and the dead reckoning with the Transformer model relative to the real trajectory. (a) Error in the x coordinate; (b) Error in the y coordinate; (c) The total error.
Table 1. Selected characteristics describing different driving conditions.
Serial Number | Feature | Serial Number | Feature | Serial Number | Feature | Serial Number | Feature
1 | Roundabout | 9 | Valleys | 17 | Sharp left and right turns | 25 | Bumps
2 | Mountain roads | 10 | Rain | 18 | Swift maneuvers | 26 | Zig-zag driving
3 | Country roads | 11 | Dirt roads | 19 | Slipping | 27 | Approximate straight-line motion
4 | Expressway | 12 | Gravel roads | 20 | Successive left and right turns | 28 | Stationary
5 | Town-center driving | 13 | Wet roads | 21 | Varying acceleration within a short time | 29 | Parking
6 | Inner-city driving | 14 | Mud roads | 22 | U-turns | |
7 | Winding roads | 15 | Potholes | 23 | Reverse driving | |
8 | Residential roads | 16 | Hard brake | 24 | Drifts | |
Table 2. IO-VNB data subsets used for the Transformer model training.
Serial Number | IO-VNB Data Subset | Serial Number | IO-VNB Data Subset
1 | V-S1 | 18 | V-Vta27
2 | V-S2 | 19 | V-Vta28
3 | V-S3c | 20 | V-Vta29
4 | V-S4 | 21 | V-Vta30
5 | V-St1 | 22 | V-Vtb1
6 | V-M | 23 | V-Vtb2
7 | V-Y2 | 24 | V-Vtb5
8 | V-Vta2 | 25 | V-Vtb9
9 | V-Vta8 | 26 | V-Vw4
10 | V-Vta9 | 27 | V-Vw5
11 | V-Vta10 | 28 | V-Vw14b
12 | V-Vta13 | 29 | V-Vw14c
13 | V-Vta16 | 30 | V-Vfa01
14 | V-Vta17 | 31 | V-Vfa02
15 | V-Vta20 | 32 | V-Vfb01a
16 | V-Vta21 | 33 | V-Vfb01b
17 | V-Vta22 | |
Table 3. Training parameters of Transformer-NDC and Transformer-DC.
Parameter | Transformer-NDC | Transformer-DC
Number of encoder and decoder layers N | 6 | 6
Output dimension dmodel of the sub-layers and the embedding layer | 512 | 256
Dimension dff of the intermediate layer in the fully connected feed-forward networks | 32 | 64
Number of attention heads h | 8 | 8
Batch size | 32 | 32
Epochs | 100 | 100
Time step (sliding window) | 11 s | 11 s
Random inactivation ratio Dropout | 0 | 0
Activation function | ReLU | ReLU
Table 4. IO-VNB data subsets for performance evaluation.
Driving Condition | IO-VNB Data Subset | Total Travel Time, Distance Traveled, Speed, and Acceleration
Motorway | V-Vw12 | 1.75 min, 2.64 km, 82.6 to 97.4 km/h, −0.06 to 0.07 g
Sharp cornering and successive left and right turns | V-Vw6 | 2.1 min, 1.08 km, 3.3 to 40.7 km/h, −0.34 to 0.26 g
 | V-Vw7 | 2.8 min, 1.23 km, 0.4 to 42.2 km/h, −0.37 to 0.37 g
 | V-Vw8 | 2.7 min, 1.12 km, 0.0 to 46.4 km/h, −0.37 to 0.27 g
Wet road | V-Vtb8 | 1.2 min, 1.35 km, 60.9 to 76.5 km/h, −0.35 to 0.08 g
 | V-Vtb11 | 0.7 min, 0.84 km, 65.1 to 75.3 km/h, −0.05 to 0.12 g
 | V-Vtb13 | 2.1 min, 0.99 km, 7.5 to 43.3 km/h, −0.31 to 0.22 g
Quick changes in vehicle's acceleration | V-Vfb02e | 1.6 min, 1.52 km, 37.4 to 73.9 km/h, −0.24 to 0.19 g
 | V-Vta12 | 1.1 min, 1.27 km, 44.7 to 85.3 km/h, −0.44 to 0.13 g
Roundabout | V-Vta11 | 1.0 min, 0.92 km, 26.8 to 97.7 km/h, −0.45 to 0.15 g
 | V-Vfb02d | 1.5 min, 0.84 km, 0.0 to 57.3 km/h, −0.33 to 0.31 g
Hard brake | V-Vw16b | 2.0 min, 1.99 km, 1.3 to 86.3 km/h, −0.75 to 0.29 g
 | V-Vw17 | 0.5 min, 0.54 km, 31.5 to 72.7 km/h, −0.8 to 0.19 g
Longer GNSS outage | V-Vtb3 | 13.8 min, 0.71 km, 0.0 to 37.5 km/h, −0.23 to 0.33 g
 | V-Vfb02a | 59.9 min, 96.5 km, 0.0 to 122.3 km/h, −0.5 to 0.37 g
 | V-Vfb02b | 18.3 min, 7.69 km, 0.0 to 84.3 km/h, −0.5 to 0.35 g
Table 5. Parameter adjustment process of Transformer-NDC (parameter values not listed are the same as the corresponding parameter values of the baseline model).
Section | N | dmodel | dff | h | Dropout | max | min | µ | σ
baseline | 6 | 512 | 32 | 8 | 0 | 0.36 | 0.03 | 0.09 | 0.07
A | | | | 4 | | 0.42 | 0.03 | 0.12 | 0.06
A | | | | 16 | | 0.33 | 0.03 | 0.09 | 0.07
B | 4 | | | | | 0.38 | 0.03 | 0.12 | 0.06
B | 8 | | | | | 0.42 | 0.03 | 0.09 | 0.08
C | | 1024 | | | | 0.51 | 0.03 | 0.09 | 0.10
C | | 256 | | | | 0.51 | 0.03 | 0.09 | 0.10
D | | | 16 | | | 0.43 | 0.03 | 0.11 | 0.06
D | | | 64 | | | 0.85 | 0.03 | 0.25 | 0.20
E | | | | | 0.1 | 1.13 | 0.04 | 0.14 | 0.20
E | | | | | 0.2 | 0.94 | 0.04 | 0.20 | 0.18
Table 6. Test results for the motorway scenario. Model error (m), CRSE.
IO-VNB Dataset | Performance Metric | LSTM | WhONet | Transformer-NDC | Transformer-DC
V-Vw12 | max | 0.68 | 0.33 | 0.16 | 0.16
V-Vw12 | min | 0.25 | 0.13 | 0.02 | 0.02
V-Vw12 | µ | 0.46 | 0.26 | 0.08 | 0.08
V-Vw12 | σ | 0.08 | 0.04 | 0.03 | 0.03
Table 7. Test results for the driving condition with sharp cornering and successive left and right turns. Model error (m), CRSE.
IO-VNB Dataset | Performance Metric | LSTM | WhONet | Transformer-NDC | Transformer-DC
V-Vw6 | max | 1.45 | 0.57 | 0.68 | 0.68
V-Vw6 | min | 0.34 | 0.11 | 0.04 | 0.03
V-Vw6 | µ | 0.84 | 0.36 | 0.12 | 0.12
V-Vw6 | σ | 0.29 | 0.09 | 0.10 | 0.10
V-Vw7 | max | 1.93 | 0.61 | 0.62 | 0.64
V-Vw7 | min | 0.29 | 0.08 | 0.04 | 0.04
V-Vw7 | µ | 1.14 | 0.36 | 0.12 | 0.12
V-Vw7 | σ | 0.46 | 0.13 | 0.11 | 0.11
V-Vw8 | max | 1.91 | 0.57 | 0.46 | 0.48
V-Vw8 | min | 0.43 | 0.11 | 0.03 | 0.03
V-Vw8 | µ | 1.19 | 0.37 | 0.12 | 0.12
V-Vw8 | σ | 0.44 | 0.12 | 0.09 | 0.09
Table 8. Test results for the wet road scenario. Model error (m), CRSE.
IO-VNB Dataset | Performance Metric | LSTM | WhONet | Transformer-NDC | Transformer-DC
V-Vtb8 | max | 0.48 | 0.19 | 0.14 | 0.14
V-Vtb8 | min | 0.23 | 0.08 | 0.07 | 0.07
V-Vtb8 | µ | 0.33 | 0.14 | 0.10 | 0.09
V-Vtb8 | σ | 0.04 | 0.02 | 0.01 | 0.01
V-Vtb11 | max | 0.37 | 0.14 | 0.12 | 0.13
V-Vtb11 | min | 0.24 | 0.05 | 0.05 | 0.05
V-Vtb11 | µ | 0.31 | 0.11 | 0.09 | 0.09
V-Vtb11 | σ | 0.03 | 0.02 | 0.02 | 0.02
V-Vtb13 | max | 2.33 | 2.33 | 0.59 | 0.58
V-Vtb13 | min | 0.20 | 0.06 | 0.02 | 0.02
V-Vtb13 | µ | 1.24 | 0.75 | 0.12 | 0.12
V-Vtb13 | σ | 0.73 | 0.49 | 0.11 | 0.11
Table 9. Test results for the driving condition with quick changes in vehicle acceleration. Model error (m), CRSE.
IO-VNB Dataset | Performance Metric | LSTM | WhONet | Transformer-NDC | Transformer-DC
V-Vfb02e | max | 2.95 | 2.82 | 1.27 | 1.28
V-Vfb02e | min | 0.54 | 0.17 | 0.03 | 0.03
V-Vfb02e | µ | 1.43 | 1.14 | 0.20 | 0.20
V-Vfb02e | σ | 0.65 | 0.81 | 0.18 | 0.18
V-Vta12 | max | 2.30 | 2.27 | 1.05 | 1.05
V-Vta12 | min | 0.65 | 0.14 | 0.05 | 0.05
V-Vta12 | µ | 1.37 | 0.89 | 0.23 | 0.23
V-Vta12 | σ | 0.55 | 0.49 | 0.20 | 0.20
Table 10. Test results for the roundabout scenario. Model error (m), CRSE.
IO-VNB Dataset | Performance Metric | LSTM | WhONet | Transformer-NDC | Transformer-DC
V-Vta11 | max | 7.88 | 7.35 | 4.76 | 4.77
V-Vta11 | min | 0.24 | 0.11 | 0.04 | 0.03
V-Vta11 | µ | 2.42 | 1.75 | 0.31 | 0.30
V-Vta11 | σ | 2.78 | 2.66 | 0.73 | 0.73
V-Vfb02d | max | 1.55 | 0.49 | 0.36 | 0.48
V-Vfb02d | min | 0.37 | 0.14 | 0.03 | 0.03
V-Vfb02d | µ | 1.04 | 0.30 | 0.09 | 0.09
V-Vfb02d | σ | 0.37 | 0.09 | 0.07 | 0.09
Table 11. Test results for the hard brake driving condition. Model error (m), CRSE.
IO-VNB Dataset | Performance Metric | LSTM | WhONet | Transformer-NDC | Transformer-DC
V-Vw16b | max | 2.93 | 0.88 | 0.36 | 0.36
V-Vw16b | min | 0.10 | 0.10 | 0.01 | 0.01
V-Vw16b | µ | 1.49 | 0.36 | 0.06 | 0.06
V-Vw16b | σ | 0.95 | 0.22 | 0.05 | 0.05
V-Vw17 | max | 3.51 | 0.87 | 0.14 | 0.15
V-Vw17 | min | 0.41 | 0.61 | 0.03 | 0.03
V-Vw17 | µ | 2.13 | 0.78 | 0.06 | 0.06
V-Vw17 | σ | 0.01 | 0.01 | 0.03 | 0.03
Table 16. Error comparison of the experimental vehicle and the corresponding data subset. Model error (m), CRSE.
Performance Metric | V-Vta8 | Experimental Vehicle
max | 1.02 | 1.46
min | 0.05 | 0.02
µ | 0.33 | 0.60
σ | 0.21 | 0.20
Table 17. Comparison between original dead reckoning and dead reckoning with the Transformer model.
Performance Metric (m) | Dead Reckoning | Dead Reckoning with Transformer Model
max | 4.01 | 2.99
min | 0.06 | 0
µ | 1.58 | 1.34
σ | 0.84 | 0.74

Share and Cite

MDPI and ACS Style

He, K.; Ding, H.; Xu, N.; Guo, K. Wheel Odometry with Deep Learning-Based Error Prediction Model for Vehicle Localization. Appl. Sci. 2023, 13, 5588. https://doi.org/10.3390/app13095588

