Article

Lidar- and V2X-Based Cooperative Localization Technique for Autonomous Driving in a GNSS-Denied Environment

Min-Su Kang, Jae-Hoon Ahn, Ji-Ung Im and Jong-Hoon Won
1 Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Republic of Korea
2 School of Electronics Engineering, Inha University, Incheon 22212, Republic of Korea
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(22), 5881; https://doi.org/10.3390/rs14225881
Submission received: 7 September 2022 / Revised: 11 November 2022 / Accepted: 17 November 2022 / Published: 20 November 2022
(This article belongs to the Special Issue GNSS for Urban Transport Applications)

Abstract

Autonomous vehicles are equipped with multiple heterogeneous sensors and drive while processing data from each sensor in real time. Among these sensors, the global navigation satellite system (GNSS) receiver is essential for localizing the vehicle itself. However, if a GNSS-denied situation occurs while driving, the accident risk may be high due to the degraded positioning performance of the vehicle. This paper presents a cooperative positioning technique based on the lidar sensor and vehicle-to-everything (V2X) communication. The ego-vehicle continuously tracks surrounding vehicles and objects and localizes itself using the tracking information from the surroundings, especially in GNSS-denied situations. We demonstrate the effectiveness of the cooperative positioning technique by constructing a GNSS-denied case during autonomous driving. A numerical simulation using a driving simulator is included in the paper to evaluate and verify the proposed method in various scenarios.

Graphical Abstract

1. Introduction

As defined by the Society of Automotive Engineers (SAE) International in 2016, autonomous driving technology is subdivided into six levels, depending on the required conditions and functions [1]. Among them, Level 2 is a driving assistance stage that simultaneously performs longitudinal and lateral control; the vehicle is driven in a partially automated state, and the human driver still has to concentrate on driving. Level 3 refers to a stage where, as with Level 2, the vehicle is operated in a partially automated state; however, driver intervention is much reduced. Level 4 or higher performs autonomous driving without driver intervention under given constraints, such as weather conditions. A more advanced autonomous driving system must be implemented to move beyond Level 3, where driver intervention is gradually reduced.
The autonomous driving system works in the order of recognition, decision, and control; its smooth operation requires a processing system, an electronic control unit, and various sensors [2,3]. In an autonomous driving system, recognition sensors such as cameras, lidar, and radar provide the relative positions of surrounding objects. Ego-vehicle positioning sensors, such as a global navigation satellite system (GNSS) receiver and an inertial measurement unit (IMU), provide location and status information. Among them, the GNSS sensor receives low-power satellite signals on the ground, transmitted by satellites orbiting at an altitude of more than 20,000 km, and processes them to provide the user location, velocity, direction, and time information on a global scale [4]. However, GNSS-equipped vehicles may be affected by multipath, degraded signals, and signal blockage depending on the signal-receiving environment (i.e., a GNSS-denied environment). Moreover, the failure or malfunction of a GNSS sensor could lead to an unpredictable accident. Therefore, it is necessary to estimate the current position in any situation for safe driving.
Nazaruddin et al. presented an ego-vehicle localization method based on the IMU sensor for GNSS disconnection situations, which may be caused by high dependence on the external environment or a low sampling rate. This technique predicts the next coordinates from the previous GNSS and current IMU sensor data, using an error-state Kalman filter (KF) with a residual long short-term memory model [5]. A similar approach to GNSS outages using two complementary sensors, the IMU and an odometer, was proposed in [6], where the odometer and GNSS measurements were exploited to correct IMU errors while GNSS was available. The odometer was then used to correct IMU errors during GNSS outages, improving the positioning accuracy while ensuring the continuity of the navigation solution.
Furthermore, various localization methods, such as visual or laser simultaneous localization and mapping (SLAM) techniques, have been proposed that use perception sensors such as cameras, radar, and lidar cooperatively to achieve better navigational information [7]. Satellite or geographic information system images have also been used with visual odometry, albeit with higher storage and computational requirements [8,9]. These methods can simultaneously perform two functions: visually surveilling the required geographical areas and nearby obstacles, and estimating the navigational state (e.g., position, velocity, and attitude), especially in GNSS-denied environments.
However, these methods may have the drawback that the predetermined position information of surrounding objects is required for localization. To overcome this limitation, vehicle-to-everything (V2X) communication [10], where status information including the position of the surrounding objects can be exchanged, is ideally suited for sensor fusion. A radar-based ego-vehicle localization method using vehicle-to-vehicle (V2V) communication was proposed in [11]. In GNSS-denied situations, this method estimates the absolute coordinates of the ego-vehicle by combining the relative coordinates of the object vehicles obtained using radar with the absolute coordinates of the object vehicles received through V2V communication. Ma et al. presented a vehicle-to-infrastructure (V2I)-based vehicle localization algorithm, where a low-cost IMU-assisted single roadside unit and a GNSS receiver-based localization algorithm based on the least-squares method were used to reduce the deployment costs [12]. Perea-Strom et al. presented a fusion method of wheel odometry, IMU, GNSS, and lidar using the adaptive Monte Carlo localization algorithm, where a particle weighting model to integrate GNSS measurements was adopted to perform ego-vehicle localization in a 2D map [13].
One way to improve the accuracy of localization algorithms that rely on surrounding objects is to increase the confidence in the estimated states of those objects. To do this, a tracking algorithm must be applied to the surrounding objects. KF-based tracking algorithms using 2D or 3D object information obtained from the camera and lidar are widely used for continuous object tracking [14]. In addition, a study on model-free tracking based on a support vector machine classifier trained with the histogram of oriented gradients feature was performed [15]. Research on object-tracking algorithms incorporating deep learning is still underway. For example, a convolutional neural network was used as a feature extractor, and more accurate tracking was performed through the adaptive hedge method [16]. Furthermore, a two-stage Siamese re-detection architecture and a tracklet-based dynamic programming algorithm were combined for the re-detection and tracking of objects even after long occlusion [17].
This paper presents a localization method for the ego-vehicle in GNSS-denied environments that uses lidar and V2X communication cooperatively. Sensor fusion and localization for the surrounding vehicles and objects, and for the ego-vehicle, using the lidar sensor and V2X communication are presented. Multiobject tracking (MOT) using an extended KF (EKF) and a localization method using a particle filter (PF) are described. An autonomous driving simulator with models of each sensor is used for verification, and comparative evaluation is performed against the ground-truth and predicted trajectories obtained from the simulator.

2. Sensor Fusion and Localization

2.1. Overview

The cooperative localization presented in this paper is a two-step approach. In the first step, the method estimates the coordinates of the surrounding vehicles and objects based on the lidar sensor and V2X communication data. In the second step, the method estimates the ego-vehicle coordinates based on the results of the first step when the GNSS signal is unavailable. Figure 1 shows the block diagram of the cooperative localization process. It consists of a data processing module, a sensor fusion module, and a localization module. The first two localize the surrounding vehicles and objects, and the last localizes the ego-vehicle. For simplicity, only the surrounding vehicles are considered in this paper, but the same method can be applied to other surrounding objects.
In the data processing module, the lidar sensor recognizes the surrounding vehicles and estimates their positions relative to the ego-vehicle. Simultaneously, the status information of the surrounding vehicles is independently obtained through V2X (specifically, V2V) communication. The surrounding vehicle information obtained from the data processing module is then merged in the sensor fusion module to perform highly reliable MOT in real time. In the localization module, the ego-vehicle coordinates are estimated through the PF using the MOT results of the sensor fusion module.

2.2. Data Processing

The data processing module of cooperative localization processes the lidar sensor output and V2X communication messages independently. It detects the surrounding vehicles based on the point cloud data (PCD) of the lidar sensor and receives the basic safety messages (BSMs) of V2X communication to obtain the surrounding vehicle data [18]. To recognize a vehicle from the PCD, a series of processes such as ground filtering, clustering, and classification should be performed [19]. In addition, the L-shape fitting method is applied to vehicle detection and tracking [20].
Figure 2 shows the entire process of finding the position and heading of a surrounding vehicle in the PCD. First, a gradient-based ground removal algorithm is applied for ground filtering [21]. On the ground, the point gradient between channels differs little, whereas on objects the slope differs greatly; this property is used to separate objects from the ground. Vehicle candidate clustering of the filtered object points is then performed by applying a voxel grid. After that, clustering between adjacent voxels is performed, and a group of vehicle cluster candidates is selected based on the size of the generated clusters. Finally, L-shape fitting is performed to extract the heading and position of each vehicle candidate cluster. The L-shape fitting is the problem of defining two perpendicular lines for each segmented cluster of points, based on the L-shape rectangle model assumption, for vehicle detection from lidar data. This algorithm finds the rectangle that best fits all points in the vehicle cluster by searching over all possible orientations of the rectangle, which yields the L-shape and a 2D bounding box for the vehicle cluster.
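As a concrete illustration, the following is a minimal sketch of the L-shape fitting search described above, assuming a 2D cluster of lidar points and a simple area-minimization criterion to score candidate rectangle orientations; Zhang et al. [20] discuss several alternative criteria, and the function and variable names here are illustrative rather than taken from the paper's implementation.

```python
import numpy as np

def l_shape_fit(points, angle_step_deg=1.0):
    """Fit an L-shaped (rectangular) bounding box to a 2D vehicle cluster.

    points: (N, 2) array of x, y lidar points belonging to one cluster.
    Returns the best heading (rad), the rectangle center, and its extents
    in the rotated frame. Uses a simple area-minimization score as one
    possible criterion; Zhang et al. [20] describe alternatives.
    """
    best = (None, np.inf, None)          # (theta, score, extents)
    for theta in np.deg2rad(np.arange(0.0, 90.0, angle_step_deg)):
        # Two perpendicular edge directions of the candidate rectangle.
        e1 = np.array([np.cos(theta), np.sin(theta)])
        e2 = np.array([-np.sin(theta), np.cos(theta)])
        c1, c2 = points @ e1, points @ e2
        # Smallest rectangle with this orientation containing all points.
        extents = (c1.min(), c1.max(), c2.min(), c2.max())
        area = (extents[1] - extents[0]) * (extents[3] - extents[2])
        if area < best[1]:
            best = (theta, area, extents)
    theta, _, extents = best
    center_1 = 0.5 * (extents[0] + extents[1])
    center_2 = 0.5 * (extents[2] + extents[3])
    center = center_1 * np.array([np.cos(theta), np.sin(theta)]) \
           + center_2 * np.array([-np.sin(theta), np.cos(theta)])
    return theta, center, extents

# Example: a synthetic L-shaped cluster (two visible sides of a car).
side_a = np.c_[np.linspace(0, 4.5, 30), np.zeros(30)]
side_b = np.c_[np.zeros(20), np.linspace(0, 1.8, 20)]
cluster = np.vstack([side_a, side_b]) + 0.02 * np.random.randn(50, 2)
heading, center, extents = l_shape_fit(cluster)
print(f"heading = {np.rad2deg(heading):.1f} deg, center = {center}")
```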
Via the BSM messages, V2X communication provides the states of the surrounding vehicles, including their coordinates, heading, velocity, and vehicle size. For simplicity, only positions on the 2D plane are considered in this paper. The coordinates of a surrounding vehicle received via V2X in the global frame (i.e., earth-centered earth-fixed or latitude-longitude-height coordinates) are converted to local coordinates on the 2D plane (i.e., north-east-down coordinates) whose origin is at the ego-vehicle, so that they can be compared with the lidar sensor output. The lidar sensor is assumed to be mounted at the center of the ego-vehicle so that its output shares the ego-vehicle's body frame, and the heading angle of the ego-vehicle is used to obtain the local coordinates of the surrounding vehicle as follows:
$$
\begin{bmatrix} N \\ E \end{bmatrix} =
\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
\begin{bmatrix} x_{sv} \\ y_{sv} \end{bmatrix}, \qquad (1)
$$
where $(x_{sv}, y_{sv})$ and $(N, E)$ represent the surrounding vehicle coordinates in the ego-vehicle's body frame and in the local north-east-down frame, respectively, and $\theta$ is the heading angle of the ego-vehicle. Figure 3 shows the relationship between the coordinate systems. Figure 4 shows the output of each sensor after matching the coordinate systems, so that the surrounding vehicle information from both sensors is expressed in the same frame.
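The conversion in Equation (1) can be written compactly as follows; this is a minimal sketch assuming the reconstructed sign convention above (heading measured clockwise from north), with illustrative names.

```python
import numpy as np

def body_to_local_ned(x_sv, y_sv, heading_rad):
    """Rotate a surrounding-vehicle position from the ego body frame
    into the local north-east plane using the ego heading (Equation (1))."""
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    north = c * x_sv - s * y_sv
    east = s * x_sv + c * y_sv
    return north, east

# A vehicle 10 m ahead and 2 m to the right while the ego heads 30 deg east of north.
print(body_to_local_ned(10.0, 2.0, np.deg2rad(30.0)))
```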

2.3. Sensor Fusion for Surrounding Vehicles

Sensor fusion is a method of processing surrounding environment information by fusing the data output from multiple sensors [22]. Through this, reliable data with a reduced false detection probability can be extracted. Recently, various research efforts have been made to fuse heterogeneous sensors in different ways to improve the performance of autonomous driving systems. Sensor fusion can be classified as early or late fusion [23]. Early fusion is a preprocessing method that integrates unimodal features based on the raw data output from each sensor. However, it has the drawback that fusion results become ambiguous due to sensitive responses to sensor failure or noise. Late fusion is a post-processing method based on preprocessed sensor data, which has the advantage that a failure of one sensor does not significantly affect the remaining processes. In this paper, a late fusion method based on the simple online and real-time tracking (SORT) algorithm, using post-processed data from the lidar sensor and V2X communication (mainly for MOT), is used to minimize the effect of sensor defects [24]. As depicted in Figure 5, SORT combines the KF and the Hungarian algorithm.
In this paper, an EKF and intersection over union (IoU)-based SORT are employed to predict the nonlinear movement direction, as displayed in Figure 6 [25].
Prediction in the sensor fusion module is performed based on the surrounding vehicle information from the data processing module to predict each surrounding vehicle's next position and heading, as illustrated in Figure 6. After that, IoU matching is conducted to associate the predictions with the previously tracked vehicles. Finally, the moving paths are updated using the matched information of the surrounding vehicles.
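The association step just described can be sketched as follows, assuming axis-aligned 2D bounding boxes for simplicity (the paper's boxes carry a heading) and using the Hungarian algorithm via SciPy's linear_sum_assignment, as SORT does; the names and the threshold parameter are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x_min, y_min, x_max, y_max)."""
    ix_min, iy_min = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix_max, iy_max = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def associate(pred_boxes, track_boxes, threshold=0.5):
    """Match predicted boxes to existing tracks with the Hungarian algorithm.
    Pairs below the IoU threshold are treated as unmatched (new or lost tracks)."""
    if not pred_boxes or not track_boxes:
        return [], list(range(len(pred_boxes))), list(range(len(track_boxes)))
    cost = np.array([[1.0 - iou(p, t) for t in track_boxes] for p in pred_boxes])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= threshold]
    matched_p = {r for r, _ in matches}
    matched_t = {c for _, c in matches}
    unmatched_pred = [i for i in range(len(pred_boxes)) if i not in matched_p]
    unmatched_track = [j for j in range(len(track_boxes)) if j not in matched_t]
    return matches, unmatched_pred, unmatched_track

# Example: two predictions against two existing tracks.
preds = [(0.0, 0.0, 4.5, 1.8), (10.0, 5.0, 14.5, 6.8)]
tracks = [(0.2, 0.1, 4.7, 1.9), (30.0, 30.0, 34.5, 31.8)]
print(associate(preds, tracks))   # -> ([(0, 0)], [1], [1])
```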
First, the state vector of the EKF for tracking the $i$th surrounding vehicle is
$$
X_i = \begin{bmatrix} x_i & y_i & v_i & \psi_i \end{bmatrix}^{T}, \qquad (2)
$$
where $x$ and $y$ represent the surrounding vehicle coordinates, $v$ is the velocity, $\psi$ is the heading, and $i$ denotes the vehicle identifier. Each element is obtained as a weighted mean of the values computed by the lidar and V2X modules. Because the lidar module has high distance accuracy, its value serves as the reference: the V2X module is given a high weight if the vehicle position it provides is similar to the lidar result, and a low weight if the difference is large. The matrices used in the EKF can be expressed as follows:
$$
F_{i,k-1} =
\begin{bmatrix}
1 & 0 & \Delta t \cos\psi_{i,k-1} & -v_{i,k-1}\,\Delta t \sin\psi_{i,k-1} \\
0 & 1 & \Delta t \sin\psi_{i,k-1} & v_{i,k-1}\,\Delta t \cos\psi_{i,k-1} \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix},
$$
$$
H_{i,k} =
\begin{bmatrix}
1 + v_{i,k-1}\,\Delta t \cos\psi_{i,k-1} & 0 & v_{i,k-1}\,\Delta t \sin\psi_{i,k-1} \\
0 & 1 + v_{i,k-1}\,\Delta t \cos\psi_{i,k-1} & v_{i,k-1}\,\Delta t \cos\psi_{i,k-1} \\
0 & 0 & 1
\end{bmatrix},
\qquad
z_{i,k} = \begin{bmatrix} x_{i,k} & y_{i,k} & \psi_{i,k} \end{bmatrix}^{T}, \qquad (3)
$$
where $F$ denotes the state transition matrix, $H$ represents the measurement matrix, $z$ is the measurement vector, $k$ indicates the time index, and $\Delta t$ is the sampling interval. Next, the prediction is performed using Equation (4):
$$
X_{i,k}^{-} = F_{i,k-1} X_{i,k-1}, \qquad
P_{i,k}^{-} = F_{i,k-1} P_{i,k-1} F_{i,k-1}^{T} + Q, \qquad (4)
$$
where $X_{i,k}^{-}$ and $P_{i,k}^{-}$ denote the predicted state vector and its error covariance for the $i$th surrounding vehicle, and $Q$ is the system noise covariance matrix. Based on the predicted coordinates of the surrounding vehicles, the IoU is calculated to match the previously tracked surrounding vehicle coordinates [26]. As presented in Figure 4, the coordinates, heading, and vehicle size of the surrounding vehicles at times $k-1$ and $k$ are used to create the 2D bounding boxes. Matching and unmatching are then decided by an IoU threshold. As in Equation (5), the intersection of the predicted $\mathrm{box}_k$ and the previously tracked $\mathrm{box}_{k-1}$ is divided by their union, and the resulting IoU score is compared with the threshold:
$$
\mathrm{IoU}\left(\mathrm{box}_k, \mathrm{box}_{k-1}\right)
= \frac{\left|\mathrm{box}_k \cap \mathrm{box}_{k-1}\right|}{\left|\mathrm{box}_k \cup \mathrm{box}_{k-1}\right|}
= \frac{\left|\mathrm{box}_k \cap \mathrm{box}_{k-1}\right|}{\left|\mathrm{box}_k\right| + \left|\mathrm{box}_{k-1}\right| - \left|\mathrm{box}_k \cap \mathrm{box}_{k-1}\right|}. \qquad (5)
$$
If the score is higher than the threshold, the two boxes are considered the same vehicle, and the ID of the previously tracked vehicle is assigned to the predicted vehicle coordinates. Otherwise, the detection is treated as an unclear vehicle and is deleted after a certain time. In addition, if the score is 0, it is considered a new vehicle, and a new track is created. After the track list is updated, the EKF measurement update proceeds with Equation (6):
$$
K_{i,k} = P_{i,k}^{-} H_{i,k}^{T} \left( H_{i,k} P_{i,k}^{-} H_{i,k}^{T} + R \right)^{-1},
$$
$$
X_{i,k} = X_{i,k}^{-} + K_{i,k}\left( z_{i,k} - H_{i,k} X_{i,k}^{-} \right), \qquad
P_{i,k} = \left( I - K_{i,k} H_{i,k} \right) P_{i,k}^{-}, \qquad (6)
$$
where $K_{i,k}$ is the Kalman gain, $R$ denotes the measurement noise covariance matrix, and $X_{i,k}$ and $P_{i,k}$ represent the updated state vector and error covariance of the $i$th surrounding vehicle.
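A minimal EKF sketch for one tracked vehicle is given below. It propagates the state mean with the nonlinear motion model and the covariance with the Jacobian $F$ of Equation (3), as is standard for an EKF; for a runnable example, the measurement matrix is replaced by a simple direct-observation matrix selecting $x$, $y$, and $\psi$ rather than the paper's $H$, and $\Delta t$, $Q$, and $R$ are assumed values.

```python
import numpy as np

dt = 0.1  # assumed sampling interval (s)

def predict(x, P, Q):
    """EKF prediction (Equation (4)) for state x = [px, py, v, psi].
    F is the Jacobian of the constant-velocity/heading motion model."""
    px, py, v, psi = x
    F = np.array([
        [1, 0, dt * np.cos(psi), -v * dt * np.sin(psi)],
        [0, 1, dt * np.sin(psi),  v * dt * np.cos(psi)],
        [0, 0, 1, 0],
        [0, 0, 0, 1],
    ])
    x_pred = np.array([px + v * dt * np.cos(psi),
                       py + v * dt * np.sin(psi),
                       v, psi])
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def update(x_pred, P_pred, z, R):
    """EKF measurement update (Equation (6)). In this sketch the measurement
    [x, y, psi] is observed directly, so H simply selects those states."""
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1]], dtype=float)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Illustrative one-step usage with assumed noise covariances.
x = np.array([0.0, 0.0, 10.0, 0.3])       # 10 m/s, heading 0.3 rad
P = np.eye(4)
Q = np.diag([0.1, 0.1, 0.5, 0.05])
R = np.diag([0.2, 0.2, 0.02])
x_pred, P_pred = predict(x, P, Q)
z = np.array([1.0, 0.3, 0.31])            # fused lidar/V2X measurement
x_new, P_new = update(x_pred, P_pred, z, R)
```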

2.4. Localization for the Ego-vehicle

Localization of the ego-vehicle is a core technology for autonomous driving. It estimates the coordinates and direction of the ego-vehicle in space using the vehicle state information when the GNSS signal is strong, or using the surrounding environment information otherwise [27]. As localization techniques using the surrounding environment information, particularly for GNSS-denied environments, various radio-based methods using the time difference of arrival, angle of arrival, or received signal strength indication, additionally corrected by a KF or PF, have been proposed [28,29,30]. In addition, a typical localization technique based on the coordinates of surrounding objects is SLAM. This method uses algorithms suited to each sensor to estimate the coordinates of the ego-vehicle by comparing the characteristics of the current environment sensed by various sensors with a previously generated map [31,32,33,34].
In the proposed method, the ego-vehicle must be localized based on the surrounding vehicle coordinates received from the sensor fusion module. Therefore, we used a PF that precisely localizes the ego-vehicle using landmarks and the vehicle's own state information. Figure 7 depicts the localization of the ego-vehicle based on the fused surrounding vehicle list. The heading, velocity, and yaw rate of the ego-vehicle, which can be extracted from the in-vehicle sensors (e.g., inertial sensors), and the surrounding vehicle coordinates are processed in the PF to update the ego-vehicle coordinates.
The state vector $\hat{X}$, state transition matrix $A$, control matrix $B$, and control vector $u$ used to localize the ego-vehicle in the PF are as follows:
$$
\hat{X} = \begin{bmatrix} p_x & p_y & \phi \end{bmatrix}^{T}, \quad
A = \mathrm{diag}(1,\,1,\,1), \quad
B = \begin{bmatrix} \Delta t \cos\phi & 0 \\ \Delta t \sin\phi & 0 \\ 0 & \Delta t \end{bmatrix}, \quad
u = \begin{bmatrix} v \\ w \end{bmatrix}, \qquad (7)
$$
where $p_x$, $p_y$, $\phi$, $v$, and $w$ denote the ego-vehicle coordinates, heading, velocity, and yaw rate, respectively. When the GNSS sensor of the ego-vehicle works well, $p_x$ and $p_y$ are always updated, but when the GNSS sensor is disconnected or in a denied environment, the state vector is composed only of the previous $p_x$ and $p_y$. The particles of the initial PF consist of the states $\hat{X}_j$ and the weights $\omega_j$, as presented in Equation (8):
$$
\hat{S}_k = \left\{ \left( \hat{X}_{j,k},\, \omega_{j,k} \right) \,\middle|\, j = 1, \ldots, n \right\}, \qquad
\sum_{j=1}^{n} \omega_{j,k=0} = 1, \qquad (8)
$$
where $j$ is the particle index, $n$ is the number of particles, and $\hat{S}_k$ indicates the set of particles and their weights. In addition, the PF in this paper must consider the effects of noise. Therefore, the prediction of $\hat{X}_{j,k}$ is calculated by including noise in $u_{j,k-1}$, as in Equation (9):
$$
\hat{X}_{j,k} = A_{j,k-1} \hat{X}_{j,k-1} + B_{j,k-1}\, f\!\left( u_{j,k-1} \right), \qquad (9)
$$
where $f$ is a function that adds random Gaussian noise to the control input, and $\hat{X}_{j,k}$ denotes the predicted state vector of the $j$th particle. Next, the weights of the PF must be updated to obtain the optimal ego-vehicle coordinates. Equation (10) calculates the weights based on the ego and surrounding vehicle coordinates: after calculating the difference between the coordinates of the predicted particle state vector and those of the surrounding vehicles, the weights are updated using the normal distribution function.
$$
\omega_{j,k} = \prod_{i=1}^{m} P_{j,x}\, P_{j,y}, \qquad
P_{j,x} = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{\left( p_{x,k}^{\,j} - x_{i,k} \right)^2}{2\sigma^2}}, \qquad
P_{j,y} = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{\left( p_{y,k}^{\,j} - y_{i,k} \right)^2}{2\sigma^2}}, \qquad (10)
$$
where $P_{j,x}$ and $P_{j,y}$ represent the normal distribution functions of each coordinate, $m$ is the number of surrounding vehicles, and $\sigma$ is the standard deviation. Finally, by summing the products of the updated weights and the predicted particle state vectors, the optimal ego-vehicle coordinates are calculated using Equation (11):
$$
\hat{X}_k = \sum_{j=1}^{n} \omega_{j,k}\, \hat{X}_{j,k}, \qquad
T = n\, \omega_{j,k}, \qquad (11)
$$
where $\hat{X}_k$ represents the updated (estimated) state vector, and $T$ denotes the number of new particles. New particles proportional to the weights are then generated using the systematic resampling method [35].
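The PF steps of Equations (9)–(11) can be sketched as follows. This is a minimal example under the assumption that each tracked surrounding vehicle provides a global position (from the fused track list) together with a measured relative offset in the ego body frame, so that every vehicle implies an ego position against which the particles are weighted with the Gaussian of Equation (10); the noise levels, names, and this particular realization of the weighting are assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_particles(particles, v, w, dt, sigma_v=0.3, sigma_w=0.05):
    """Propagate particles [px, py, phi] with noisy controls (Equation (9))."""
    n = len(particles)
    v_n = v + sigma_v * rng.standard_normal(n)
    w_n = w + sigma_w * rng.standard_normal(n)
    particles[:, 0] += v_n * dt * np.cos(particles[:, 2])
    particles[:, 1] += v_n * dt * np.sin(particles[:, 2])
    particles[:, 2] += w_n * dt
    return particles

def weight_particles(particles, landmarks_global, rel_meas, sigma=1.0):
    """Gaussian weighting (Equation (10)): each tracked vehicle, with known
    global position, implies an ego position (vehicle position minus the
    measured relative offset rotated by the particle heading). The constant
    normalization factor is omitted because the weights are normalized."""
    weights = np.ones(len(particles))
    for (lx, ly), (rx, ry) in zip(landmarks_global, rel_meas):
        c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
        implied_x = lx - (c * rx - s * ry)
        implied_y = ly - (s * rx + c * ry)
        weights *= np.exp(-((particles[:, 0] - implied_x) ** 2) / (2 * sigma ** 2))
        weights *= np.exp(-((particles[:, 1] - implied_y) ** 2) / (2 * sigma ** 2))
    return weights / weights.sum()

def systematic_resample(particles, weights):
    """Systematic resampling [35]: draw n evenly spaced pointers into the CDF."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    indices = np.searchsorted(np.cumsum(weights), positions)
    indices = np.minimum(indices, n - 1)
    return particles[indices].copy()

# The state estimate is the weighted mean of the particles (Equation (11)):
#   estimate = weights @ particles
```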

3. Experiments

3.1. Experimental Environment Configuration

A GNSS-blocking scenario, which introduces a high risk of accidents during autonomous driving, must be set up in advance to verify the cooperative localization technique presented in this paper. Therefore, to address this constraint, an experimental environment was constructed using an autonomous driving simulation tool based on a commercial game engine [36]. This simulation tool is divided into several parts, as shown in Figure 8.
The main driving simulation includes the simulation module, where an experiment scenario can be configured. It is possible to freely set the non-player character artificial intelligence of the commercial game, static and dynamic objects, the weather or road environment, and so on. The analysis module analyzes the object recognition rate or path planning error to test autonomous driving algorithms. The sensor module allows the output data of various sensors, such as cameras, lidar, radar, ultrasonic sensors, GNSS, and IMUs modeled in the autonomous driving simulator, to be checked through a graphical user interface. In addition, the vehicle dynamics simulation, virtual map, vehicles, objects, external conditions, and other factors can be set to closely resemble the real environment, providing safe conditions for autonomous driving experiments.
The multiuser part is an independent hardware environment with driving simulator functions. It is expandable to multiple computers connected to a main driving simulation server, where multiple users (or clients) can drive in a single scenario on the main server [37]; this is ideally suited for implementing V2X communication between clients. Each connected client transmits probe vehicle data and BSM information to the server over a multithreaded TCP/IP connection, and BSM messages are automatically exchanged between clients through UDP communication. The V2X message of the autonomous driving simulator consists of a header and a payload, as specified in the SAE J2735 standard, and the messages are transmitted and received at the same period [38].
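The J2735 encoding used by the simulator is not reproduced here; purely as an illustration of the UDP-based exchange described above, the sketch below broadcasts a BSM-like payload with hypothetical field names (id, lat, lon, heading, speed) and an arbitrary port, not the simulator's actual message format.

```python
import json
import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 47000)   # illustrative port, not the simulator's

def make_bsm(vehicle_id, lat, lon, heading, speed):
    """Build a BSM-like payload; field names are illustrative, not the
    actual SAE J2735 ASN.1 encoding used by the simulator."""
    return {
        "id": vehicle_id,
        "timestamp": time.time(),
        "lat": lat,
        "lon": lon,
        "heading": heading,   # degrees from north
        "speed": speed,       # m/s
    }

def broadcast_bsm(sock, bsm):
    sock.sendto(json.dumps(bsm).encode("utf-8"), BROADCAST_ADDR)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    # Send a BSM every 100 ms, mirroring the fixed message period described above.
    for _ in range(3):
        broadcast_bsm(sock, make_bsm("veh-1", 37.45, 126.65, 90.0, 12.3))
        time.sleep(0.1)
```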
The GNSS hardware-in-the-loop simulation (HILS) comprises a hardware setup consisting of a GNSS signal generator and a receiver. The GNSS signal generator receives the ego-vehicle coordinates from the driving simulator and emits the corresponding GNSS radio frequency (RF) signals over the air in real time. The GNSS receiver receives and processes the GNSS RF signals to provide the ego-vehicle coordinates. This setup is used to intentionally generate the malfunction of the autonomous driving system that may occur when the GNSS sensor of the autonomous vehicle is affected by an RF interference signal or blockage. The driving route of the autonomous driving simulator equipped with the GNSS HILS can be logged on the electronic map in real time. In addition, a small jammer is installed independently to emulate the GNSS-denied environment [39].

3.2. Sensor Fusion and Multiobject Tracking Result of Surrounding Vehicles

Figure 9a is a screenshot of the autonomous driving test scenario implemented using the driving simulator. Figure 9b depicts the driving directions of the surrounding vehicles from an aerial view. We generated four vehicles that traveled simultaneously along a given route in the simulator and assigned vehicle IDs in order of increasing distance from the ego-vehicle. We then conducted the experiment using the ground-truth position information of the surrounding vehicles provided by the simulator. Note that the driving simulator can obtain the true information of the surrounding objects, as well as of the ego-vehicle itself, in the virtual world by accessing the internal memory of the software.
The IoU threshold was varied from 0.1 to 0.9, and MOT was performed on the surrounding vehicles within the range of lidar measurements to verify the sensor fusion method. Figure 10 presents the performance evaluation in terms of the precision and recall of the MOT results for different IoU thresholds [40]. Precision is the fraction of the tracks reported by the model that correspond to actual vehicles, whereas recall is the fraction of actual vehicles that the model detects and tracks. For an IoU threshold of 0.1, a detection rate (recall) of 80% or more was achieved, but the precision was only about 60%. In contrast, for an IoU threshold of 0.9, a precision of 85% or more was achieved, but the detection rate dropped to 6% or less. In other words, as the IoU threshold increases, precision and recall trade off against each other. Therefore, an IoU threshold of 0.5, where the deviation between the two metrics is relatively small, was used in this paper.
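Under the definitions above, precision and recall can be computed from the matched tracking results as follows; the counts in the example are illustrative only.

```python
def precision_recall(num_true_positive, num_false_positive, num_false_negative):
    """Precision: fraction of reported tracks that correspond to real vehicles.
    Recall: fraction of real vehicles that were tracked."""
    precision = num_true_positive / (num_true_positive + num_false_positive)
    recall = num_true_positive / (num_true_positive + num_false_negative)
    return precision, recall

# Illustrative counts for one IoU threshold: 80 correct matches,
# 20 spurious tracks, 15 missed vehicles.
print(precision_recall(80, 20, 15))   # -> (0.8, ~0.842)
```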
Next, to analyze the performance of the sensor fusion presented in this paper, experiments were conducted based on the same scenario as in Figure 9. Figure 11 shows the predicted vehicle trajectory for Vehicle 1 with the ground truth data. This confirms that our algorithm tracks the vehicle traveling along the established route well. Table 1 shows the results of performing MOT on the test scenario in Figure 9 by vehicle ID. In this experiment, as the distance from the ego-vehicle increases (i.e., the vehicle ID increases), the root-mean-square error (RMSE) also increases, as expected from the characteristics of the lidar sensor.

3.3. Localization Results for the Ego-vehicle

Next, a curved driving path scenario was constructed to verify the localization method for the ego-vehicle in the cooperative technique. Figure 12 shows a satellite view and a road view of the route used in the experiment. The ego-vehicle drives from the starting point to the goal point in Figure 12b, and the surrounding vehicles are placed around the driving path. The jammer is turned on when the ego-vehicle drives through the GNSS-denied point.
Figure 13 displays the localization results. The black dots mark the coordinates of the surrounding vehicles, and the blue line indicates the trajectory of the ego-vehicle. The other colored lines indicate the trajectories estimated with different numbers of tracked surrounding vehicles. When only one surrounding vehicle is tracked, the predicted trajectory of the ego-vehicle is unreliable. However, as the number of surrounding vehicles increases, the actual and predicted trajectories become closer. Table 2 lists the numerical results for each trajectory as the RMSE. The positioning error decreases as the number of vehicles increases.
Finally, the trajectory errors for different geometrical distributions of the surrounding vehicles are analyzed according to their distance and position. When localization is performed based on surrounding objects, the estimated trajectory varies depending on the location of those objects. The scenarios constructed for this experiment are presented in Figure 14.
Four experiments were performed for each scenario to determine an appropriate number of particles for the PF used in localization. Figure 15 displays the localization results, where the black line and dots are the actual trajectory of the ego-vehicle and the positions of the surrounding vehicles, respectively. The predicted trajectories obtained with different numbers of particles are presented as colored lines. In Scenarios A to C, the surrounding vehicles are on both sides of the actual trajectory but located progressively closer to the driving route. In Scenario D, the surrounding vehicles are clustered on the left.
Figure 16 depicts the results of analyzing the position error for each trajectory. Scenarios A and B each produced an error within 1 m, whereas maximum errors of 2 and 4 m were obtained in C and D, respectively. In addition, when the number of particles increases, the errors for C and D decrease relatively rapidly.
In each scenario, the position dilution of precision (PDOP) is calculated to analyze the effect of the sensor network arrangement on the localization error; it is also used to check how evenly the surrounding vehicles are arranged [41]. Figure 17 presents the results of each scenario run with 100 particles from the start of driving to 100 s. The PDOP values of Scenarios A and B converge toward zero, whereas maximum values of 2.6 and 5.3 occurred in C and D, respectively. This experiment shows that the position is estimated accurately when the surrounding vehicles are evenly distributed. In contrast, the error increases as the surrounding vehicles become clustered or their distance from the ego-vehicle increases.
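A position dilution of precision can be computed from the geometry of the tracked vehicles in the standard GNSS-like way [41]; the sketch below assumes a 2D (horizontal) formulation with unit line-of-sight vectors from the ego position to each surrounding vehicle, which is one possible realization rather than the paper's exact computation.

```python
import numpy as np

def horizontal_dop(ego_xy, vehicle_xy_list):
    """Horizontal dilution of precision from the ego position and the 2D
    positions of the surrounding vehicles, analogous to GNSS DOP [41]:
    build the geometry matrix of unit line-of-sight vectors and take the
    trace of (G^T G)^-1."""
    ego = np.asarray(ego_xy, dtype=float)
    rows = []
    for v in vehicle_xy_list:
        d = np.asarray(v, dtype=float) - ego
        u = d / np.linalg.norm(d)
        rows.append([u[0], u[1]])
    G = np.array(rows)
    Q = np.linalg.inv(G.T @ G)
    return float(np.sqrt(Q[0, 0] + Q[1, 1]))

# Evenly spread vehicles give a small DOP; clustered vehicles give a large one.
print(horizontal_dop((0, 0), [(30, 0), (0, 30), (-30, 0), (0, -30)]))   # ~1.0
print(horizontal_dop((0, 0), [(30, 0), (32, 3), (28, -2), (31, 1)]))    # much larger
```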
Additionally, Table 3 lists the RMSE results by the number of particles and the average localization time for each scenario. Similarly, Scenario A, where the surrounding vehicles are evenly distributed, has a low error, whereas the densely clustered Scenario D has a large error. In addition, as the number of particles increases, the localization update period becomes longer. Consequently, if the number of particles is adjusted considering the arrangement of the surrounding vehicles, the localization follows the actual trajectory closely.

4. Conclusions

This paper presents a lidar sensor and V2X communication-based cooperative localization technique for autonomous vehicles. The proposed cooperative localization method comprises sensor fusion for the surrounding vehicles and objects, and localization for the ego-vehicle. For the sensor fusion of the lidar output and V2X data, the SORT algorithm with the EKF was employed in a late fusion manner together with IoU-based MOT to minimize the effect of sensor defects. The ego-vehicle is then localized using the information from this sensor fusion: a PF was employed to precisely localize the ego-vehicle based on the fused surrounding vehicle list. The cooperative localization method is expected to be ideally suited for autonomous vehicles, especially in GNSS-denied environments.
Algorithm development and verification were performed using an autonomous driving simulator, with a GNSS HILS and a jammer intentionally used in the system to configure the GNSS-denied environment for evaluating the cooperative localization technique. The coordinates of the surrounding vehicles were tracked with high accuracy using the sensor fusion performing MOT. Different localization behaviors were observed according to the number of tracked surrounding vehicles, their distance from the ego-vehicle, and their density.
The simulation in this paper included only the case of static surrounding vehicles; however, the synchronization between lidar and V2X information may become problematic when the surrounding vehicles and the ego-vehicle are moving simultaneously. In future work, we will conduct experimental tests incorporating a deep learning method into the object-tracking algorithm for moving surrounding vehicles.

Author Contributions

Conceptualization, J.-H.W.; methodology, M.-S.K.; software, M.-S.K. and J.-U.I.; validation, M.-S.K. and J.-H.W.; formal analysis, M.-S.K., J.-U.I. and J.-H.W.; investigation, M.-S.K.; resources, M.-S.K., J.-H.A. and J.-U.I.; data curation, M.-S.K.; writing—original draft preparation, M.-S.K. and J.-H.A.; writing—review and editing, J.-U.I. and J.-H.W.; supervision, J.-H.W.; project administration, J.-H.W.; funding acquisition, J.-H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2020R1A2C2013091) and by an INHA UNIVERSITY research grant.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Koopman, P.; Wagner, M. Autonomous vehicle safety: An interdisciplinary challenge. IEEE Intell. Transp. Syst. Mag. 2017, 9, 90–96.
  2. Van Brummelen, J.; O'Brien, M.; Gruyer, D.; Najjaran, H. Autonomous vehicle perception: The technology of today and tomorrow. Transp. Res. Part C Emerg. Technol. 2018, 89, 384–406.
  3. Varghese, J.Z.; Boone, R.G. Overview of autonomous vehicle sensors and systems. In Proceedings of the 2015 International Conference on Operations Excellence and Service Engineering, Orlando, FL, USA, 10–11 September 2015; pp. 178–191.
  4. Kaplan, E.D.; Hegarty, C. Understanding GPS/GNSS: Principles and Applications; Artech House: Norwood, MA, USA, 2017.
  5. Nazaruddin, Y.Y.; Maani, F.A.; Muten, E.R.; Sanjaya, P.W.; Tjahjono, G.; Oktavianus, J.A. An Approach for the Localization Method of Autonomous Vehicles in the Event of Missing GNSS Information. In Proceedings of the 2021 60th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), Tokyo, Japan, 8–10 September 2021; pp. 881–886.
  6. Aftatah, M.; Lahrech, A.; Abounada, A.; Soulhi, A. GPS/INS/Odometer data fusion for land vehicle localization in GPS denied environment. Mod. Appl. Sci. 2016, 11, 62.
  7. Magree, D.; Johnson, E.N. Combined Laser and Vision aided Inertial Navigation for an Indoor Unmanned Aerial Vehicle. In Proceedings of the 2014 American Control Conference, Portland, OR, USA, 4–6 June 2014; pp. 1900–1905.
  8. Conte, G.; Doherty, P. An Integrated UAV Navigation System Based on Aerial Image Matching. In Proceedings of the 2008 IEEE Aerospace Conference, Big Sky, MT, USA, 1–8 March 2008; pp. 1–10.
  9. Gu, D.Y.; Zhu, C.F.; Guo, J.; Li, S.X.; Chang, H.X. Vision-aided UAV navigation using GIS data. In Proceedings of the 2010 IEEE International Conference on Vehicular Electronics and Safety, Qingdao, China, 15–17 July 2010; pp. 78–82.
  10. Abboud, K.; Omar, H.A.; Zhuang, W. Interworking of DSRC and Cellular Network Technologies for V2X Communications: A Survey. IEEE Trans. Veh. Technol. 2016, 65, 9457–9470.
  11. Kim, J.S.; Choi, W.Y.; Jeong, Y.W.; Chung, C.C. Vehicle Localization Using Radar Calibration with Disconnected GPS. In Proceedings of the 2021 21st International Conference on Control, Automation and Systems (ICCAS), Jeju, Republic of Korea, 12–15 October 2021; pp. 1371–1376.
  12. Ma, S.; Wen, F.; Zhao, X.; Wang, Z.M.; Yang, D. An efficient V2X based vehicle localization using single RSU and single receiver. IEEE Access 2019, 7, 46114–46121.
  13. Perea-Strom, D.; Morell, A.; Toledo, J.; Acosta, L. GNSS integration in the localization system of an autonomous vehicle based on Particle Weighting. IEEE Sens. J. 2020, 20, 3314–3323.
  14. Asvadi, A.; Girao, P.; Peixoto, P.; Nunes, U. 3D object tracking using RGB and LIDAR data. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 1255–1260.
  15. Qi, Y.; Yao, H.; Sun, X.; Sun, X.; Zhang, Y.; Huang, Q. Structure-aware multi-object discovery for weakly supervised tracking. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 466–470.
  16. Qi, Y.; Zhang, S.; Qin, L.; Yao, H.; Huang, Q.; Lim, J.; Yang, M.H. Hedged deep tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4303–4311.
  17. Voigtlaender, P.; Luiten, J.; Torr, P.H.; Leibe, B. Siam R-CNN: Visual tracking by re-detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 6578–6588.
  18. Fujii, S.; Fujita, A.; Umedu, T.; Kaneda, S.; Yamaguchi, H.; Higashino, T.; Takai, M. Cooperative vehicle positioning via V2V communications and onboard sensors. In Proceedings of the 2011 IEEE Vehicular Technology Conference (VTC Fall), San Francisco, CA, USA, 5–8 September 2011; pp. 1–5.
  19. Li, Y.; Ibanez-Guzman, J. Lidar for autonomous driving: The principles, challenges, and trends for automotive lidar and perception systems. IEEE Signal Process. Mag. 2020, 37, 50–61.
  20. Zhang, X.; Xu, W.; Dong, C.; Dolan, J.M. Efficient L-shape fitting for vehicle detection using laser scanners. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 54–59.
  21. Kim, J.K.; Kim, J.W.; Kim, J.H.; Jung, T.H.; Park, Y.J.; Ko, Y.H.; Jung, S. Experimental studies of autonomous driving of a vehicle on the road using LiDAR and DGPS. In Proceedings of the 2015 15th International Conference on Control, Automation and Systems (ICCAS), Busan, Republic of Korea, 13–16 October 2015; pp. 1366–1369.
  22. Wu, H.; Siegel, M.; Stiefelhagen, R.; Yang, J. Sensor fusion using Dempster-Shafer theory [for context-aware HCI]. In Proceedings of the 19th IEEE Instrumentation and Measurement Technology Conference (IMTC/2002), Anchorage, AK, USA, 21–23 May 2002; Volume 1, pp. 7–12.
  23. Snoek, C.G.M.; Worring, M.; Smeulders, A.W.M. Early versus late fusion in semantic video analysis. In Proceedings of the 13th Annual ACM International Conference on Multimedia, Singapore, 6–11 November 2005; pp. 399–402.
  24. Wojke, N.; Bewley, A.; Paulus, D. Simple online and realtime tracking with a deep association metric. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3645–3649.
  25. Mauri, A.; Khemmar, R.; Decoux, B.; Ragot, N.; Rossi, R.; Trabelsi, R.; Boutteau, R.; Ertaud, J.Y.; Savatier, X. Deep Learning for Real-Time 3D Multi-Object Detection, Localisation, and Tracking: Application to Smart Mobility. Sensors 2020, 20, 532.
  26. Zhou, D.; Fang, J.; Song, X.; Guan, C.; Yin, J.; Dai, Y.; Yang, R. IoU loss for 2D/3D object detection. In Proceedings of the 2019 International Conference on 3D Vision (3DV), Québec, QC, Canada, 16–19 September 2019; pp. 85–94.
  27. Kramer, B.; MacKinnon, A. Localization: Theory and experiment. Rep. Prog. Phys. 1993, 56, 1469.
  28. Kaune, R. Accuracy studies for TDOA and TOA localization. In Proceedings of the 2012 15th International Conference on Information Fusion, Singapore, 9–12 July 2012; pp. 408–415.
  29. Xue, W.; Qiu, W.; Hua, X.; Yu, K. Improved Wi-Fi RSSI measurement for indoor localization. IEEE Sens. J. 2017, 17, 2224–2230.
  30. Wann, C.D.; Yeh, Y.J.; Hsueh, C.S. Hybrid TDOA/AOA indoor positioning and tracking using extended Kalman filters. In Proceedings of the 2006 IEEE 63rd Vehicular Technology Conference, Melbourne, Australia, 7–10 May 2006; pp. 1058–1062.
  31. Bailey, T.; Nieto, J.; Guivant, J.; Stevens, M.; Nebot, E. Consistency of the EKF-SLAM algorithm. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 3562–3568.
  32. Wen, W.; Hsu, L.T.; Zhang, G. Performance analysis of NDT-based graph SLAM for autonomous vehicle in diverse typical driving scenarios of Hong Kong. Sensors 2018, 18, 3928.
  33. Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163.
  34. Zhang, J.; Singh, S. LOAM: Lidar Odometry and Mapping in Real-time. Robot. Sci. Syst. 2014, 2, 1–9.
  35. Hol, J.D.; Schon, T.B.; Gustafsson, F. On resampling algorithms for particle filters. In Proceedings of the 2006 IEEE Nonlinear Statistical Signal Processing Workshop, Cambridge, UK, 13–15 September 2006; pp. 79–82.
  36. Lim, J.U.; Kang, M.S.; Park, D.H.; Won, J.H. Development of Commercial Game Engine-based Low Cost Driving Simulator for Researches on Autonomous Driving Artificial Intelligent Algorithms. Korean ITS J. 2021, 20, 242–263.
  37. FiveM. Available online: https://fivem.net/ (accessed on 6 September 2022).
  38. Kenney, J.B. Dedicated short-range communications (DSRC) standards in the United States. Proc. IEEE 2011, 99, 1162–1182.
  39. Jammer4uk. Available online: https://www.jammer4uk.com/product/2017-usb-gps-jammer-usb-gps-l1-l2-blocker (accessed on 6 September 2022).
  40. Davis, J.; Goadrich, M. The relationship between Precision-Recall and ROC curves. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA, 25–29 June 2006; pp. 233–240.
  41. Langley, R.B. Dilution of precision. GPS World 1999, 10, 52–59.
Figure 1. Block diagram of the two-step cooperative localization method.
Figure 2. Process to find the position and heading of the surrounding vehicle using point cloud data (red dots).
Figure 3. Relationship between coordinate systems.
Figure 4. Data processing from the lidar and V2X modules.
Figure 5. Sensor fusion: (a) simple online and real-time tracking (SORT) and (b) fusion output. Black box as vehicle contour.
Figure 6. Block diagram of the sensor fusion module for tracking the surrounding vehicles.
Figure 7. Localization of the ego-vehicle.
Figure 8. Block diagram of the autonomous driving simulation.
Figure 9. Test scenario: (a) view of the driving simulator; (b) aerial view.
Figure 10. Multiobject tracking results for surrounding vehicles: (a) precision and (b) recall.
Figure 11. Tracking results for an object.
Figure 12. Ego-vehicle localization scenario: (a) satellite view and (b) road view.
Figure 13. Localization results for the ego-vehicle with the number of surrounding vehicles.
Figure 14. Four scenarios for vehicle localization testing: (a) satellite view and (b) road view.
Figure 15. Localization technique results according to vehicle arrangement and the number of particles.
Figure 16. Localization technique position errors according to the vehicle arrangement and number of particles.
Figure 17. Position dilution of precision analysis results for each scenario.
Table 1. Results of root-mean-square error by vehicle ID.

ID    X (m)    Y (m)
1     0.31     0.22
2     0.34     0.33
3     0.42     0.38
4     0.47     0.42

Table 2. Localization performance according to the number of surrounding vehicles with the root-mean-square error.

No. of Vehicles    X (m)    Y (m)
1                  7.19     12.86
2                  1.97     1.11
3                  0.39     0.31
4                  0.21     0.17

Table 3. Root-mean-square errors of the localization technique according to the vehicle arrangement and number of particles.

No. of Particles    Time (ms)    Scenario A X/Y (m)    Scenario B X/Y (m)    Scenario C X/Y (m)    Scenario D X/Y (m)
100                 4.2          0.18 / 0.17           0.18 / 0.25           0.64 / 0.93           1.86 / 2.31
500                 16.7         0.17 / 0.15           0.17 / 0.23           0.34 / 0.59           0.99 / 1.5
1000                42.3         0.16 / 0.14           0.17 / 0.22           0.31 / 0.55           0.95 / 1.26
2000                81.4         0.15 / 0.13           0.15 / 0.16           0.28 / 0.47           0.76 / 1.08
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
