Article

INS/LIDAR/Stereo SLAM Integration for Precision Navigation in GNSS-Denied Environments

by Nader Abdelaziz 1,2,* and Ahmed El-Rabbany 1

1 Department of Civil Engineering, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
2 Department of Civil Engineering, Tanta University, Tanta 31527, Egypt
* Author to whom correspondence should be addressed.
Sensors 2023, 23(17), 7424; https://doi.org/10.3390/s23177424
Submission received: 10 July 2023 / Revised: 17 August 2023 / Accepted: 24 August 2023 / Published: 25 August 2023
(This article belongs to the Topic Multi-Sensor Integrated Navigation Systems)

Abstract

Traditionally, navigation systems have relied solely on global navigation satellite system (GNSS)/inertial navigation system (INS) integration. When a temporary loss of GNSS signal lock is encountered, these systems fall back on the INS, which can sustain short outages but drifts significantly during prolonged ones. In this study, an extended Kalman filter (EKF) framework is proposed to develop an integrated INS/LiDAR/Stereo simultaneous localization and mapping (SLAM) navigation system. The first update stage of the filter integrates the INS with the LiDAR SLAM solution, after which the resultant navigation solution is integrated with the stereo SLAM solution, yielding the final integrated navigation solution. The system was tested for different driving scenarios in urban and rural environments using the raw Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset in the complete absence of the GNSS signal. The selected KITTI drives covered low and high driving speeds in feature-rich and feature-poor environments. It is shown that the proposed INS/LiDAR/Stereo SLAM navigation system yielded better position estimates than the INS operating without assistance from onboard sensors, reducing the root-mean-square error (RMSE) by 83% and 82% in the horizontal and up directions, respectively. In addition, the proposed system outperformed the positioning accuracy of several state-of-the-art algorithms.

1. Introduction

Precise positioning is one of the major obstacles to accurate vehicular navigation. An effective navigation system should function reliably in any driving scenario, such as urban or rural areas with high or low traffic, and under varying weather and lighting conditions. Moreover, sensor redundancy is a vital design consideration, as it is what makes a navigation system robust. For a navigation system to operate safely and accurately, multiple onboard sensors are required so that if one sensor fails, for any reason, the others can compensate for the malfunction and the system continues to operate safely and effectively [1,2,3].
The integration of the global navigation satellite system (GNSS) with the inertial navigation system (INS) has been extensively researched and implemented in numerous studies [4,5,6,7,8,9,10,11]. Typically, the GNSS and the inertial measurement unit (IMU) measurements are merged using a Kalman filter [12], which can employ various integration methods between the GNSS and the IMU, such as loosely coupled and tightly coupled integrations. Loosely coupled integration of GNSS/INS involves combining independent outputs of both systems to enhance navigation accuracy and reliability, while tightly coupled integration integrates GNSS measurements directly into INS computations [13,14]. However, relying solely on INS in the event of prolonged GNSS signal outages can lead to degraded positioning accuracy, especially when utilizing a low-cost micro-electro-mechanical system (MEMS) IMU. As a result, it is crucial to mount more onboard sensors, such as camera(s) or LiDAR sensors.
Positioning with a camera or a LiDAR sensor relies on simultaneous localization and mapping (SLAM) algorithms. SLAM can be formally defined as the ability to map the surroundings while simultaneously tracking the sensor’s location within that map [15]. Many visual SLAM (VSLAM) algorithms have been proposed, as surveyed in [16]. Visual SLAM employs visual markers or landmarks to build a map of the surroundings and determine the sensor’s position. Typically, it uses a camera as the primary sensor and analyzes images to identify environmental features. By examining the motion of these features between successive images, visual SLAM can approximate the sensor’s location while generating a map of the area. Visual SLAM can be executed using various types of cameras, including monocular cameras [17,18,19,20,21], which recover the trajectory only up to a scale factor, and stereo cameras [22,23,24], which resolve the scale ambiguity. A key benefit of visual SLAM is that it captures considerably more environmental detail than LiDAR. However, contrary to LiDAR SLAM, visual SLAM is considerably affected by variations in illumination. It should be noted that in some lighting conditions (e.g., bright sunlight), LiDAR sensor performance can also be degraded as a result of interference or backscattering from highly reflective targets [25].
Similar to visual SLAM, a large number of studies have proposed LiDAR SLAM algorithms, the leading one being LiDAR odometry and mapping (LOAM) [26], which was tested on the KITTI odometry benchmark [27]. As a result, numerous variations of LOAM have been created, including A-LOAM and Kitware [28,29,30,31], which enhance the computational efficiency of the original LOAM algorithm. A major advantage of LiDAR SLAM is that it is not affected by lighting conditions because the LiDAR scanner is an active sensor.
Considering the advantages of both LiDAR sensors and cameras, the most ideal scenario would be to integrate both sensors (i.e., the camera and LiDAR) to obtain rich environmental information while reducing any negative effects of illumination variations.
Several studies have incorporated an INS with either a LiDAR sensor or a camera. In our previous work [32], we introduced a loosely coupled navigation system that combined INS and LiDAR SLAM using an EKF. The efficacy of this navigation approach was evaluated across a range of driving scenarios and environments using the raw residential and highway drives of the KITTI dataset. The study comprised three distinct case studies. The first considered the residential datasets of the KITTI collection, encompassing a total driving duration of 48 min during a complete GNSS signal outage. The second examined the highway datasets, spanning a total duration of 7 min and covering all available highway drives within the raw KITTI data; as in the first case study, the integrated INS/LiDAR SLAM system produced better positional results over the longer datasets. The third case study tackled scenarios involving intermittent GNSS signal outages, simulating situations such as urban canyons and tunnels on highways; the results showed significant enhancements compared with the same datasets under the complete GNSS signal loss of the first case study. Across all driving scenarios, the integrated INS/LiDAR SLAM navigation system achieved substantial improvements in positional accuracy for the residential datasets, with average RMSE reductions of 88% and 32% in the horizontal and vertical components, respectively. The improvements for the highway datasets were approximately 70% and 0.2% for the horizontal and vertical components, respectively. The proposed system was also compared with three state-of-the-art LiDAR SLAM algorithms: it slightly outperformed them for the residential datasets and significantly outperformed them in highway environments. This investigation underscored the potential of the integrated navigation system across various driving scenarios and its contribution to enhancing navigation accuracy and reliability.
Similarly, another study [33] integrated INS and LiDAR SLAM in an unmanned aerial vehicle (UAV) mapping system to address GNSS malfunctions, and the developed GNSS/INS/LiDAR SLAM system was able to bridge GNSS outages. In another study [34], an EKF was used to fuse data from 3D-RISS, GNSS, and LiDAR odometry (LO) to improve positioning accuracy, yielding a 64% decrease in positioning errors compared to using the INS alone. In [35], a navigation model integrating a monocular camera, an IMU, and GNSS was proposed for navigating ground vehicles in environments susceptible to GNSS signal outages. The system yielded up to a 74% reduction in position error compared to using only GNSS data.
In this research, we extend our work in [32] by proposing an integration of the INS, LiDAR SLAM, and stereo visual SLAM, which leverages the often overlooked capability of the INS to serve as a navigation sensor that produces reliable information. Additionally, the developed navigation model stands out in terms of redundancy, which is crucial for safe and efficient navigation. The integration is executed using a series of EKFs in a loosely coupled scheme. To comprehensively validate the performance of the proposed navigation model, we conducted extensive testing across various drives of the raw KITTI dataset. These drives cover a wide spectrum of driving scenarios, ranging from intricate urban environments to rural landscapes, and include variations in driving speed. Finally, the integrated system is compared against several state-of-the-art models.

2. Navigation System Architecture

2.1. LiDAR SLAM

The Kitware SLAM [28], an updated version of the LOAM algorithm [26], is the LiDAR SLAM employed in this study. In our work in [32], a thorough explanation of Kitware SLAM and the various improvements it offers over LOAM is presented.

2.2. Stereo Visual SLAM

ORB-SLAM [21,24,36] is considered a leading state-of-the-art visual SLAM algorithm that can be implemented for both monocular and stereo cameras. Therefore, ORB-SLAM3 [36,37] was adopted in this research to process the stereo images of the KITTI data.
The pipeline of the stereo SLAM system consists of feature detection, feature matching, pose estimation, point-cloud construction, and bundle adjustment. ORB features, which combine an oriented FAST detector with a rotated BRIEF descriptor, are detected and matched in the left and right stereo images, and each feature’s orientation is estimated using the intensity centroid method. The matched points are used to estimate the relative camera pose between frames, with RANSAC employed to reject outlier correspondences. The camera pose estimate is further refined by a robust optimizer that accounts for the uncertainty of the feature measurements. The 3D point cloud of the environment is generated by triangulating the matched feature points in the left and right camera images using the relative pose estimates. The point cloud is then optimized through bundle adjustment to minimize the reprojection error of the 3D points onto the stereo images; the optimization is carried out using the Levenberg–Marquardt algorithm with a Schur-complement factorization and includes robust loss functions to manage outliers and noise. Loop closures are identified using a bag-of-words approach that represents images as histograms of visual words and matches the current image to the database entry with the highest similarity. Each loop closure is validated using geometric constraints, after which the map and camera poses are optimized by bundle adjustment.
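To make the front-end steps above concrete, the following is a minimal sketch, assuming Python with OpenCV and NumPy, of ORB detection, stereo matching, and triangulation. It is not the ORB-SLAM3 implementation; the image paths, intrinsics, and baseline are placeholder values for illustration only.

```python
# Minimal stereo front-end sketch (ORB detection, matching, triangulation).
# Not ORB-SLAM3; paths and calibration values are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder rectified stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
assert left is not None and right is not None, "provide a rectified stereo pair"

orb = cv2.ORB_create(nfeatures=2000)                    # oriented FAST + rotated BRIEF
kp_l, des_l = orb.detectAndCompute(left, None)
kp_r, des_r = orb.detectAndCompute(right, None)

# Brute-force Hamming matching with cross-check to reject ambiguous matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)

pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches]).T   # 2 x N pixel coordinates
pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches]).T

# Example rectified-stereo projection matrices (placeholder intrinsics and baseline)
fx, fy, cx, cy, baseline = 718.856, 718.856, 607.19, 185.22, 0.54
P_l = np.array([[fx, 0, cx, 0], [0, fy, cy, 0], [0, 0, 1, 0]])
P_r = np.array([[fx, 0, cx, -fx * baseline], [0, fy, cy, 0], [0, 0, 1, 0]])

# Triangulate matched features into a 3D point cloud (homogeneous -> Euclidean)
pts4d = cv2.triangulatePoints(P_l, P_r, pts_l, pts_r)
pts3d = (pts4d[:3] / pts4d[3]).T
print(pts3d.shape)
```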

2.3. Integrated Navigation Systems

2.3.1. Coordinate Transformations

SLAM pose estimates are computed relative to the sensor’s local reference frame. In other words, all poses in LiDAR SLAM are referenced to the first LiDAR frame, while in visual SLAM they are referenced to the first camera frame. Therefore, all local coordinates are transformed from the local frames to the WGS84 Earth-centered, Earth-fixed (ECEF) frame, which is necessary for the subsequent Kalman filtering. Figure 1 illustrates the transformation process graphically.
Pose transformations are executed using 4 × 4 homogeneous transformation matrices. Let $P_i^{cam}$ and $P_i^{Li}$ denote the coordinates of a point expressed in the camera frame and the LiDAR frame, respectively, and let $P_i^{ecef}$ denote the same point expressed in the WGS84 (ECEF) reference frame. Equations (1)–(4) represent the sequence of homogeneous transformations.
$$P_i^{ecef} = (R/t)_l^{ecef} \, (R/t)_b^{l} \, (R/t)_{Li}^{b} \, (R/t)_{cam}^{Li} \, P_i^{cam} \quad (1)$$
$$P_i^{ecef} = (R/t)_l^{ecef} \, (R/t)_b^{l} \, (R/t)_{Li}^{b} \, P_i^{Li} \quad (2)$$
$$R_{cam}^{l} = R_b^{l} \, R_{Li}^{b} \, R_{cam}^{Li} \quad (3)$$
$$R_{Li}^{l} = R_b^{l} \, R_{Li}^{b} \quad (4)$$
where $(R/t)_{origin}^{destination}$ symbolizes the 4 × 4 homogeneous transformation matrix from the origin frame to the destination frame. That is, $(R/t)_{cam}^{Li}$ is the homogeneous transformation from the camera frame to the LiDAR frame, $(R/t)_{Li}^{b}$ is the homogeneous transformation from the LiDAR frame to the body frame, $(R/t)_{b}^{l}$ is the transformation from the body frame to the local-level frame, and $(R/t)_{l}^{ecef}$ is the transformation from the local-level frame to the WGS84 reference frame.
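As an illustration of Equations (1) and (2), the following minimal Python/NumPy sketch chains 4 × 4 homogeneous transformations to map a camera-frame point into the ECEF frame. The rotation and translation values below are placeholders; in practice they come from the sensor calibration files and the INS attitude/position at the current epoch.

```python
# Sketch of the homogeneous-transformation chain of Equations (1)-(4):
# camera -> LiDAR -> body -> local-level -> ECEF (WGS84). Placeholder values only.
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 (R/t) matrix from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder transforms (identity rotations, arbitrary lever arms)
T_cam_to_Li  = homogeneous(np.eye(3), np.array([0.27, 0.00, -0.08]))    # camera -> LiDAR
T_Li_to_body = homogeneous(np.eye(3), np.array([-0.81, 0.32, 0.80]))    # LiDAR  -> body (IMU)
T_body_to_l  = homogeneous(np.eye(3), np.zeros(3))                      # body   -> local level
T_l_to_ecef  = homogeneous(np.eye(3), np.array([4.0e6, 6.0e5, 4.8e6]))  # local  -> ECEF

# Equation (1): P_ecef = (R/t)_l^ecef (R/t)_b^l (R/t)_Li^b (R/t)_cam^Li P_cam
p_cam = np.array([1.0, 2.0, 10.0, 1.0])     # homogeneous point in the camera frame
p_ecef = T_l_to_ecef @ T_body_to_l @ T_Li_to_body @ T_cam_to_Li @ p_cam
print(p_ecef[:3])
```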

2.3.2. INS/LiDAR/Stereo SLAM Integration

The measurements of the IMU, LiDAR, and the stereo camera are fused using the proposed integrated navigation system through a series of EKFs as illustrated by the block diagram in Figure 2. Firstly, the INS solution is fused with the LiDAR SLAM pose estimations through the first EKF, forming the navigation solution of the INS/LiDAR SLAM integration. The latter is then fused with the stereo SLAM solution in the second EKF to form the final integrated INS/LiDAR/Stereo navigation system. Finally, the updated errors are fed back into the IMU’s full mechanization, which creates a closed-loop error scheme.
The EKF mathematical and stochastic models (i.e., the system model, the design matrix (H), and the system dynamics matrix (F)) are the same as those used in our work in [32]. The state vector is given by Equation (5), and the first and second update vectors are given by Equations (6) and (7), respectively.
$$\delta x = \begin{bmatrix} \delta r & \delta v & \delta \varepsilon & \delta b_a & \delta b_g \end{bmatrix}^T \quad (5)$$
$$\delta Z_{k_1} = \begin{bmatrix} \delta r_1 & \delta \varepsilon_1 \end{bmatrix}^T = \begin{bmatrix} \varphi_{INS-Li} & \lambda_{INS-Li} & h_{INS-Li} & p_{INS-Li} & r_{INS-Li} & y_{INS-Li} \end{bmatrix}^T \quad (6)$$
$$\delta Z_{k_2} = \begin{bmatrix} \delta r_2 & \delta \varepsilon_2 \end{bmatrix}^T = \begin{bmatrix} \varphi_{INS/Li-stereo} & \lambda_{INS/Li-stereo} & h_{INS/Li-stereo} & p_{INS/Li-stereo} & r_{INS/Li-stereo} & y_{INS/Li-stereo} \end{bmatrix}^T \quad (7)$$
where $\delta r = [\delta \varphi \;\; \delta \lambda \;\; \delta h]^T$ is the position error vector; $\delta v = [\delta v_e \;\; \delta v_n \;\; \delta v_u]^T$ is the velocity error vector; $\delta \varepsilon = [\delta p \;\; \delta r \;\; \delta y]^T$ is the attitude error vector; $\delta b_a = [\delta b_{ax} \;\; \delta b_{ay} \;\; \delta b_{az}]^T$ is the accelerometer bias error vector; $\delta b_g = [\delta b_{gx} \;\; \delta b_{gy} \;\; \delta b_{gz}]^T$ is the gyroscope bias error vector; $\delta Z_{k_1}$ and $\delta Z_{k_2}$ are the measurement update vectors; $\delta r_1$ and $\delta r_2$ are the position error vectors in the first and second update stages, respectively (e.g., latitude error: $\varphi_{INS-Li} = \varphi_{INS} - \varphi_{Li}$); and $\delta \varepsilon_1$ and $\delta \varepsilon_2$ are the attitude error vectors in the two update stages.
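The following is a minimal sketch, assuming NumPy and purely illustrative noise matrices, of how the cascaded loosely coupled updates operate on the 15-state error vector of Equation (5): one prediction followed by the LiDAR update of Equation (6) and the stereo update of Equation (7). The F, Q, and R matrices and the measurement values are placeholders, not the tuned models of the paper.

```python
# Sketch of the cascaded loosely-coupled EKF updates. Error state:
# [dr(3), dv(3), de(3), dba(3), dbg(3)]; each update uses a 6-element measurement
# (position and attitude differences). All numerical values are placeholders.
import numpy as np

n = 15
F = np.eye(n)                              # placeholder discrete transition matrix
Q = 1e-4 * np.eye(n)                       # placeholder process noise

# H selects the position errors (states 0-2) and attitude errors (states 6-8)
H = np.zeros((6, n))
H[0:3, 0:3] = np.eye(3)
H[3:6, 6:9] = np.eye(3)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    """Standard EKF measurement update with z = H x + noise."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(n) - K @ H) @ P
    return x, P

x = np.zeros(n)                            # error states start at zero (closed loop)
P = 1e-2 * np.eye(n)

x, P = predict(x, P)
z_lidar = np.array([1e-6, -2e-6, 0.5, 0.01, -0.02, 0.1])     # INS - LiDAR SLAM (placeholder)
x, P = update(x, P, z_lidar, R=1e-2 * np.eye(6))             # first update stage, Eq. (6)
z_stereo = np.array([5e-7, -1e-6, 0.3, 0.00, -0.01, 0.05])   # (INS/LiDAR) - stereo (placeholder)
x, P = update(x, P, z_stereo, R=5e-3 * np.eye(6))            # second update stage, Eq. (7)
# The estimated errors in x would then be fed back to correct the INS mechanization.
```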

3. Data Acquisition and Description

In this research, the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset [38] is adopted as the data source, which is available through [39]. The KITTI dataset stands out as a valuable resource for researchers internationally, who can publish their results on the KITTI odometry benchmark [27] to enable fair comparisons between different algorithms and methods. It is renowned for its diverse sensor suite, which enables capturing data from various perspectives. The platform’s sensor configuration includes an integrated GNSS/IMU unit (OXTS RT3003), a 360-degree rotating mechanical LiDAR with 64 beams (Velodyne HDL-64E), and two stereo camera pairs, one grayscale and one color. These sensors work in harmony to provide a rich and comprehensive view of the environment surrounding the data collection vehicle.
The KITTI dataset was gathered in Germany, specifically in the town of Karlsruhe. The choice of location offers an advantage for researchers, as it presents a wide range of real-world driving scenarios and environmental conditions. The dataset encompasses urban, rural, and highway scenes, making it applicable to various driving applications worldwide.
The KITTI dataset comprises several categories, namely raw data, object detection and tracking, road segmentation, and odometry. Each category serves a specific purpose for vehicular navigation and autonomous driving research. The raw data consist of raw sensor measurements, including images, point clouds, and sensor calibration parameters; researchers can utilize these data to develop and test algorithms for tasks such as object detection, semantic segmentation, and depth estimation. It is worth noting that the raw datasets are available in both unsynchronized and synchronized form, the latter of which is widely utilized. The object detection and tracking dataset provides annotated 2D and 3D bounding boxes for objects of interest, such as cars, pedestrians, and cyclists, allowing researchers to train and evaluate object detection and tracking algorithms. The third category, road segmentation, is designed for road-scene understanding; it provides pixel-level annotations of road regions, enabling the development of road segmentation algorithms, a crucial task for autonomous navigation. Finally, the odometry dataset offers precise ego-motion estimates, allowing researchers to evaluate and refine their localization and mapping algorithms. It consists of synchronized camera-LiDAR sequences with accurate ground-truth pose information; however, it does not contain raw IMU measurements (i.e., raw accelerations and angular rates). Therefore, in order to take advantage of the raw IMU measurements, which are essential to the proposed navigation model, the raw dataset is the one considered in this study.
The KITTI raw dataset has a well-defined structure that is consistent across all drives. Each dataset is organized into sequences, where each sequence represents a continuous driving segment; for instance, a sequence might cover a specific urban area or a stretch of highway. Each sequence contains multiple data modalities, such as grayscale and color images, Velodyne point clouds, and GPS/IMU measurements. Additionally, the dataset includes calibration files for sensor alignment and synchronization purposes.
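As an illustration of this directory layout, the following sketch reads one synchronized raw drive using the third-party pykitti package (an assumption: pykitti is not used in the paper itself); the base path and drive identifiers are placeholders.

```python
# Sketch of accessing a synchronized raw KITTI drive with the third-party pykitti
# package (assumed tooling, not the paper's pipeline). Paths are placeholders.
import pykitti

basedir = "/data/kitti_raw"          # placeholder path to the raw KITTI root
date, drive = "2011_10_03", "0027"   # e.g. drive D-27 in Table 1

data = pykitti.raw(basedir, date, drive)

oxts0 = data.oxts[0].packet          # first GPS/IMU record (lat, lon, alt, roll, ...)
print(oxts0.lat, oxts0.lon, oxts0.alt)

velo0 = data.get_velo(0)             # first Velodyne scan, N x 4 (x, y, z, reflectance)
img0 = data.get_cam2(0)              # first color image of the left camera
print(velo0.shape, img0.size)
```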
In this research, both residential and highway datasets are considered, which forms two case studies. The drive number of each of the raw KITTI datasets is presented in Table 1. In addition, the table presents the length, the elapsed drive time, average speed, and the number of frames of each dataset.

4. Analysis and Results

4.1. Residential Datasets—Case Study 1

The first case study analyzes seven drives of the raw KITTI dataset recorded in urban areas at low speeds with a high density of features. The seven datasets add up to approximately 17 km of driving distance and a total driving time of around 34 min, and all of them were used to test the performance of the developed integrated navigation system in residential driving environments. The first residential drive considered was D-34, a 920-m drive segment with an average speed of around 26 km/h. Figure 3 presents the position errors in the ENU (navigation) frame for four navigation solutions, namely the INS, the LiDAR SLAM, the stereo SLAM, and the final integrated navigation solution. These errors are calculated with respect to the reference trajectory provided by the integrated GNSS/INS unit (OXTS). The errors of the attitude angles (roll, pitch, and yaw) are shown in Figure 4.
From Figure 3, it is observable that the INS solution drifts significantly shortly after the vehicle starts the drive, which is the typical behaviour of a standalone IMU. Both the LiDAR and stereo SLAM yield significantly more accurate results than the INS, with the stereo SLAM being the most accurate; consequently, the final navigation solution followed the stereo SLAM position estimates. In contrast, Figure 4 shows that the INS produces more accurate and stable estimates of the attitude angles than either the LiDAR or stereo SLAM, a result of the high-quality IMU utilized by the KITTI data collection platform. This behaviour is consistent with our previous findings in [32]; consequently, the integrated system was tuned to follow the INS attitude estimates. Table 2 displays the error statistics of the INS/LiDAR/Stereo navigation solution, while Figure 5 compares all four trajectories to the ground truth.
The remaining datasets in Table 1 underwent the same analysis as D-34 and exhibited the same trends in the position and attitude estimations. Appendix A, Figure A1, Figure A2, Figure A3, Figure A4, Figure A5, Figure A6, Figure A7, Figure A8, Figure A9 and Figure A10, provides detailed position and attitude results for each of the remaining residential drives of the KITTI dataset. Figure 6 compares the INS, LiDAR SLAM, stereo SLAM, and the INS/LiDAR/Stereo integrated navigation solution to the ground truth trajectory for each drive, displayed in the navigation frame.
Finally, Figure 7 presents the RMSE reductions of the INS position estimates in the horizontal and up directions for all residential drives of the KITTI dataset. The average reduction of the INS RMSE across the residential drives was approximately 99% and 86% in the horizontal and up directions, respectively.
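For reference, the sketch below shows how the horizontal and up RMSE values and the corresponding percentage reductions reported here can be computed from per-epoch ENU errors; the error arrays are random placeholders rather than actual trajectory data.

```python
# Sketch of computing horizontal/up RMSE and the percentage reduction between the
# INS-only and integrated solutions. Error arrays are synthetic placeholders.
import numpy as np

def rmse(err):
    return np.sqrt(np.mean(np.square(err)))

def horizontal_and_up_rmse(e):
    horiz = np.hypot(e[:, 0], e[:, 1])          # per-epoch horizontal error from East/North
    return rmse(horiz), rmse(e[:, 2])

rng = np.random.default_rng(0)
e_ins = rng.normal(0, 50.0, size=(1000, 3))     # placeholder INS-only ENU errors (m)
e_fused = rng.normal(0, 5.0, size=(1000, 3))    # placeholder integrated-solution errors (m)

h_ins, u_ins = horizontal_and_up_rmse(e_ins)
h_f, u_f = horizontal_and_up_rmse(e_fused)

print(f"horizontal RMSE reduction: {100 * (h_ins - h_f) / h_ins:.1f}%")
print(f"up RMSE reduction:         {100 * (u_ins - u_f) / u_ins:.1f}%")
```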

4.2. Highway Datasets—Case Study 2

The second case study involves testing the performance of the developed integrated navigation system using raw highway KITTI datasets. Four highway datasets were considered, which totalled a driving distance and time of roughly 5 kilometres and 5 min, respectively. Detailed results are provided for D-42, which is a 2591-m driving segment with a driving time of 121.19 s. Figure 8 and Figure 9 present the position errors in the ENU frame and attitude errors (roll, pitch, and yaw), respectively. The statistics of these errors are presented in Table 3.
It is noticeable from Figure 8 and Figure 9 that the integrated navigation system continues to exhibit the same behaviour observed in the first case study; that is, the system follows the position estimates of the more accurate sensor (stereo SLAM) while adopting the more accurate attitude estimates of the INS.
The same analysis was conducted for the remaining highway datasets; Figure A11, Figure A12, Figure A13, Figure A14, Figure A15 and Figure A16 in Appendix A provide a detailed comparison of their positional and attitude errors. In addition, Figure 10 presents a graphical comparison of the trajectories of the INS, LiDAR SLAM, stereo SLAM, and the integrated navigation system against the ground truth trajectory in the ENU reference frame.
Figure 11 presents the RMSE reductions of the INS position estimations in the horizontal and up directions for highway drives considered in the second case study. It is important to note that, on average, the INS RMSE was reduced by approximately 50% for the horizontal direction and 73% for the up direction in highway drives.

4.3. Comparison to State-of-the-Art Models

It is worth mentioning that even though the developed INS/LiDAR/Stereo navigation system follows the position of the stereo SLAM and the attitude of the INS, the LiDAR plays a vital role as a redundant onboard sensor. Should an unexpected malfunction occur in the stereo camera, the navigation system would continue to operate, degrading gracefully to the accuracy of the LiDAR SLAM solution, thereby preventing a complete navigation system failure.
The INS/LiDAR/Stereo navigation system yielded superior performance to the system we developed in [32], as illustrated in Table 4, which compares both navigation models using a sample of the KITTI dataset. The accuracy improvement in the positional RMSE is around 50% and 64% for the horizontal and up directions, respectively. In addition, since the INS/LiDAR navigation model was compared in [32] to other state-of-the-art SLAM algorithms, these results also demonstrate the accuracy of the INS/LiDAR/Stereo system relative to both our previous work and those state-of-the-art SLAM algorithms [29,30,41].

5. Conclusions

In this research, an integrated INS/LiDAR/Stereo navigation system was developed and tested in various driving scenarios using the raw KITTI dataset. The datasets were divided into two case studies, namely residential and highway datasets: the first covered residential driving at relatively low speeds in feature-dense environments, while the second covered highway driving at higher speeds with fewer features. In both case studies, the developed integrated navigation system followed the position estimates of the stereo SLAM while adopting the attitude estimates of the INS. In addition, the integrated system reduced the INS RMSE across all case studies by 83% and 82% in the horizontal and up directions, respectively. Finally, the developed navigation model outperformed our previously developed INS/LiDAR navigation model, with an average accuracy improvement of approximately 50% and 64% in the horizontal and up directions, respectively.

Author Contributions

N.A. and A.E.-R. conceived the idea and wrote the paper. N.A. implemented the analysis and performed the validation under the supervision of A.E.-R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Toronto Metropolitan University Graduate Fellowship, and the Government of Ontario Scholarship (OGS).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this study can be found at: http://www.cvlibs.net/datasets/kitti/, accessed 10 March 2023.

Acknowledgments

This work is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), Toronto Metropolitan University, and the Government of Ontario.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Positional errors expressed in the ENU frame, D-20.
Figure A2. Attitude angles’ errors, D-20.
Figure A3. Positional errors expressed in the ENU frame, D-27-d.
Figure A4. Attitude angles’ errors, D-27-d.
Figure A5. Positional errors expressed in the ENU frame, D-33.
Figure A6. Attitude angles’ errors, D-33.
Figure A7. Positional errors expressed in the ENU frame, D-34-d.
Figure A8. Attitude angles’ errors, D-34-d.
Figure A9. Positional errors expressed in the ENU frame, D-28.
Figure A10. Attitude angles’ errors, D-28.
Figure A11. Positional errors expressed in the ENU frame, D-15.
Figure A12. Attitude angles’ errors, D-15.
Figure A13. Positional errors expressed in the ENU frame, D-16.
Figure A14. Attitude angles’ errors, D-16.
Figure A15. Positional errors expressed in the ENU frame, D-101.
Figure A16. Attitude angles’ errors, D-101.

References

  1. de Ponte Müller, F. Survey on ranging sensors and cooperative techniques for relative positioning of vehicles. Sensors 2017, 17, 271. [Google Scholar] [CrossRef] [PubMed]
  2. Martínez-Díaz, M.; Soriguera, F. Autonomous vehicles: Theoretical and practical challenges. Transp. Res. Procedia 2018, 33, 275–282. [Google Scholar] [CrossRef]
  3. Maurer, M.; Gerdes, J.C.; Lenz, B.; Winner, H. Safety Concept for Autonomous Vehicles. In Autonomes Fahren; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  4. Abd Rabbou, M.; El-Rabbany, A. Tightly coupled integration of GPS precise point positioning and MEMS-based inertial systems. GPS Solut. 2015, 19, 601–609. [Google Scholar] [CrossRef]
  5. Borko, A.; Klein, I.; Even-Tzur, G. GNSS/INS Fusion with Virtual Lever-Arm Measurements. Sensors 2018, 18, 2228. [Google Scholar] [CrossRef] [PubMed]
  6. Elmezayen, A.; El-Rabbany, A. Ultra-Low-Cost Tightly Coupled Triple-Constellation GNSS PPP/MEMS-Based INS Integration for Land Vehicular Applications. Geomatics 2021, 1, 258–286. [Google Scholar] [CrossRef]
  7. Elmezayen, A.; El-Rabbany, A. Performance evaluation of real-time tightly-coupled GNSS PPP/MEMS-based inertial integration using an improved robust adaptive Kalman filter. J. Appl. Geodesy 2020, 14, 413–430. [Google Scholar] [CrossRef]
  8. Li, W.; Fan, P.; Cui, X.; Zhao, S.; Ma, T.; Lu, M. A Low-Cost INS-Integratable GNSS Ultra-Short Baseline Attitude Determination System. Sensors 2018, 18, 2114. [Google Scholar] [CrossRef]
  9. Li, W.; Li, W.; Cui, X.; Zhao, S.; Lu, M. A Tightly Coupled RTK/INS Algorithm with Ambiguity Resolution in the Position Domain for Ground Vehicles in Harsh Urban Environments. Sensors 2018, 18, 2160. [Google Scholar] [CrossRef] [PubMed]
  10. Wang, G.; Han, Y.; Chen, J.; Wang, S.; Zhang, Z.; Du, N.; Zheng, Y.J. A GNSS/INS integrated navigation algorithm based on Kalman filter. IFAC-PapersOnLine 2018, 51, 232–237. [Google Scholar] [CrossRef]
  11. Gao, B.; Hu, G.; Zhong, Y.; Zhu, X. Cubature Kalman Filter With Both Adaptability and Robustness for Tightly-Coupled GNSS/INS Integration. IEEE Sens. J. 2021, 21, 14997–15011. [Google Scholar] [CrossRef]
  12. Grewal, M.S.; Andrews, A.P.; Bartone, C.G. Kalman Filtering; Wiley Telecom: Washington, DC, USA, 2020. [Google Scholar]
  13. Boguspayev, N.; Akhmedov, D.; Raskaliyev, A.; Kim, A.; Sukhenko, A. A Comprehensive Review of GNSS/INS Integration Techniques for Land and Air Vehicle Applications. Appl. Sci. 2023, 13, 4819. [Google Scholar] [CrossRef]
  14. Noureldin, A.; Karamat, T.B.; Georgy, J. Fundamentals of Inertial Navigation, Satellite-Based Positioning and Their Integration; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  15. Ben-Ari, M.; Mondada, F. Elements of Robotics; Springer International Publishing: Cham, Switzerland, 2018. [Google Scholar]
  16. Taketomi, T.; Uchiyama, H.; Ikeda, S. Visual SLAM algorithms: A survey from 2010 to 2016. IPSJ Trans. Comput. Vis. Appl. 2017, 9, 16. [Google Scholar] [CrossRef]
  17. Çelik, K.; Somani, A.K. Monocular Vision SLAM for Indoor Aerial Vehicles. J. Electr. Comput. Eng. 2013, 2013, 374165. [Google Scholar] [CrossRef]
  18. Tan, W.; Liu, H.; Dong, Z.; Zhang, G.; Bao, H. Robust monocular SLAM in dynamic environments. In Proceedings of the 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Adelaide, Australia, 1–4 October 2013; pp. 209–218. [Google Scholar]
  19. Davison, A.J.; Reid, I.D.; Molton, N.D.; Stasse, O. MonoSLAM: Real-Time Single Camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1052–1067. [Google Scholar] [CrossRef] [PubMed]
  20. Hwang, S.-Y.; Song, J.-B. Monocular Vision-Based SLAM in Indoor Environment Using Corner, Lamp, and Door Features from Upward-Looking Camera. IEEE Trans. Ind. Electron. (1982) 2011, 58, 4804–4812. [Google Scholar] [CrossRef]
  21. Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef]
  22. Engel, J.; Stuckler, J.; Cremers, D. Large-scale direct SLAM with stereo cameras. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 1935–1942. [Google Scholar]
  23. Krombach, N.; Droeschel, D.; Houben, S.; Behnke, S. Feature-based visual odometry prior for real-time semi-dense stereo SLAM. Robot. Auton. Syst. 2018, 109, 38–58. [Google Scholar] [CrossRef]
  24. Mur-Artal, R.; Tardós, J.D. Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef]
  25. Gomes, T.; Roriz, R.; Cunha, L.; Ganal, A.; Soares, N.; Araújo, T.; Monteiro, J. Evaluation and testing system for automotive LiDAR sensors. Appl. Sci. 2022, 12, 13003. [Google Scholar] [CrossRef]
  26. Zhang, J.; Singh, S. LOAM: Lidar odometry and mapping in real-time. Robot. Sci. Syst. 2014, 2, 1–9. [Google Scholar]
  27. KITTI. Available online: http://www.cvlibs.net/datasets/kitti/eval_odometry.php (accessed on 1 December 2022).
  28. KITWARE. Available online: https://gitlab.kitware.com/keu-computervision/slam (accessed on 1 May 2020).
  29. A-LOAM. Available online: https://github.com/HKUST-Aerial-Robotics/A-LOAM (accessed on 5 October 2022).
  30. F-LOAM. Available online: https://github.com/wh200720041/floam (accessed on 6 May 2022).
  31. Shan, T.; Englot, B. LeGO-LOAM: Lightweight and ground-optimized lidar odometry and mapping on variable terrain. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4758–4765. [Google Scholar]
  32. Abdelaziz, N.; El-Rabbany, A. An Integrated INS/LiDAR SLAM Navigation System for GNSS-Challenging Environments. Sensors 2022, 22, 4327. [Google Scholar] [CrossRef] [PubMed]
  33. Elamin, A.; Abdelaziz, N.; El-Rabbany, A. A GNSS/INS/LiDAR Integration Scheme for UAV-Based Navigation in GNSS-Challenging Environments. Sensors 2022, 22, 9908. [Google Scholar] [CrossRef] [PubMed]
  34. Aboutaleb, A.; El-Wakeel, A.S.; Elghamrawy, H.; Noureldin, A. Lidar/riss/gnss dynamic integration for land vehicle robust positioning in challenging gnss environments. Remote Sens. 2020, 12, 2323. [Google Scholar] [CrossRef]
  35. Chu, T.; Guo, N.; Backén, S.; Akos, D. Monocular camera/IMU/GNSS integration for ground vehicle navigation in challenging GNSS environments. Sensors 2012, 12, 3162–3185. [Google Scholar] [CrossRef]
  36. Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.; Tardós, J.D. Orb-slam3: An accurate open-source library for visual, visual–inertial, and multimap slam. arXiv 2021, arXiv:2007.11898. [Google Scholar] [CrossRef]
  37. UZ-SLAMlab ORB_SLAM3. Available online: https://github.com/UZ-SLAMLab/ORB_SLAM3 (accessed on 15 June 2023).
  38. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef]
  39. KITTI. Available online: http://www.cvlibs.net/datasets/kitti/ (accessed on 10 March 2023).
  40. Google Earth. Available online: https://earth.google.com/web/ (accessed on 2 March 2023).
  41. LeGO-LOAM. Available online: https://github.com/Mitchell-Lee-93/kitti-lego-loam (accessed on 5 May 2022).
Figure 1. Pose transformation of onboard sensors to the WGS84 global reference frame.
Figure 2. INS/LiDAR/Stereo LC integration block diagram.
Figure 3. Positional errors expressed in the ENU frame, D-34.
Figure 4. Attitude angles’ errors, D-34.
Figure 5. Comparison of trajectories in the world frame (WGS84), D-34, base map captured from Google Earth [40].
Figure 6. Comparison of trajectories of the raw residential KITTI drives in the ENU frame.
Figure 7. Accuracy improvement expressed by INS RMSE reduction for the horizontal and up directions, raw residential KITTI datasets.
Figure 8. Positional errors expressed in the ENU frame, D-42.
Figure 9. Attitude angles’ errors, D-42.
Figure 10. Comparison of trajectories of the raw highway KITTI drives in the ENU frame.
Figure 11. Accuracy improvement expressed by INS RMSE reduction for the horizontal and up directions, raw highway KITTI datasets.
Table 1. Trajectory information of the raw KITTI drives under consideration [39].

Drive Label | Drive Number | Length (m) | Time (s) | Average Speed (km/h) | No. of Frames
Residential datasets
D-27   | 2011_10_03_drive_0027_sync | 3669.18 | 465.97 | 28.35 | 4497
D-33   | 2011_09_30_drive_0033_sync | 1709.57 | 165.31 | 37.23 | 1594
D-34-d | 2011_09_30_drive_0034_sync |  920.52 | 126.88 | 26.12 | 1224
D-34   | 2011_10_03_drive_0034_sync | 5043.67 | 481.31 | 37.72 | 4642
D-20   | 2011_09_30_drive_0020_sync | 1233.74 | 114.49 | 38.79 | 1104
D-27-d | 2011_09_30_drive_0027_sync |  692.47 | 114.85 | 21.71 | 1106
D-28   | 2011_09_30_drive_0028_sync | 4208.65 | 537.78 | 28.17 | 5177
Highway datasets
D-42   | 2011_10_03_drive_0042_sync | 2591.80 | 121.19 | 76.99 | 1170
D-15   | 2011_09_26_drive_0015_sync |  362.81 |  30.65 | 42.61 |  297
D-101  | 2011_09_26_drive_0101_sync | 1299.13 |  96.62 | 48.40 |  936
D-16   | 2011_09_30_drive_0016_sync |  404.71 |  28.84 | 50.52 |  278
Table 2. Position (m) and attitude angles (deg) error statistics—D-34.

           | INS (Mean / RMSE / Max)     | LiDAR (Mean / RMSE / Max) | Stereo (Mean / RMSE / Max) | INS/LiDAR/Stereo (Mean / RMSE / Max)
East       | 485.60 / 795.29 / 1556.31   | −5.33 / 6.46 / 13.75      | −4.66 / 7.28 / 16.76       | −4.72 / 6.90 / 15.00
North      | 718.27 / 1840.00 / 5986.33  | −25.25 / 26.30 / 36.37    | 3.73 / 6.51 / 12.98        | 1.09 / 4.75 / 8.93
Horizontal | 1524.50 / 2004.51 / 6046.38 | 26.05 / 27.08 / 37.68     | 8.67 / 9.76 / 17.91        | 7.34 / 8.38 / 15.31
Up         | 169.47 / 322.74 / 1082.86   | 29.38 / 33.62 / 60.78     | 5.68 / 6.22 / 11.05        | 5.68 / 6.22 / 11.04
Roll       | 0.237 / 2.022 / 6.144       | 0.647 / 3.111 / 6.706     | 0.258 / 1.051 / 2.379      | 0.232 / 2.020 / 6.190
Pitch      | −0.051 / 2.655 / 6.298      | −0.420 / 3.559 / 6.475    | 0.123 / 0.820 / 2.595      | −0.048 / 2.671 / 6.350
Yaw        | 0.932 / 1.072 / 4.015       | −1.254 / 1.601 / 4.383    | 0.831 / 0.980 / 4.880      | 0.966 / 1.119 / 4.083
Table 3. Position (m) and attitude angles (deg) error statistics—D-42.

           | INS (Mean / RMSE / Max)    | LiDAR (Mean / RMSE / Max) | Stereo (Mean / RMSE / Max) | INS/LiDAR/Stereo (Mean / RMSE / Max)
East       | −129.93 / 194.98 / 502.57  | 10.79 / 17.82 / 35.49     | 10.53 / 16.23 / 31.39      | 10.56 / 16.36 / 31.65
North      | 27.09 / 32.69 / 50.88      | −22.50 / 25.72 / 34.33    | −13.12 / 14.29 / 18.42     | −13.98 / 15.25 / 19.91
Horizontal | 135.94 / 197.70 / 502.70   | 26.86 / 31.28 / 48.39     | 19.32 / 21.63 / 34.70      | 19.90 / 22.36 / 35.66
Up         | 17.93 / 22.23 / 40.58      | 123.55 / 161.39 / 285.49  | 8.73 / 10.70 / 17.86       | 8.73 / 10.70 / 17.86
Roll       | −0.225 / 0.557 / 1.421     | 0.026 / 4.820 / 15.838    | 0.730 / 0.797 / 1.357      | −0.224 / 0.555 / 1.419
Pitch      | −0.266 / 0.570 / 1.201     | −4.770 / 8.955 / 15.270   | 0.436 / 0.526 / 0.916      | −0.266 / 0.570 / 1.202
Yaw        | 0.130 / 0.299 / 0.682      | −0.328 / 1.260 / 2.654    | −1.144 / 1.566 / 2.465     | 0.129 / 0.297 / 0.677
Table 4. Comparing the developed model to the INS/LiDAR model developed in [32].

Drive | Direction  | INS/LiDAR/Stereo, RMSE (m) | INS/LiDAR, RMSE (m) | Accuracy Improvement (%)
D-28  | Horizontal | 8.40  | 15.06 | 44.22
D-28  | Up         | 7.83  | 13.57 | 42.30
D-42  | Horizontal | 22.36 | 32.15 | 30.45
D-42  | Up         | 10.70 | 23.58 | 54.62
D-101 | Horizontal | 3.67  | 14.82 | 75.24
D-101 | Up         | 0.41  | 6.03  | 93.20

