# Advancing Simultaneous Localization and Mapping with Multi-Sensor Fusion and Point Cloud De-Distortion


## Abstract


## 1. Introduction

- (1) Completed the calibration of the individual sensors used, as well as the joint calibration of the LiDAR and camera.
- (2) Implemented a motion distortion removal (MDR) method for LiDAR point cloud data based on odometry readings.
- (3) Presented obstacle detection algorithms that fuse LiDAR and camera data, divided into three parts: (a) a comparison of obstacle detection using a single LiDAR or camera, (b) obstacle detection using the point-fusion (PF) method, and (c) obstacle detection using the point-map fusion (PMF) algorithm.

## 2. System Overall Design and Joint Calibration of LiDAR and Depth Camera

#### 2.1. System Overall Design

#### 2.2. Sensor Calibration

#### 2.3. Joint Calibration of LiDAR and Camera

## 3. Point Cloud Data Undistortion Algorithm Based on Odometry (MDR)

#### 3.1. Principle of LiDAR Point Cloud Distortion Removal

#### 3.2. MDR Algorithm Principle
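The core idea of odometry-based de-distortion is to interpolate the robot pose for each point's timestamp and re-express the point in the pose at the start of the scan. A minimal 2D sketch of this idea follows; it is an illustration under a linear pose-interpolation assumption, not the paper's implementation, and the name `deskew_scan` is ours:

```python
import math

def deskew_scan(points, t0, t1, pose0, pose1):
    """Remove motion distortion from a 2D scan by linearly interpolating
    the robot pose (x, y, yaw) from odometry over the scan period and
    re-expressing every point in the pose at scan start (t0)."""
    x0, y0, th0 = pose0
    x1, y1, th1 = pose1
    corrected = []
    for t, px, py in points:              # (timestamp, x, y) in sensor frame
        s = (t - t0) / (t1 - t0)          # interpolation factor in [0, 1]
        # pose of the sensor at the instant this point was measured
        xt = x0 + s * (x1 - x0)
        yt = y0 + s * (y1 - y0)
        tht = th0 + s * (th1 - th0)
        # world coordinates of the point
        wx = xt + px * math.cos(tht) - py * math.sin(tht)
        wy = yt + px * math.sin(tht) + py * math.cos(tht)
        # re-project into the scan-start frame
        dx, dy = wx - x0, wy - y0
        corrected.append((dx * math.cos(-th0) - dy * math.sin(-th0),
                          dx * math.sin(-th0) + dy * math.cos(-th0)))
    return corrected
```

For a stationary robot the interpolated pose is constant, so the scan is returned unchanged; distortion correction only alters points when the start and end poses differ.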

#### 3.3. Mapping Experiment Based on MDR Algorithm

## 4. SLAM Map Construction Algorithm Based on LiDAR and Camera Data Fusion

#### 4.1. SLAM Map Construction Based on a Single Sensor

#### 4.2. SLAM Map Construction Based on PF Method

**Algorithm 1** Data fusion algorithm for LiDAR point cloud and laser-like point cloud

Require: LiDAR point cloud $scan\_lidar$, laser-like point cloud $scan\_camera$.

Ensure: Fused point cloud $scan\_fusion$.

1: Convert to PCL format $pointcloud\_lidar$ and $pointcloud\_camera$.

2: Merge into the PCL point cloud $pointcloud\_fusion$.

3: Convert $pointcloud\_fusion$ to the vector matrix $matrix\_fusion$.

4: Define the maximum and minimum angles of $scan\_fusion$ as $\theta_{max}$ and $\theta_{min}$, the angular resolution $\theta$, and the maximum and minimum scanning distances $r_{max}$ and $r_{min}$.

5: **for** each point $(x_k, y_k, z_k)^T$ **do**

6:   Project onto the $xOy$ plane to obtain the projection point $(x_k, y_k)^T$.

7:   $dist = \sqrt{x_k^2 + y_k^2}$

8:   **if** $dist < r_{min}$ or $dist > r_{max}$ **then**

9:     **continue**

10:  **end if**

11:  $angle = \arctan(y_k / x_k)$

12:  **if** $angle < \theta_{min}$ or $angle > \theta_{max}$ **then**

13:    **continue**

14:  **end if**

15:  $index = (angle - \theta_{min}) / \theta$

16:  **if** $dist < scan\_fusion_{index}$ **then**

17:    $scan\_fusion_{index} = dist$

18:  **end if**

19: **end for**

20: **return** $scan\_fusion$
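The projection loop of Algorithm 1 can be sketched in Python (a minimal illustration, not the authors' implementation; the function name `fuse_to_scan` and the flat tuple representation of points are assumptions):

```python
import math

def fuse_to_scan(points, theta_min, theta_max, theta_res, r_min, r_max):
    """Project merged 3D points onto the xOy plane and bin them into a
    2D laser-scan array, keeping the nearest return per angular bin."""
    n_bins = int((theta_max - theta_min) / theta_res) + 1
    scan_fusion = [float('inf')] * n_bins      # inf = no return in this bin
    for x, y, z in points:
        dist = math.hypot(x, y)                # range of the projected point
        if dist < r_min or dist > r_max:
            continue                           # outside the valid range band
        angle = math.atan2(y, x)               # bearing in the sensor frame
        if angle < theta_min or angle > theta_max:
            continue                           # outside the scan's field of view
        index = int((angle - theta_min) / theta_res)
        if dist < scan_fusion[index]:          # keep the closest obstacle
            scan_fusion[index] = dist
    return scan_fusion
```

Using `atan2` instead of `arctan(y/x)` avoids the division-by-zero and quadrant ambiguity of the raw arctangent; keeping the minimum distance per bin means the nearest obstacle in each direction wins, which is the conservative choice for obstacle avoidance.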

#### 4.3. SLAM Map Construction Based on PMF Method

**Algorithm 2** Bayesian local grid map fusion algorithm

Require: Local grid maps $m_{1:N}$, parameters $\theta$.

Ensure: Global grid map $m_{global}$.

1: Initialize the global grid map: $m_{global} = m_1$

2: **for** $n = 2$ to $N$ **do**

3:   **for** each cell $i$ **do**

4:     Compute the occupancy probabilities $m_{global}(i)$ and $m_n(i)$ for cell $i$ in the global grid map and the $n$th local grid map, respectively.

5:     Compute the posterior probability using Bayes' rule: $p(m_{global}(i) \mid m_n(i)) = \dfrac{p(m_n(i) \mid m_{global}(i)) \, p(m_{global}(i))}{p(m_n(i))}$

6:     Update the occupancy probability in the global grid map using the posterior: $m_{global}(i) = p(m_{global}(i) \mid m_n(i))$

7:   **end for**

8: **end for**

9: **return** the updated global grid map $m_{global}$.
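One common concrete realization of the cell-wise Bayes update in Algorithm 2 is the independent opinion pool, which is what the posterior reduces to under a uniform 0.5 occupancy prior. The sketch below makes that assumption explicit; it is an illustration, not the authors' code, and the name `fuse_maps` is ours:

```python
def fuse_maps(local_maps):
    """Fuse local occupancy-grid maps cell by cell with Bayes' rule.
    Each map is a flat list of occupancy probabilities in [0, 1]; a
    uniform prior of 0.5 is assumed, so the update reduces to the
    independent opinion pool."""
    m_global = list(local_maps[0])              # step 1: initialize with m_1
    for m_n in local_maps[1:]:                  # steps 2-8: fold in m_2 .. m_N
        for i, (p_g, p_n) in enumerate(zip(m_global, m_n)):
            num = p_g * p_n                                  # evidence for "occupied"
            den = num + (1.0 - p_g) * (1.0 - p_n)            # normalizer p(m_n(i))
            m_global[i] = num / den if den > 0 else 0.5      # posterior occupancy
    return m_global
```

Note how the update behaves: a cell both maps call 0.5 stays 0.5, while agreement pushes the posterior toward certainty (two 0.8 readings fuse to 0.64/0.68 ≈ 0.94).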

## 5. Overall Experiment

#### 5.1. Sensor Information Fusion SLAM Map Construction Experiment

#### 5.2. SLAM Experiments Performed before and after Motion Distortion

## 6. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## References

- Jiang, G.; Yin, L.; Jin, S.; Tian, C.; Ma, X.; Ou, Y. A simultaneous localization and mapping (SLAM) framework for 2.5D map building based on low-cost LiDAR and vision fusion. Appl. Sci. **2019**, 9, 2105.
- López, E.; García, S.; Barea, R.; Bergasa, L.M.; Molinos, E.J.; Arroyo, R.; Romera, E.; Pardo, S. A multi-sensorial simultaneous localization and mapping (SLAM) system for low-cost micro aerial vehicles in GPS-denied environments. Sensors **2017**, 17, 802.
- Liu, J.Y. From Big Dog to Spot Mini: Evolution of the Boston Dynamics quadruped robot. Robot Ind. **2018**, 2, 109–116.
- Fankhauser, P.; Bloesch, M. Probabilistic Terrain Mapping for Mobile Robots With Uncertain Localization. IEEE Robot. Autom. Lett. **2018**, 3, 3019–3026.
- Biswal, P.; Mohanty, P.K. Development of quadruped walking robots: A review. Ain Shams Eng. J. **2021**, 12, 2017–2031.
- Kim, D.; Carballo, D.; Carlo, J.D.; Katz, B.; Bledt, G.; Lim, B.; Kim, S. Vision Aided Dynamic Exploration of Unstructured Terrain with a Small-Scale Quadruped Robot. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 2464–2470.
- Dudzik, T. Vision-Aided Planning for Robust Autonomous Navigation of Small-Scale Quadruped Robots. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2020. Available online: https://hdl.handle.net/1721.1/129203 (accessed on 25 October 2022).
- Barbosa, F.M.; Osório, F.S. Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics. arXiv **2023**, arXiv:2303.04302.
- Wang, Y.; Zhu, R.; Wang, L.; Xu, Y.; Guo, D.; Gao, S. Improved VIDAR and machine learning-based road obstacle detection method. Array **2023**, 18, 100283.
- Alfred, D.J.; Chandru, V.C.; Muthu, B.A.; Senthil, K.R.; Sivaparthipan, C.; Marin, C.E.M. Fully convolutional neural networks for LIDAR–camera fusion for pedestrian detection in autonomous vehicle. Multimed. Tools Appl. **2023**, 1–24.
- Xiao, Y.F.; Huang, H.; Zheng, J.; Liu, R. Obstacle Detection for Robot Based on Kinect and 2D Lidar. J. Univ. Electron. Sci. Technol. China **2018**, 47, 337–342.
- Liu, C.; Zhang, G.; Rong, Y.; Shao, W.; Meng, J.; Li, G.; Huang, Y. Hybrid metric-feature mapping based on camera and Lidar sensor fusion. Measurement **2023**, 207, 112411.
- Shi, X.; Liu, T.; Han, X. Improved Iterative Closest Point (ICP) 3D point cloud registration algorithm based on point cloud filtering and adaptive fireworks for coarse registration. Int. J. Remote Sens. **2020**, 41, 3197–3220.
- Hong, S.; Ko, H.; Kim, J. VICP: Velocity updating iterative closest point algorithm. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 1893–1898.
- Zheng, C.C.; Ke, F.Y.; Tang, Q.Q. Tightly coupled SLAM for laser inertial navigation based on graph optimization. Electron. Meas. Technol. **2023**, 46, 35–42.
- Zhang, T.; Zhang, C.; Wei, H.Y. Design of LiDAR Odometer Integrating LiDAR and IMU in Dynamic Environment. Navig. Position. Timing **2022**, 9, 70–78.
- Zhang, F.; Zhai, W.F.; Zhang, Q.Y. Research on Motion Distortion Correction Algorithm Based on Laser SLAM. Ind. Control Comput. **2022**, 35, 76–78.
- Sun, T.; Feng, Y.T. Research on Gmapping Particle Filter SLAM Based on Improved Particle Swarm Optimization. Ind. Control Comput. **2022**, 35, 121–123, 153.
- Huang, R.; Zhang, S. High Adaptive Lidar Simultaneous Localization and Mapping. J. Univ. Electron. Sci. Technol. China **2021**, 50, 52–58.
- Yan, X.B.; Peng, D.G.; Qi, E.J. Research on Ground-Plane-Based Monocular Aided LiDAR SLAM. Acta Opt. Sin. **2020**, 40, 173–183. Available online: https://kns.cnki.net/kcms/detail/31.1252.O4.20201211.1200.008.html (accessed on 25 October 2022).
- Lin, H.Y.; Hsu, J.L. A sparse visual odometry technique based on pose adjustment with keyframe matching. IEEE Sens. J. **2020**, 21, 11810–11821.
- Ghaffari Jadidi, M.; Clark, W.; Bloch, A.M.; Eustice, R.M.; Grizzle, J.W. Continuous Direct Sparse Visual Odometry from RGB-D Images. In Proceedings of Robotics: Science and Systems 2019, Freiburg im Breisgau, Germany, 22–26 June 2019.
- Shao, H.; Zhao, Q.; Chen, B.; Liu, X.; Feng, Z. Analysis of Position and State Estimation of Quadruped Robot Dog Based on Invariant Extended Kalman Filter. Int. J. Robot. Autom. Technol. **2022**, 9, 17–25.
- Zhao, Q.; Shao, H.; Yang, W.; Chen, B.; Feng, Z.; Teng, H.; Li, Q. A Sensor Fusion Algorithm: Improving State Estimation Accuracy for a Quadruped Robot Dog. In Proceedings of the 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO), Jinghong, China, 5–9 December 2022; pp. 525–530.
- Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. **2000**, 22, 1330–1334.

**Figure 4.** Joint calibration of LiDAR and depth camera. (**a**) Mapping and labeling of point cloud. (**b**) Calibration verification.

**Figure 8.** Comparison of the effects before and after distortion correction during in-place rotation in a static environment for a quadruped robot dog. (**a**) Static environment. (**b**) Comparison before and after distortion correction during in-place rotation: 1, red point cloud before distortion correction; 2, yellow point cloud after distortion correction.

**Figure 10.** Comparison of the detection results of typical obstacles using LiDAR and camera. (**a**) Typical obstacles. (**b**) LiDAR point cloud (Gmapping). (**c**) Camera: depth point cloud. (**d**) Camera: laser-like point cloud (Gmapping). (**e**) Camera: voxel point cloud (OctoMap).

**Figure 11.** Complex environment with multiple obstacle types. (**a**) Various typical obstacles and overlapping situations. (**b**) Frame obstacles and superposition.

**Figure 14.**Quadruped robot dog experimental platform. 1—Yobogo quadruped robot dog platform; 2—power bank; 3—router; 4—LiDAR protective plate; 5—2D LiDAR (RPLiDAR_A3); 6—fixture for securing the camera; 7—RealSense D435 depth camera.

**Figure 16.** Comparison of obstacle detection results of different methods. (**a**) Obstacle detection and mapping results using only raw 2D LiDAR (RPLiDAR_A3) data. (**b**) Results based on the PF method (reference [11]). (**c**) Results based on the PMF method (the method of this study).

**Figure 17.** Open laboratory environment for experiments. (**a**) Laboratory environment. (**b**) Ten sets of dimensions.

**Figure 18.** Comparison of PMF algorithm SLAM mapping results with and without MDR. (**a**) Without MDR. (**b**) With MDR. (**c**) Comparison of errors between 10 sets of dimensions and actual measured dimensions.

| $l$ | $l_a$ | $l_b$ | $l_c$ | $l_d$ |
|---|---|---|---|---|
| 30 | 28.73 | 29.36 | 29.22 | 29.67 |

| Fused Point Clouds \ Camera Voxel Point Cloud | Occupancy | Vacancy | Unknown |
|---|---|---|---|
| Occupancy | ⚫ | ⚫ | ⚫ |
| Vacancy | ⚫ | ◯ | ◯ |
| Unknown | ⚫ | ◯ | ⦿ |
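Reading the decision table row-wise, occupancy from either source dominates, vacancy from either source comes next, and a cell stays unknown only when both sources are unknown. A minimal lookup implementing that rule is sketched below; the interpretation of the symbols (⚫ = occupied, ◯ = free, ⦿ = unknown) and the state names are our assumptions:

```python
OCC, FREE, UNK = "occupied", "free", "unknown"

def fuse_cell(scan_state, voxel_state):
    """Combine a fused-scan cell state with a camera voxel cell state
    per the decision table: occupancy dominates, then vacancy,
    otherwise the cell remains unknown."""
    if OCC in (scan_state, voxel_state):
        return OCC
    if FREE in (scan_state, voxel_state):
        return FREE
    return UNK
```

Letting occupancy dominate is the conservative choice for navigation: a cell is only traversable when no sensor reports an obstacle there.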


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Shao, H.; Zhao, Q.; Chen, H.; Yang, W.; Chen, B.; Feng, Z.; Zhang, J.; Teng, H.
Advancing Simultaneous Localization and Mapping with Multi-Sensor Fusion and Point Cloud De-Distortion. *Machines* **2023**, *11*, 588.
https://doi.org/10.3390/machines11060588
