Article

Real-Time 3D Reconstruction of UAV Acquisition System for the Urban Pipe Based on RTAB-Map

Xinbao Chen, Xiaodong Zhu and Chang Liu

School of Earth Sciences and Spatial Information Engineering, Hunan University of Science and Technology, Xiangtan 411201, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(24), 13182; https://doi.org/10.3390/app132413182
Submission received: 9 October 2023 / Revised: 6 December 2023 / Accepted: 8 December 2023 / Published: 12 December 2023
(This article belongs to the Special Issue Advances in Oil and Gas Storage, Transportation, and Safety)


Featured Application

A new solution for 3D reconstruction tasks of underground pipelines using a vision SLAM-based UAV platform.

Abstract

In urban underground projects, such as urban drainage systems, the real-time acquisition and generation of 3D pipe models can provide an important foundation for pipe safety inspection and maintenance. Compared with the traditional structure-from-motion (SfM) reconstruction technique, the simultaneous localization and mapping (SLAM) technique offers high real-time performance and improves the efficiency of 3D object reconstruction. Underground pipes are situated in complex, unattended environments that often lack natural lighting. To address this, this paper presents a real-time and cost-effective 3D perception and reconstruction system that uses an unmanned aerial vehicle (UAV) equipped with an Intel RealSense D435 depth camera and an artificial light-supplementation device. The system carries out real-time 3D reconstruction of underground pipes using RTAB-Map (real-time appearance-based mapping), a graph-based visual SLAM method that combines loop-closure detection and graph optimization algorithms. The unique memory management mechanism of RTAB-Map enables synchronous multi-session mapping during UAV flight. Experimental results demonstrate that the proposed system is robust and feasible for the 3D reconstruction of underground pipes and recovers the in-wall textures.

1. Introduction

Urban pipeline facilities play a vital role in energy transportation, clean water supply, and wastewater discharge in urban areas, thereby significantly improving the living conditions of urban residents. However, as pipes age, they may undergo degradation due to factors such as corrosion, geological subsidence, or improper plumbing and digging. This deterioration can lead to economic losses and hazardous incidents [1]. Therefore, obtaining internal pipe information is crucial for the sustainable operation of urban pipelines as it provides early support for pipeline inspection and maintenance.
Traditional pipe inspection methods depend on manual work and are limited by the following factors: (i) they are time consuming and laborious; (ii) they put workers' lives at risk when entering unknown pipe environments. Robotic solutions for pipeline inspection therefore promise to augment human labor by automating data acquisition for pipe condition assessment. CCTV (closed-circuit television) cameras and sonar devices are among the most widely adopted robotic solutions for obtaining information about the interior of pipelines, supporting basic visual inspection as well as data collection and video analysis [1]. Although CCTV is the preferred method because it provides a complete view of the pipe interior, it requires long cables and travels slowly in small-diameter pipeline inspections.
To gain further insight into the 3D structure and textures of the pipeline, the video data or image sequences must be processed further to achieve 3D reconstruction. This is often accomplished offline using the SfM method [2]. Additionally, multi-sensor fusion SLAM techniques from the robotics community can be used to automatically collect sensory data and perform RGB-D 3D reconstruction within pipes. Shang and Shen proposed a pipeline reconstruction system using a multi-camera array comprising one centrally positioned depth camera and four oblique depth cameras; it estimates the pipeline path and maps its surface within a visual SLAM framework. However, this method is not capable of real-time reconstruction, and the use of multiple cameras increases the integrated cost of the system [3]. In another study, Zhang et al. presented a pipeline reconstruction SLAM method that exploits cylindrical regularity. This method addresses the scale drift and accuracy degradation encountered by visual SLAM in pipeline reconstruction; however, its accuracy decreases significantly when the lower part of the pipeline is covered [4]. Zhang et al. also introduced a 3D reconstruction system based on 360° panoramic video, involving multi-view image frame extraction, panoramic reprojection, and photogrammetric processing. It provides an intuitive and clear reconstruction of the pipeline's real scenes, but it can only be operated offline [5]. While these approaches offer valuable information, including visual, 3D, and geo-referencing data, for advanced pipe condition monitoring and assessment, they are time-consuming and require offline operation. Consequently, there is a need for autonomous, unconstrained robots that can perform continuous, real-time pipeline health monitoring [4].
In recent years, SLAM technology has gained wide popularity for solving complex real-time 3D mapping problems in robotics, and several improved SLAM algorithms have been developed to enhance efficiency. The ORB-SLAM2 method, for instance, extracts ORB features, which offer good view invariance and high matching efficiency. However, it is susceptible to tracking failures in scenes with fast camera movement and weak textures, and the maps it generates are sparse point cloud maps of limited applicability [6]. Zou et al. proposed a robust RGB-D SLAM system that uses both point and line features to improve robustness in low-texture scenes, which are more sensitive to illumination variations [7]. LIO-SAM achieves relatively high localization and map construction accuracy by fusing LiDAR data and IMUs; however, it comes at a high cost and is typically used for generating 2D maps, which may not suffice for tasks requiring 3D environment modeling [8]. DSO, which employs a sparse direct method, selects points directly from the image without dense feature matching; it can function in environments with changing illumination and weak texture, making it more robust [9]. VINS-Fusion combines a vision camera with an IMU, offering high positioning accuracy, real-time capability, and stability in both indoor and outdoor scenes. However, it requires precise sensor calibration, including camera–IMU time synchronization and extrinsic parameter calibration, and it is primarily designed for high-precision positioning rather than 3D map construction [10]. Manhattan-SLAM integrates superpixels with the Manhattan-world assumption, allowing both line and planar features to be extracted more effectively; however, it cannot be used for the 3D reconstruction of pipelines [11]. Notably, the performance of these improved SLAM algorithms can differ significantly across application environments. Consequently, when applying SLAM in a new context, it is crucial to select the most suitable approach, or to adapt existing methods, so as to balance efficiency and accuracy.
Although various methods and systems have made significant progress toward optimal results, automated, real-time 3D reconstruction of underground pipelines in complex environments remains in demand. For instance, robotic vehicle inspection systems equipped with CCTV are limited when inspecting large-diameter or irregularly shaped pipelines, often require manual operators, and lack autonomy. Existing SLAM studies focus primarily on improving reconstruction accuracy, while efficiency and real-time performance receive less attention. Lang et al. proposed an end-to-end network called SVR-Net for monocular TSDF SLAM, which directly generates dense TSDF maps during localization, avoids inconsistencies in depth map fusion, and meets real-time requirements [12]. However, this approach requires pre-training and re-execution for different application scenarios. Deep learning has proven effective in image processing, and many works have applied it to front-end feature matching and loop-closure detection in visual SLAM, reducing the false matching rate; however, it relies on substantial computing power and prior information. A platform with modest computing requirements that is easy to operate, without training or prior information, is therefore particularly valuable. RTAB-Map is a representative RGB-D SLAM method that generates point clouds and triangular mesh maps [13]. Thanks to its high level of integration, its binary program is available in the ROS ecosystem; it runs in real time on low-computing-power platforms and generates dense point cloud maps, performing both pose filtering and model reconstruction from the point cloud data.
Nowadays, UAVs have gained significant importance in structural health inspections owing to their small size, low cost, flexibility, mobility, and ability to perform multi-view inspections [14,15,16,17]. However, their application in pipeline inspection missions has been limited to the outer surface of pipes; they have not yet been deployed for 3D reconstruction of pipeline interiors [18,19,20]. This limitation is primarily attributable to the vast and complex nature of pipeline systems, with their narrow spaces and dim lighting conditions. Additionally, existing commercial UAVs face challenges in carrying a variety of sensors given their limited load capacity and endurance. Therefore, there is a need to design a high-performance, multi-functional, and reliable UAV inspection system specifically tailored to pipeline inspections.
With the development of drone and computer vision technology, drones equipped with lightweight sensor devices are becoming a mainstream option for pipe network inspection. In view of this, this paper proposes a real-time 3D reconstruction system for UAVs based on RTAB-Map, which collects data through an RGB-D camera, generates 3D point clouds of the pipeline interior through real-time processing on the onboard computer, and streams the results for remote viewing in real time.
The main contributions are as follows: (i) A real-time 3D reconstruction platform for UAVs with RTAB-Map is proposed to handle harsh scenarios, such as urban underground pipelines, that are difficult for people to reach. (ii) The reconstruction performance of mainstream visual SLAM algorithms is compared in the underground pipe scenario, showing that the RTAB-Map method is more robust in this setting. (iii) Experiments are carried out in real sewage pipes, and the results show that the system can perform 3D reconstruction in real time in the underground pipe scene while maintaining reconstruction accuracy.
The rest of this paper is organized as follows: Section 2 describes the methodology and framework, mainly introducing the hardware architecture of the UAV system and the RTAB-Map method. Section 3 describes the experiments and results, including a comparison of trajectories and point clouds between RTAB-Map, ORB-SLAM2, and Manhattan-SLAM. Section 4 provides an analysis and discussion of the experimental results. Finally, Section 5 summarizes the main conclusions of this research and discusses its limitations.

2. Methodology

The schematic flowchart of the proposed system is presented in Figure 1. The system consists of three stages: UAV data acquisition, real-time 3D reconstruction using RTAB-Map, and results display. RTAB-Map is responsible for both camera position estimation and dense point cloud 3D reconstruction. The drones used in this system are equipped with obstacle avoidance capabilities, allowing them to navigate close to structures and capture high-resolution images or videos. These images/videos provide clear visualization of pipeline routing, pipe deformation, water penetration, sewage sludge, and other defects at elevations and angles that are inaccessible or difficult for humans to reach.

2.1. UAV Hardware System

The overall hardware architecture of our UAV acquisition system is illustrated in Figure 2. The UAV hardware system consists of a mini-computer, a depth camera, a motion control system (using brushless motors), an LED light, and a mounting rack. The UAV platform uses a folding carbon-fiber frame with an open-source Pixhawk flight controller. The top of the platform carries the depth camera and the mini-computer. The onboard computer is a quad-core Intel i5-1145G7 machine with a 512 GB solid-state drive and 8 GB of memory, running Ubuntu 20.04. We used the Intel RealSense D435 depth camera as the sensor; it is an inexpensive RGB-D acquisition device that captures both RGB and depth information of the scene. The LED light serves as an artificial light source in dark environments. All sensors and controllers are connected to the mini-computer, which handles all computational tasks, including positioning, RGB-D data processing from the depth camera, encoder data from the brushless motors, and 3D reconstruction. The final results can be transmitted to a WiFi-connected laptop for display. The entire system is powered by a 19 V lithium battery.
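For concreteness, the sketch below shows how aligned RGB-D frames can be pulled from the D435 with Intel's pyrealsense2 SDK. The 640 × 480 resolution and 30 fps stream settings are illustrative assumptions rather than the exact configuration used on our platform.

```python
import numpy as np
import pyrealsense2 as rs

# Configure colour and depth streams (640x480 @ 30 fps are assumed settings).
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

# Align depth to the colour frame so that pixels correspond across both images.
align = rs.align(rs.stream.color)

try:
    frames = align.process(pipeline.wait_for_frames())
    depth = np.asanyarray(frames.get_depth_frame().get_data())  # uint16, millimetres
    color = np.asanyarray(frames.get_color_frame().get_data())  # uint8, BGR
finally:
    pipeline.stop()
```

On the real platform, frames of this form are consumed continuously by the SLAM front-end rather than captured one at a time.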

2.2. RTAB-Map Method

In this study, RTAB-Map was used to conduct dense 3D reconstruction of pipes from UAV-acquired video data. RTAB-Map, which stands for real-time appearance-based mapping, consists of two main parts: the front-end and the back-end. The front-end performs feature-based pose estimation. For each RGB-D image, we extract GFTT features [21], known for their speed and robustness in low-light and low-texture environments compared with other feature extractors such as SIFT, SURF, and ORB. The nearest neighbor distance ratio (NNDR) test is used to establish image correspondences between frames, employing the binary BRIEF descriptor [22]. Once the feature correspondences are established, they are projected from 2D to 3D space using the depth information. The relative transformation is then computed by solving a least squares problem over the 3D–3D correspondences, with outliers eliminated using random sample consensus (RANSAC). An input image is selected as a keyframe if it has sufficient correspondence with previous keyframes and its distance/orientation to the nearest keyframe exceeds a certain threshold.
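A minimal sketch of this front-end, assuming OpenCV with the contrib modules (for BRIEF) and a known intrinsic matrix K, is given below. The parameter values are illustrative, and cv2.estimateAffine3D stands in for a rigid SE(3) solver; this is not RTAB-Map's actual implementation.

```python
import cv2  # requires opencv-contrib-python for the BRIEF extractor
import numpy as np

def backproject(pts2d, depth, K):
    """Lift 2D keypoints into 3D camera coordinates using the depth image."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    pts3d = []
    for u, v in pts2d:
        z = depth[int(v), int(u)] * 0.001  # D435 depth is reported in millimetres
        pts3d.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return np.float32(pts3d)

def frame_to_frame_motion(gray0, depth0, gray1, depth1, K):
    """GFTT corners + BRIEF descriptors + NNDR test, then robust 3D-3D alignment."""
    brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()
    kps, descs = [], []
    for gray in (gray0, gray1):
        corners = cv2.goodFeaturesToTrack(gray, maxCorners=500, qualityLevel=0.01, minDistance=7)
        kp = [cv2.KeyPoint(float(c[0][0]), float(c[0][1]), 7.0) for c in corners]
        kp, d = brief.compute(gray, kp)
        kps.append(kp)
        descs.append(d)
    # NNDR (ratio) test on the Hamming distance between binary BRIEF descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(descs[0], descs[1], k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2) if m.distance < 0.8 * n.distance]
    # Project the matched 2D features into 3D and robustly align the two point sets.
    p0 = backproject([kps[0][m.queryIdx].pt for m in good], depth0, K)
    p1 = backproject([kps[1][m.trainIdx].pt for m in good], depth1, K)
    _, T, inliers = cv2.estimateAffine3D(p0, p1, ransacThreshold=0.02)  # RANSAC inside
    return T, inliers
```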
Meanwhile, the poses are further optimized using bundle adjustment (BA). BA is typically formulated as a least squares problem that adjusts the camera pose and the coordinates of the feature points by minimizing the reprojection error. In this process, a point in 3D space is projected onto the image plane to obtain a theoretical value $u = h(\xi, P)$, where $h$ denotes the imaging process, $\xi$ represents the camera pose (in Lie-algebra representation), and $P$ is the coordinate of the 3D point in the world coordinate system. The reprojection error $e_r$ is

$$e_r = (z - u)^2 = \left(z - h(\xi, P)\right)^2, \qquad (1)$$

where $z$ is the observed value (i.e., the pixel coordinates of the feature point in the camera image) and $u$ is the theoretical value. We therefore construct the least squares problem

$$\min \frac{1}{2} \sum_{i=1}^{m} \left\| z_i - h(\xi, P_i) \right\|_2^2 = \min \frac{1}{2} \left\| f(x) \right\|_2^2. \qquad (2)$$

BA optimization solves this least squares problem.
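As an illustration, the pose-only sketch below feeds the stacked residuals $z - h(\xi, P)$ to SciPy's least_squares. Full BA additionally optimizes the 3D point coordinates over multiple cameras, so this is a simplified instance of Equation (2), not RTAB-Map's optimizer.

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(pose, pts3d, pts2d, K):
    """Stacked residuals z - h(xi, P); pose = [rx, ry, rz, tx, ty, tz]."""
    rvec, tvec = pose[:3], pose[3:6]
    proj, _ = cv2.projectPoints(pts3d, rvec, tvec, K, None)  # h(xi, P)
    return (pts2d - proj.reshape(-1, 2)).ravel()

def refine_pose(pts3d, pts2d, K, pose0=None):
    """Minimise 0.5 * sum ||z - h(xi, P)||^2 over the 6-DoF pose (Equation (2))."""
    pose0 = np.zeros(6) if pose0 is None else pose0
    result = least_squares(reprojection_residuals, pose0, args=(pts3d, pts2d, K))
    return result.x
```

In practice, graph-based SLAM systems solve this with dedicated solvers (e.g., g2o or GTSAM) that exploit the sparsity of the problem.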
The back-end incorporates visual loop-closure detection and graph optimization. Visual loop closure is achieved through bag-of-words modeling and a Bayesian filter. The bag-of-words approach treats a collection of image features as words and considers the frequency of these words in an image rather than their locations; it employs binary image features, which are more resilient to illumination changes. Before the bag-of-words model is used, the dictionary is trained offline. The descriptors of all the extracted image feature points are clustered into K sets using K-means; this clustering is repeated within each set to obtain the second level of the dictionary tree, and then again for subsequent levels, resulting in a dictionary tree with branching factor K and depth L. Each word is assigned a weight based on its frequency of occurrence in the training set: words that appear more frequently receive lower weights, as they have lower discriminative ability.
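The offline dictionary construction can be sketched as follows using scikit-learn's KMeans; the branching factor K = 10 and depth L = 3 are placeholder values, and a production vocabulary would cluster binary descriptors with a Hamming-aware k-means variant.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocab_tree(descriptors, K=10, L=3):
    """Hierarchical k-means: split the descriptors into K sets, recurse to depth L."""
    node = {"center": descriptors.mean(axis=0), "children": []}
    if L == 0 or len(descriptors) < K:
        return node  # leaf node, acting as one visual word
    labels = KMeans(n_clusters=K, n_init=10).fit_predict(descriptors)
    for k in range(K):
        subset = descriptors[labels == k]
        if len(subset):  # skip the rare empty cluster
            node["children"].append(build_vocab_tree(subset, K, L - 1))
    return node

def idf_weights(word_counts, n_images):
    """Frequent words are less discriminative: w_i = ln(N / n_i)."""
    return np.log(n_images / np.maximum(word_counts, 1))
```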
The visual bag-of-words model is primarily used to efficiently compute the similarity between the current pose node and the candidate nodes, while the Bayesian filter maintains the probability distribution over the similarities of all candidate nodes. Let the current pose be $L_t$, and let all candidate nodes to be tested be described collectively by a random variable $S_t$; the probability of $S_t = i$ then indicates how likely it is that $L_t$ closes a loop with $L_i$. Applying the Bayesian filter yields the probability update formula for $S_t$ (Equation (3)), where $L^t = L_1, \dots, L_t$ denotes all nodes up to time $t$ and the observation model $P(L_t \mid S_t)$ is computed from the likelihood function $l(S_t = j \mid L_t) = P(L_t \mid S_t = j)$. The updated $P(S_t \mid L^t)$ is normalized, and if it exceeds a threshold $T$, the loop closure is accepted; $T$ was determined experimentally. Finally, all the nodes and constraint edges are fed into the graph optimization module for global optimization, which corrects the odometry poses of all the nodes.
$$P(S_t \mid L^t) = \eta \, P(L_t \mid S_t) \sum_{i=-1}^{t_n} P(S_t \mid S_{t-1} = i)\, P(S_{t-1} = i \mid L^{t-1}) \qquad (3)$$
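A compact sketch of one update step of Equation (3) follows: the previous belief is propagated through the transition model, weighted by the bag-of-words likelihood, and normalized (the η factor). The threshold value below is a placeholder, not the experimentally determined one.

```python
import numpy as np

def update_loop_belief(belief, likelihood, transition, T=0.11):
    """One recursive Bayes update over the loop-closure hypotheses S_t.

    belief:     P(S_{t-1} | L^{t-1}), shape (n,)
    likelihood: l(S_t = j | L_t) from the bag-of-words similarity scores, shape (n,)
    transition: P(S_t | S_{t-1}), e.g. belief spread over neighbouring nodes, (n, n)
    """
    predicted = transition @ belief        # the summation term in Equation (3)
    posterior = likelihood * predicted     # weight by the observation model
    posterior /= posterior.sum()           # eta normalisation
    best = int(np.argmax(posterior))
    return posterior, best, posterior[best] > T  # accept the loop if above threshold
```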
During online mapping, it is crucial to process data faster than it is input. However, as mapping proceeds, the time taken by loop-closure detection and graph optimization increases, which impacts real-time performance. To address this issue, RTAB-Map uses a memory management model [13] (Figure 3). Memory is divided into short-term memory (STM), working memory (WM), and long-term memory (LTM). STM serves as the initial, fixed-size storage space for each new node. When STM reaches capacity, the earliest nodes are moved to WM for loop-closure detection. After a certain time limit, some nodes are transferred to LTM, which stores less relevant nodes that do not contribute significantly to the global map in the short term and are not used outside of loop-closure detection and graph optimization. If a loop closure is detected between the current node and an old node in WM, the old node's neighbors in LTM are brought back into WM to increase the chance of confirming the closure. This memory management mechanism ensures operational efficiency and enables RTAB-Map to support multi-session mapping.
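The migration policy can be caricatured as below; the STM size, demotion rule, and recall rule are invented for illustration and are far simpler than RTAB-Map's weighting heuristics.

```python
from collections import deque

class MemoryManager:
    """Toy STM -> WM -> LTM node migration in the spirit of Figure 3."""

    def __init__(self, stm_size=10):
        self.stm = deque()  # newest nodes, fixed capacity
        self.wm = []        # nodes considered for loop-closure detection
        self.ltm = {}       # rarely used nodes, keyed by node id
        self.stm_size = stm_size

    def add_node(self, node_id, data):
        self.stm.append((node_id, data))
        if len(self.stm) > self.stm_size:
            self.wm.append(self.stm.popleft())  # oldest STM node moves to WM

    def demote(self, node_id):
        """Move a low-relevance WM node to LTM once the time budget is exceeded."""
        for i, (nid, data) in enumerate(self.wm):
            if nid == node_id:
                self.ltm[nid] = data
                del self.wm[i]
                return

    def recall(self, node_ids):
        """On a loop closure with an old node, pull its LTM neighbours back to WM."""
        for nid in node_ids:
            if nid in self.ltm:
                self.wm.append((nid, self.ltm.pop(nid)))
```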

3. Experiments and Results

3.1. Experimental Scenarios

To verify the feasibility of the proposed system, this study selected a temporary experimental site consisting of a straight, rubber-lined iron pipe laid on the surface (Φ 1.2 m diameter × 12 m length), as depicted in Figure 4. The experimental site was a black linear sewage pipe, and data collection was performed without interference from moving objects. To evaluate the performance of the proposed system, this study considered dynamic scenes and conducted experiments under varying lighting conditions. Instead of relying on natural light, the system used UAV-mounted lighting equipment to provide illumination during the experiments. The angle and position of the onboard lighting were constantly adjusted, changing the brightness of the pipe walls; this allowed us to collect data on a dynamic scene while also introducing tracking challenges. Achieving uniform coverage with artificial lighting is more difficult than with natural light, but the onboard lighting effectively simulated the challenging lighting situations encountered in real-world engineering applications, in line with the goals of our test methodology.

3.2. Experimental Results for Visualization

This study presents RTAB-Map-based experimental results on the UAV's flight odometry and trajectory, as well as the 3D reconstruction of the pipeline's inner wall. The flight odometry uses data from the UAV's sensors to estimate the positions and trajectory of the camera. RTAB-Map can localize using either visual or laser odometry; in this study, visual odometry obtained from the depth camera is used. Figure 5a illustrates the visual odometry during real-time operation: the green dots represent ordinary environment feature points in the map, the yellow dots represent feature points found multiple times in the map, and the red dots represent features found in the dictionary for loop-closure detection.
RTAB-Map builds its 3D visualization by generating a local map whenever a new node is created. The local map is generated from the depth image obtained by the depth camera, which is converted into point cloud data. Once global optimization is completed, the individual local maps are stitched together into a global map according to the odometry poses of the nodes. Figure 5b shows the resulting 3D point cloud map, including the camera's position nodes and motion trajectory.
Figure 6 illustrates the sparse reconstruction results obtained using RTAB-Map: Figure 6a shows the inner wall of the pipe and the corresponding trajectory, while Figure 6b focuses on the in-wall textures of the joint and its trajectory. The blue line represents the trajectory, and the nodes represent the keyframe poses. To enhance clarity, the dense point clouds along the obtained trajectories undergo a filtering step; the pipeline's dense point cloud before and after median filtering is visualized in Figure 7.
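To make this processing chain concrete, the sketch below back-projects a depth image into a local cloud with the pinhole intrinsics, stitches local maps with the optimized node poses, and applies a median filter to the depth beforehand; the exact filter implementation is not specified here, so scipy.ndimage.median_filter stands in for it.

```python
import numpy as np
from scipy.ndimage import median_filter

def depth_to_cloud(depth, K, depth_scale=0.001):
    """Back-project a depth image into a point cloud in camera coordinates."""
    depth = median_filter(depth, size=5)  # suppress depth speckle before projection
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth * depth_scale  # D435 depth units are assumed to be millimetres
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no valid depth

def stitch_global_map(local_clouds, poses):
    """Transform each local map by its optimised 4x4 node pose and concatenate."""
    return np.vstack([(T[:3, :3] @ c.T).T + T[:3, 3] for c, T in zip(local_clouds, poses)])
```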

3.3. Experimental Results for Comparisons

In this experiment, the trajectories estimated by three well-known methods, RTAB-Map, ORB-SLAM2, and Manhattan-SLAM, were compared in 3D space (Figure 8a), in the x-z projection plane (Figure 8b), and along each of the three axes (Figure 9). The results show that the trajectory estimated by RTAB-Map approximates a straight line, consistent with the UAV's motion through the pipeline. Manhattan-SLAM starts to drift at the end of the trajectory, while ORB-SLAM2 gradually drifts in the middle. These comparisons are consistent with the differences among the three methods in 3D pipeline reconstruction. In conclusion, RTAB-Map is more robust than the other two methods for localization and mapping in underground pipeline environments.
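A plot in the spirit of Figures 8 and 9 can be produced as below; the TUM-style trajectory files and their names are assumptions about how each system's estimated poses were exported.

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401, registers the 3d projection

def load_positions(path):
    """Assumed TUM format: 'timestamp tx ty tz qx qy qz qw' per line."""
    return np.loadtxt(path)[:, 1:4]

fig = plt.figure(figsize=(10, 4))
ax3d = fig.add_subplot(121, projection="3d")  # cf. Figure 8a: trajectories in 3D
axxz = fig.add_subplot(122)                   # cf. Figure 8b: x-z projection
for name in ("rtabmap", "orbslam2", "manhattan_slam"):  # hypothetical file names
    xyz = load_positions(f"{name}.txt")
    ax3d.plot(xyz[:, 0], xyz[:, 1], xyz[:, 2], label=name)
    axxz.plot(xyz[:, 0], xyz[:, 2], label=name)
axxz.set_xlabel("x [m]")
axxz.set_ylabel("z [m]")
axxz.legend()
plt.tight_layout()
plt.show()
```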
The same three visual SLAM algorithms, RTAB-Map, ORB-SLAM2, and Manhattan-SLAM, were also benchmarked on dense reconstruction. For a fair comparison, the sparse point cloud generated by Manhattan-SLAM was densified with the PMVS method. Figure 10a displays the 3D point cloud of the pipe generated by RTAB-Map, Figure 10b the point cloud generated by ORB-SLAM2, and Figure 10c the dense point cloud obtained by Manhattan-SLAM with PMVS. These results indicate that RTAB-Map successfully reconstructs the internal information of the pipeline without noticeable geometric deformation, whereas ORB-SLAM2 disperses the point cloud in all directions after running for a certain distance, and Manhattan-SLAM reconstructs the pipeline's contours well but loses most of the features. In the pipeline environment, RTAB-Map demonstrates the highest reconstruction accuracy.

4. Analysis and Discussion

In this study, the Intel RealSense D435 depth camera served as the primary data source for real-time 3D reconstruction of underground pipes. The core hardware of the UAV acquisition system is an onboard mini-computer with an Intel i5-1145G7 CPU (2.60 GHz, integrated graphics) and 8 GB of RAM, running a 64-bit Ubuntu 20.04 operating system. The system processes thirty frames per second during operation. The time consumed during data acquisition and the experiments demonstrates that this efficiency is sufficient for real-time 3D reconstruction, and the device's stability ensures high video quality during data acquisition.
Three-dimensional modeling inside pipelines presents unique challenges, such as complex pipeline structures, a lack of natural light, inconspicuous in-wall textures, and safety risks. To address these challenges, RTAB-Map, a graph-based SLAM system, is used. RTAB-Map combines loop-closure detection and graph optimization while providing memory management for large-scale, long-term online operation. The system shows advantages in scenarios where the camera moves quickly, surrounding objects reflect light, or illumination is unstable, and its memory management mechanism keeps the system stable over extended periods. To evaluate the feasibility of RTAB-Map in complex pipeline interiors, this study conducted experiments using a UAV to collect video data and extract high-frame-rate RGB and depth images. The results confirm that RTAB-Map is a viable solution for navigation and modeling in such challenging environments.

5. Conclusions

This paper presents a study on an unmanned aircraft system designed for dense 3D real-time reconstruction of urban pipelines using low-cost equipment. The research begins by building a UAV hardware platform for data acquisition, with the Intel RealSense D435 depth camera selected as the sensor for capturing RGB and depth information. The experimental scene focuses on a section of rubber sewage pipe, and three visual SLAM methods (RTAB-Map, ORB-SLAM2, and Manhattan-SLAM) are compared to evaluate their reconstruction effects. The results demonstrate that RTAB-Map outperforms the other two methods in terms of localization accuracy and 3D reconstruction. Additionally, RTAB-Map exhibits greater robustness in low-textured and dim urban pipe environments. The experiments confirm that the proposed UAV 3D reconstruction system, based on visual SLAM, enables real-time modeling in urban pipeline internal environments.
The proposed UAV acquisition system for urban underground pipe interiors offers several advantages: (i) UAVs are lightweight and flexible in large-diameter scenarios, such as urban sewage pipes, which reduces operational safety risks. (ii) RTAB-Map offers better real-time performance and reconstruction accuracy than the other visual SLAM methods tested. (iii) RTAB-Map incorporates a memory management mechanism that enables real-time execution of data acquisition and modeling, which is crucial for long-term stable operation. (iv) Compared to laser scanners, the depth camera used in this system is cost-effective while providing detailed texture information about the pipe's inner walls.
However, certain limitations of this study must be acknowledged. First, to keep the UAV lightweight, its range is limited, and it cannot operate for extended periods. Second, the system struggles in smooth metal pipes and other texture-deficient scenes, a known drawback of visual SLAM; to overcome this, future work will fuse LiDAR, IMU, and other sensors to improve 3D reconstruction accuracy. Finally, this study only examined the system's performance in straight pipes. Future research will address more complex underground pipe structures, such as curved pipes, vertical pipes, and T-shaped pipes.

Author Contributions

Conceptualization, X.C. and X.Z.; methodology, X.C. and X.Z.; software, X.Z. and C.L.; validation, X.Z. and C.L.; formal analysis, X.Z.; investigation, X.Z. and C.L.; resources, X.C. and X.Z.; data curation, C.L.; writing—original draft preparation, X.C. and X.Z.; writing—review and editing, X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by China Postdoctoral Science Foundation (2017M622577), Hunan Provincial Natural Science Foundation (2018JJ2118).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request. The data are not publicly available due to privacy restrictions.

Acknowledgments

The authors would like to express many thanks to all the anonymous reviewers.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tian, T.; Wang, L.; Yan, X.; Ruan, F.; Aadityaa, G.J.; Choset, H.; Li, L. Visual-Inertial-Laser-Lidar (VILL) SLAM: Real-time Dense RGB-D Mapping for Pipe Environments. 2023. Available online: https://biorobotics.ri.cmu.edu/papers/paperUploads/IROS23_2674_FI%20(1).pdf (accessed on 30 October 2023).
  2. Liu, Z.; Kleiner, Y. State of the art review of inspection technologies for condition assessment of water pipes. Measurement 2013, 46, 1–15. [Google Scholar] [CrossRef]
  3. Shang, Z.; Shen, Z. Dual-function depth camera array for inline 3D reconstruction of complex pipelines. Autom. Constr. 2023, 152, 104893. [Google Scholar] [CrossRef]
  4. Zhang, R.; Worley, R.; Edwards, S.; Aitken, J.; Anderson, S.R.; Mihaylova, L. Visual Simultaneous Localization and Mapping for Sewer Pipe Networks Leveraging Cylindrical Regularity. IEEE Robot. Autom. Lett. 2023, 8, 3406–3413. [Google Scholar] [CrossRef]
  5. Zhang, X.; Zhao, P.; Hu, Q.; Wang, H.; Ai, M.; Li, J. A 3D reconstruction pipeline of urban drainage pipes based on multiviewimage matching using low-cost panoramic video cameras. Water 2019, 11, 2101. [Google Scholar] [CrossRef]
  6. Mur-Artal, R.; Tardós, J.D. Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef]
  7. Zou, Y.; Eldemiry, A.; Li, Y.; Chen, W. Robust RGB-D SLAM using point and line features for low textured scene. Sensors 2020, 20, 4984. [Google Scholar] [CrossRef] [PubMed]
  8. Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-coupled lidar inertial odometry via smoothing and mapping. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 5135–5142. [Google Scholar] [CrossRef]
  9. Engel, J.; Koltun, V.; Cremers, D. Direct sparse odometry. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 611–625. [Google Scholar] [CrossRef] [PubMed]
  10. Qin, T.; Pan, J.; Cao, S.; Shen, S. A general optimization-based framework for local odometry estimation with multiple sensors. arXiv 2019. [Google Scholar] [CrossRef]
  11. Zhang, R.; Shi, S.; Yi, X.; Jing, M. Application of RGB-D SLAM in 3D Tunnel Reconstruction Based on Superpixel Aided Feature Tracking. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 43, 559–564. [Google Scholar] [CrossRef]
  12. Lang, R.; Fan, Y.; Chang, Q. Svr-net: A sparse voxelized recurrent network for robust monocular slam with direct tsdf mapping. Sensors 2023, 23, 3942. [Google Scholar] [CrossRef] [PubMed]
  13. Labbe, M.; Michaud, F. Online global loop closure detection for large-scale multi-session graph-based SLAM. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 2661–2666. [Google Scholar] [CrossRef]
  14. Chen, B.; Miao, X. Distribution line pole detection and counting based on YOLO using UAV inspection line video. J. Electr. Eng. Technol. 2020, 15, 441–448. [Google Scholar] [CrossRef]
  15. Hui, X.; Bian, J.; Zhao, X.; Tan, M. Vision-based autonomous navigation approach for unmanned aerial vehicle transmission-line inspection. Int. J. Adv. Robot. Syst. 2018, 15, 1729881417752821. [Google Scholar] [CrossRef]
  16. Ma, Y.; Li, Q.; Chu, L.; Zhou, Y.; Xu, C. Real-time detection and spatial localization of insulators for UAV inspection based on binocular stereo vision. Remote Sens. 2021, 13, 230. [Google Scholar] [CrossRef]
  17. Xiang, X.; Hu, H.; Ding, Y.; Zheng, Y.; Wu, S. GC-YOLOv5s: A Lightweight Detector for UAV Road Crack Detection. Appl. Sci. 2023, 13, 11030. [Google Scholar] [CrossRef]
  18. Yu, C.; Yang, Y.; Cheng, Y.; Wang, Z.; Shi, M.; Yao, Z. UAV-based pipeline inspection system with Swin Transformer for the EAST. Fusion Eng. Des. 2022, 184, 113277. [Google Scholar] [CrossRef]
  19. Gómez, C.; Green, D.R. Small unmanned airborne systems to support oil and gas pipeline monitoring and mapping. Arab. J. Geosci. 2017, 10, 202. [Google Scholar] [CrossRef]
  20. Zhang, T.; Zeng, W.; Wan, L.; Qin, Z. Vision-based system of AUV for an underwater pipeline tracker. China Ocean Eng. 2012, 26, 547–554. [Google Scholar] [CrossRef]
  21. Shi, J.; Tomasi, C. Good features to track. In Proceedings of the 1994 IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994; pp. 593–600. [Google Scholar] [CrossRef]
  22. Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. Brief: Binary robust independent elementary features. In Proceedings of the Computer Vision–ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, 5–11 September 2010; pp. 778–792. [Google Scholar] [CrossRef]
Figure 1. Flowchart of UAV acquisition system based on RTAB-Map for urban pipeline.
Figure 2. Hardware architecture of UAV acquisition system.
Figure 3. RTAB-Map memory management model [13].
Figure 4. Experimental pipe and environment: (a) experimental pipe and drone; (b) drone within pipe.
Figure 5. Odometry and visualization of 3D point cloud map: (a) camera odometry; (b) 3D point cloud map.
Figure 6. The sparse reconstruction results using RTAB-Map: (a) pipe in-wall textures and UAV trajectory; (b) pipe interconnecting walls and UAV trajectory.
Figure 7. The 3D reconstruction results before and after treatments: (a) pipeline in-wall textures before treatment; (b) pipeline in-wall textures after treatment.
Figure 8. Trajectory comparison results derived by RTAB-Map, ORB-SLAM2, and Manhattan-SLAM: (a) trajectories in 3D space; (b) trajectory after x-z projection.
Figure 9. Trajectories in three axes.
Figure 10. Comparison of 3D reconstruction point clouds: (a) RTAB-Map; (b) ORB-SLAM2; (c) Manhattan-SLAM.