Article

Collaborative 3D Scene Reconstruction in Large Outdoor Environments Using a Fleet of Mobile Ground Robots

Institute for Systems and Robotics, Instituto Superior Tecnico, University of Lisbon, 1049-001 Lisbon, Portugal
* Author to whom correspondence should be addressed.
Sensors 2023, 23(1), 375; https://doi.org/10.3390/s23010375
Submission received: 10 December 2022 / Revised: 23 December 2022 / Accepted: 24 December 2022 / Published: 29 December 2022
(This article belongs to the Special Issue Recognition Robotics)

Abstract

Teams of mobile robots can be employed in many outdoor applications, such as precision agriculture, search and rescue, and industrial inspection, allowing an efficient and robust exploration of large areas and enhancing the operators’ situational awareness. In this context, this paper describes an active and decentralized framework for the collaborative 3D mapping of large outdoor areas using a team of mobile ground robots under limited communication range and bandwidth. A real-time method is proposed that allows the sharing and registration of individual local maps, obtained from 3D LiDAR measurements, to build a global representation of the environment. A conditional peer-to-peer communication strategy is used to share information over long-range and short-range distances while considering the bandwidth constraints. Results from both real-world and simulated experiments, executed in an actual solar power plant and in its digital twin representation, demonstrate the reliability and efficiency of the proposed decentralized framework for such large outdoor operations.

1. Introduction

The use of outdoor mobile robots in real-world applications, such as search and rescue [1,2], logistics [3], agriculture [4], industrial inspection [5], and surveillance and maintenance [6,7], has increased rapidly over the past several years. This is due to the capability of mobile robots to assist humans in dangerous, repetitive or time-consuming tasks. Successful robot navigation in such applications relies primarily on three aspects: mapping, localization, and trajectory planning. Robotic mapping generates a map by interpreting the spatial information of the environment acquired through the robot’s sensors. Commonly, mapping is carried out first to understand the environment and enhance the subsequent localization and motion-planning tasks. However, for many applications, mapping must be executed frequently to continuously maintain complete situational awareness and to support reasoning and decision making in dynamic environments.
Many outdoor robotic automation applications, such as solar farm inspection and maintenance [8,9,10], disaster response [11,12,13], agriculture [14] and city re-planning [15,16], need to cover very large areas of 1–40 acres. Traversing such expansive environments with a single mobile robot is very time-consuming or even impractical. In addition, conventional localization methods based only on GPS, odometry and IMU are not always reliable for such long-range operations. The uneven, rough, and unstructured nature of rural environments, such as solar farms and disaster-struck regions, introduces additional localization errors. In such scenarios, a multi-robot system can be a suitable alternative to obtain full coverage of the area and execute tasks collaboratively, resulting in a more complete and time-efficient solution. With regard to mapping, a multi-robot system can rapidly explore the environment in parallel and from different angles, generating more accurate maps in less time [17] and enhancing the localization accuracy in challenging environments.
In general, a multi-robot mapping framework will require three main elements:
  • A mission-planning unit to coordinate robots to explore the environment.
  • A communication policy to share the map generated by each robot.
  • A matching and merging method to integrate individual maps into a global map.
In practice, mapping large outdoor 3D environments with a team of mobile robots is challenging due to communication limitations and the high volume of sensor data that needs to be shared and processed. High-capacity wireless communication routers commonly available to robots, such as Wi-Fi modules, typically have a limited range of about 90 m or less in open outdoor environments. On the other hand, long-range wireless devices, such as the Xbee-Pro RF module, can provide coverage of up to several kilometers but have a very limited bandwidth of about 200 kbps, which is not suitable for sharing high volumes of sensor data. Furthermore, the use of 4G mobile technologies is not always possible due to the lack of coverage in many rural areas or in disaster-response scenarios. Hence, it is essential to consider such communication constraints while developing a multi-agent mapping algorithm.
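To make these constraints concrete, the back-of-envelope sketch below (Python, with illustrative figures only; the map size and per-point storage are assumptions, not values from the paper) estimates how long a raw point cloud map would take to transmit over each link type.

```python
# Back-of-envelope estimate: time to transfer a raw point cloud map over the two
# link types discussed above (illustrative numbers, not from the paper).
BYTES_PER_POINT = 3 * 4          # x, y, z stored as 32-bit floats
map_points = 1_000_000           # a modest outdoor LiDAR map

payload_bits = map_points * BYTES_PER_POINT * 8

for name, bits_per_second in [("XBee-Pro RF (~200 kbps)", 200e3),
                              ("Wi-Fi (~15 Mbps)", 15e6)]:
    seconds = payload_bits / bits_per_second
    print(f"{name}: {seconds:,.0f} s for {map_points:,} points")

# Roughly 6 s over a Wi-Fi-class link versus about 8 min over the long-range radio,
# ignoring protocol overhead -- hence only odometry/GPS nodes and heavily filtered
# point clouds are sent long-range in the proposed framework.
```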
In this work, we present an online, fully distributed and active framework for a team of mobile ground robots equipped with 3D LiDAR sensors, targeting mapping and situational awareness in large outdoor environments. We develop our solution specifically for the inspection and surveillance of large multi-megawatt solar plants, while considering the strict range and bandwidth constraints of commonly available wireless technologies. However, the proposed framework can be applied to many other outdoor exploration problems, such as search and rescue or precision agriculture. The outline of the paper is as follows: Section 2 provides the literature review on 3D mapping and localization methods and discusses the various techniques and limitations of multi-agent cooperative mapping and point cloud registration. The proposed distributed multi-agent framework is described in Section 3. The experiments and results are presented in Section 4. Finally, Section 5 concludes the findings and outlines possible improvements to the proposed method.

2. Literature Review

Over the past decade, 3D sensors have emerged as revolutionary data acquisition devices. In robotics, 3D sensory information has been used for mapping, localization, obstacle avoidance, and scene recognition. Omnidirectional LiDARs [18,19,20,21], RGBD cameras [22,23,24,25], and uni-directional LiDARs [26,27,28] have all found applications in the field. Three-dimensional sensor-based algorithms, such as LOAM [18,19,21], have become the norm for out-of-the-box simultaneous localization and mapping. However, the computational complexity of 3D algorithms and the size of 3D sensor data make it challenging to achieve scalability. For these reasons, SLAM (simultaneous localization and mapping) algorithms [29,30,31] are not commonly preferred with 3D sensors, especially in large areas [32,33]. Methods of point cloud compression [34] and low-cost registration [35,36] are promising endeavors but require prior training. The considerable size of 3D data further constrains a decentralized multi-agent mapping system, making it challenging to share observations continuously. Hence, it is essential to transfer only the required features.
Point clouds are data structures that can be treated as rigid bodies and are typically generated from LiDAR sensors. The process of aligning two point clouds is called point cloud registration; it results in a rigid body transformation matrix that expresses one point cloud in the frame of another. Registration techniques [37] are categorized into local and global methods. Global registration [38,39,40,41] is suited to cases where no initial transformation estimate is available, for instance when the point clouds are acquired from spatially distant frames of reference. When an initial transformation is known, a quick refinement can be obtained from a local registration technique. Local registration techniques, such as iterative closest point (ICP) [42,43], normal distribution transform [44], point-to-plane [45,46,47], color-based [48,49] or class-based methods [50,51], can be computationally expensive if the initial transformation is inaccurate. For multi-agent systems, where each agent has a different frame of reference, a global point cloud alignment is therefore refined by a local point cloud registration technique.
A multi-robot system relies on a consistent network for the exchange of observations and data. If the system is not centralized, the agents rely on peer-to-peer networking. These networks can be classified into two subcategories: long-range and short-range networks. Long-range networks, such as low power wide area (LPWA) [52,53] and long-term evolution machine type communication (LTE-M) [54], provide networking solutions for large areas. The rural coverage analysis [55] for Sigfox (30 km at 12 kbps), LoRa (15 km at 290 bps–50 kbps) and LTE-M (10 km at 200 kbps–1 Mbps) showcases the constraints imposed on the data size. The Xbee-Pro RF modules [56] are commonly used for outdoor robot applications, allowing long-range radio-frequency (RF) transmission of up to several kilometers with a limited bandwidth of about 200 kbps. Short-range communication, such as Wi-Fi (100 m at 15 Mbps), is ideal for the transfer of large data. Thus, a distributed multi-agent mapping system that relies on peer-to-peer 3D sensory information transfer and covers a wide area (≥1 km²) requires both short- and long-range communication systems.
Multi-agent SLAM poses many different challenges, such as inter-agent cooperation and communication [57], distributed sensor fusion [58] and collaborative planning [59]. These challenges are further exacerbated when the agents share large information packets, such as 3D data [58]. They can be relaxed in a centralized system assisted by short-range, high-bandwidth communication devices [60,61]; however, this is not a realistic solution for large outdoor applications. In a communication-constrained environment [57,62,63,64,65], prior planning [66] to meet and share information can relieve stress on the communication channels. However, such periodic meetings can be challenging to realize when the area to be covered exceeds 1 km², especially considering the overall energy expended.
The major contribution of this article is an end-to-end, active, distributed and homogeneous framework for the large-scale 3D mapping of outdoor environments. We combine a low-bandwidth, long-range peer-to-peer network with a short-range peer-to-peer network to allow the framework to operate beyond the range limits of a Wi-Fi network. The proposed approach generates a global map in each agent’s frame and helps to localize the agents within this map. When triggered, the framework heavily filters point clouds to enable long-range transmission, and the transmitted data are then used for localization and mapping. This conditional approach ensures that only the necessary communication bandwidth and computation are used. The resulting relative localization can also be used to improve path planning, exploration, and mapping. The framework is designed to tackle the communication constraints imposed by large-area mapping.

3. Methodology

A set of N_a agents, R, is tasked to explore and map the environment. Each agent R_i (R_i ∈ R) is equipped with a 3D LiDAR sensor, a GPS receiver, an IMU, and an odometer. The LiDAR sensor has a maximum range of L_max. Considering possible GPS drifts, odometer slippage, and electromagnetic interference, each agent runs an instance of an extended Kalman filter (EKF) to fuse the sensory information from GPS, IMU and odometer and obtain a more reliable estimate of the global localization G_i in the geographic coordinate system. An instance of LiDAR odometry and mapping (LOAM) [18] is used on each agent to locally map the surrounding environment from the perspective of R_i and to locate the agent within that map. This egocentric LOAM localization is represented as an odometry message O_i in the sub-map M_i of agent R_i. Each agent is equipped with a short-range and a long-range wireless transceiver. The short-range transceiver is a Wi-Fi module that allows the peer-to-peer transfer of large map data and is only activated when two agents are within a distance C_max of each other. The long-range transceiver is an RF module that ensures long-range transfer of very small quantities of data and is used only to share odometry, GPS or heavily down-sampled 3D data.
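As a purely illustrative aid, the per-agent quantities introduced above can be grouped as in the sketch below (field names and types are our own assumptions for readability, not identifiers from the paper).

```python
from dataclasses import dataclass, field
import numpy as np

# Illustrative grouping of the per-agent state described above (names are assumptions).
@dataclass
class AgentState:
    agent_id: int
    G: np.ndarray = field(default_factory=lambda: np.zeros(2))        # EKF global estimate (lat, lon)
    O: np.ndarray = field(default_factory=lambda: np.zeros(3))        # egocentric LOAM odometry
    M: np.ndarray = field(default_factory=lambda: np.empty((0, 3)))   # local 3D map (point cloud)
    ball_tree_nodes: list = field(default_factory=list)               # (G, O) pairs, see Section 3.1
```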
Figure 1 portrays an instance of the proposed fast, decentralized multi-agent active mapping framework, executed on agent R_i. A separate instance of the framework is executed on each agent of R, ensuring an active, decentralized multi-agent 3D mapping system. The framework has two modules: a continuous update module and a conditional update module. The continuous update module is executed with every new sensory update. In this module, an instance of LOAM generates the egocentric odometry O_i and the map M_i. In addition, an instance of the extended Kalman filter fuses the sensory data from GPS, IMU and odometry to output the global localization estimate G_i. A ball tree generator, explained in Section 3.1, generates a global ball tree B_i that keeps track of O_i and G_i over time at specified distance intervals. Whenever a new tree node is added, it is also shared with all other agents using the long-range transmitter. A minimal proximity search, detailed in Section 3.2, computes the proximity of an incoming tree node from an agent R_j to all nodes of the ball tree B_i. The conditional update module is executed when the result of the minimal proximity search is true.
The conditional update module consists of several computationally intensive processes. Spherized point cloud registration, described in Section 3.3, covers the down-sampling, segmentation and nearest-neighbor sampling of M_i to generate a spherized map M_i^s. The spherized map, which is considerably reduced in size but abundant in features, is then transmitted using the long-range transmitter to the respective agent R_j for point cloud registration. The resultant transformation is later used for merging the complete maps once the agents are close enough to transfer them via the short-range transceiver.

3.1. Global Ball Tree Generator

A ball tree is a binary tree data structure used for data partitioning to ensure fast data queries [67]. When an agent R_i is initialized, a ball tree B_i is instantiated with G_i(t=0) as the root node. Node n of B_i is denoted B_i(n), and the latest node added is B_i(end). The pair-wise distance used for constructing B_i is the Haversine distance, which represents the angular distance between two points on the surface of a sphere. Ball trees with the Haversine distance have been shown to give fast nearest-neighbor look-up for GPS datasets [68,69]. A node B_i(n) inserted at time instant T contains G_i(t=T) and is tagged with the corresponding LOAM odometry message O_i(t=T). The framework continuously monitors the LOAM odometry O_i and iteratively calculates the distance between the current odometry O_i(t=T) and that of the last node B_i(end). If this distance is greater than a predefined value D_thresh, a new node with G_i(t=T) and O_i(t=T) is added to B_i.
This process, called the global ball-tree generator, is described in Algorithm 1 and is run continuously by each agent in R. Each node of the ball tree stores a global localization estimate G_i mapped to the corresponding LOAM odometry message recorded at the same time instant. These paired data are essential to link the egocentric localization of R_i with the global frame. Alternatively, we could georeference the point cloud at each iteration of LOAM, but this requires an accurate initial global localization estimate [70] and is computationally costly [71,72]. The intermittent method proposed in this work eases the computational complexity and alleviates the dependence on a single initial estimate.
Algorithm 1 Global ball-tree generator for agent R_i.
  • Input: G_i(t), O_i(t), D_thresh, B_i
  • Initialize B_i = Add-Node(G_i(t=0), O_i(t=0))
  • while mapping do
  •     Odom(Current) = O_i(t=T)
  •     Odom(LastNode) = B_i(end).O
  •     if ||Odom(Current) − Odom(LastNode)||₂ ≥ D_thresh then
  •         B_i = Add-Node(G_i(t=T), O_i(t=T), B_i)
  •     end if
  • end while
  • procedure Add-Node(G, O, B = Balltree())
  •     B.push(G)
  •     B(end).O ← O
  •     return B
  • end procedure
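The sketch below illustrates Algorithm 1 in Python using scikit-learn's BallTree with the haversine metric. It is a minimal sketch under our own assumptions: helper names, a 5 m default threshold, and a full tree rebuild on every insertion (scikit-learn's BallTree does not support incremental updates), none of which are taken from the paper.

```python
import numpy as np
from sklearn.neighbors import BallTree

EARTH_RADIUS_M = 6_371_000.0

class GlobalBallTree:
    """Ball tree over GPS nodes (haversine metric), each node tagged with LOAM odometry."""

    def __init__(self, G0, O0):
        self.gps_nodes = [np.radians(np.asarray(G0, dtype=float))]  # (lat, lon) in radians
        self.odom_nodes = [np.asarray(O0, dtype=float)]             # paired LOAM odometry
        self._rebuild()

    def _rebuild(self):
        self.tree = BallTree(np.vstack(self.gps_nodes), metric="haversine")

    def add_node(self, G, O):
        self.gps_nodes.append(np.radians(np.asarray(G, dtype=float)))
        self.odom_nodes.append(np.asarray(O, dtype=float))
        self._rebuild()

    def last_odom(self):
        return self.odom_nodes[-1]

    def nearest(self, G):
        """Return (distance in metres, node index) of the nearest stored GPS node."""
        dist, idx = self.tree.query(np.radians(np.atleast_2d(G)), k=1)
        return dist[0, 0] * EARTH_RADIUS_M, int(idx[0, 0])

def maybe_add_node(btree: GlobalBallTree, G_t, O_t, D_thresh: float = 5.0) -> bool:
    """Add a node once the agent has moved D_thresh metres in LOAM odometry (Algorithm 1)."""
    if np.linalg.norm(np.asarray(O_t) - btree.last_odom()) >= D_thresh:
        btree.add_node(G_t, O_t)
        return True   # in the framework, the new node is also broadcast over the long-range link
    return False
```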

3.2. Minimal Proximity Search

In a communication-constrained environment, it is essential to ensure that the bandwidth is used for the most vital transmissions. The process explained in this section queries for possible spatial overlaps in the global frame. In the minimal proximity search, every new node added to the ball tree, B_i(end), is transmitted and compared with the ball trees of the other agents in R for proximity within the global frame.
Every new node added to B_i of agent R_i is shared with the remaining agents over the long-range transmitter. The minimal bandwidth required to transfer the node B_i(end) makes this ideal for a communication-constrained environment. Without loss of generality, an agent R_j (R_j ∈ R, i ≠ j) has its own instance of LOAM, EKF and global ball-tree generator, resulting in its own sub-map M_j, egocentric odometry O_j, global localization estimate G_j, and global ball tree B_j. Agent R_j processes the incoming node information from R_i by carrying out a nearest-neighbor search in B_j. If the global localization estimate G_j of the resultant nearest-neighbor node, B_j(Neighbour), is within a certain threshold r_i of B_i(end), the node B_j(Neighbour) is shared with R_i. This is depicted in Algorithm 2.
Algorithm 2 Minimal proximity search by R_j.
  • Input: B_i(end), B_j, r_ij
  • while mapping do
  •     Neighbor = B_j.NearestNeighborSearch(B_i(end).G)
  •     if Neighbor.distance ≤ r_ij then
  •         return B_j(Neighbor)
  •     end if
  • end while
To minimize the effect of GPS drifts, the distance threshold r_i is a bounded dynamic threshold. Equation (1) ensures that r_i is bounded within predefined values (r_min, r_max) and proportional to the uncertainty C_ekf^i of the EKF estimate of agent R_i.
$$
r_i = \begin{cases} r_{max} & \text{if } C^{i}_{ekf} \cdot r_i \geq r_{max} \\ r_{min} & \text{if } C^{i}_{ekf} \cdot r_i \leq r_{min} \\ C^{i}_{ekf} \cdot r_i & \text{otherwise} \end{cases} \quad (1)
$$
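A minimal sketch of Equation (1) and Algorithm 2 is shown below, reusing the GlobalBallTree sketch from Section 3.1. Function names and the scalar uncertainty measure C_ekf are our own assumptions, and the r_min/r_max defaults are only illustrative.

```python
import numpy as np

def dynamic_threshold(r_i: float, C_ekf: float, r_min: float = 20.0, r_max: float = 30.0) -> float:
    """Equation (1): clamp the uncertainty-scaled radius to [r_min, r_max] metres."""
    return float(np.clip(C_ekf * r_i, r_min, r_max))

def minimal_proximity_search(incoming_node_gps, btree, r_i: float, C_ekf: float):
    """Algorithm 2 sketch: return the index of the local node matching an incoming node, or None."""
    dist_m, idx = btree.nearest(incoming_node_gps)        # haversine distance via GlobalBallTree
    if dist_m <= dynamic_threshold(r_i, C_ekf):
        return idx                                        # node to be shared back over the RF link
    return None
```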

3.3. Spherized Point Cloud Registration

There exists a transformation T_ij that aligns the map M_j of agent R_j with the map M_i of agent R_i. This transformation can be obtained by registering M_j with M_i. However, M_i and M_j grow rapidly as R_i and R_j individually explore and map the environment from their own perspectives. Sharing such large data over a long-range, bandwidth-limited communication channel would lead to high network latency and data loss. Hence, this section describes a strategy to share only small sampled subsets of the maps, and only for the regions that are expected to have sufficient overlapping features for registration.
Without loss of generality, let us assume that the minimal proximity search was successful for two agents R_i and R_j. A successful minimal proximity search (Section 3.2) gives an assurance that, at two different time instants, R_i and R_j were spatially close in the global frame. The search results in two nodes, node_i and node_j, of B_i and B_j that are globally close; B_i(node_i).O and B_j(node_j).O give the egocentric odometry measurements of R_i and R_j. For lucidity, we will refer to B_i(node_i).O and B_j(node_j).O as L_i and L_j, respectively.
A Euclidean ball of radius r_o is generated in both M_i and M_j, centered at L_i and L_j, respectively. This method of filtering is hereafter referred to as spherization. The points within the sphere are sampled and used for point cloud registration. Since they represent a fraction of the overall map, the size is considerably reduced. In addition, the sampled maps, M_i^s and M_j^s, contain features that are bound to overlap, because they were generated around locations where the agents were spatially close in the global frame. Since a point cloud can be considered a rigid body of particles [73], we can conclude that the transformation T_ij that successfully aligns M_i^s with M_j^s also aligns M_i with M_j.
The spherized maps are transmitted over the long-range transmitter to the respective agents. For a seamless transmission over the constrained-bandwidth channel, the spherized maps have to be smaller than 25 kilobytes. Thus, spherization is preceded by downsampling, ground-plane removal and outlier removal to bring the overall size of the point cloud down to this limit. Each agent generates the spherized maps in its own frame of reference. These frames of reference can be separated by several meters, which is not ideal for a local point cloud registration algorithm. We therefore use a global registration algorithm to roughly align the two spherized point clouds. The transformation matrix acquired from the global registration is then used to initialize the local point cloud registration, which refines the initial rough alignment. The local point cloud registration results in the transformation T_ij and the RMSE E_ij of all inlier correspondences.
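The spherization pre-filter described above can be sketched with the Open3D library as follows. This is a minimal illustration under our own assumptions: parameter values are placeholders, and the pipeline is not necessarily the authors' implementation.

```python
import numpy as np
import open3d as o3d

def spherize(map_cloud: o3d.geometry.PointCloud, centre: np.ndarray, radius: float,
             voxel: float = 0.3) -> o3d.geometry.PointCloud:
    """Down-sample, remove ground plane and outliers, then keep points inside a Euclidean ball."""
    # 1. Voxel down-sampling to shrink the cloud.
    pcd = map_cloud.voxel_down_sample(voxel_size=voxel)
    # 2. Ground-plane removal via RANSAC plane segmentation.
    _, ground_idx = pcd.segment_plane(distance_threshold=0.1, ransac_n=3, num_iterations=200)
    pcd = pcd.select_by_index(ground_idx, invert=True)
    # 3. Statistical outlier removal.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    # 4. Keep only points inside the ball centred at the ball-tree node odometry (spherization).
    pts = np.asarray(pcd.points)
    inside = np.linalg.norm(pts - np.asarray(centre), axis=1) <= radius
    return o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts[inside]))
```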
The RMSE [45], in the context of point cloud registration, refers to the root mean square error between the corresponding points of the two point clouds. For N_c correspondences between M_i^s and M_j^s, the RMSE for transformation T_ij can be calculated by Equation (2), where c_i and c_j refer to the correspondences in M_i^s and M_j^s, respectively. The transformation T_ij that minimizes E_ij across all executions of Algorithm 3 is chosen for the full map alignment.
$$
\mathrm{RMSE} = \sqrt{\frac{\sum_{n=1}^{N_c} \left\lVert M^{s}_{i}(c^{n}_{i}) - T_{ij}\, M^{s}_{j}(c^{n}_{j}) \right\rVert^{2}}{N_c}} \quad (2)
$$
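For clarity, Equation (2) can be computed directly from matched point pairs, as in the small illustrative helper below (array shapes and names are assumptions).

```python
import numpy as np

def registration_rmse(pts_i: np.ndarray, pts_j: np.ndarray, T_ij: np.ndarray) -> float:
    """RMSE of Eq. (2): pts_i, pts_j are (N_c, 3) corresponding points; T_ij is a 4x4 rigid transform."""
    pts_j_h = np.hstack([pts_j, np.ones((len(pts_j), 1))])   # homogeneous coordinates
    residuals = pts_i - (pts_j_h @ T_ij.T)[:, :3]            # M_i^s(c_i) - T_ij * M_j^s(c_j)
    return float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))
```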
In the proposed implementation, the global registration is carried out using RANSAC (random sample consensus) [74]. The FPFH (fast point feature histogram) [75], a 33-dimensional vector that encapsulates the local geometric properties of each point, is computed, and RANSAC matches these features to produce a fast, approximate alignment. For the local registration, we are aware that the process could be further enhanced by sharing only features [76,77] rather than the entire point clouds and subsequently using feature-based registration methods [45,50,51]. We could also apply a semantic mapping technique [20,21] to acquire a segmented map before spherization. However, we use point-to-plane ICP [46] to keep the overall complexity and the number of tunable parameters to a minimum.
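The coarse-to-fine registration described above can be sketched with Open3D as follows; the thresholds are illustrative placeholders, and this is not necessarily the authors' implementation or parameter set.

```python
import open3d as o3d

def register_spherized(src: o3d.geometry.PointCloud, tgt: o3d.geometry.PointCloud, voxel: float = 0.3):
    """RANSAC + FPFH global alignment followed by point-to-plane ICP refinement."""
    def preprocess(pcd):
        pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        return o3d.pipelines.registration.compute_fpfh_feature(
            pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))

    src_fpfh, tgt_fpfh = preprocess(src), preprocess(tgt)

    # Global (coarse) alignment with RANSAC over FPFH correspondences.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_fpfh, tgt_fpfh, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100_000, 0.999))

    # Local refinement with point-to-plane ICP, initialized by the coarse transform.
    fine = o3d.pipelines.registration.registration_icp(
        src, tgt, voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation, fine.inlier_rmse   # T_ij and E_ij in the paper's notation
```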
Algorithm 3 Spherized point cloud registration in agent R_i.
  • Input: M_i, M_j^s, B_i, r_ij^s
  • if Minimal-Proximity-Search(B_i(end), B_j, r_ij^s) is True then
  •     M_i^s = Spherization(M_i, B_i(end).O, r_ij^s)
  •     Long-range-transmission(M_i^s) -> R_j
  •     T_ij = Global-Point-Cloud-Registration(M_i^s, M_j^s)
  •     T_ij, E_ij, C_ij = Local-Point-Cloud-Registration(M_i^s, M_j^s)
  •     if E_ij < E_ij^u then
  •         E_ij^u = E_ij
  •         T_ij^u = T_ij
  •         return T_ij^u
  •     end if
  • end if
  • procedure Spherization(M, O, r)
  •     M = Outlier-Removal(Ground-Plane-Removal(Downsample(M)))
  •     Neighbors = M.NearestNeighborSearch(centre = O, radius = r)
  •     M^s = M(Neighbors)
  •     return M^s
  • end procedure

4. Experiments and Results

4.1. Real World Experiments

We carried out our experiments with two Jackal robots from Clearpath Robotics (named J1 and J2, shown in Figure 2a) on an actual solar farm (total area of approximately 1 km², depicted in Figure 3a). The two robots were each equipped with a Velodyne Puck (VLP-16) sensor, which has 16 layers of infra-red (IR) lasers, a horizontal field of view of 360°, a vertical field of view of 30°, and an acquisition rate of up to 300,000 points per second. A Pixhawk 2.1 Cube IMU and a Here+ GPS receiver were also added to each robot.
The global paths of the robots were planned beforehand to explore the regions of interest by visiting a set of predefined GPS waypoints. The plans also include time instances in which the robots are within communication range so that map information can be shared between agents. The global paths taken by the robots are presented in Figure 2b, along with the area in which the agents were within short-range communication distance and the region in which the minimal proximity search was successful. The values of (r_min, r_max) were set to (20 m, 30 m). Figure 4 represents the various stages of the framework during the experiment. Each agent’s LOAM initialization (shown in Figure 4a,d) creates an ego-centric frame of reference. Once a successful minimal proximity search is achieved, the down-sampled point cloud spheres are shared between both agents and registered in the respective frames of reference, as portrayed in Figure 4b,e. Finally, when the agents are close enough for short-range communication, the latest maps generated by J1 and J2 are shared and aligned, as depicted in Figure 4c,f.

4.2. Simulated Experiments

The simulations were carried out in a digital twin world (a 3D Gazebo model) of the actual solar farm used in the real-world experiments, shown in Figure 3a. The simulated environment had a total area of about 1 km². The digital twin was used purely for simulation purposes, to further test the collaborative 3D scene reconstruction framework with multiple agents, and was not linked to real-time sensory data from the actual solar farm. The complete ground truth point cloud was acquired by converting the 3D Gazebo mesh model (depicted in Figure 5b) to a 3D point cloud model (depicted in Figure 5c).
We initially performed a brief parameter analysis to select the maximum LiDAR range, L_max. Figure 6 shows three maps generated with three different L_max values by an agent following the same path between the solar panels of the digital twin world (Figure 3c). We can note that for L_max = 20 m (Figure 6a), LOAM is unable to properly find correspondences that are further away, and the reconstructed panels are consequently curved incorrectly. This issue is not seen for L_max = 40 m (Figure 6b) or L_max = 80 m (Figure 6c). To keep the computational load to a minimum, L_max was set to 40 m for the experiments.
To validate the robustness of the proposed algorithm to 3D-laser errors, we induced a Gaussian error in the simulated VLP-16 sensor. The red circles in Figure 7 mark the correspondences between the ground truth and the 3D map generated by a single agent. It can be noted that, owing to the LiDAR errors, there is a clear mismatch in the generated map; the resultant incorrect 3D reconstruction is evident in the encircled areas. Such reconstruction errors, occurring in each mapping agent, are bound to make the eventual point cloud registration prone to errors. However, the cooperative framework proved robust against such individual reconstruction errors and could still merge multiple maps with good accuracy.
We averaged the results over 15 simulations with varying numbers of UGVs (N_a = 2 to 5 agents). To compare the robustness of the proposed method to noise in the LiDAR data and the resultant LOAM mapping, we executed the experiments with two LiDAR rates, f = 10 Hz and f = 5 Hz. Mapping at a lower laser frequency, for the same agent speed, is relatively more error prone. Similar to the real-world experiment, navigation is carried out between predefined waypoints, shown in Figure 5a. These waypoints are grouped into rows and divided uniformly amongst the N_a agents. The selected paths covered all possible communication conditions and allowed validation of the end-to-end functionality of the proposed framework. For N_a = 3, the distribution of agents and the path taken by each agent can be seen in Figure 8b. Each color in the point cloud shown in Figure 8a represents the sub-map obtained by a single agent. Note that the ground plane was removed for ease of visualization. The errors in the individual LOAM maps are evident as artifacts in Figure 8a; the framework is robust to these errors and is able to merge the maps regardless.
For the performance analysis of the 3D scene reconstructions, we register the resultant merged map (M_m) from the simulated UGVs against the ground truth 3D model (M_g). This point cloud registration results in a transformation matrix T_mg and a set of corresponding points, c_g and c_m, in M_g and M_m, respectively.

4.2.1. RMSE Analysis

Figure 9a plots the minimum, maximum and mean RMSE values for varying numbers of agents. We can note an overall decline in the RMSE values as the number of agents, N_a, increases. The mapping error introduced by the LiDAR noise accumulates over time; with more agents this error is spread across the team, so the overall decline in RMSE is expected, provided the map merging is accurate. The decline in RMSE therefore implies a successful fusion of each agent’s map.

4.2.2. Fitness Analysis

The fitness (F) of a point cloud registration measures the overlapping area of the two point clouds. For our scenario, where the target point cloud M_g has N_g points, F is given by Equation (3).
$$
F = \frac{N_c}{N_g} \quad (3)
$$
Since M_g is constant throughout the analysis, a high fitness score implies an increase in the number of correspondences. In Figure 9b, we can note that with the increase in N_a there is a steady increase in F (and thereby N_c), implying that the merged point clouds have more point-to-point correspondences with M_g. This can be attributed to the successful blending of each agent’s map.
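As an illustration, the fitness of Equation (3) can be computed by counting the ground-truth points that have a merged-map point within a correspondence distance; the sketch below assumes Open3D and an illustrative correspondence threshold, not the authors' tooling.

```python
import numpy as np
import open3d as o3d

def fitness(merged: o3d.geometry.PointCloud,
            ground_truth: o3d.geometry.PointCloud,
            corr_dist: float = 0.5) -> float:
    """Equation (3): fraction of ground-truth points with a merged-map point within corr_dist."""
    kdtree = o3d.geometry.KDTreeFlann(merged)
    n_corr = 0
    for p in np.asarray(ground_truth.points):
        k, _, _ = kdtree.search_radius_vector_3d(p, corr_dist)
        n_corr += int(k > 0)                      # this ground-truth point has a correspondence
    return n_corr / len(ground_truth.points)      # F = N_c / N_g
```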

4.2.3. Covariance Analysis

The Fisher information matrix I, acquired as a result of the point cloud registration of M_g on M_m, characterizes the confidence in the registration process. The inverse [78] of I gives the covariance matrix C of the point cloud registration [74,79,80], which captures the uncertainties in the 6 degrees of freedom. We use the determinant of C to analyse the overall uncertainty. Figure 10a shows a steady decline in the determinant of C, implying that the point cloud registration becomes more confident in its result as N_a increases. The reduced covariance, or increased confidence, is the result of successful map merges.
The results depicted in Figure 9 and Figure 10 further showcase the robustness of the proposed algorithm. The behavior exhibited by the agents at f = 10 Hz is similar to that at f = 5 Hz; in both cases, the agents show a decline in RMSE and det(C) and an increase in F as N_a increases. Although there is a performance degradation at f = 5 Hz with respect to f = 10 Hz, this is expected, owing to the increased mapping error from LOAM.

4.2.4. Time Analysis

Trivially, the time taken to map the whole area should decline linearly with N_a. For all simulation runs, the homogeneous agents have the same set of waypoints to visit. Figure 10b shows the time taken to complete the whole map. Mapping is deemed complete when all agents have completed Algorithm 3 with E^u ≤ 0.4; this threshold gives a reliable transformation between the maps of different agents. As is evident in Figure 10b, there is the expected decline in time taken; however, it is not linear, because of the time required to achieve a successful spherized registration with a low E^u.

4.2.5. Density

The density at a point p in a point cloud is the number of points around p within a sphere of radius r_d, normalized by the volume of that sphere. The density at point p, D(p), is given by Equation (4). Density can be roughly considered analogous to the resolution of an image: a denser point cloud is a more detailed point cloud.
$$
D(p) = \frac{\text{number of points within sphere}(\text{centre}=p,\ \text{radius}=r_d)}{\tfrac{4}{3}\pi r_d^{3}} \quad (4)
$$
For the analysis, we average the density over every point in the merged point cloud [81], with the spherical radius r_d chosen as 1 m. The results are shown in Figure 11a. We can see a healthy increase in the average density with N_a. This is attributed to the increased overlapping areas: during the merging of the individual point clouds, the overlapping areas contain unique points from different agents, which increases the local point density. In addition, though trivially, the density is higher at f = 10 Hz than at f = 5 Hz, due to the increased rate of LiDAR data acquisition. Figure 11b depicts the surface density distribution across each point in the point cloud generated with 3 agents (shown in Figure 8a), and Figure 11c shows the corresponding histogram of surface density. We can note that a high surface density is rare and is achieved primarily in areas of overlap.
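A small sketch of this density computation is given below, assuming Open3D for the radius search (the helper name is ours; r_d = 1 m as above).

```python
import numpy as np
import open3d as o3d

def mean_density(cloud: o3d.geometry.PointCloud, r_d: float = 1.0) -> float:
    """Average of Equation (4) over all points of the cloud (points per cubic metre)."""
    kdtree = o3d.geometry.KDTreeFlann(cloud)
    sphere_volume = (4.0 / 3.0) * np.pi * r_d ** 3
    densities = []
    for p in np.asarray(cloud.points):
        k, _, _ = kdtree.search_radius_vector_3d(p, r_d)
        densities.append(k / sphere_volume)       # local point density around p
    return float(np.mean(densities))
```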

5. Conclusions and Future Work

This article proposes an active, distributed, homogeneous multi-agent mapping and localization framework. The distributed framework enables conditional long-range and short-range peer-to-peer communication for small and large data, respectively. The proposed method was tested on a real-world solar farm with two UGVs and on its digital twin with multiple agents. The results showcase the robustness of the proposed algorithm to independent mapping errors. However, we acknowledge that using direct point cloud registration in the framework can be error-prone when LiDAR errors increase. Additionally, the spherized registration is robust to global localization errors only up to a few meters; thus, noisy EKF estimates, caused by large GPS or magnetic interference, might lead to incorrect map merges. For future work, we aim to extend the framework to a heterogeneous team of agents with a heterogeneous set of sensors. We also plan to incorporate an optimal waypoint-planning module that considers the communication constraints and each agent’s battery life. Currently, we are conducting an in-depth parameter study to understand and optimize the framework.

Author Contributions

Conceptualization, J.L., M.B. and P.U.L.; methodology, J.L., M.B. and P.U.L.; software, J.L.; validation, J.L.; formal analysis, J.L., M.B. and P.U.L.; investigation, J.L., M.B. and P.U.L.; resources, J.L., M.B. and P.U.L.; data curation, J.L., M.B. and P.U.L.; writing—original draft preparation, J.L., M.B. and P.U.L.; writing—review and editing, J.L., M.B. and P.U.L.; visualization, J.L.; supervision, M.B. and P.U.L.; project administration, M.B. and P.U.L.; funding acquisition, J.L., M.B. and P.U.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a doctoral grant from Fundação para a Ciência e a Tecnologia (FCT) UI/BD/153758/2022 and ISR/LARSyS Strategic Funding through the FCT project UIDB/50009/2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EKF: Extended Kalman Filter
EMI: Electromagnetic Interference
LOAM: LiDAR Odometry and Mapping in Real Time
IMU: Inertial Measurement Unit

References

  1. Basiri, M.; Gonçalves, J.; Rosa, J.; Bettencourt, R.; Vale, A.; Lima, P. A multipurpose mobile manipulator for autonomous firefighting and construction of outdoor structures. Field Robot 2021, 1, 102–126. [Google Scholar] [CrossRef]
  2. Karma, S.; Zorba, E.; Pallis, G.; Statheropoulos, G.; Balta, I.; Mikedi, K.; Vamvakari, J.; Pappa, A.; Chalaris, M.; Xanthopoulos, G.; et al. Use of unmanned vehicles in search and rescue operations in forest fires: Advantages and limitations observed in a field trial. Int. J. Disaster Risk Reduct. 2015, 13, 307–312. [Google Scholar] [CrossRef]
  3. Limosani, R.; Esposito, R.; Manzi, A.; Teti, G.; Cavallo, F.; Dario, P. Robotic delivery service in combined outdoor–indoor environments: Technical analysis and user evaluation. Robot. Auton. Syst. 2018, 103, 56–67. [Google Scholar] [CrossRef]
  4. Åstrand, B.; Baerveldt, A.J. An agricultural mobile robot with vision-based perception for mechanical weed control. Auton. Robot. 2002, 13, 21–35. [Google Scholar] [CrossRef]
  5. Lu, S.; Zhang, Y.; Su, J. Mobile robot for power substation inspection: A survey. IEEE/CAA J. Autom. Sin. 2017, 4, 830–847. [Google Scholar] [CrossRef]
  6. Capezio, F.; Sgorbissa, A.; Zaccaria, R. GPS-based localization for a surveillance UGV in outdoor areas. In Proceedings of the Fifth International Workshop on Robot Motion and Control, Dymaczewo, Poland, 23–25 June 2005; pp. 157–162. [Google Scholar]
  7. Montambault, S.; Pouliot, N. Design and validation of a mobile robot for power line inspection and maintenance. In Proceedings of the 6th International Conference on Field and Service Robotics-FSR 2007, Chamonix Mont-Blanc, France, 6–12 July 2007; Springer: Berlin/Heidelberg, Germany, 2008; pp. 495–504. [Google Scholar]
  8. Akyazi, Ö.; Şahin, E.; Özsoy, T.; Algül, M. A solar panel cleaning robot design and application. Avrupa Bilim Ve Teknoloji Dergisi 2019, 343–348. [Google Scholar] [CrossRef]
  9. Jaradat, M.A.; Tauseef, M.; Altaf, Y.; Saab, R.; Adel, H.; Yousuf, N.; Zurigat, Y.H. A fully portable robot system for cleaning solar panels. In Proceedings of the 2015 10th International Symposium on Mechatronics and its Applications (ISMA), Sharjah, United Arab Emirates, 8–10 December 2015; pp. 1–6. [Google Scholar]
  10. Kazem, H.A.; Chaichan, M.T.; Al-Waeli, A.H.; Sopian, K. A review of dust accumulation and cleaning methods for solar photovoltaic systems. J. Clean. Prod. 2020, 276, 123187. [Google Scholar] [CrossRef]
  11. Schwarz, M.; Rodehutskors, T.; Droeschel, D.; Beul, M.; Schreiber, M.; Araslanov, N.; Ivanov, I.; Lenz, C.; Razlaw, J.; Schüller, S.; et al. NimbRo Rescue: Solving disaster-response tasks with the mobile manipulation robot Momaro. J. Field Robot. 2017, 34, 400–425. [Google Scholar] [CrossRef] [Green Version]
  12. Haynes, G.C.; Stager, D.; Stentz, A.; Vande Weghe, J.M.; Zajac, B.; Herman, H.; Kelly, A.; Meyhofer, E.; Anderson, D.; Bennington, D.; et al. Developing a robust disaster response robot: CHIMP and the robotics challenge. J. Field Robot. 2017, 34, 281–304. [Google Scholar] [CrossRef]
  13. Kruijff, G.J.M.; Kruijff-Korbayová, I.; Keshavdas, S.; Larochelle, B.; Janíček, M.; Colas, F.; Liu, M.; Pomerleau, F.; Siegwart, R.; Neerincx, M.A.; et al. Designing, developing, and deploying systems to support human–robot teams in disaster response. Adv. Robot. 2014, 28, 1547–1570. [Google Scholar] [CrossRef]
  14. Hajjaj, S.S.H.; Sahari, K.S.M. Review of agriculture robotics: Practicality and feasibility. In Proceedings of the 2016 IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), Tokyo, Japan, 17–20 December 2016; pp. 194–198. [Google Scholar]
  15. Pfaff, P.; Triebel, R.; Stachniss, C.; Lamon, P.; Burgard, W.; Siegwart, R. Towards mapping of cities. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 4807–4813. [Google Scholar]
  16. Bauer, A.; Klasing, K.; Lidoris, G.; Mühlbauer, Q.; Rohrmüller, F.; Sosnowski, S.; Xu, T.; Kühnlenz, K.; Wollherr, D.; Buss, M. The autonomous city explorer: Towards natural human-robot interaction in urban environments. Int. J. Soc. Robot. 2009, 1, 127–140. [Google Scholar] [CrossRef]
  17. Simmons, R.; Apfelbaum, D.; Burgard, W.; Fox, D.; Moors, M.; Thrun, S.; Younes, H. Coordination for multi-robot exploration and mapping. In Proceedings of the AAAI/IAAI; 2000; pp. 852–858. Available online: https://www.aaai.org/Papers/AAAI/2000/AAAI00-131.pdf (accessed on 9 December 2022).
  18. Zhang, J.; Singh, S. LOAM: Lidar Odometry and Mapping in Real-time. In Robotics: Science and Systems; University of California: Berkeley, CA, USA, 2014; pp. 1–9. [Google Scholar]
  19. Shan, T.; Englot, B. Lego-loam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4758–4765. [Google Scholar]
  20. Li, L.; Kong, X.; Zhao, X.; Li, W.; Wen, F.; Zhang, H.; Liu, Y. SA-LOAM: Semantic-aided LiDAR SLAM with loop closure. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 7627–7634. [Google Scholar]
  21. Chen, S.W.; Nardari, G.V.; Lee, E.S.; Qu, C.; Liu, X.; Romero, R.A.F.; Kumar, V. Sloam: Semantic lidar odometry and mapping for forest inventory. IEEE Robot. Autom. Lett. 2020, 5, 612–619. [Google Scholar] [CrossRef] [Green Version]
  22. Yousif, K.; Taguchi, Y.; Ramalingam, S. MonoRGBD-SLAM: Simultaneous localization and mapping using both monocular and RGBD cameras. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 4495–4502. [Google Scholar]
  23. Loianno, G.; Thomas, J.; Kumar, V. Cooperative localization and mapping of MAVs using RGB-D sensors. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 4021–4028. [Google Scholar]
  24. Apriaskar, E.; Nugraha, Y.P.; Trilaksono, B.R. Simulation of simultaneous localization and mapping using hexacopter and RGBD camera. In Proceedings of the 2017 2nd International Conference on Automation, Cognitive Science, Optics, Micro Electro-Mechanical System, and Information Technology (ICACOMIT), Jakarta, Indonesia, 23–24 October 2017; pp. 48–53. [Google Scholar]
  25. Paton, M.; Kosecka, J. Adaptive rgb-d localization. In Proceedings of the 2012 Ninth Conference on Computer and Robot Vision, Toronto, ON, Canada, 28–30 May 2012; pp. 24–31. [Google Scholar]
  26. Lin, J.; Zhang, F. Loam livox: A fast, robust, high-precision LiDAR odometry and mapping package for LiDARs of small FoV. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2020; pp. 3126–3131. [Google Scholar]
  27. Xu, W.; Zhang, F. Fast-lio: A fast, robust lidar-inertial odometry package by tightly-coupled iterated kalman filter. IEEE Robot. Autom. Lett. 2021, 6, 3317–3324. [Google Scholar] [CrossRef]
  28. Xu, W.; Cai, Y.; He, D.; Lin, J.; Zhang, F. Fast-lio2: Fast direct lidar-inertial odometry. IEEE Trans. Robot. 2022, 38, 2053–2073. [Google Scholar] [CrossRef]
  29. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 2006, 13, 99–110. [Google Scholar] [CrossRef] [Green Version]
  30. Bailey, T.; Durrant-Whyte, H. Simultaneous localization and mapping (SLAM): Part II. IEEE Robot. Autom. Mag. 2006, 13, 108–117. [Google Scholar] [CrossRef] [Green Version]
  31. Kim, P.; Chen, J.; Cho, Y.K. SLAM-driven robotic mapping and registration of 3D point clouds. Autom. Constr. 2018, 89, 38–48. [Google Scholar] [CrossRef]
  32. Takleh, T.T.O.; Bakar, N.A.; Rahman, S.A.; Hamzah, R.; Aziz, Z. A brief survey on SLAM methods in autonomous vehicle. Int. J. Eng. Technol. 2018, 7, 38–43. [Google Scholar] [CrossRef] [Green Version]
  33. Jiang, Z.; Zhu, J.; Lin, Z.; Li, Z.; Guo, R. 3D mapping of outdoor environments by scan matching and motion averaging. Neurocomputing 2020, 372, 17–32. [Google Scholar] [CrossRef]
  34. Wiesmann, L.; Milioto, A.; Chen, X.; Stachniss, C.; Behley, J. Deep compression for dense point cloud maps. IEEE Robot. Autom. Lett. 2021, 6, 2060–2067. [Google Scholar] [CrossRef]
  35. Navarrete, J.; Viejo, D.; Cazorla, M. Compression and registration of 3D point clouds using GMMs. Pattern Recognit. Lett. 2018, 110, 8–15. [Google Scholar] [CrossRef] [Green Version]
  36. Wiesmann, L.; Guadagnino, T.; Vizzo, I.; Grisetti, G.; Behley, J.; Stachniss, C. DCPCR: Deep Compressed Point Cloud Registration in Large-Scale Outdoor Environments. IEEE Robot. Autom. Lett. 2022, 7, 6327–6334. [Google Scholar] [CrossRef]
  37. Huang, X.; Mei, G.; Zhang, J.; Abbas, R. A comprehensive survey on point cloud registration. arXiv 2021, arXiv:2103.02690. [Google Scholar]
  38. Choy, C.; Dong, W.; Koltun, V. Deep global registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2514–2523. [Google Scholar]
  39. Zhou, Q.Y.; Park, J.; Koltun, V. Fast global registration. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2016; pp. 766–782. [Google Scholar]
  40. Yang, H.; Shi, J.; Carlone, L. Teaser: Fast and certifiable point cloud registration. IEEE Trans. Robot. 2020, 37, 314–333. [Google Scholar] [CrossRef]
  41. Lei, H.; Jiang, G.; Quan, L. Fast descriptors and correspondence propagation for robust global point cloud registration. IEEE Trans. Image Process. 2017, 26, 3614–3623. [Google Scholar] [CrossRef]
  42. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Sensor Fusion IV: Control Paradigms and Data Structures; Spie: Bellingham, WA, USA, 1992; Volume 1611, pp. 586–606. [Google Scholar]
  43. Chen, Y.; Medioni, G. Object modelling by registration of multiple range images. Image Vis. Comput. 1992, 10, 145–155. [Google Scholar] [CrossRef]
  44. Biber, P.; Straßer, W. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No. 03CH37453), Las Vegas, NV, USA, 27–31 October 2003; Volume 3, pp. 2743–2748. [Google Scholar]
  45. Rusinkiewicz, S.; Levoy, M. Efficient variants of the ICP algorithm. In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, Quebec City, QC, Canada, 28 May–1 June 2001; pp. 145–152. [Google Scholar]
  46. Low, K.L. Linear least-squares optimization for point-to-plane icp surface registration. Chapel Hill Univ. North Carol. 2004, 4, 1–3. [Google Scholar]
  47. Park, S.Y.; Subbarao, M. An accurate and fast point-to-plane registration technique. Pattern Recognit. Lett. 2003, 24, 2967–2976. [Google Scholar] [CrossRef]
  48. Park, J.; Zhou, Q.Y.; Koltun, V. Colored point cloud registration revisited. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 143–152. [Google Scholar]
  49. Huhle, B.; Magnusson, M.; Straßer, W.; Lilienthal, A.J. Registration of colored 3D point clouds with a kernel-based extension to the normal distributions transform. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Bangkok, Thailand, 14–17 December 2008; pp. 4025–4030. [Google Scholar]
  50. Zaganidis, A.; Sun, L.; Duckett, T.; Cielniak, G. Integrating deep semantic segmentation into 3-d point cloud registration. IEEE Robot. Autom. Lett. 2018, 3, 2942–2949. [Google Scholar] [CrossRef] [Green Version]
  51. Zaganidis, A.; Magnusson, M.; Duckett, T.; Cielniak, G. Semantic-assisted 3D normal distributions transform for scan registration in environments with limited structure. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 4064–4069. [Google Scholar]
  52. Raza, U.; Kulkarni, P.; Sooriyabandara, M. Low power wide area networks: An overview. IEEE Commun. Surv. Tutorials 2017, 19, 855–873. [Google Scholar] [CrossRef] [Green Version]
  53. Ikpehai, A.; Adebisi, B.; Rabie, K.M.; Anoh, K.; Ande, R.E.; Hammoudeh, M.; Gacanin, H.; Mbanaso, U.M. Low-power wide area network technologies for Internet-of-Things: A comparative review. IEEE Internet Things J. 2018, 6, 2225–2240. [Google Scholar] [CrossRef] [Green Version]
  54. Vaezi, M.; Azari, A.; Khosravirad, S.R.; Shirvanimoghaddam, M.; Azari, M.M.; Chasaki, D.; Popovski, P. Cellular, wide-area, and non-terrestrial IoT: A survey on 5G advances and the road toward 6G. IEEE Commun. Surv. Tutorials 2022, 24, 1117–1174. [Google Scholar] [CrossRef]
  55. Vejlgaard, B.; Lauridsen, M.; Nguyen, H.; Kovács, I.Z.; Mogensen, P.; Sorensen, M. Coverage and capacity analysis of sigfox, lora, gprs, and nb-iot. In Proceedings of the 2017 IEEE 85th Vehicular Technology Conference (VTC Spring), Sydney, Australia, 4–7 June 2017; pp. 1–5. [Google Scholar]
  56. XBee RF Modules. Available online: http://www.digi.com/products/xbee-rf-solutions (accessed on 12 August 2022).
  57. Corah, M.; O’Meadhra, C.; Goel, K.; Michael, N. Communication-efficient planning and mapping for multi-robot exploration in large environments. IEEE Robot. Autom. Lett. 2019, 4, 1715–1721. [Google Scholar] [CrossRef]
  58. Xu, X.; Zhang, L.; Yang, J.; Cao, C.; Wang, W.; Ran, Y.; Tan, Z.; Luo, M. A review of multi-sensor fusion slam systems based on 3D LIDAR. Remote Sens. 2022, 14, 2835. [Google Scholar] [CrossRef]
  59. Valencia, R.; Morta, M.; Andrade-Cetto, J.; Porta, J.M. Planning reliable paths with pose SLAM. IEEE Trans. Robot. 2013, 29, 1050–1059. [Google Scholar] [CrossRef] [Green Version]
  60. Krinkin, K.; Filatov, A.; yom Filatov, A.; Huletski, A.; Kartashov, D. Evaluation of modern laser based indoor slam algorithms. In Proceedings of the 2018 22nd Conference of Open Innovations Association (FRUCT), Jyvaskyla, Finland, 15–18 May 2018; pp. 101–106. [Google Scholar]
  61. Sayed, A.S.; Ammar, H.H.; Shalaby, R. Centralized multi-agent mobile robots SLAM and navigation for COVID-19 field hospitals. In Proceedings of the 2020 2nd Novel Intelligent and Leading Emerging Sciences Conference (NILES), Giza, Egypt, 24–26 October 2020; pp. 444–449. [Google Scholar]
  62. Liu, T.M.; Lyons, D.M. Leveraging area bounds information for autonomous decentralized multi-robot exploration. Robot. Auton. Syst. 2015, 74, 66–78. [Google Scholar] [CrossRef] [Green Version]
  63. Matignon, L.; Jeanpierre, L.; Mouaddib, A.I. Coordinated multi-robot exploration under communication constraints using decentralized markov decision processes. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, Toronto, ON, Canada, 22–26 June 2012. [Google Scholar]
  64. Arkin, R.C.; Diaz, J. Line-of-sight constrained exploration for reactive multiagent robotic teams. In Proceedings of the 7th International Workshop on Advanced Motion Control, Maribor, Slovenia, 3–5 July 2002; pp. 455–461. [Google Scholar]
  65. Amigoni, F.; Banfi, J.; Basilico, N. Multirobot exploration of communication-restricted environments: A survey. IEEE Intell. Syst. 2017, 32, 48–57. [Google Scholar] [CrossRef]
  66. Gao, Y.; Wang, Y.; Zhong, X.; Yang, T.; Wang, M.; Xu, Z.; Wang, Y.; Xu, C.; Gao, F. Meeting-Merging-Mission: A Multi-robot Coordinate Framework for Large-Scale Communication-Limited Exploration. arXiv 2021, arXiv:2109.07764. [Google Scholar]
  67. Omohundro, S.M. Five Balltree Construction Algorithms; International Computer Science Institute: Berkeley, CA, USA, 1989. [Google Scholar]
  68. Boeing, G. Clustering to reduce spatial data set size. arXiv 2018, arXiv:1803.08101. [Google Scholar] [CrossRef] [Green Version]
  69. Bhatia, N. Survey of nearest neighbor techniques. arXiv 2010, arXiv:1007.0085. [Google Scholar]
  70. Hariz, F.; Souifi, H.; Leblanc, R.; Bouslimani, Y.; Ghribi, M.; Langin, E.; Mccarthy, D. Direct Georeferencing 3D Points Cloud Map Based on SLAM and Robot Operating System. In Proceedings of the 2021 IEEE International Symposium on Robotic and Sensors Environments (ROSE), Virtual Conference, 28–29 October 2021; pp. 1–6. [Google Scholar]
  71. Liu, W.; Li, Z.; Li, Y.; Sun, S.; Sotelo, M.A. Using weighted total least squares and 3-D conformal coordinate transformation to improve the accuracy of mobile laser scanning. IEEE Trans. Geosci. Remote Sens. 2019, 58, 203–217. [Google Scholar] [CrossRef]
  72. Janata, T.; Cajthaml, J. Georeferencing of multi-sheet maps based on least squares with constraints—First military mapping survey maps in the area of Czechia. Appl. Sci. 2020, 11, 299. [Google Scholar] [CrossRef]
  73. Yang, H. A dynamical perspective on point cloud registration. arXiv 2020, arXiv:2005.03190. [Google Scholar]
  74. Choi, S.; Zhou, Q.Y.; Koltun, V. Robust reconstruction of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5556–5565. [Google Scholar]
  75. Rusu, R.B.; Blodow, N.; Beetz, M. Fast point feature histograms (FPFH) for 3D registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 3212–3217. [Google Scholar]
  76. Shen, Z.; Liang, H.; Lin, L.; Wang, Z.; Huang, W.; Yu, J. Fast Ground Segmentation for 3D LiDAR Point Cloud Based on Jump-Convolution-Process. Remote Sens. 2021, 13, 3239. [Google Scholar] [CrossRef]
  77. Zhang, F.; Fang, J.; Wah, B.; Torr, P. Deep fusionnet for point cloud semantic segmentation. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2020; pp. 644–663. [Google Scholar]
  78. Fujita, K.; Okada, K.; Katahira, K. The Fisher Information Matrix: A Tutorial for Calculation for Decision Making Models, 2022.
  79. Pulli, K. Multiview registration for large data sets. In Proceedings of the Second International Conference on 3-d Digital Imaging and Modeling (Cat. No. pr00062), Ottawa, ON, Canada, 4–8 October 1999; pp. 160–168. [Google Scholar]
  80. Barczyk, M.; Bonnabel, S.; Goulette, F. Observability, Covariance and Uncertainty of ICP Scan Matching. arXiv 2014, arXiv:1410.7632. [Google Scholar]
  81. Maset, E.; Scalera, L.; Beinat, A.; Visintini, D.; Gasparetto, A. Performance Investigation and Repeatability Assessment of a Mobile Robotic System for 3D Mapping. Robotics 2022, 11, 54. [Google Scholar] [CrossRef]
Figure 1. Proposed framework deployed in agent R_i.
Figure 2. Real-world experimental setup. (a) Clearpath Jackal. (b) Path taken.
Figure 3. Experimental setup: solar farm, corresponding 3D map of the solar farm obtained from LOAM [18], and digital twin. (a) Aerial view. (b) 3D point cloud. (c) Digital twin.
Figure 4. Real-world experiment: the various phases of the proposed method for agent J1 (above) and J2 (below). The blue and red points correspond to the point clouds generated by J1 and J2, respectively. (a,d) LOAM initialization; (b,e) the minimal proximity search was successful, and an agent receives a spherized down-sampled map from the other agent and registers it in its own map; (c,f) each agent receives the full map from the other agent, as they are within short communication range, and aligns the incoming map with its own map using the transformation acquired from the spherized registration.
Figure 5. (a) Waypoints used for navigation; (b) 3D mesh of the digital twin; (c) 3D point cloud generated from the 3D mesh.
Figure 6. Maps, from the same viewpoint, generated with varying maximum LiDAR ranges L_max. (a) L_max = 20 m. (b) L_max = 40 m. (c) L_max = 80 m.
Figure 7. The effect of erroneous LiDAR measurements. The red circles represent the expected correspondences between the generated map and the ground truth. (Left): the LOAM-mapping result of an agent. (Right): the ground-truth 3D model.
Figure 8. The results of an isolated iteration of the simulated experiments with N_a = 3. (a) Merged map. (b) Path taken.
Figure 9. RMSE and fitness plots for varying numbers of agents N_a across different LiDAR rates. (a) RMSE. (b) Fitness.
Figure 10. Uncertainty and time plots for varying numbers of agents N_a. (a) det(C). (b) Time taken.
Figure 11. (a) The variation of the mean density of the merged map over N_a across different LiDAR rates; (b) the surface density distribution across the point cloud in Figure 8a; (c) the histogram of the surface density distribution for the point cloud in Figure 8a.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
