Author Contributions
Conceptualization, W.W. and H.Z.; methodology, W.W. and C.Z.; software, W.W.; validation, W.W., C.Z. and H.Z.; formal analysis, W.W.; investigation, W.W.; resources, C.Z.; data curation, W.W.; writing—original draft preparation, W.W.; writing—review and editing, C.Z.; visualization, W.W.; supervision, C.Z. and H.Z.; project administration, C.Z.; funding acquisition, C.Z. All authors have read and agreed to the published version of the manuscript.
Figure 1.
Datasets collected from different perspectives. Lidar systems are deployed at different sites to scan the same building, and the datasets collected at each site differ. In general, the farther apart the sites are, the less overlap there is likely to be between the collected datasets.
Figure 2.
Performance comparison of different algorithms. We compare the registration methods comprehensively, using processing time and alignment accuracy as the judging criteria. While some methods are faster, others are more accurate; collectively, our proposed approach strikes a balance between the two and performs well on both.
Figure 3.
System workflow diagram. The system consists of four main components: (1) Data acquisition. In addition to the drone-borne Lidar system and the TLS system, we also obtain data from third-party sources such as Semantic3D; (2) Data type. The data packages used include point cloud datasets and RGB image sets; the different colors of the gradient bar shown in the point cloud dataset represent different altitudes; (3) Feature matching point pairs. A mixed set of feature point pairs is established through geometric matching and photometric matching; (4) Data processing. By optimizing random sampling and evaluation estimation, the RANSAC algorithm finally achieves the target registration.
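Since the experiments run on PCL (see Table 2), the RANSAC stage in step (4) could look roughly like the sketch below. It uses PCL's SampleConsensusPrerejective, a RANSAC variant with pose prerejection; this is a minimal illustration, not the paper's implementation, and all parameter values are placeholders to be tuned per dataset.

```cpp
// Minimal sketch of a RANSAC-style coarse registration in PCL.
// All parameter values are illustrative, not the paper's tuned settings.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/sample_consensus_prerejective.h>

using PointT   = pcl::PointXYZ;
using FeatureT = pcl::FPFHSignature33;

Eigen::Matrix4f ransacAlign(
    const pcl::PointCloud<PointT>::Ptr& source,
    const pcl::PointCloud<PointT>::Ptr& target,
    const pcl::PointCloud<FeatureT>::Ptr& source_feats,
    const pcl::PointCloud<FeatureT>::Ptr& target_feats)
{
  pcl::SampleConsensusPrerejective<PointT, PointT, FeatureT> align;
  align.setInputSource(source);
  align.setSourceFeatures(source_feats);
  align.setInputTarget(target);
  align.setTargetFeatures(target_feats);
  align.setMaximumIterations(50000);         // RANSAC hypothesis budget
  align.setNumberOfSamples(3);               // points drawn per hypothesis
  align.setCorrespondenceRandomness(5);      // k nearest features considered
  align.setSimilarityThreshold(0.9f);        // polygonal prerejection
  align.setMaxCorrespondenceDistance(0.05f); // inlier distance (metres)
  align.setInlierFraction(0.25f);            // accept if >= 25% inliers

  pcl::PointCloud<PointT> aligned;
  align.align(aligned);                      // runs the sampling loop
  return align.getFinalTransformation();     // identity if alignment failed
}
```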
Figure 4.
The whole process of 3D registration. The first step is to search for and determine feature points in the source and target point clouds based on their initial positions. The matching relationship between the feature points is then determined from their local features. Lastly, the corresponding (same-name) feature points are transformed to the same position, bringing the point clouds into a common coordinate system. (■: Source Points, ■: Target Points, ■: Source Keypoints, ■: Target Keypoints, : Correspondences).
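The three steps in Figure 4 map naturally onto PCL's correspondence API. Below is a minimal sketch, assuming keypoints and their FPFH descriptors have already been extracted (the helper signature and threshold are illustrative, not the paper's settings): descriptors are matched by nearest neighbour, inconsistent pairs are rejected by RANSAC, and the rigid transform is recovered by SVD.

```cpp
// Sketch of Figure 4's pipeline: match keypoints, reject outlier pairs,
// estimate the rigid transform. Inputs are assumed precomputed.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/correspondence.h>
#include <pcl/registration/correspondence_estimation.h>
#include <pcl/registration/correspondence_rejection_sample_consensus.h>
#include <pcl/registration/transformation_estimation_svd.h>

Eigen::Matrix4f alignKeypoints(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& source_keys,
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& target_keys,
    const pcl::PointCloud<pcl::FPFHSignature33>::Ptr& source_feats,
    const pcl::PointCloud<pcl::FPFHSignature33>::Ptr& target_feats)
{
  // Step 2: match keypoints by nearest neighbour in descriptor space.
  pcl::registration::CorrespondenceEstimation<pcl::FPFHSignature33,
                                              pcl::FPFHSignature33> est;
  est.setInputSource(source_feats);
  est.setInputTarget(target_feats);
  pcl::CorrespondencesPtr all(new pcl::Correspondences);
  est.determineCorrespondences(*all);

  // Reject geometrically inconsistent matches with RANSAC.
  pcl::registration::CorrespondenceRejectorSampleConsensus<pcl::PointXYZ> rej;
  rej.setInputSource(source_keys);
  rej.setInputTarget(target_keys);
  rej.setInlierThreshold(0.05);   // metres; tune per dataset
  rej.setInputCorrespondences(all);
  pcl::Correspondences inliers;
  rej.getCorrespondences(inliers);

  // Step 3: estimate the rigid transform from the surviving pairs.
  pcl::registration::TransformationEstimationSVD<pcl::PointXYZ,
                                                 pcl::PointXYZ> svd;
  Eigen::Matrix4f transform = Eigen::Matrix4f::Identity();
  svd.estimateRigidTransformation(*source_keys, *target_keys,
                                  inliers, transform);
  return transform;  // apply with pcl::transformPointCloud
}
```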
Figure 5.
Feature matching point set. The NARF and FPFH algorithms are used, respectively, to extract 3D feature points from the laser point cloud data in the dataset, and the set of feature matching point pairs is established with a local feature description method. For the RGB images, 3D feature points are extracted using SIFT together with bilinear interpolation, which in turn generates the set of point pairs.
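As one concrete instance of the geometric channel in Figure 5, FPFH descriptors can be computed in PCL as sketched below. This covers only the FPFH path; NARF keypoint detection and the SIFT/bilinear-interpolation image path are omitted, and the search radii are illustrative assumptions rather than the paper's values.

```cpp
// Minimal FPFH descriptor extraction with PCL; radii are placeholders.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>
#include <pcl/search/kdtree.h>

pcl::PointCloud<pcl::FPFHSignature33>::Ptr computeFPFH(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
{
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
      new pcl::search::KdTree<pcl::PointXYZ>);

  // Surface normals are required before FPFH can be evaluated.
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setSearchMethod(tree);
  ne.setRadiusSearch(0.05);                   // normal-estimation radius
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  ne.compute(*normals);

  // FPFH: 33-bin histogram describing the local normal geometry.
  pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
  fpfh.setInputCloud(cloud);
  fpfh.setInputNormals(normals);
  fpfh.setSearchMethod(tree);
  fpfh.setRadiusSearch(0.1);                  // must exceed the normal radius
  pcl::PointCloud<pcl::FPFHSignature33>::Ptr feats(
      new pcl::PointCloud<pcl::FPFHSignature33>);
  fpfh.compute(*feats);
  return feats;
}
```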
Figure 6.
Scene classification. The main process and basic working principle of scene classification are introduced.
Figure 7.
Acquisition system and on-site environment. (a) UAV-borne Lidar scanning system; (b) the location of the National Biathlon Ski Center for the Beijing Winter Olympics; (c) the collected color data.
Figure 8.
Experimental sample selection. In addition to showing the full picture of the six sets of data, we also selected specific small samples, viewed from specific perspectives, for testing. Each small sample is displayed in turn with its laser point information and RGB image information; the colors of the gradient bar vary with height.
Figure 9.
Comparison of registration effects on third-party data. On samples selected from third-party databases (Bird, Dom, and Sg27), we performed point cloud registration tests using RANSAC, FGR, NDT, GICP, NICP, and our proposed method. The dotted box marks the image area with an obvious registration deviation, and the solid square shows an enlarged view of the local deviation.
Figure 10.
Registration effect comparison. Registration errors on Bird, Dom, and Sg27 for the different methods, measured using MSE, RMSE, and MAE as indicators.
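The MSE, RMSE, and MAE indicators used in Figures 10 and 12 (and Tables 3 and 4) are not restated in the captions; assuming the conventional definitions over the $N$ evaluated pairs, with estimated rotation angles $\hat{\theta}_i$ and ground truth $\theta_i$:

```latex
\mathrm{MSE}(R)  = \frac{1}{N}\sum_{i=1}^{N}\bigl(\hat{\theta}_i-\theta_i\bigr)^{2},
\qquad
\mathrm{RMSE}(R) = \sqrt{\mathrm{MSE}(R)},
\qquad
\mathrm{MAE}(R)  = \frac{1}{N}\sum_{i=1}^{N}\bigl|\hat{\theta}_i-\theta_i\bigr|
```

The same forms apply componentwise to the translation vector for MSE(t), RMSE(t), and MAE(t). The rotation columns of Tables 3 and 4 are consistent with this reading (RMSE(R) = √MSE(R) row by row), while the translation columns carry the unstated scale factor marked "(×)" in the headers.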
Figure 11.
Comparison of registration effects for self-built data. On samples selected from self-built databases (SJC, CSC, and BSC), we performed point cloud registration tests using RANSAC, FGR, NDT, GICP, NICP, and our proposed method. Two types of boxes appear in the figure: the dotted box marks a place with obvious registration deviation, and the solid rectangle shows an enlarged view of the local deviation.
Figure 12.
Registration effect comparison. The errors of SJC, CSC, and BSC registered using the different methods, measured with the MSE, RMSE, and MAE metrics.
Figure 13.
Registration time statistics. Time taken by each method to process the data.
Table 1.
Lidar system parameters. In the process of acquisition, various parameters of the Lidar system are taken into consideration.
| Category | ULS | TLS |
|---|---|---|
| Model | DJI M600 Pro + UAV-1 series | Faro Focus 3D X130 |
| Measuring Distance | 3–1050 m | 0.6–120 m |
| Scan Angle | 360° | 360° (horizontal), 300° (vertical) |
| Measurement Rate | 10–200 lines/s | 976,000 points/s |
| Angular Resolution | 0.006° | 0.009° |
| Accuracy | ±5 mm | ±2 mm |
| Ambient Temperature | −10 to +40 °C | −5 to +40 °C |
| Wavelength | 1550 nm | 1550 nm |
Table 2.
Platform configuration. A suitable test platform is selected whose software and hardware meet the requirements of the experiment.
| Category | Item | Configuration |
|---|---|---|
| Software | Operating System | Win 11 |
| | Point Cloud Processing Library | PCL 1.11.1 |
| | Support Platform | Visual Studio 2019 |
| Hardware | CPU | Intel(R) Core(TM) i5-9400F |
| | Memory | DDR4 32 GB |
| | Graphics Card | Nvidia GTX 1080 Ti 11 GB |
Table 3.
Evaluation of results from third-party databases. Based on third-party test samples, we evaluated the performance of our method and other similar methods. (The ↓ indicates that the smaller the value, the better; bold indicates the best performance; underline indicates the second-best performance.)
Rotation errors:

| Methods | MSE(R)↓ Bird | MSE(R)↓ Dom | MSE(R)↓ Sg27 | RMSE(R)↓ Bird | RMSE(R)↓ Dom | RMSE(R)↓ Sg27 | MAE(R)↓ Bird | MAE(R)↓ Dom | MAE(R)↓ Sg27 |
|---|---|---|---|---|---|---|---|---|---|
| RANSAC | 91.867 | 174.339 | 95.075 | 9.584 | 13.203 | 9.750 | 6.483 | 8.586 | 6.580 |
| FGR | 231.879 | 74.285 | 145.049 | 15.227 | 8.618 | 12.043 | 9.750 | 5.919 | 7.913 |
| NDT | 15.923 | 6.379 | 208.834 | 3.990 | 2.525 | 14.451 | 3.179 | 2.253 | 9.310 |
| GICP | 4.706 | 21.739 | 118.729 | 2.169 | 4.662 | 10.896 | 2.006 | 3.586 | 7.246 |
| NICP | 2.844 | 2.699 | 2.782 | 1.686 | 1.642 | 1.668 | 1.626 | 1.586 | 1.609 |
| OURS | 1.774 | 1.859 | 6.556 | 1.332 | 1.363 | 2.561 | 1.189 | 1.253 | 2.276 |

Translation errors:

| Methods | MSE(t) (×)↓ Bird | MSE(t) (×)↓ Dom | MSE(t) (×)↓ Sg27 | RMSE(t) (×)↓ Bird | RMSE(t) (×)↓ Dom | RMSE(t) (×)↓ Sg27 | MAE(t) (×)↓ Bird | MAE(t) (×)↓ Dom | MAE(t) (×)↓ Sg27 |
|---|---|---|---|---|---|---|---|---|---|
| RANSAC | 0.006 | 0.037 | 0.007 | 0.077 | 0.192 | 0.083 | 0.062 | 0.116 | 0.065 |
| FGR | 0.002 | 0.008 | 0.002 | 0.048 | 0.093 | 0.048 | 0.032 | 0.071 | 0.036 |
| NDT | 0.002 | 0.003 | 0.032 | 0.040 | 0.051 | 0.179 | 0.026 | 0.038 | 0.122 |
| GICP | 0.001 | 0.002 | 0.002 | 0.035 | 0.041 | 0.042 | 0.025 | 0.032 | 0.032 |
| NICP | 0.031 | 0.006 | 0.003 | 0.176 | 0.079 | 0.054 | 0.105 | 0.059 | 0.041 |
| OURS | 0.001 | 0.001 | 0.001 | 0.023 | 0.037 | 0.023 | 0.018 | 0.030 | 0.015 |
Table 4.
Evaluation of the results of the self-built database. On the basis of our self-built test samples, we evaluated the performance of our method and other similar methods. (A smaller value indicates better performance, a bolded value indicates the best performance, and an underlined value indicates the second-best performance.)
Rotation errors:

| Methods | MSE(R)↓ SJC | MSE(R)↓ CSC | MSE(R)↓ BSC | RMSE(R)↓ SJC | RMSE(R)↓ CSC | RMSE(R)↓ BSC | MAE(R)↓ SJC | MAE(R)↓ CSC | MAE(R)↓ BSC |
|---|---|---|---|---|---|---|---|---|---|
| RANSAC | 148.107 | 76.566 | 47.062 | 12.169 | 8.750 | 6.860 | 7.986 | 5.996 | 4.890 |
| FGR | 41.707 | 134.866 | 32.836 | 6.458 | 11.613 | 5.730 | 4.653 | 7.663 | 4.223 |
| NDT | 17.813 | 2.986 | 16.496 | 4.221 | 1.728 | 4.061 | 3.319 | 1.677 | 3.223 |
| GICP | 2.947 | 34.933 | 2.596 | 1.716 | 5.910 | 1.611 | 1.653 | 4.330 | 1.556 |
| NICP | 9.867 | 1.993 | 63.956 | 3.141 | 1.411 | 7.997 | 2.673 | 1.328 | 5.568 |
| OURS | 1.667 | 1.930 | 2.436 | 1.291 | 1.389 | 1.561 | 1.013 | 1.296 | 1.506 |

Translation errors:

| Methods | MSE(t) (×)↓ SJC | MSE(t) (×)↓ CSC | MSE(t) (×)↓ BSC | RMSE(t) (×)↓ SJC | RMSE(t) (×)↓ CSC | RMSE(t) (×)↓ BSC | MAE(t) (×)↓ SJC | MAE(t) (×)↓ CSC | MAE(t) (×)↓ BSC |
|---|---|---|---|---|---|---|---|---|---|
| RANSAC | 0.195 | 0.066 | 0.006 | 4.413 | 2.567 | 0.080 | 1.642 | 0.709 | 0.054 |
| FGR | 0.039 | 8.464 | 0.002 | 1.989 | 29.093 | 0.044 | 0.445 | 2.487 | 0.031 |
| NDT | 0.032 | 0.019 | 0.001 | 1.798 | 1.394 | 0.026 | 0.202 | 0.284 | 0.018 |
| GICP | 0.004 | 0.049 | 0.001 | 0.636 | 2.207 | 0.032 | 0.759 | 0.540 | 0.027 |
| NICP | 0.012 | 0.004 | 0.034 | 1.097 | 0.176 | 0.185 | 0.815 | 0.145 | 0.123 |
| OURS | 0.001 | 0.003 | 0.001 | 0.018 | 0.179 | 0.023 | 0.004 | 0.120 | 0.015 |
Table 5.
Time consumption comparison. Each method is used in turn to register the selected samples; the respective times are recorded and analyzed. (Bold indicates the best performance; underline indicates the second-best performance.)
All times in seconds.

| Methods | Bird | Dom | Sg27 | SJC | CSC | BSC |
|---|---|---|---|---|---|---|
| RANSAC | 46.293 | 31.584 | 65.791 | 43.626 | 55.418 | 30.084 |
| FGR | 76.575 | 60.527 | 78.614 | 64.571 | 50.008 | 54.225 |
| NDT | 89.402 | 124.051 | 166.569 | 103.366 | 88.629 | 63.748 |
| GICP | 94.035 | 84.504 | 108.608 | 77.547 | 80.644 | 66.313 |
| NICP | 80.224 | 35.271 | 70.547 | 48.514 | 61.522 | 38.693 |
| OURS | 48.669 | 28.053 | 62.181 | 46.296 | 48.158 | 25.466 |