Article

Research on Lightweight Method of Segment Beam Point Cloud Based on Edge Detection Optimization

1 China Harbour Engineering Company Ltd., Beijing 100027, China
2 School of Highway, Chang’an University, Xi’an 710064, China
* Author to whom correspondence should be addressed.
Buildings 2024, 14(5), 1221; https://doi.org/10.3390/buildings14051221
Submission received: 8 March 2024 / Revised: 21 April 2024 / Accepted: 22 April 2024 / Published: 25 April 2024
(This article belongs to the Section Building Structures)

Abstract

To reduce the loss of appearance contours caused by point cloud lightweighting, this paper takes the laser point cloud data of expressway viaduct segment beams as a sample. After comparing downsampling algorithms from multiple perspectives, the voxel grid method is selected as the theoretical basis of the research. By exploiting the normal vector characteristics of the laser point cloud, the top-surface edge data are extracted and fused with the voxel grid method to establish an optimized point cloud lightweighting algorithm. The research in this paper shows that the voxel grid method performs better than the farthest point sampling method and the curvature downsampling method in retaining top-surface data, reducing computation time, and preserving edge contours. Moreover, the edge-optimized voxel grid method reduces the average offset of the geometric contour from 2.235 mm to 0.664 mm, a markedly higher retention. In summary, the edge-optimized voxel grid method outperforms existing methods in point cloud lightweighting.

1. Introduction

With the informatization of engineering technology, laser scanning has become an important means of data acquisition in engineering. The laser point cloud data it produces can accurately express the geometric appearance of a building. However, the massive amount of data generated by precision scanning seriously reduces the efficiency of related research and is not conducive to in-depth analysis and visual processing of the data. Existing research therefore generally applies lightweight processing to the laser point cloud. This processing is bound to cause a certain degree of data loss; for example, building surface contours that obey geometric rules may be oversimplified, which affects subsequent research and analysis. It is therefore necessary to optimize the lightweight processing of the point cloud and improve the retention of the laser point cloud’s geometric contours.
The existing lightweight point cloud methods mainly fall into four categories. Feature-computation-based methods identify the feature points of the point cloud model through a large number of geometric operations and then delete non-feature points proportionally, reducing the data volume and the burden of processing and storage while maintaining the important features of the model. Grid-based methods divide the point cloud data into regular grids and represent the points within each grid by its center point or a representative point; the degree of simplification is controlled by adjusting the grid size. This approach is simple and effective but may lose some details. Voxel-based methods convert the point cloud data into a voxel representation, with each voxel containing a certain number of points; the characteristics of each voxel are analyzed, and voxels are screened and merged according to the importance of their characteristics so as to achieve lightweighting. Learning-based methods train a neural network to automatically extract the key features of a point cloud and generate a lightweight representation; they can learn more complex feature representations but require a great deal of training data and computational resources.
At present, there have been a number of studies on the lightweight treatment of laser point clouds. Yuan et al. [1] proposed a simplified method of conformal geometric algebra to optimize point clouds by means of distance calculation. Wei et al. [2] realized point cloud simplification by constructing co-occurrence histograms of the curvature, which can retain more local geometric features. Abzal et al. [3] proposed a pretreatment method that filters out unnecessary point cloud information in the data acquisition stage. Gan et al. [4] proposed an FPFH method based on PCA and achieved good results in point cloud lightweighting. Wushour et al. [5] eliminated the faults caused by the uneven sampling of point clouds. Ding et al. [6] proposed a point cloud simplification method based on visual feature change and used random sorting to carry out the simplification. Su et al. [7,8] applied the self-adaptive Hilbert curve extension to point cloud simplification and proved the effectiveness of the method on actual surface point cloud data through algorithm experiments. Dassi et al. [9] proposed an iterative grid simplification algorithm that minimizes a cost function and analyzes point cloud sampling through iteration; the algorithm is able to retain the distribution of the raw data. Li et al. [10] proposed a point cloud simplification method based on quadric error metrics and curvature information. Zhang and Lan [11] introduced a feature measure of the vertex to improve the quadric error metric algorithm, which can reduce the amount of data while retaining the structural features of the point cloud. Zhang Dehai et al. [12] carried out research on point cloud pretreatment technology and put forward a point cloud sampling criterion that meets the principles of high precision, fast speed, and adaptability; the method is suitable for reconstruction with high geometric accuracy requirements. Xiao Zhengtao et al. [13] conducted an in-depth study of the point cloud downsampling algorithm in the three-dimensional object recognition module of OpenCV and proposed an improved voxel grid downsampling algorithm that refines the grid attribution relationship for dividing three-dimensional point clouds. Chen Yuanxiang et al. [14] divided the point cloud into ground and non-ground point clouds and used redundancy elimination to sparsely sample them with different intensities. Yan Jianhua et al. [15] proposed an algorithm based on the maximum allowable error to determine the sampling grid size of scattered point clouds, achieving good results in practice. Fu Wei et al. [16] proposed a simplification algorithm based on the fusion of local and global point cloud features and sampled the point cloud with a spatial voxelization method, obtaining a good simplification effect. Yuan Hua et al. [17] completed point cloud data simplification by combining surface normal vector estimation with the voxelized grid method. Chen Yonghui et al. [18] proposed a feature-sensitive point cloud resampling algorithm that introduces normal weights to realize uniform, feature-preserving downsampling. Li Guojun et al. [19] proposed a non-uniform sampling algorithm for noisy point clouds based on Delaunay triangulation; the simplified point cloud is suitable for triangulation-based surface reconstruction. Wei Lei et al. [20] proposed an LoD optimization model based on predicted residues, which solved the lack of optimization in the distance-based level of detail used in point cloud compression. Fan Ran et al. [21] proposed a point cloud simplification algorithm based on the equilibrium distribution of Poisson disk sampling, which can maintain the sharp edge characteristics of point clouds.
To sum up, current research mainly focuses on the application of laser point clouds in the alignment analysis of the bridge formation stage; by comparison, their application in the alignment analysis of segment beam prefabrication is still insufficient. In view of this deficiency, it is particularly urgent to strengthen the prefabricated alignment analysis of segment beams with laser point clouds. Therefore, this paper takes the prefabrication laser point cloud data of segment beams as a case: the point cloud data of a segmental girder are collected, the top plate, two flanks, web plate, bottom plate, and support of the segmental box girder are carefully scanned, and a point cloud model is established after processing. Several typical downsampling algorithms are compared using the collected point cloud data, and the voxel grid method is determined as the basic theory of lightweight processing. Taking the point cloud top-surface contour as the main control index and using the normal vector angle features of the geometric edges of the point cloud model to extract the top-surface edge data, point cloud edge extraction theory and the voxel grid method are integrated to establish an optimized lightweight point cloud algorithm, which is then used to process the original laser point cloud data. The point cloud contour offsets before and after optimization are compared and analyzed, and the method is verified with actual engineering data.

2. Lightweight Point Cloud Basic Algorithm Comparison

2.1. Algorithm Introduction

In existing studies, a common idea for lightweighting a point cloud is to extract some feature points from the original point cloud data to replace the original point cloud. This idea is called downsampling, meaning that a sampling method is used to reduce the amount of laser point cloud data. Three classical downsampling methods were selected for comparison in this study; they are described as follows:
(1)
Voxel grid downsampling method
A three-dimensional voxel grid is created over the point cloud data, and the center of gravity of each voxel is used to approximate the other points in the voxel; this method is called voxel grid downsampling. Its main idea is as follows: the whole point cloud is divided into multiple voxel grids of the same size, usually cubes with side length L, so that all points are contained in the grid. The center of gravity of the points within each grid is calculated and then used to replace the remaining points of the grid, thereby reducing the number of points. The schematic of the voxel grid method is shown in Figure 1:
To ensure that the overall characteristics of the point cloud remain roughly unchanged, the selection of the grid size is very important. The center of gravity of the points in a grid reflects their distribution and therefore preserves the characteristics of the point cloud to a large extent. The center of gravity is calculated as [17]

$$x_{cen} = \frac{1}{h}\sum_{i=1}^{h} x_i, \qquad y_{cen} = \frac{1}{h}\sum_{i=1}^{h} y_i, \qquad z_{cen} = \frac{1}{h}\sum_{i=1}^{h} z_i \tag{1}$$

In Formula (1), h denotes the number of points contained in a grid (voxel) during voxel grid filtering.
The principle of the voxel grid method is simple, the algorithm is easy to implement, and it has a good simplification effect: it can simplify the point cloud and improve processing efficiency without establishing a large number of coordinate relations. However, the voxel grid method uses centroid points to replace the other points in each voxel. When processing the top edge, edge points may be replaced by centroid points, resulting in the loss of edge data and reduced accuracy of feature representation, which may cause large errors in the final results of some point cloud processing tasks.
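To make the procedure concrete, a minimal NumPy sketch of voxel grid downsampling is given below. The function name and the use of np.unique to group points by voxel index are implementation choices for this illustration, not part of the paper’s algorithm description.

```python
import numpy as np

def voxel_grid_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Replace the points inside each cubic voxel (edge length L) by their centroid."""
    # Integer voxel index of every point along each axis.
    idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(np.int64)
    # Group points that share the same voxel index.
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    # Accumulate per-voxel coordinate sums, then divide by the point count to
    # obtain the center of gravity of each occupied voxel (Formula (1)).
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]
```

Larger voxel sizes yield stronger simplification; the grid edge length L directly controls the downsampling rate.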
(2)
The farthest point sampling method
The farthest point sampling method is a very commonly used sampling algorithm, widely adopted because it guarantees uniform sampling. The algorithm steps are as follows:
  • The input point cloud has N points; select a point P0 from the point cloud as the starting point, obtaining the sampling point set S = {P0};
  • Calculate the distance from every point to P0 to form the N-dimensional array L, select the point corresponding to the maximum value as P1, and update the sampling point set to S = {P0, P1};
  • Calculate the distance of all points to P1; for each point Pi, if its distance to P1 is less than Li, update Li = d(Pi, P1), so that the array L always stores the closest distance from each point to the sampling point set S;
  • Select the point corresponding to the maximum value in L as P2, and update the sampling point set to S = {P0, P1, P2};
  • Repeat steps 2–4 until the N′ target sampling points are obtained (a minimal implementation sketch is given below).
The farthest point sampling method is shown in Figure 2.
The farthest point sampling method can select representative sampling points without prior information and maintains data integrity when downsampling, so the point cloud can still accurately represent the shape and features of the original data. It has the advantages of strong representativeness, good stability, and high flexibility, but also some disadvantages, such as a large amount of calculation and sensitivity to the initial point and to noise.
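The following sketch, under the same NumPy conventions as above, implements the steps listed; the seed parameter is an assumption added so that the sensitive choice of the starting point P0 is reproducible.

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Iteratively pick the point farthest from the already-selected sample set."""
    rng = np.random.default_rng(seed)
    selected = np.empty(n_samples, dtype=np.int64)
    selected[0] = rng.integers(points.shape[0])  # starting point P0
    # dist[i] always holds the distance from point i to its nearest selected
    # sample (the array L of the algorithm description).
    dist = np.linalg.norm(points - points[selected[0]], axis=1)
    for k in range(1, n_samples):
        selected[k] = np.argmax(dist)            # farthest remaining point
        new_dist = np.linalg.norm(points - points[selected[k]], axis=1)
        dist = np.minimum(dist, new_dist)        # update nearest-sample distances
    return points[selected]
```

Each iteration scans all N points, so the cost is O(N · N′), which reflects the large calculation amount noted above.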
(3)
The curvature downsampling method
The curvature downsampling method dynamically adjusts the sampling density according to curvature changes: where the point cloud curvature is large, the number of sampling points is increased correspondingly, and where the curvature fluctuation is small, sampling is kept relatively uniform. The principle is as follows:
  • Calculate the K-neighborhood of each point in the original point cloud, and then calculate the normal vector angle between the point and its neighborhood; the larger the curvature, the larger the angle value, and using the angle as a curvature proxy improves the calculation efficiency;
  • Establish two collections of regional points, namely region A with obvious features and region B without obvious features;
  • Set an angle threshold; when the neighborhood angle value of a point is greater than the threshold, the point is assigned to the obvious-feature region; otherwise, it is regarded as belonging to the non-obvious-feature region;
  • Set the target sampling number to S and the sampling uniformity to U; the obvious-feature and non-obvious-feature regions are then sampled as S × (1 − U) and S × U, respectively (a sketch under these definitions follows the next paragraph).
By analyzing the curvature information of each point, the curvature downsampling method preserves more points in regions with larger curvature and samples sparsely in regions with smaller curvature, thereby simplifying the point cloud data. The method has the advantages of good feature retention, strong adaptability, and high flexibility, and the local distribution of sampling points is relatively uniform across locations of different curvature. However, its computational complexity is relatively high: for large laser point clouds the computation time may be long, and the method is sensitive to noise and to the difficulty of threshold selection.
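A compact sketch of this region-split sampling is given below; it assumes per-point normals have already been estimated, and the neighborhood size, angle threshold, and uniformity values are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def curvature_downsample(points, normals, n_samples, uniformity=0.3,
                         k=16, angle_thresh=0.2, seed=0):
    """Split points into feature / non-feature regions by mean normal angle,
    then draw S*(1-U) samples from the feature region and S*U from the rest."""
    tree = cKDTree(points)
    _, nbr = tree.query(points, k=k + 1)  # k nearest neighbors (column 0 is the point itself)
    # Mean angle between each point's normal and its neighbors' normals
    # (the absolute cosine ignores inconsistent normal orientations).
    cos = np.einsum('ij,ikj->ik', normals, normals[nbr[:, 1:]])
    mean_angle = np.arccos(np.clip(np.abs(cos), 0.0, 1.0)).mean(axis=1)
    feature = np.flatnonzero(mean_angle > angle_thresh)   # region A: obvious features
    flat = np.flatnonzero(mean_angle <= angle_thresh)     # region B: no obvious features
    rng = np.random.default_rng(seed)
    n_feat = min(len(feature), int(n_samples * (1 - uniformity)))
    n_flat = min(len(flat), n_samples - n_feat)
    keep = np.concatenate([rng.choice(feature, n_feat, replace=False),
                           rng.choice(flat, n_flat, replace=False)])
    return points[keep]
```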

2.2. Algorithm Comparison and Selection

The data processing effects of the three downsampling algorithms are compared in terms of visualization and numerical analysis. The voxel grid method, the farthest point sampling method, and the curvature downsampling method were used to process the denoised segment beam point cloud data, reducing the amount of data from 1 million to 500,000 points. The downsampling results are visualized in Figure 3, where the point cloud is color-rendered according to elevation.
As can be seen from Figure 3, there is no obvious difference among the processing results of the three downsampling methods, and no visible data are missing on the surface of the segment beam model, indicating that all three algorithms meet the basic requirements of laser point cloud simplification; however, the results should also be analyzed numerically.
Firstly, comparative indicators are determined based on actual needs. The top surface data of segment beams are important engineering data. To ensure that the lightweight point cloud data can still represent the top-surface features well, the concept of top surface integrity is proposed and used as a comparison indicator. It is defined as follows: after lightweight processing, the amount of data in the top-surface point cloud dataset is $P_A$, the corresponding amount in the original data is $P_B$, and the top surface integrity $W_t$ is the ratio of $P_A$ to $P_B$. The integrity of the top surface reflects the degree of restoration of the object surface after data processing, that is
$$W_t = \frac{P_A}{P_B} \tag{2}$$
Among them, the segmentation of the top surface of the point cloud adopts the Random Sample Consensus (RANSAC) algorithm, which can obtain mathematical model parameters from data containing outliers through iterative calculation.
The basic principle of RANSAC is as follows. Assume that the dataset composed of all points is Q; a sample set P and a minimum sampling set are selected from Q, where the minimum number of samples n in the sampling set is determined by the parameters of the initialization model. If the number of samples in P is greater than n, random sampling is conducted on the sample set; the samples conforming to the minimum sampling set form a subset S, which is used to build the initialization model M. An objective function C is set, the distance between each remaining point and model M is calculated and compared with a preset distance threshold, and points below the threshold are classified as interior points (inliers). By traversing and judging all the remaining data, an interior point set S that meets the conditions is obtained. If the number of interior points is greater than the standard value N, a point set model that meets the conditions is considered found, and the interior point set is fitted again, for example by least squares, to form a new model M. When using RANSAC for planar model extraction, the algorithm randomly selects three points for model fitting, calculates the plane parameters, and continuously optimizes them to ultimately find the planar model.
Due to the regular geometric appearance of segmental beams, the mathematical model for fitting the top surface can be set as a spatial plane model, namely
$$Ax + By + Cz + D = 0 \tag{3}$$
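For illustration, the sketch below segments the top plane with Open3D’s built-in RANSAC plane fitting, which implements exactly this three-point hypothesis scheme; the 5 mm distance threshold and the variable names are assumptions for the example, not values reported in the paper.

```python
import numpy as np
import open3d as o3d

# points: (N, 3) NumPy array holding the denoised segment beam cloud.
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

# Fit Ax + By + Cz + D = 0 with RANSAC: each hypothesis is a plane through
# 3 random points; inliers lie within the distance threshold of the plane.
plane_model, inlier_idx = pcd.segment_plane(
    distance_threshold=0.005,  # 5 mm tolerance (assumed)
    ransac_n=3,
    num_iterations=1000,
)
A, B, C, D = plane_model
top_surface = pcd.select_by_index(inlier_idx)  # extracted top-surface subset
```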
For comparative analysis, it is also necessary to define the downsampling rate $U_t$, the ratio of the downsampled data volume to the original data volume, which characterizes the degree of simplification of the laser point cloud data:

$$U_t = \frac{Q_B}{Q_A} \tag{4}$$

Among them, $Q_A$ is the amount of data before downsampling, and $Q_B$ is the amount of data after downsampling.
The downsampling rates were set to 25%, 50%, and 75%. The main normal proportion, top surface integrity, and computation time of the point cloud data processed by the three downsampling methods were compared; the results are shown in Figure 4.
From Figure 4, it can be seen that, under the three downsampling rates, the top surface integrity of the voxel grid method reaches 51.74%, 70.69%, and 96.63%, respectively, much higher than that of the other two methods. At the same time, the voxel grid method keeps the computation time within 6 s, while the other two methods take more than 10 s, indicating that the voxel grid method has higher computational efficiency.
In summary, the three downsampling algorithms were compared in terms of visualization and numerical analysis. Overall, the voxel grid method is chosen as the fundamental theory for lightweight laser point cloud processing in this research.

3. Voxel Grid Method Based on Edge Detection Optimization

3.1. Algorithm Optimization Ideas

Although the voxel grid method ensures the highest data integrity of the top point cloud, it uses centroid points to replace the other points within each grid; when processing the top edge, edge points may be replaced by adjacent centroid points, resulting in missing edge data and a reduction in the top geometric size [22]. In some point cloud processing tasks, such changes in the dimensions of the three-dimensional point cloud may cause significant errors in the final results. Therefore, constraints must be added to voxel grid downsampling to ensure that the edge deviation of the lightweight point cloud meets the requirements. This article takes the top-surface data of the segment beam laser point cloud as an example and constrains the edge contour data of the top surface during lightweighting. The specific steps are as follows:
(1)
Firstly, segment the point cloud top surface data of the segment beam to be processed and set it as the indicator monitoring plane;
(2)
Extract the contour edges of the top point cloud, set them as isolation points, and create indexes;
(3)
Use the voxel grid method for lightweight processing to reduce the amount of point cloud data. When a voxel contains isolation points, they are excluded via the index so that the contour edge points are not replaced by adjacent centroid points.
The optimized algorithm process is shown in Figure 5.
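A minimal NumPy sketch consistent with steps (1)–(3) is given below. It assumes a boolean edge mask marking the indexed isolation points (obtained with the edge detection of Section 3.2) and keeps those points verbatim while the rest of the cloud is voxel-downsampled.

```python
import numpy as np

def edge_optimized_voxel_downsample(points, edge_mask, voxel_size):
    """Voxel grid downsampling that keeps indexed edge points unchanged.

    edge_mask: boolean (N,) array flagging the extracted top-surface contour
    points (the 'isolation points'); they bypass centroid replacement so the
    contour cannot be pulled inward by neighboring voxel centroids.
    """
    interior = points[~edge_mask]
    idx = np.floor((interior - points.min(axis=0)) / voxel_size).astype(np.int64)
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, interior)
    centroids = sums / counts[:, None]
    # Edge points are appended unchanged, preserving the top-surface contour.
    return np.vstack([centroids, points[edge_mask]])
```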

3.2. Edge Detection Method

The commonly used edge detection method extracts and judges edges using point cloud normal vectors. The normal vector of a point is the vector perpendicular to the tangent plane at that point, and it can also represent the degree of concave–convex variation in a region. Its calculation is based on the covariance matrix. For a point $p_i$ and its neighboring points $p_j \in N(p_i)$ ($j = 1, 2, \ldots, k$), the centroid $O$ can be calculated as
$$O = \frac{1}{k}\sum_{j=1}^{k} p_j \tag{5}$$
Next, the local surface around the point is fitted with the least squares method, and the normal vector $\boldsymbol{n}$ of the surface is obtained by minimizing [17]

$$f = \sum_{p_j \in N(p_i)} \left( \left( p_j - O \right) \cdot \boldsymbol{n} \right)^2 \tag{6}$$
Minimizing f is equivalent to an eigenvalue decomposition of the covariance matrix of the neighborhood: the eigenvector corresponding to the minimum eigenvalue is the desired normal vector. The covariance matrix is as follows:

$$\frac{1}{k}
\begin{bmatrix}
\sum_{i}\left(X_i - O_x\right)^2 & \sum_{i}\left(X_i - O_x\right)\left(Y_i - O_y\right) & \sum_{i}\left(X_i - O_x\right)\left(Z_i - O_z\right) \\
\sum_{i}\left(X_i - O_x\right)\left(Y_i - O_y\right) & \sum_{i}\left(Y_i - O_y\right)^2 & \sum_{i}\left(Y_i - O_y\right)\left(Z_i - O_z\right) \\
\sum_{i}\left(X_i - O_x\right)\left(Z_i - O_z\right) & \sum_{i}\left(Y_i - O_y\right)\left(Z_i - O_z\right) & \sum_{i}\left(Z_i - O_z\right)^2
\end{bmatrix} \tag{7}$$
The trend function of the normal vector at point $p_i$ is calculated as follows:

$$f_i = \frac{1}{k}\sum_{j=1}^{k} \theta_{ij} \tag{8}$$

Among them, $\theta_{ij}$ is the angle formed between the normal vector of point $p_i$ and the normal vector of its j-th neighboring point, and $f_i$ is the trend function expressing the change in the normal vector at $p_i$.
The greater the change in the normal vector between points, the greater the amplitude change in the bumps near the area where the point is located. The normal vector distribution of point cloud edge points is shown in Figure 6.
Through comparative observation of Figure 6, it can be seen that the neighborhood points of an edge region tend to lie on different planes, and changing the neighborhood size does not affect this law, so the angles between the normal vectors of such points are large. For points in a non-edge region, the neighborhood points lie close to the same plane, so the angle between normal vectors is close to 0, and this law likewise does not change with the neighborhood size. For points in a mixed region, a small neighborhood shows the characteristics of non-edge points, while a large neighborhood shows the characteristics of edge points. Therefore, by setting an appropriate threshold, the relatively flat points in the point cloud can be eliminated, leaving the edge points whose trend value exceeds the threshold.
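A compact sketch of this edge test follows; the neighborhood size and the 15° threshold are illustrative assumptions, and the per-point normals are assumed to have been estimated beforehand, e.g., by the covariance method above.

```python
import numpy as np
from scipy.spatial import cKDTree

def detect_edge_points(points, normals, k=20, angle_thresh=np.deg2rad(15.0)):
    """Flag edge points via the normal vector trend function f_i (Formula (8)):
    the mean angle between a point's normal and its k neighbors' normals."""
    tree = cKDTree(points)
    _, nbr = tree.query(points, k=k + 1)               # column 0 is the point itself
    cos = np.einsum('ij,ikj->ik', normals, normals[nbr[:, 1:]])
    theta = np.arccos(np.clip(np.abs(cos), 0.0, 1.0))  # angles theta_ij
    f = theta.mean(axis=1)                             # trend function f_i
    return f > angle_thresh                            # True where normals diverge: edge points
```

The resulting boolean mask is exactly the edge_mask consumed by the edge-optimized voxel downsampling sketch in Section 3.1.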

4. Example Verification of Optimization Algorithm

In this paper, the laser point cloud data of a segment beam of the ECRL (East Coast Rail Link) railway viaduct are taken as an example. A three-dimensional laser scanner was used to scan the precast beam yard to obtain the laser point cloud data for algorithm practice. The original point cloud data are shown in Figure 7; they contain 500,000 coordinate points, and the spatial attitude has been aligned with the coordinate axes.
The algorithm is written in Python language, and the unoptimized voxel grid method is used to reduce the amount of point cloud data. The lightweight point cloud data contain 329,658 coordinate points. Based on the RANSAC theory, the top surface datasets of the original point cloud and the processed point cloud are extracted, respectively. The visualization effect is shown in Figure 8, where the top surface data are the red part.
The edge extraction algorithm is written to extract the data of the contour edge of the top surface of the point cloud before and after the lightweight processing, as shown in Figure 9, where red indicates the contour edge data.
From Figure 9, it can be seen that the top contour of the segment beam point cloud is rectangular. Based on RANSAC theory, a mathematical model of a spatial straight line is established, and the spatial equations of the four side lines of the top surface of the original segment beam are obtained by fitting the contour data of the original top surface:

$$\begin{cases} x + 0.007y = 0 \\ 12.041x + 0.69y = 0 \end{cases} \tag{9}$$

$$\begin{cases} x + 0.003y = 0 \\ 6.949x + 4.674y = 0 \end{cases} \tag{10}$$

$$\begin{cases} 0.02x - y = 0 \\ 0.096x + 3.681y = 0 \end{cases} \tag{11}$$

$$\begin{cases} 0.002x - y = 0 \\ 16.053x + 3.231y = 0 \end{cases} \tag{12}$$
Similarly, RANSAC fitting is applied to the top-surface contour of the lightweight point cloud to obtain the coordinate sets of its four side lines. From these coordinate sets, the spatial Euclidean distance between each edge point and the corresponding original top-edge line equation is calculated. The coordinate points of each boundary line are numbered along the axis direction, and the distance offset of each numbered segment of each boundary line is obtained, as shown in Figure 10.
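The point-to-line distance used here can be computed with the standard cross-product formula; a small sketch follows, in which representing the fitted line by a point on it and a direction vector is an assumed convention for the example.

```python
import numpy as np

def point_to_line_distances(points, line_point, line_dir):
    """Euclidean distance from each point to a 3D line given by a point on
    the line and a direction vector."""
    u = np.asarray(line_dir, dtype=float)
    u /= np.linalg.norm(u)                      # unit direction vector
    diff = points - np.asarray(line_point, dtype=float)
    # For a unit direction u, |(p - a) x u| is the distance from p to the line.
    return np.linalg.norm(np.cross(diff, u), axis=1)
```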
It can be seen from Figure 10 that the top-surface contour of the point cloud obtained by the voxel grid method before edge optimization shows a certain deviation. For the four edge lines of the top surface of the segment beam, the offset of the middle sections is mainly about 10 mm; the point cloud density near the four vertices is relatively high, so the deviation there is larger, averaging no more than 60 mm.
The edge-optimized voxel grid lightweight algorithm is written to process the original data of the same set of test point clouds, and the distance offset on the four side lines is also calculated, as shown in Figure 11.
It can be seen from Figure 11 that, after edge optimization, isolation points contained in a voxel grid are removed from the centroid computation, so the contour edge points are not replaced by neighboring centers of gravity. Therefore, the deviation distribution of the top-surface contour of the point cloud obtained by the voxel grid method is relatively uniform, and the downsampling effect at the corner points is also good. The overall deviation is no more than 5 mm, and the edge of the point cloud is well preserved.
While reducing the data volume, the edge contour data of the top surface are constrained; a spatial straight-line mathematical model is established, and the offset in each direction is obtained by calculating and analyzing the point-to-line model. The purpose of this example verification is to compare the deviation before and after optimization of the lightweight point cloud algorithm, not to analyze the offset in each direction in depth, so the offsets are taken as absolute values; hence, the offset values in Figure 10 and Figure 11 are all non-negative.
In order to quantitatively evaluate the influence of lightweight processing on point cloud contours, the concept of the contour average offset $\varepsilon_{off}$ is proposed: for each contour edge, the difference between the mean and the standard deviation of its single-point offsets is computed, and these differences are averaged over all edges. The calculation method is as follows:

$$\varepsilon_{off} = \frac{1}{N}\sum_{i=1}^{N}\left(\frac{1}{n}\sum_{j=1}^{n} d_j - \sqrt{\frac{\sum_{j=1}^{n}\left(d_j - \bar{d}\right)^2}{n-1}}\right) \tag{13}$$

In Formula (13), N is the number of contour edges, n is the number of coordinate points on edge i, and $d_j$ is the single-point offset. After calculation, the average offset of the top-surface contour is 2.235 mm for the point cloud obtained by the voxel grid method without edge optimization and 0.664 mm for the point cloud obtained by the optimized algorithm. The offset of the point cloud edge is relatively stable, and the geometric contour of the lightweight result is highly retained.
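A direct NumPy translation of Formula (13) as reconstructed above might look as follows; the function name and the input layout (one offset array per contour edge) are assumptions for the example.

```python
import numpy as np

def contour_average_offset(edge_offsets):
    """Average contour offset per Formula (13): for each of the N contour
    edges, take the mean single-point offset minus the sample standard
    deviation, then average the results over all edges.

    edge_offsets: sequence of 1-D arrays, one array of offsets d per edge.
    """
    terms = []
    for d in edge_offsets:
        d = np.abs(np.asarray(d, dtype=float))  # offsets are taken as absolute values
        terms.append(d.mean() - d.std(ddof=1))  # sample std uses the n-1 denominator
    return float(np.mean(terms))
```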
This example analysis only studies the edge optimization effect of the voxel grid method on the laser point cloud data of motorway viaduct segment beams. The geometric type of the research object is single, and optimization algorithms for more geometric types remain to be studied. In view of the limitations of the existing algorithms, new lightweight point cloud methods can be further explored to improve the ability to deal with complex structures and irregular shapes, for example by adaptively adjusting the size and distribution of the voxel grid according to the characteristics of the point cloud data to reduce the influence of parameter settings on the results, and by using parallel computing and distributed processing technology to improve efficiency and performance on large-scale point cloud data.

5. Conclusions

Taking segment beam laser point cloud data as an example, this paper compares the effects of various lightweight point cloud algorithms and chooses the voxel grid method as the research basis. The contour edge data are extracted with the point cloud top profile as the control index. By combining edge extraction theory and the voxel grid method, an optimized lightweight point cloud algorithm is established to improve the geometric appearance retention of the lightweight results. The conclusions are as follows:
(1)
The voxel grid method performs best compared with the farthest point sampling method and the curvature downsampling method in terms of top-surface data retention and computation time.
(2)
Zou et al. [23] verified with the FWD method that the traditional sampling process suffers from large deviations in the contours of the processed point cloud top surface. In this paper, the edge optimization of the voxel grid method reduces the deviation significantly; the overall deviation is less than 5 mm, and the edge of the point cloud is well preserved.
(3)
The edge-optimized voxel grid method reduces the average offset of the point cloud contour from 2.235 mm to 0.664 mm, and the edge offset of the point cloud is relatively stable, improving the geometric contour retention of the lightweight point cloud.
(4)
This paper only studies the edge optimization effect of the voxel grid method; future work will expand to more geometric types and new optimization algorithms.

Author Contributions

Methodology, Y.D. and M.Y.; Formal analysis, M.Y.; Investigation, H.Y.; Resources, X.J.; Data curation, H.Y.; Writing—original draft, M.L.; Writing—review & editing, M.L. and Y.Q.; Visualization, Y.D. and Y.Q.; Supervision, Y.D.; Funding acquisition, X.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Fundamental Research Funds for the Central Universities grant number: CHD 300102212203; Key Research and Development Program of Shaanxi Province grant number: 2021SF-514.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

Authors Yan Dong, Haotian Yang, Mingjun Yin, and Menghui Li were employed by the company China Harbour Engineering Company Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Yuan, S.; Zhu, S.; Li, D.-S.; Luo, W.; Yu, Z.Y.; Yuan, L.W. Feature preserving multiresolution subdivision and simplification of point clouds: A conformal geometric algebra approach. Math. Methods Appl. Sci. 2018, 41, 4074–4087. [Google Scholar] [CrossRef]
  2. Wei, N.; Gao, K.Y.; Ji, R.; Chen, P. Surface Saliency Detection Based on Curvature Co-occurrence Histograms. IEEE Access 2018, 6, 54536–54541. [Google Scholar] [CrossRef]
  3. Abzal, A.; Saadatseresht, M.; Varshpsaz, M. Development of a Novel Simplification Mask for Multi-shot Optical Scanners. ISPRS J. Photogramm. Remote Sens. 2018, 142, 12–20. [Google Scholar] [CrossRef]
  4. Gan, Z.; Ma, B.; Ling, Z. PCA-based fast point feature histogram simplification algorithm for point clouds. Eng. Rep. 2023, e12800. [Google Scholar] [CrossRef]
  5. Wushour, S.; Cao, J. An Extraction Algorithm for Sharp Feature Points from Point Clouds. J. Xi’an Jiaotong Univ. 2012, 46, 73. [Google Scholar]
  6. Ding, X.; Lin, W.; Chen, Z.; Zhang, X. Point Cloud Saliency Detection by Local and Global Feature Fusion. IEEE Trans. Image Process. 2019, 28, 5379–5393. [Google Scholar] [CrossRef]
  7. Su, T.; Wang, W.; Lv, Z.; Wu, W.; Li, X. Rapid Delaunay triangulation for randomly distributed point cloud data using adaptive Hilbert curve. Comput. Graph. 2016, 54, 65–74. [Google Scholar] [CrossRef]
  8. Su, T.; Wang, W.; Liu, H.; Liu, Z.; Li, X.; Jia, Z.; Zhou, L.; Song, Z.; Ding, M.; Cui, A. An adaptive and rapid 3D Delaunay triangulation for randomly distributed point cloud data. Vis. Comput. 2020, 38, 197–221. [Google Scholar] [CrossRef]
  9. Dassi, F.; Ettinger, B.; Perotto, S.; Sangalli, L.M. A mesh simplification strategy for a spatial regression analysis over the cortical surface of the brain. Appl. Numer. Math. 2015, 90, 111–131. [Google Scholar] [CrossRef]
  10. Li, Y.; Pang, M. Decimating Point Cloud Based on Quadric Error Metric. J. Chin. Comp. Sys. 2012, 33, 2538–2542. [Google Scholar]
  11. Zhang, H.; Lan, X. An improved triangle mesh simplification based on edge collapse. Geotech. Investig. Surv. 2014, 43, 910–922. [Google Scholar]
  12. Zhang, D.; Cui, G.; Bai, D.; Li, Y.; Zhang, X.; Yang, Y. Point cloud simplification technology of 3D optical measurement applied on reverse engineering. Appl. Res. Comput. 2014, 31, 946–948. [Google Scholar]
  13. Xiao, Z.; Gao, J.; Wu, D.; Zhang, L. Voxel Grid Downsampling for 3D Point Cloud Recognition. Modul. Mach. Tool Autom. Manuf. Tech. 2021, 11, 43–47. [Google Scholar]
  14. Chen, Y.; Chen, J.; Zheng, M.; Chen, Z. LiDAR point cloud compression method based on non-uniform sparse sampling. J. Fuzhou Univ. Nat. Sci. Ed. 2021, 49, 329–335. [Google Scholar]
  15. Yan, J.; Liu, X.; Ju, L. An Algorithm for Confirming Size of Sampling Mesh Based on Maximum Accepted Error of Scattered Cloud Points. J. Shanghai Univ. Nat. Sci. 2003, 9, 35–37. [Google Scholar]
  16. Fu, W.; Wu, L.; Chen, H. Study on local and global point cloud data simplification algorithm. Laser Infrared 2015, 8, 1004–1008. [Google Scholar]
  17. Yuan, H.; Pang, J.; Mo, J. Research on Simplification Algorithm of Point Cloud Based on Voxel Grid. Video Eng. 2015, 39, 43–47. [Google Scholar]
  18. Chen, Y.; Yue, L. Point Cloud Resampling Algorithm of Feature-sensitivity. J. Chin. Comput. Syst. 2017, 38, 1086–1090. [Google Scholar]
  19. Li, G.; Li, Z.; Hou, D. Delaunay-based Non-uniform sampling for noisy point cloud. J. Comput. Appl. 2014, 34, 2922–2924+2929. [Google Scholar]
  20. Wei, L.; Wan, S.; Wang, Z.; Ding, X.; Zhang, W. Optimization Method for Level of Detail of Lossless Point Cloud Compression. J. Xi’an Jiaotong Univ. 2021, 55, 88–96. [Google Scholar]
  21. Fan, R.; Jin, X. Selection and Reduction Algorithms for Large Point Clouds. J. Graph. 2013, 34, 12–19. [Google Scholar]
  22. Xing, Y.; Song, T.; Zhao, Y.; Liu, G.; Zheng, M. Point Cloud reduction algorithm combining 3D-SIFT feature extraction and voxel filtering. Laser J. 2019, 44, 163–169. [Google Scholar]
  23. Zou, B.; Qiu, H.; Lu, Y. Point Cloud Reduction and Denoising Based on Optimized Down sampling and Bilateral Filtering. IEEE Access 2020, 99, 1. [Google Scholar]
Figure 1. Schematic diagram of the voxel grid method.
Figure 2. Schematic diagram of the farthest point sampling method.
Figure 3. Comparison of three downsampling results with the original point cloud.
Figure 4. Comparison of three methods under different downsampling rates.
Figure 5. Voxel grid method for edge detection optimization.
Figure 6. Normal vector distribution of point cloud edge points.
Figure 7. Point cloud raw data.
Figure 8. Point cloud and top surface data.
Figure 9. Point cloud top profile.
Figure 10. The offset between the lightweight point cloud top contour and the original contour.
Figure 11. Top contour offset of the lightweight point cloud after edge optimization.