Article

Vegetation Extraction from Airborne Laser Scanning Data of Urban Plots Based on Point Cloud Neighborhood Features

1 Faculty of Geography, Yunnan Normal University, Kunming 650500, China
2 Key Laboratory of Resources and Environmental Remote Sensing, Universities in Yunnan, Kunming 650500, China
3 Center for Geospatial Information Engineering and Technology of Yunnan Province, Kunming 650500, China
4 Power China Kunming Engineering Co., Ltd., Kunming 650051, China
* Author to whom correspondence should be addressed.
Forests 2023, 14(4), 691; https://doi.org/10.3390/f14040691
Submission received: 9 March 2023 / Revised: 22 March 2023 / Accepted: 25 March 2023 / Published: 28 March 2023
(This article belongs to the Special Issue Geospatial Monitoring of Urban Green Space)

Abstract

This study proposes an accurate vegetation extraction method for airborne laser scanning (ALS) data of urban plots based on point cloud neighborhood features, addressing the deficiencies of current research on the precise extraction of vegetation in urban plots. First, the plane features in the R-neighborhood are combined with Euclidean distance clustering to extract the building point cloud accurately, and a rough vegetation point cloud is extracted using the discrete features in the R-neighborhood. Then, under the building point cloud constraints and combined with the Euclidean distance clustering method, the building boundary points remaining in the rough vegetation point cloud are removed. Finally, taking the vegetation point cloud obtained after this removal as a reference, the points in the original data within a specific radius r of the vegetation point cloud are extracted, yielding a complete vegetation extraction result for the urban plot. Two urban plots of ALS data are selected; the point cloud plane and discrete features are calculated with R = 0.6 m, and the vegetation point cloud is accurately extracted from the urban point cloud data. The visual effects and accuracy of vegetation extraction are compared for four radii: r = 0.5 m, r = 1 m, r = 1.5 m and r = 2 m. The best vegetation extraction results for both plots are obtained at r = 1 m, with a recall and precision of 92.19% and 98.74% for plot 1 and 94.30% and 98.73% for plot 2, respectively.

1. Introduction

Urban green vegetation provides crucial benefits for urban construction and environmental protection, such as beautifying the space environment, reducing urban temperatures, absorbing noise and storing carbon [1,2]. Remote sensing technology has become essential to the study of urban green vegetation carbon storage [3], landscape patterns [4] and green space resource monitoring [5]. However, urban environments contain complex features, including buildings and roads, in addition to green vegetation. Therefore, green vegetation must first be accurately extracted from the complex and diverse urban environment before remote sensing technology can be used to study urban vegetation [6].
Many scholars have studied the extraction of urban green vegetation using traditional optical remote sensing data as the data source. For example, Ouma et al. [7] extracted high-precision urban vegetation objects using the spectral and spatial information of QuickBird remote sensing images. Wang et al. [8] adopted the panchromatic and multispectral images of the ZY-3 satellite as the data source and employed a rule-based object-oriented method to extract urban green vegetation. Zhou et al. [9] constructed the Difference Enhanced Vegetation Index (DEVI) and other typical vegetation indices to extract green vegetation information from UAV urban image data. Wang et al. [10] employed WorldView-3 remote sensing images as the data source to establish a RedEdge-NIR vegetation index model (RENVI) combined with a digital elevation model, which extracted urban vegetation information and mitigated the high susceptibility of two-dimensional urban green vegetation extraction to terrain and building shadows. These studies indicate that many methods are available for urban vegetation extraction based on optical remote sensing imagery. However, only two-dimensional information on urban vegetation can be obtained from optical imagery, and studying the three-dimensional spatial structure of vegetation remains challenging.
The emergence of Light Detection and Ranging (LiDAR) remote sensing technology has transformed the study of urban green vegetation from two-dimensional to three-dimensional, facilitating accurate and quantitative investigation of its three-dimensional spatial structure and its effect on the city. Traditional methods for accurately extracting green vegetation from urban LiDAR point cloud data fall into two categories. The first extracts green vegetation indirectly by converting LiDAR point cloud data into images. For example, Zhang et al. [11] generated a Digital Terrain Model (DTM) from LiDAR data and combined gradient threshold segmentation with region growing to extract urban tree point clouds. Zhao [12] generated a Digital Surface Model (DSM) from the point cloud data, extracted the buildings based on their regular shapes and employed them as a constraint to extract the vegetation point cloud. Guo et al. [13] constructed a DSM from LiDAR point cloud data and employed threshold segmentation to classify other land cover types, such as urban buildings and trees. Although this approach can extract vegetation, some of the 3D spatial structure information is discarded when the 3D point cloud is transformed into a 2D image, which loses information in the vegetation extraction results and degrades the extraction accuracy [14]. The second category is machine-learning-based vegetation extraction. Secord and Zakhor [15] extracted urban vegetation from the characteristics of airborne LiDAR point cloud and image data, combining the region growing method with a support vector machine algorithm. Mallet et al. [16] classified building and vegetation points in full-waveform LiDAR data of urban areas using a support vector machine algorithm. Xue et al. [17] proposed a point cloud classification algorithm employing a cloth filtering algorithm and a weighted weakly correlated random forest to classify urban point cloud data accurately; the classification accuracy of vegetation points was 0.91. Although this approach can extract accurate vegetation point clouds, it requires the manual selection of many classification labels in the early stage, which is subjective and labor-intensive. Additionally, machine learning methods place high demands on hardware.
As discussed, both LiDAR-based urban vegetation extraction approaches still have several issues. In the first, a large amount of point cloud information is lost when converting 3D point clouds into 2D images, so vegetation cannot be extracted with high precision. In the second, the selection of feature labels is time-consuming and relatively subjective, and machine learning methods often place high demands on hardware. To overcome these deficiencies in current LiDAR-based urban vegetation extraction research, this study proposes an automatic extraction method for urban plot vegetation based on point cloud neighborhood features, using airborne laser scanning (ALS) urban point cloud data. The method exploits the differences between vegetation and other objects in their spatial neighborhood characteristics to achieve fast and high-precision vegetation extraction. It provides a new solution for vegetation extraction research based on LiDAR technology and serves as a reference for classification research based on point cloud neighborhood features.

2. Materials and Methods

2.1. Data Acquisition and Preprocessing

For this study, we selected two plots at Yunnan Normal University in the Chenggong District, Kunming City, Yunnan Province, China, as the experimental objects. These two plots contain complex, widely distributed and densely growing vegetation types, and their buildings have different roof shapes. We acquired the ALS point cloud data of the experimental objects using a DJI M600 Pro unmanned aerial vehicle platform equipped with a LiDAR sensor, a RIEGL VUX-1, which can obtain uniform, high-quality point cloud data. The experimental data were obtained after clipping and denoising, as shown in Figure 1.
As shown in Figure 1, the plots contain other objects besides vegetation, such as ground, roads and buildings. Therefore, the ground and roads are filtered out prior to vegetation extraction to reduce the amount of data and facilitate the subsequent steps. Roads can be treated as ground points, and research on ground point filtering is relatively mature, with various filtering algorithms developed to separate ground from non-ground points. This research employs the cloth simulation filtering algorithm proposed by Zhang et al. [18] to extract the non-ground points, yielding the building and vegetation point clouds. The total number of points in plot 1 and plot 2 is 1,171,322 and 1,182,917, respectively. Figure 2 shows the top view of the results.

2.2. Research Methods

To accurately extract vegetation point clouds, the research method is divided into three steps. First, the plane and discrete features in the point cloud R-neighborhood are calculated; the building point cloud is accurately extracted by combining the plane features with Euclidean distance clustering, and a rough vegetation point cloud is extracted using the discrete features. Then, under the building point cloud constraints, the building boundary points nearest to the buildings are removed from the vegetation point cloud extracted in the first step, and the vegetation point cloud is refined by applying Euclidean distance clustering again. Finally, taking the vegetation point cloud extracted in the second step as a reference, the points in the original data within a specific radius r of the vegetation point cloud are extracted, recovering the vegetation points lost in the previous steps and yielding a complete urban plot vegetation extraction result. Figure 3 shows the research roadmap.

2.2.1. Point Cloud Planar Features and Discrete Features

The point cloud R-neighborhood is the point set composed of a point and all other points within a given radius R. Once R has been set, the point cloud features of the point set in the R-neighborhood can be calculated. Point cloud neighborhood features include linear, planar, discrete, curvature, normal vector and main direction features, among others, all of which are widely utilized in point cloud classification [19,20,21,22,23]. After preprocessing, the ALS point cloud data contain two types of objects: the building point cloud and the vegetation point cloud. The roofs and walls of buildings are regular and planar, while vegetation has no regular shape and is discrete. Therefore, according to these characteristics, the building and vegetation point clouds are extracted using the planar and discrete features, respectively. Principal component analysis is employed to extract the planar and discrete features in the local neighborhood of the point cloud: the covariance matrix of the neighborhood is computed, and the planar and discrete features are calculated from its eigenvalues. Let pi (i = 1, 2, ..., n) be a point in the point cloud set C. The covariance matrix composed of pi and the points in its R-neighborhood can be described as follows [24].
$$\mathrm{Cov}(p_i) = \sum_{i=1}^{n} (p_i - \bar{p})(p_i - \bar{p})^{T} = \sum_{j=0}^{2} e_j \lambda_j e_j^{T} \quad (1)$$

$$\bar{p} = \frac{1}{n} \sum_{i=1}^{n} p_i \quad (2)$$
where n is the number of points in the R-neighborhood of point pi, p̄ is the geometric center of the neighborhood point set, j indexes the three eigenvalue–eigenvector pairs (j = 0, 1, 2) and ej and λj are the corresponding eigenvectors and eigenvalues, with λ0 > λ1 > λ2.
Planar and discrete features can be calculated from the three eigenvalues. Denoting the planar and discrete features by P and D, their calculation formulas are given in Equations (3) and (4), respectively.
$$P = \frac{\lambda_1 - \lambda_2}{\lambda_0} \quad (3)$$

$$D = \frac{\lambda_2}{\lambda_0} \quad (4)$$
Since the point cloud's linear, planar and discrete features represent the probabilities of the corresponding shape types in three-dimensional space, the sum of these three features is 1 [25]. Based on the above principles, this research extracts points with P > 0.5 as the building point cloud and points with D > 0.5 as the vegetation point cloud.
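To make the feature computation concrete, the following is a minimal sketch in Python with NumPy and SciPy (the study itself was implemented in MATLAB, so this is our illustrative rendering, not the authors' code); the handling of near-empty neighborhoods and the epsilon guard are our assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def planar_discrete_features(points: np.ndarray, R: float = 0.6):
    """Per-point planarity P and dispersion D following Equations (1)-(4).

    points: (N, 3) array of x, y, z coordinates; R: neighborhood radius in m.
    """
    tree = cKDTree(points)
    neighborhoods = tree.query_ball_point(points, r=R)
    P = np.zeros(len(points))
    D = np.zeros(len(points))
    for i, idx in enumerate(neighborhoods):
        if len(idx) < 3:                # too few neighbors for a stable covariance;
            D[i] = 1.0                  # treating isolated points as fully discrete
            continue                    # is our assumption, not the paper's rule
        cov = np.cov(points[idx].T)                   # 3x3 covariance, Eq. (1)
        lam = np.sort(np.linalg.eigvalsh(cov))[::-1]  # lambda0 >= lambda1 >= lambda2
        lam0 = max(lam[0], 1e-12)                     # guard against degenerate sets
        P[i] = (lam[1] - lam[2]) / lam0               # planarity, Eq. (3)
        D[i] = lam[2] / lam0                          # dispersion, Eq. (4)
    return P, D

# Rough split used in this study: P > 0.5 -> building candidates,
# D > 0.5 -> vegetation candidates.
```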

2.2.2. Vegetation Extraction under the Building Point Cloud Constraints

After extracting the vegetation point cloud through discrete features, a few building points remain in the vegetation point cloud; these are referred to as the building boundary point cloud. Together, the building boundary point cloud and the building point cloud extracted by planar features constitute the buildings in the original data. The building boundary points lie closer to the extracted building point cloud than to the vegetation point cloud. Therefore, this study employs the building point cloud extracted by planar features as a constraint, finds the points in the vegetation point cloud extracted by discrete features that are closest to the building point cloud and eliminates them as building boundary points. Although most building boundary points are eliminated in this way, a few building points remain. These points have two characteristics: they are few in number, and they lie at a certain distance from the vegetation points in space. These characteristics are exploited to accurately eliminate the remaining building points using the Euclidean distance clustering method, which is based on the distance between points in space: the closer two points are, the more similar their characteristics, so they can be regarded as one class. Let p1 (x1, y1, z1) and p2 (x2, y2, z2) be two points in three-dimensional space; the Euclidean distance d between them is calculated as follows.
$$d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2} \quad (5)$$
The remaining building points are removed based on Euclidean distance clustering through the following steps: the clustering distance parameter td is set according to the average distance between vegetation and buildings in the experimental data, the number and percentage of points in each resulting cluster are counted, the few clusters with the smallest percentages are removed and the rest are kept as the vegetation point cloud. A sketch of both removal steps is given below.
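The sketch below is our illustrative Python/SciPy rendering of this two-stage removal (the paper's implementation is in MATLAB): rough-vegetation points nearest to the extracted buildings are discarded first, then Euclidean distance clustering is realized as connected components of the graph linking points closer than td, and the smallest clusters are dropped. The threshold t_border and the minimum cluster fraction min_frac are hypothetical parameters, not values from the paper.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial import cKDTree

def remove_building_boundary(rough_veg, buildings, t_border=0.5, td=1.0,
                             min_frac=0.01):
    """Drop rough-vegetation points near buildings, then drop small clusters."""
    # Step 1: discard the vegetation points closest to the extracted buildings
    # (t_border is an assumed threshold, not a value given in the paper).
    dist, _ = cKDTree(buildings).query(rough_veg, k=1)
    veg = rough_veg[dist > t_border]

    # Step 2: Euclidean distance clustering, realized here as the connected
    # components of the graph linking points closer than td, followed by
    # removal of the smallest clusters (assumed residual building points).
    pairs = np.array(list(cKDTree(veg).query_pairs(r=td)))
    n = len(veg)
    if pairs.size == 0:
        return veg                            # no links: nothing to cluster
    graph = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                       shape=(n, n))
    _, labels = connected_components(graph, directed=False)
    sizes = np.bincount(labels)
    keep = sizes[labels] >= min_frac * n      # min_frac is our assumption
    return veg[keep]
```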
After the previous step, the building point cloud has been filtered out of the vegetation point cloud. However, some vegetation points were lost when the vegetation was extracted based on discrete features, so the lost vegetation points must be recovered at the end. Since the vegetation point cloud obtained by Euclidean distance clustering no longer contains building points, it can safely be used to search the original data for nearby points and thereby recover a complete vegetation point cloud. Accordingly, this research takes the extracted vegetation as a constraint condition, searches for points within a specific radius r in the original data and extracts a complete vegetation point cloud. When the value of r is too large, some building points are also extracted, while the vegetation point cloud cannot be entirely recovered when r is too small. Therefore, an appropriate r value must be selected to extract complete vegetation.
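A minimal sketch of this completion step, assuming the same NumPy/SciPy setting as the earlier sketches: every point of the original (non-ground) data whose nearest confirmed vegetation point lies within r is kept.

```python
import numpy as np
from scipy.spatial import cKDTree

def complete_vegetation(original: np.ndarray, vegetation: np.ndarray, r: float = 1.0):
    """Recover original points within radius r of any confirmed vegetation point."""
    dist, _ = cKDTree(vegetation).query(original, k=1)  # nearest-vegetation distance
    return original[dist <= r]
```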

2.2.3. Accuracy Evaluation

The two indicators of recall and precision are usually employed for accuracy evaluation in classification research [26]; this research evaluates the final vegetation extraction results in this manner. All correctly extracted vegetation points are defined as true positive (TP) points. Building points wrongly extracted as vegetation are false positive (FP) points. Vegetation points that are missed, i.e., wrongly treated as building points and not extracted, are false negative (FN) points. The extraction results are quantitatively evaluated with recall and precision based on these three values, calculated as shown in Equations (6) and (7).
$$\mathrm{Recall} = \frac{TP}{TP + FN} \quad (6)$$

$$\mathrm{Precision} = \frac{TP}{TP + FP} \quad (7)$$
Following the research method, the two urban plots were processed and evaluated in MATLAB R2019b (MathWorks, Natick, MA, USA).
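As a sketch (ours, not the MATLAB code used in the study), the two indicators can be computed from boolean vegetation masks over the reference data:

```python
import numpy as np

def recall_precision(is_veg_true: np.ndarray, is_veg_pred: np.ndarray):
    """Recall and precision from Equations (6)-(7), given boolean masks."""
    tp = np.sum(is_veg_true & is_veg_pred)    # vegetation correctly extracted
    fp = np.sum(~is_veg_true & is_veg_pred)   # buildings mistaken for vegetation
    fn = np.sum(is_veg_true & ~is_veg_pred)   # vegetation that was missed
    return tp / (tp + fn), tp / (tp + fp)
```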

3. Results

3.1. Influence of K-Neighborhood and R-Neighborhood on Point Cloud Features

The K-neighborhood and the R-neighborhood are the neighborhood definitions most commonly used in point cloud feature calculation. The point cloud K-neighborhood is the point set composed of a point and its K nearest neighbors, while the point cloud R-neighborhood is the point set composed of a point and all points within a specific radius R. To verify that the R-neighborhood is more suitable for vegetation extraction, this study employs both neighborhoods to calculate the linear, planar and discrete features of the point clouds of the two plots; the results are shown in Figure 4 and Figure 5.
Figure 4 and Figure 5 indicate that, for both plots, the features extracted with the R-neighborhood separate vegetation from buildings considerably more clearly than those extracted with the K-neighborhood. This follows from the two definitions: the R-neighborhood imposes a fixed, regular spatial range, whereas the K-neighborhood spans a variable, scattered extent, so the two yield different feature extraction results. Therefore, our experiments use the R-neighborhood to extract features.
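The difference between the two queries can be illustrated with a SciPy KD-tree (a toy example with random points, not the experimental data):

```python
import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(10000, 3) * 50.0   # toy stand-in for a point cloud
tree = cKDTree(points)
p = points[0]

_, k_idx = tree.query(p, k=20)             # K-neighborhood: 20 nearest points
r_idx = tree.query_ball_point(p, r=0.6)    # R-neighborhood: all points within 0.6 m

# The K-neighborhood always contains exactly 20 points, whatever volume they
# span; the R-neighborhood covers a fixed, regular sphere, so its point count
# varies with the local density.
print(len(k_idx), len(r_idx))
```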

3.2. Rough Extraction of Vegetation and Building Point Cloud

First, this study calculates the neighborhood features of the point cloud by setting the R-neighborhood value to 0.2 m, 0.4 m, 0.6 m, 0.8 m and 1 m. Figure 6 and Figure 7 show the vegetation results of plots 1 and 2 extracted by discrete features in different neighborhoods.
The results presented in Figure 6 and Figure 7 indicate that more complete vegetation extraction results are obtained for higher R values, although the runtime increases. When R = 0.6 m, good vegetation extraction results are obtained for both plots, and most building points have been eliminated, apart from some building boundary points. Therefore, balancing extraction quality against computational efficiency, this research selects R = 0.6 m for the experiments. At R = 0.6 m, the vegetation point cloud extraction results of the two plots are as shown in Figure 6c and Figure 7c, and the building point clouds extracted from the two plots based on plane features are as shown in Figure 8.

3.3. Building Boundary Point Cloud Filtering in the Vegetation Point Cloud

As shown in Figure 8, many vegetation points remain in the building point cloud extracted by plane features. To extract the building point cloud accurately, this research takes the extracted building point cloud as input, calculates the point cloud features again and re-extracts the building points. Most vegetation points are thereby filtered out, but a few remain. At this stage, the residual vegetation points possess two characteristics: they are few in number and lie at a certain distance from the building points. This study therefore employs these characteristics to accurately extract the building point cloud using Euclidean distance clustering. Figure 9 shows the precise extraction results of the building point clouds of the two plots.
After accurately extracting the building point cloud, this research takes it as a constraint, finds the points in the vegetation data extracted by discrete features that are closest to the building point cloud and eliminates them as remaining building boundary points. By this stage, most building boundary points have been filtered out, with only a few remaining in the vegetation point cloud. These residual building points again possess two characteristics: they are few in number and lie at a specific distance from the vegetation points. Accordingly, this study employs Euclidean distance clustering to remove them accurately. Figure 10 shows the vegetation point cloud results after the removal of the building boundary point cloud.

3.4. Accurate Extraction of Vegetation Point Cloud

As shown in Figure 10, although the vegetation point cloud has been extracted, it is still incomplete compared with the original data. To extract the vegetation point cloud from the original data more thoroughly, this research uses the vegetation result from the previous step as a constraint, finds the points within a specific radius r in the original data and extracts the complete vegetation.
The r value is the critical threshold for vegetation extraction. The vegetation in the original data cannot be completely extracted with a small value of r, while with a large value of r more building points are wrongly extracted as vegetation. Therefore, this research compares four thresholds, r = 0.5 m, r = 1 m, r = 1.5 m and r = 2 m, and calculates the recall and precision for each, combined with the visual effects, to obtain the optimal value of r. The vegetation extraction results of the two plots for the different r values are shown in Figure 11 and Figure 12, and the corresponding recall and precision values are presented in Table 1 and Table 2, respectively.
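Such a threshold sweep can be sketched as follows, reusing the completion logic from the sketch in Section 2.2.2 (the arrays original, veg_core and is_veg_true are hypothetical stand-ins for the plot data and reference labels):

```python
import numpy as np
from scipy.spatial import cKDTree

def sweep_r(original, veg_core, is_veg_true, radii=(0.5, 1.0, 1.5, 2.0)):
    """Score the completion step at several candidate radii r."""
    # The distance from every original point to its nearest confirmed
    # vegetation point is computed once; each r then yields a prediction mask.
    dist, _ = cKDTree(veg_core).query(original, k=1)
    for r in radii:
        pred = dist <= r
        tp = np.sum(is_veg_true & pred)
        fn = np.sum(is_veg_true & ~pred)
        fp = np.sum(~is_veg_true & pred)
        print(f"r = {r:.1f} m: recall = {tp / (tp + fn):.2%}, "
              f"precision = {tp / (tp + fp):.2%}")
```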
Considering the recall, precision and visual quality of the extraction results together, this study finds that the best vegetation extraction results for the two plots are obtained at r = 1 m. At this setting, few building points appear in the extraction results of the two plots, the vegetation is extracted essentially completely and the recall and precision indicate accurate results. The recall and precision values for plot 1 were 92.19% and 98.74%, while the corresponding values for plot 2 were 94.30% and 98.73%, respectively. Figure 13 shows the final vegetation point cloud extraction results of the two plots at r = 1 m.
As shown in Figure 13, both plots achieved good vegetation extraction results. In plot 1, vegetation growing on top of a building is also successfully extracted. A few building points remain; they cannot easily be filtered out because they are very close to the vegetation. Since the vegetation in plot 2 grows more simply than in plot 1, the vegetation extraction precision in plot 2 is satisfactory, with no significant building points in the result.

4. Discussion

The research results demonstrate that it is feasible to use point cloud neighborhood features for point cloud classification and thereby obtain high-quality classification results. After comparing the K-neighborhood and the R-neighborhood, we believe the R-neighborhood is more suitable for point cloud classification because the space it defines is more regular. The method involves two important parameters, the R value and the r value; only when these two parameters are properly set can good vegetation extraction results be obtained.
Some similar studies take vegetation extracted from point cloud features as labels and employ these labels to perform machine learning classification on the original data, thus extracting vegetation. For example, Ning et al. [27] adopted point cloud neighborhood features and point cloud coordinate values as the input conditions of the model, employed the PointNet network to extract vegetation and performed individual tree segmentation based on this result. Ning et al. [28] utilized support vector machine (SVM) and random forest (RF) algorithms to extract vegetation and individual trees using point cloud neighborhood features as model input conditions. Zhu et al. [29] constructed a new convolution operator to extract point cloud features and then classified ALS point cloud data by building a convolutional neural network. Widyaningrum et al. [30] used a Dynamic Graph Convolutional Neural Network (DGCNN) to classify ALS point cloud data by extracting the features of different ground objects. Compared with these studies, the proposed method avoids the cumbersome aspects of machine learning. Starting directly from the point cloud characteristics, it employs point cloud neighborhood features and neighborhood relationships to accurately extract vegetation in relatively complex urban experimental plots. The advantages of this method are that it is simple in principle, easy to operate and applicable to urban plots with complex vegetation growth.
However, as the study area grows larger, the two key parameters of this method may become difficult to control, and the accuracy of the point cloud neighborhood features may decrease, restricting the high-precision extraction of vegetation. The method therefore needs further optimization for large study areas. Additionally, this method relies on Euclidean distance clustering, which performs worse when the data are sparse and when the dimensionality of the feature vector increases. Hence, for the classification of complex point cloud data, Euclidean distance clustering requires further investigation.

5. Conclusions

This study took two ALS urban plots as experimental objects and accurately extracted the vegetation in them based on the neighborhood planar and discrete features of the point cloud data. After comparing the point cloud features extracted by the R- and K-neighborhoods, this study determined that the features calculated with the R-neighborhood are more suitable for extracting vegetation point clouds. The visual and accuracy analyses of vegetation extraction under different search radii indicate that the best vegetation extraction results for the two plots were obtained at r = 1 m, with a recall and precision of 92.19% and 98.74% for plot 1 and 94.30% and 98.73% for plot 2, respectively. The proposed method realizes the high-precision extraction of vegetation in urban plots; its principle is simple and easy to implement, providing a reference for related research on urban vegetation. However, since urban research areas are often extensive in practical applications, the applicability of this method to the precise extraction of urban vegetation over large areas should be studied further. Large-scale urban data will be employed as the research object in future investigations to further explore the application of point cloud neighborhood features in vegetation extraction.

Author Contributions

Conceptualization, J.Z. and J.W.; methodology, J.Z., J.W. and W.M.; validation, J.P. and Y.D.; software, J.Z., Y.D. and J.P.; writing—original draft preparation, J.Z.; writing—review and editing, J.W., W.M. and J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (41961060), the Multi-government International Science and Technology Innovation Cooperation Key Project of the National Key Research and Development Program of China (2018YFE0184300) and the Scientific Research Fund Project of the Education Department of Yunnan Province (2023Y0521).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Acknowledgments

We would like to express our respect and gratitude to Feng Cheng for his help with data collection.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hou, Y.; Qiu, Y.; Li, X.; Zhao, Z.; Wen, Y. Analysis of Beijing citizens’ demand for urban green space based on choice experiment method. J. Arid Land Resour. Environ. 2020, 34, 91–97. [Google Scholar]
  2. Fraissinet, M.; Ancillotto, L.; Migliozzi, A.; Capasso, S.; Bosso, L.; Chamberlain, D.E.; Russo, D. Responses of Avian Assemblages to Spatiotemporal Landscape Dynamics in Urban Ecosystems. Landsc. Ecol. 2023, 38, 293–305. [Google Scholar] [CrossRef]
  3. Tang, Y.; Shi, T.; Bu, Y.; Shi, Y. Estimation and spatial distribution of carbon storage in urban greenspace. Chin. J. Ecol. 2020, 39, 1387–1398. [Google Scholar]
  4. Wang, X.; Meng, Q.; Zhao, S.; Li, J.; Zhang, L.; Chen, X. Urban Green Space Classification and Landscape Pattern Measurement based on GF-2 Image. J. Geoinf. Sci. 2020, 22, 1971–1982. [Google Scholar]
  5. Xiong, Y.; Zhao, S.; Yan, C.; Qiu, G.; Sun, H.; Wang, Y.; Qin, L. A comparative study of methods for monitoring and assessing urban green space resources at multiple scales. Remote Sens. Land Resour. 2021, 33, 54–62. [Google Scholar]
  6. Raffini, F.; Bertorelle, G.; Biello, R.; D’Urso, G.; Russo, D.; Bosso, L. From Nucleotides to Satellite Imagery: Approaches to Identify and Manage the Invasive Pathogen Xylella Fastidiosa and Its Insect Vectors in Europe. Sustainability 2020, 12, 4508. [Google Scholar] [CrossRef]
  7. Ouma, Y.O.; Josaphat, S.S.; Tateishi, R. Multiscale Remote Sensing Data Segmentation and Post-Segmentation Change Detection Based on Logical Modeling: Theoretical Exposition and Experimental Results for Forestland Cover Change Analysis. Comput. Geosci. 2008, 34, 715–737. [Google Scholar] [CrossRef]
  8. Wang, Y. Research on urban green surveying based on ZY 3 satellite. Eng. Surv. Mapp. 2014, 23, 65–67+75. [Google Scholar]
  9. Zhou, T.; Hu, Z.; Han, J.; Zhang, H. Green vegetation extraction based on visible light image of UAV. Chin. Environ. Sci. 2021, 41, 2380–2390. [Google Scholar]
  10. Wang, X.; Lu, X.; Li, G.; Wang, J.; Yang, Z.; Zhou, Y.; Feng, Z. Combining the Red Edge-Near Infrared Vegetation Indexes of DEM to Extract Urban Vegetation Information. Spectrosc. Spectr. Anal. 2022, 42, 2284–2289. [Google Scholar]
  11. Zhang, Q.; Cen, M.; Zhou, G.; Yang, X. Extracting Trees from LiDAR Data in Urban Region. Acta Geod. Cartogr. Sin. 2009, 38, 330–335. [Google Scholar]
  12. Zhao, X. Study on classification of building and vegetation in complex urban area. Bull. Surv. Mapp. 2019, S1, 181–185. [Google Scholar]
  13. Guo, L.; Deng, X.; Liu, Y.; He, H.; Lin, H.; Qiu, G.; Yang, W. Extraction of Dense Urban Buildings From Photogrammetric and LiDAR Point Clouds. IEEE Access 2021, 9, 111823–111832. [Google Scholar] [CrossRef]
  14. You, H.; Li, S.; Xu, Y.; He, Z.; Wang, D. Tree Extraction from Airborne Laser Scanning Data in Urban Areas. Remote Sens. 2021, 13, 3428. [Google Scholar] [CrossRef]
  15. Secord, J.; Zakhor, A. Tree Detection in Urban Regions Using Aerial Lidar and Image Data. IEEE Geosci. Remote Sens. Lett. 2007, 4, 196–200. [Google Scholar] [CrossRef]
  16. Mallet, C.; Bretar, F.; Roux, M.; Soergel, U.; Heipke, C. Relevance Assessment of Full-Waveform Lidar Data for Urban Area Classification. ISPRS J. Photogramm. Remote Sens. 2011, 66, S71–S84. [Google Scholar] [CrossRef]
  17. Xue, D.; Cheng, Y.; Shi, X.; Qin, X.; Wen, P. Point Clouds Classification Algorithm Based on Cloth Filtering Algorithm and Improved Random Forest. Laser Optoelectron. Prog. 2020, 57, 192–200. [Google Scholar]
  18. Zhang, W.; Qi, J.; Wan, P.; Wang, H.; Xie, D.; Wang, X.; Yan, G. An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation. Remote Sens. 2016, 8, 501. [Google Scholar] [CrossRef]
  19. Liang, X.; Litkey, P.; Hyyppa, J.; Kaartinen, H.; Vastaranta, M.; Holopainen, M. Automatic Stem Mapping Using Single-Scan Terrestrial Laser Scanning. IEEE Trans. Geosci. Remote Sens. 2012, 50, 661–670. [Google Scholar] [CrossRef]
  20. Zhang, J.; Wang, J.; Dong, P.; Ma, W.; Liu, Y.; Liu, Q.; Zhang, Z. Tree Stem Extraction from TLS Point-Cloud Data of Natural Forests Based on Geometric Features and DBSCAN. Geocarto Int. 2022, 37, 10392–10406. [Google Scholar] [CrossRef]
  21. Zhou, R.; Xu, Z.; Peng, C.; Zhang, F.; Jiang, W. A JointBoost-based classification method of high voltage transmission corridor from airborne LiDAR point cloud. Sci. Surv. Mapp. 2019, 44, 21–27+33. [Google Scholar]
  22. Ma, W.; Wang, J.; Wang, C.; Xi, X.; Wang, P. An Extraction Algorithm of Power Lines from Airborne LiDAR Point Clouds. J. Geomat. Sci. Technol. 2019, 36, 39–44. [Google Scholar]
  23. Liu, Q.; Wang, J.; Ma, W.; Zhang, J. A fine extraction method of forest point cloud in complex background. Sci. Surv. Mapp. 2021, 46, 105–111. [Google Scholar]
  24. Guo, Q.; Su, Y.; Hu, T.; Liu, J. LiDAR Principles, Processing and Applications in Forest Ecology; Higher Education Press: Beijing, China, 2018; pp. 118–120. [Google Scholar]
  25. Xia, S.; Wang, C.; Pan, F.; Xi, X.; Zeng, H.; Liu, H. Detecting Stems in Dense and Homogeneous Forest Using Single-Scan TLS. Forests 2015, 6, 3923–3945. [Google Scholar] [CrossRef]
  26. Liu, Q.; Ma, W.; Zhang, J.; Liu, Y.; Xu, D.; Wang, J. Point-Cloud Segmentation of Individual Trees in Complex Natural Forest Scenes Based on a Trunk-Growth Method. J. For. Res. 2021, 32, 2403–2414. [Google Scholar] [CrossRef]
  27. Ning, X.; Ma, Y.; Hou, Y.; Lv, Z.; Jin, H.; Wang, Y. Semantic Segmentation Guided Coarse-to-Fine Detection of Individual Trees from MLS Point Clouds Based on Treetop Points Extraction and Radius Expansion. Remote Sens. 2022, 14, 4926. [Google Scholar] [CrossRef]
  28. Ning, X.; Tian, G.; Wang, Y. Shape Classification Guided Method for Automated Extraction of Urban Trees from Terrestrial Laser Scanning Point Clouds. Multimed. Tools Appl. 2021, 80, 33357–33375. [Google Scholar] [CrossRef]
  29. Zhu, J.; Sui, L.; Zang, Y.; Zheng, H.; Jiang, W.; Zhong, M.; Ma, F. Classification of Airborne Laser Scanning Point Cloud Using Point-Based Convolutional Neural Network. ISPRS Int. J. Geo-Inf. 2021, 10, 444. [Google Scholar] [CrossRef]
  30. Widyaningrum, E.; Bai, Q.; Fajari, M.K.; Lindenbergh, R.C. Airborne Laser Scanning Point Cloud Classification Using the DGCNN Deep Learning Method. Remote Sens. 2021, 13, 859. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of experimental objects. (a) Plot 1; (b) Plot 2.
Figure 2. Vegetation and building point clouds. (a) Plot 1; (b) Plot 2.
Figure 3. Technology roadmap. (I) Rough extraction of vegetation point cloud and building point cloud; (II) Removing the building boundary point cloud in the vegetation point cloud; (III) Accurate extraction of vegetation point cloud.
Figure 4. The feature extraction results of the K-neighborhood and R-neighborhood of plot 1. (a) Linearity of the K-neighborhood; (b) Planarity of the K-neighborhood; (c) Dispersion of the K-neighborhood; (d) Linearity of the R-neighborhood; (e) Planarity of the R-neighborhood; (f) Dispersion of the R-neighborhood.
Figure 5. The feature extraction results of the K-neighborhood and R-neighborhood of plot 2. (a) Linearity of the K-neighborhood; (b) Planarity of the K-neighborhood; (c) Dispersion of the K-neighborhood; (d) Linearity of the R-neighborhood; (e) Planarity of the R-neighborhood; (f) Dispersion of the R-neighborhood.
Figure 6. Vegetation extraction results at different R-neighborhood scales in plot 1. (a) R = 0.2 m; (b) R = 0.4 m; (c) R = 0.6 m; (d) R = 0.8 m; (e) R = 1 m.
Figure 7. Vegetation extraction results at different R-neighborhood scales in plot 2. (a) R = 0.2 m; (b) R = 0.4 m; (c) R = 0.6 m; (d) R = 0.8 m; (e) R = 1 m.
Figure 8. Building point cloud extraction results based on plane features. (a) Plot 1; (b) Plot 2.
Figure 9. Accurate extraction results of the building point cloud. (a) Cluster result of plot 1; (b) Building extraction result of plot 1; (c) Cluster result of plot 2; (d) Building extraction result of plot 2.
Figure 10. The results of the vegetation point clouds of the two plots after filtering out the building boundary point clouds. (a) Vegetation extracted using discrete feature of plot 1; (b) Result of filtering the building boundary of plot 1; (c) Vegetation extracted using discrete feature of plot 2; (d) Result of filtering the building boundary of plot 2.
Figure 11. Vegetation extraction results of plot 1 at different r scales. (a) r = 0.5 m; (b) r = 1 m; (c) r = 1.5 m; (d) r = 2 m.
Figure 12. Vegetation extraction results of plot 2 at different r scales. (a) r = 0.5 m; (b) r = 1 m; (c) r = 1.5 m; (d) r = 2 m.
Figure 13. The final accurate extraction results of vegetation in two plots. (a) Original data of plot 1; (b) Final vegetation extraction result of plot 1; (c) Original data of plot 2; (d) Final vegetation extraction result of plot 2.
Table 1. The recall and precision evaluation results of plot 1.

r (m) | TP      | FP     | FN      | Recall | Precision
0.5   | 536,219 | 3797   | 131,008 | 80.37% | 99.30%
1     | 615,104 | 7859   | 52,123  | 92.19% | 98.74%
1.5   | 633,916 | 14,262 | 33,311  | 95.01% | 97.80%
2     | 641,247 | 21,278 | 25,980  | 96.11% | 96.79%
Table 2. The recall and precision evaluation results of plot 2.

r (m) | TP      | FP     | FN     | Recall | Precision
0.5   | 375,852 | 3508   | 78,619 | 82.70% | 99.08%
1     | 428,581 | 5530   | 25,890 | 94.30% | 98.73%
1.5   | 438,518 | 9065   | 15,953 | 96.49% | 97.97%
2     | 442,108 | 14,780 | 12,363 | 97.28% | 96.77%
