Article

A 3D Point Cloud Feature Identification Method Based on Improved Point Feature Histogram Descriptor

Chunxiao Wang, Xiaoqing Xiong, Xiaoying Zhang, Lu Liu, Wu Tan, Xiaojuan Liu and Houqun Yang
1 Hainan Geomatics Centre of Ministry of Natural Resources, Haikou 570203, China
2 College of Computer Science and Technology, Hainan University, Haikou 570228, China
3 Haikou Key Laboratory of Deep Learning and Big Data Application Technology, Hainan University, Haikou 570228, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(17), 3736; https://doi.org/10.3390/electronics12173736
Submission received: 3 August 2023 / Revised: 1 September 2023 / Accepted: 2 September 2023 / Published: 4 September 2023
(This article belongs to the Section Artificial Intelligence)

Abstract:
A significant amount of research has been conducted on the segmentation of large-scale 3D point clouds; however, efficiently identifying point cloud features from the segmentation results remains an essential capability for computer vision and surveying tasks. Feature description methods are algorithms that convert the point set of a point cloud feature into vectors or matrices that can be used for identification. While the point feature histogram (PFH) is an efficient descriptor, it does not work well for objects with smooth surfaces, such as planar, spherical, or cylindrical objects. This paper proposes a 3D point cloud feature identification method based on an improved PFH descriptor with a feature-level normal that can efficiently distinguish objects with smooth surfaces. First, a feature-level normal is established; then, the relationship between each point's normal and the feature-level normal is calculated. Finally, an unknown feature is identified by comparing its similarity with type-labeled features. The proposed method achieves an overall identification accuracy ranging from 71.9% to 81.9% when identifying street lamps, trees, and buildings.

1. Introduction

Light detection and ranging (LiDAR) is a relatively new technology for obtaining high-quality three-dimensional spatial data and is considered an emerging earth observation technology. Its key advantages include high data density, high accuracy, and strong penetration. As such, LiDAR systems have become increasingly prevalent in many fields, such as terrain surveying [1], forest ecological research [2,3,4], coastal zone monitoring [5], urban 3D reconstruction [6], urban change detection [7], urban road detection and planning [8,9], robot environmental perception [10], and more.
However, while the hardware and acquisition technologies for LiDAR systems have developed rapidly, research on the post-processing and application of point cloud data has lagged significantly behind. Current 3D data processing methods face several issues, such as low automation and a heavy manual processing workload. Despite significant efforts to develop data filtering, classification, and extraction algorithms for 3D point cloud data, these methods still have limitations, and large amounts of data are not being fully utilized. In addition, current methods lack robustness, and research on automatic point cloud feature recognition remains limited compared with research on automatic point cloud segmentation.
However, in the fields of surveying and computer vision, the goal of point cloud processing is real-time or quasi-real-time automatic recognition of ground objects. Such technology would enable automatic classification and recognition of ground objects in laser point cloud data, much as supervised and unsupervised classification and deep learning techniques enable automatic identification in remote sensing image processing [11]. Building on our earlier automatic segmentation method [12], this research focuses on developing an improved method for 3D point cloud feature identification, which addresses both the expression of point cloud features in point cloud data (description) and the automatic recognition of ground features (identification).

2. Related Work

2.1. General Process

Laser point cloud data consist of points obtained with a laser scanner, which contain spatial coordinates, intensity information, and sometimes color information. In this paper, the point cloud feature is defined as a set of laser-scanned points that represent a specific entity, such as a tree, building, etc.
Hoffman and Jain [13] were the first to propose a complete workflow for processing laser scanning point cloud data, which includes five steps: data collection, data preprocessing, segmentation (or filtering and classification), classification, and object modeling. Point cloud data are generally processed according to this workflow [13].
As significant research has already been conducted on the segmentation of point cloud data [14], this paper will not cover segmentation algorithms in detail. Instead, the focus of this paper is on methods for point cloud feature description, classification, and recognition, based on the segmented point cloud.

2.2. Description Methods for Point Cloud Features

A single point, or a part of a point cloud feature, is insufficient to represent the entire shape of the feature; all the key points of the feature together form the whole point cloud feature. The challenge lies in describing scattered points that have no explicit topological relationship with one another. Moreover, for the same ground object, the position, quantity, and density of points collected at different times or with different instruments may vary, making accurate description difficult.
Feature description methods are algorithms that convert the points of a point cloud feature into vectors or matrices that represent the feature. A description method must be both stable and distinguishable in order to describe and identify features. Stability means that, for the same object, the description results should be consistent and highly similar. Distinguishability refers to the ability to differentiate features: the vectors or matrices of different features should have a high degree of heterogeneity.
Feature matching is the process of comparing two features to determine whether they belong to the same category. The main process of feature description and matching is as follows: first, key points of the model are extracted; then, the key points are converted to a descriptor array; finally, the model and scene descriptor arrays are matched [15].
Salti and Tombari et al. propose a categorization of the main 3D description methods by dividing the state of the art into signatures and histograms [15,16,17]. The signatures describe the 3D surface neighborhood of a given point by defining an invariant local reference frame [17]; histograms describe the points by encoding counters of local topological entities into histograms according to a specific quantized domain [15].
Another descriptor, the 3D shape context proposed by Frome et al. [18], divides the point set into a grid along three directions (radial, azimuth, and elevation) and constructs the feature description vector by counting the number of points in each grid cell. Although the descriptor is simple to compute, its adaptability is poor.
The point feature histogram (PFH) [19] and fast point feature histogram (FPFH) methods proposed by Rusu et al. [19,20,21,22] use a given point's k nearest neighbors and their normals to build a histogram vector. The PFH algorithm balances computational complexity and uniqueness and has strong robustness. The SIFT algorithm proposed by Lowe [23], originally designed for two-dimensional images, has been extended to three-dimensional space [24,25]; it is invariant to rotation, scaling, brightness changes, etc., but has high descriptor dimensionality and computational complexity. The normal aligned radial feature (NARF) algorithm proposed by Steder, Rusu et al. [26] has rotational invariance, but features on image edges are not distinctive and are sensitive to noise [27]. The spin image method proposed by Johnson [28] generates description information of ground objects from different perspectives.
Overall, current feature description algorithms have their own advantages and disadvantages, and their universality is poor. Point cloud feature description methods fall mainly into two categories: signature methods and histogram methods. Salti, Tombari et al. [16,17] summarize the taxonomy of 3D descriptors, including each method's category, whether it defines a unique local reference frame, and whether it supports color information.
The next step after computing the signature and histogram descriptors for segmented point cloud data is to identify the point cloud features.

2.3. Identification Methods for Point Cloud Features

Point cloud feature identification refers to the process of matching unknown features with known ones in order to classify them. In recent years, a growing number of automatic classification methods have applied deep learning to spatiotemporal data such as remote sensing images, laser point clouds, and SAR [29,30,31,32,33,34]. These methods are mainly based on supervised statistical learning, which requires sample data to be learned in advance to determine model parameters, after which the trained model is used to classify new data. Valuable references for machine-learning-based automatic ground object classification and filtering of ground point cloud data have been provided by Anguelov et al. [35], Triebel et al. [36], and Munoz et al. [37,38,39]. Charles et al. proposed the PointNet method for deep learning on point sets for 3D classification and segmentation [40], while Alexandre performed a comparative evaluation on 3D point clouds, exploring both object and category recognition performance and describing feature extraction algorithms available in a publicly available point cloud library [41]. Li et al. applied OFDV-Net to the segmentation of a standard public large-scale outdoor point cloud dataset [33] and achieved good extraction results. Singh et al. proposed a quantum-clustering optimization method for COVID-19 CT scan image segmentation [42] and a type-2 neutrosophic-entropy-fusion-based multiple-thresholding method for brain tumor tissue structure segmentation [43]. However, these algorithms are mainly aimed at a certain type of data, such as airborne LiDAR data, LiDAR and image fusion data, or vehicle-borne LiDAR data. Automatic or semi-automatic data segmentation and classification methods are already applied in actual production, but they need continuous improvement in terms of extraction correctness, accuracy, efficiency, applicability, automation, and dependence on human experience.

3. Methodology

3.1. Overview of the Proposed Framework

The overall technical approach is as follows:
1. Point cloud segmentation: In our previous work [12], we proposed an improved DBSCAN method with automatic Eps estimation for point cloud segmentation, and we use it in this research. In the improved DBSCAN method, the average of the maximum distances among each point's k nearest neighbors is used to fit a curve that estimates the important radius parameter ε of DBSCAN, which allows different types of LiDAR point clouds to be segmented robustly and with higher accuracy (a minimal sketch of this idea is given at the end of this subsection).
2. Point cloud feature description and expression: Based on the segmentation results, the 3D grid sampling method is used to remove noise from each feature's point set. Then, the feature descriptor is computed using the improved PFH and FPFH methods. The 3D grid sampling method divides the 3D point cloud into multiple small 3D grid cells based on point cloud density; within each cell, the point closest to the cell center is retained and the remaining points are discarded.
3. Case-Based Reasoning database establishment: Firstly, we label the point cloud feature histogram descriptors and establish a Case-Based Reasoning database as known cases. Then, we use the correlation coefficient to design the retrieval and matching method of the case library.
4. Point cloud feature identification: By using the retrieval and matching method in the database, we can identify the type of the unknown point cloud features.
The technical approach of this paper is shown in Figure 1.
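To make step 1 concrete, the following is a minimal Python sketch of the eps-estimation idea; it is not the exact implementation of [12], and the neighborhood size k, the use of a simple mean in place of curve fitting, and the min_samples value are illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

def estimate_eps(points, k=4):
    """Estimate the DBSCAN radius from k-nearest-neighbor distances.

    Simplified stand-in for the curve-fitting strategy of [12]: the mean of each
    point's distance to its k-th nearest neighbor is used as eps.
    """
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)  # k + 1 because the nearest neighbor is the point itself
    return float(np.mean(dists[:, -1]))

def segment_point_cloud(points, k=4, min_samples=10):
    """Segment an (N, 3) point array with DBSCAN; label -1 marks noise."""
    eps = estimate_eps(points, k)
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)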

3.2. PFH and FPFH Descriptor

3.2.1. PFH Descriptor

Rusu et al. [19] proposed the point feature histogram (PFH), which comprises a set of methods for building feature point representations. The PFH method is used to accurately label points in a 3D point cloud, and the representation is based on a point's k-neighborhood and the surface normals of those points [19,20,44]. A detailed theoretical primer can be found in [19,20,44].
For a query point $P_q$, Figure 2 presents the influence region of $P_q$ used to compute PFH features. The final PFH descriptor is computed from the triplets $(\alpha, \phi, \theta)$ for all pairs of points in the neighborhood, and its computational complexity is $O(k^2)$ [19,45].
The final PFH representation for the query point is created by binning the set of all $(\alpha, \phi, \theta)$ triplets into a histogram [20,44]. When 5 binning subdivisions are used per angle, the final histogram is a 125-dimensional ($5^3$) vector of float values.
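As an illustration of this construction, the sketch below computes the $(\alpha, \phi, \theta)$ triplet for one oriented point pair and accumulates all neighborhood pairs into a 125-bin histogram. It follows the standard Darboux-frame formulation of [19,20]; the bin edges, the omission of the source/target reordering step, and the percentage normalization are simplifying assumptions rather than the exact PCL implementation.

import numpy as np
from itertools import combinations

def pair_features(p_s, n_s, p_t, n_t):
    """(alpha, phi, theta) of one oriented point pair in the Darboux frame [19,20]."""
    d_vec = p_t - p_s
    d = np.linalg.norm(d_vec)
    u = n_s
    v = np.cross(u, d_vec / d)
    v /= np.linalg.norm(v)
    w = np.cross(u, v)
    alpha = float(np.dot(v, n_t))                               # cosine-valued, in [-1, 1]
    phi = float(np.dot(u, d_vec / d))                           # cosine-valued, in [-1, 1]
    theta = float(np.arctan2(np.dot(w, n_t), np.dot(u, n_t)))   # angle, in [-pi, pi]
    return alpha, phi, theta

def pfh_histogram(points, normals, bins=5):
    """125-dimensional PFH histogram (5 bins per angle) for one query neighborhood."""
    hist = np.zeros(bins ** 3)
    cos_edges = np.linspace(-1.0, 1.0, bins + 1)
    ang_edges = np.linspace(-np.pi, np.pi, bins + 1)

    def bin_index(value, edges):
        return int(np.clip(np.searchsorted(edges, value, side="right") - 1, 0, bins - 1))

    for i, j in combinations(range(len(points)), 2):
        if np.linalg.norm(np.cross(normals[i], points[j] - points[i])) < 1e-12:
            continue  # skip degenerate pairs whose direction is parallel to the source normal
        a, p, t = pair_features(points[i], normals[i], points[j], normals[j])
        hist[bin_index(a, cos_edges) * bins * bins
             + bin_index(p, cos_edges) * bins
             + bin_index(t, ang_edges)] += 1
    return 100.0 * hist / max(hist.sum(), 1.0)  # expressed as percentages, as in Figure 5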

3.2.2. Fast PFH Descriptor (FPFH)

To simplify the computation of PFH features, the fast PFH (FPFH) evaluates only the pairs formed by the query point and its neighbors, rather than all pairs of neighbors [22], as shown in Figure 3. FPFH reduces the computational complexity to $O(nk)$ while retaining most of the discriminative power of PFH [22].
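For reference, the weighting scheme of [22] can be written as follows (a standard formulation of FPFH, restated here rather than quoted from this paper):

$$\mathrm{FPFH}(p_q) = \mathrm{SPFH}(p_q) + \frac{1}{k} \sum_{i=1}^{k} \frac{1}{\omega_i}\,\mathrm{SPFH}(p_i)$$

where SPFH is the simplified histogram computed only between a point and its own k neighbors, and $\omega_i$ is the distance between the query point $p_q$ and its neighbor $p_i$.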

3.3. Improved PFH Method

3.3.1. Disadvantage of PFH

The original PFH method relies on the surface fitted at a given point P to determine the normal vector of that point. However, this approach may not be effective in distinguishing objects with smooth surfaces, such as cylinders, planes, and spheres.
To highlight this difference, we selected 3D basic shapes, including corners, edges, cones, planes, cylinders, and spheres, for comparison (as shown in Figure 4). The PFH descriptors are calculated using the method explained above, resulting in a 125-dimensional vector array which is compressed to 25 dimensions for presentation purposes (as shown in Figure 5).
As shown in Figure 5, the vector values for corners, edges, and cones are evidently dissimilar, underscoring a high level of heterogeneity. However, the same cannot be said for cylinders, planes, and spheres, for which all vector values are identical, resulting in a value of 100 at the 13th dimension and 0 for all other dimensions. Consequently, the PFH descriptor is unable to distinguish between 3D objects such as cylinders, planes, and spheres, as their surface curvatures are relatively smooth. In the process of binning vector values to histograms, the small differences in surface curvature are insufficient to create separate bins for these objects. As a result, the PFH method cannot effectively distinguish among smooth surfaces.

3.3.2. Improvement of PFH

To address this issue, we propose an enhanced version of PFH that is capable of distinguishing objects with smooth surfaces. For points $p_s$, $p_m$ and their normals $n_s$, $n_m$, as shown in Figure 6, the improved PFH process is as follows.
First, we establish the feature-level normal $n_m$: its origin is the feature's middle point $p_m$, and its direction points from $p_m$ to the point in the feature's point set that is farthest from $p_m$.
Next, we compute the fitting plane and normal for each point in the point cloud.
In the third step, for each point $p_s$ in the feature's point set, we calculate the relationship between its fitting-plane normal $n_s$ and the feature-level normal $n_m$.
Finally, we bin the results into $(\alpha, \phi, \theta)$ tuple arrays and generate histograms.
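A minimal Python sketch of these four steps is given below. It reuses the pair_features() helper from the PFH sketch in Section 3.2.1 and substitutes a PCA plane fit for the per-point normal estimation; the neighborhood size, the use of the centroid as the feature's middle point, and the binning details are illustrative assumptions rather than the exact implementation.

import numpy as np

def feature_level_normal(points):
    """Feature-level normal: anchored at the feature's middle point (here taken as
    the centroid) and pointing toward the farthest point of the feature's point set."""
    p_m = points.mean(axis=0)
    farthest = points[np.argmax(np.linalg.norm(points - p_m, axis=1))]
    n_m = farthest - p_m
    return p_m, n_m / np.linalg.norm(n_m)

def point_normal(points, idx, k=10):
    """Per-point normal from a PCA plane fit over the k nearest neighbors
    (a stand-in for the plane fitting described in step two)."""
    d = np.linalg.norm(points - points[idx], axis=1)
    nbrs = points[np.argsort(d)[:k]]
    _, eigvec = np.linalg.eigh(np.cov((nbrs - nbrs.mean(axis=0)).T))
    return eigvec[:, 0]  # direction of smallest variance, i.e., the fitted plane normal

def improved_pfh(points, bins=5):
    """Histogram of (alpha, phi, theta) between the feature-level normal and each
    point's fitted normal; pair_features() follows the PFH sketch above."""
    p_m, n_m = feature_level_normal(points)
    hist = np.zeros(bins ** 3)
    cos_edges = np.linspace(-1.0, 1.0, bins + 1)
    ang_edges = np.linspace(-np.pi, np.pi, bins + 1)

    def bin_index(value, edges):
        return int(np.clip(np.searchsorted(edges, value, side="right") - 1, 0, bins - 1))

    for i in range(len(points)):
        if np.linalg.norm(np.cross(n_m, points[i] - p_m)) < 1e-12:
            continue  # skip degenerate pairs (including the middle point itself)
        a, p, t = pair_features(p_m, n_m, points[i], point_normal(points, i))
        hist[bin_index(a, cos_edges) * bins * bins
             + bin_index(p, cos_edges) * bins
             + bin_index(t, ang_edges)] += 1
    return 100.0 * hist / max(hist.sum(), 1.0)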
To compare the differences between various 3D basic shapes, we compute their descriptors using the improved PFH method, which utilizes feature-level normal information. The results, shown in Figure 7, demonstrate that cones, corners, and edges remain distinctly recognizable, while cylinders, planes, and spheres are also well distinguished from one another. Additionally, the peaks of each object differ significantly not only in size but also in dimension. Therefore, the improved PFH method helps to overcome the limitations of PFH, which cannot accurately distinguish between objects with smooth surfaces, such as cylinders, planes, and spheres.

3.4. Identification Method

3.4.1. Point Cloud Feature Descriptor Database

The point cloud feature, which serves as the fundamental research unit for object recognition, can be described using the following database structure:
PointCloud Case = {
       Name,
       Class,
       Overall-Descriptor,
       Detailed-Descriptor
}
  • The Name field includes a feature’s name, ID, corresponding file name, and other descriptive information.
  • The Class field provides category information for the feature, defining its place within the wider set of point cloud features.
  • The Overall Descriptor provides an overview of the feature, including spatial boundary range, length, width, height, and volume—where the volume is calculated from the minimum bounding cube of point cloud features.
  • The Detailed Descriptor contains the statistical histogram’s peak values calculated using the improved PFH or FPFH method, providing more specific information about the feature.
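As an illustration, one possible in-memory representation of such a case record is sketched below; the field names mirror the structure above, while the Python types, the dataclass form, and the example values are assumptions rather than the paper's actual storage format.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PointCloudCase:
    """One record of the Case-Based Reasoning database (Section 3.4.1)."""
    name: str                                                 # feature name, ID, source file name, etc.
    cls: str                                                  # category, e.g., "street lamp", "tree", "building"
    overall: Dict[str, float] = field(default_factory=dict)   # boundary range, length, width, height, volume
    detailed: List[float] = field(default_factory=list)       # peak values of the improved PFH/FPFH histogram

# Hypothetical example entry with placeholder values:
lamp_case = PointCloudCase(name="S1", cls="street lamp",
                           overall={"volume": 3.2}, detailed=[0.0] * 125)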

3.4.2. Identification Method

To compare the similarity between point cloud features, two types of spatial geometric information, namely the overall descriptor and the detailed descriptor, are used. These descriptors are combined in a similarity calculation model constructed as follows:
$$Similarity_{Case}(i,j) = w_1 \times S_r(Case(i,j)) + w_2 \times S_a(Case(i,j)) \quad (1)$$

where $Similarity_{Case}(i,j)$ is the similarity coefficient of point cloud features $i$ and $j$; $S_r(Case(i,j))$ is the overall similarity coefficient of features $i$ and $j$; $S_a(Case(i,j))$ is the detailed similarity coefficient of features $i$ and $j$; and $w_1$ and $w_2$ are the weight coefficients of the overall and detailed descriptors, respectively. $w_1$ and $w_2$ can be determined according to the type of feature object, and their sum is 1.
The formulas for $S_r(Case(i,j))$ and $S_a(Case(i,j))$ are given in (2) and (3).

$$S_r(Case(i,j)) = 1 - \frac{\left| V_i - V_j \right|}{\max(V_i, V_j)} \quad (2)$$

where $V_i$ and $V_j$ are the volumes of features $i$ and $j$, respectively. In this study, the overall descriptor of a point cloud feature is computed as the volume of its minimum bounding cube; this method is relatively simple and easy to implement.
The correlation coefficient method is used to calculate the detailed similarity coefficient $S_a(Case(i,j))$:

$$S_a(Case(i,j)) = \frac{1}{n-1} \sum_{i=1}^{n} \left( \frac{X_i - \bar{X}}{\sigma_X} \right) \left( \frac{Y_i - \bar{Y}}{\sigma_Y} \right) \quad (3)$$

In Equation (3), $n$ is the number of dimensions over which the peak values of features $i$ and $j$ are compared; $\frac{X_i - \bar{X}}{\sigma_X}$, $\bar{X}$, and $\sigma_X$ are the standard score, sample mean, and sample standard deviation, respectively.
To ensure accurate calculation of the correlation coefficient, dimensions in which both features have a peak value of 0 are discarded, and the set of dimensions with peaks in either of the two cases is selected dynamically to participate in the similarity calculation; retaining dimensions in which most of the peak values are 0 could make the correlation coefficient inaccurate.
The identification process involves establishing a sample library of known feature types and calculating the similarity between the unknown feature and each sample in the library. The resulting similarity values are then sorted, and the feature type with the highest similarity score is assigned to the unknown feature.
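The sketch below ties Equations (1)-(3) and the retrieval step together. The dictionary keys (which could equally be the PointCloudCase fields of Section 3.4.1) and the example weights are placeholders, and the 0.8 similarity threshold anticipates the value used in Section 4.4; none of this is the exact implementation.

import numpy as np

def overall_similarity(vol_i, vol_j):
    """Equation (2): similarity of minimum-bounding-cube volumes."""
    return 1.0 - abs(vol_i - vol_j) / max(vol_i, vol_j)

def detailed_similarity(x, y):
    """Equation (3): correlation of descriptor peak values over the retained dimensions.
    Dimensions in which both descriptors are zero are discarded, as described above."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    keep = ~((x == 0) & (y == 0))
    x, y = x[keep], y[keep]
    n = len(x)
    if n < 2 or x.std(ddof=1) == 0 or y.std(ddof=1) == 0:
        return 0.0
    zx = (x - x.mean()) / x.std(ddof=1)
    zy = (y - y.mean()) / y.std(ddof=1)
    return float(np.sum(zx * zy) / (n - 1))

def case_similarity(case_i, case_j, w1=0.3, w2=0.7):
    """Equation (1): weighted combination of overall and detailed similarity.
    The weights here are placeholders; the paper selects them per feature type."""
    s_r = overall_similarity(case_i["volume"], case_j["volume"])
    s_a = detailed_similarity(case_i["descriptor"], case_j["descriptor"])
    return w1 * s_r + w2 * s_a

def identify(unknown, sample_library, threshold=0.8):
    """Return the class of the most similar labeled sample, or None if no sample
    reaches the similarity threshold (cf. Section 4.4)."""
    best_score, best_cls = max((case_similarity(unknown, s), s["cls"]) for s in sample_library)
    return best_cls if best_score >= threshold else None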

4. Materials and Experiments

4.1. Datasets

The dataset used in this paper consists of the segmentation results of our previous work, which used an algorithm based on the DBSCAN density clustering method [12]. The dataset, acquired with a mobile survey system, covers a study area along a 500 m long street and encompasses trees, street lamps, buildings, and other objects, as depicted in Figure 8.

4.2. Experiments and Analysis

4.2.1. Labeling the Point Cloud Features

We selected simple-shaped objects such as street lamps, complex-shaped objects such as trees, and building facades as experimental objects. Representative entities from each class were carefully chosen and labeled to construct the sample database. In addition, unlabeled objects were selected to form a test database. The sample database comprised 19 trees, 22 street lamps, and 8 buildings, as illustrated in Figure 9, while the test database contained 553 unlabeled objects, as demonstrated in Figure 10.

4.2.2. Point Set Sampling

We performed feature point set sampling using the 3D grid sampling method on the segmented point cloud features to enhance the representativeness of the point set, reduce data redundancy, and remove noise, thereby improving the calculation speed of the feature descriptor. The average distance between points was calculated from the k-nearest-neighbor distances and lies in the range $d \in [0.354, 2.040]$. For point set sampling, we chose a radius of $r = 0.5$ for the 3D grid sampling method. The resulting point set samples for street lamps, trees, and buildings are displayed in Table 1.
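A minimal sketch of this 3D grid sampling step is shown below, assuming the grid cell size equals the radius r = 0.5 used here; in each occupied cell, only the point closest to the cell center is retained.

import numpy as np

def grid_sample(points, cell=0.5):
    """3D grid sampling: keep, in each occupied cell, the point nearest the cell center."""
    idx = np.floor(points / cell).astype(np.int64)   # integer cell index of every point
    centers = (idx + 0.5) * cell                     # geometric center of each point's cell
    dist = np.linalg.norm(points - centers, axis=1)  # distance from each point to its cell center
    keep = {}
    for i, key in enumerate(map(tuple, idx)):
        if key not in keep or dist[i] < dist[keep[key]]:
            keep[key] = i                            # remember the closest point per cell
    return points[sorted(keep.values())]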

4.3. Feature Description

Following point set sampling, a point cloud feature descriptor database is generated by organizing the name, class, overall description, and detailed description of each feature in the sample database. The overall description is obtained by computing the minimum bounding cube of the point cloud feature, while the detailed descriptor is calculated using the improved PFH and FPFH methods. For the detailed description calculation, the radius for normal estimation is set to $r_n = 0.9$, and the nearest-neighbor search radius is set to $r_l = 0.9$. The structure of the feature descriptor database is illustrated in Figure 11.
The PFH and FPFH descriptors for representative features are presented in Table 2.
Upon analyzing the result table, we observed that the street lamp exhibited peak values near the 60th dimension and relatively smaller peaks near dimensions 15, 38, 88, and 115. The tree, on the other hand, had two or three higher peaks close to dimensions 15, 38, and 115 and two higher peaks at dimensions 62 and 88. The building facade demonstrated high peaks in dimensions 15, 38, 62, 88, and 115. This observation indicates that the histogram features of different features exhibit distinct peak positions, values, and numbers. Nevertheless, in building facades 2 and 3, although the peak positions are similar, the number of peaks at each position varies, allowing for distinction between the two.
From Table 2, it is evident that when using the improved FPFH method, the street lamps exhibit higher peaks near dimensions 5, 10, 16, 23, and 30. For a single tree, high peaks are observed near dimensions 2, 4, 10, 16, 21, 29, 30, and 31. Building facade 1 shows high peaks in dimensions 0, 5, 16, 22, and 27, whereas building facade 2 exhibits high peaks in dimensions 0, 5, 16, and 22. Building facade 3 displays high peaks in dimensions 0, 5, 10, 16, 22, and 33. The histograms of different features vary significantly, and buildings of different forms within the same category show both similarities and differences. For instance, the positions and numbers of peaks in the histograms of street lamps, single trees, and building facades are markedly different. While building facades 2 and 3 have similar peak positions, the value of building facade 3 in the 10th dimension is substantially higher than that of the other two building facades. Therefore, different individuals in the same category can be distinguished.

4.4. Feature Identification Results

We calculated the improved PFH and FPFH descriptors for the test database and measured the similarity between the test database and the sample database. When $Similarity_{Case}(i,j) \geq 0.8$, features $i$ and $j$ are labeled as the same class. The identification results are shown in Table 3.
A total of 83 street lamps are identified using the improved PFH method with detailed descriptions. After manual judgment, nine of them are found to be noise or recognition errors, resulting in an accuracy of 89.1%, while the original method has an accuracy of 78.31%. A total of 190 individual trees are identified, of which 167 are correct and the rest are noise or incorrect, resulting in an accuracy of 87.9%, while the original method has an accuracy of 75.26%. A total of 13 buildings are identified, resulting in an accuracy of 84.6%, while the original method has an accuracy of 61.54%; the incorrect recognitions occur when rows of trees are identified as buildings. The average accuracy of the improved PFH method is 87.20%, compared to 71.70% for the original method.
For the improved FPFH method, a total of 71 street lamps are identified. After manual judgment, 13 of them are incorrect and identified as noise, resulting in an accuracy of 81.7%, while the original method has an accuracy of 71.83%. A total of 177 trees are identified, of which 140 are correct, giving an accuracy of 79.1%, while the original method has an accuracy of 70.62%. For buildings, the accuracy of the improved method is 80%, while the original method has an accuracy of 60%. The average accuracy of the improved FPFH method is 80.27%, compared to 67.48% for the original method.
From the identification results, it can be seen that the accuracy of the improved PFH and FPFH methods is higher than that of the original ones. The PFH calculation takes the relationships between adjacent points into account, leading to more detailed descriptions but also requiring more initial computation. The improved method is simpler, resulting in reduced computation while maintaining high recognition accuracy. In practical applications, the choice of detailed description method should be based on the actual situation. The matching results are shown in Figure 12 and Figure 13.

5. Conclusions

Upon comparing the difference in PFH descriptors for 3D basic shapes, such as corners, edges, cones, planes, cylinders, and spheres, we discovered that the PFH method cannot differentiate between shapes with smooth surfaces, such as planes, cylinders, and spheres. To compensate for this deficiency, we improved the PFH method by including the feature-level normal. Our experiments for identifying street lamps, trees, and buildings showed that the identification method, which compares the similarity of PFH or FPFH descriptors and volumes, has the capability to identify point cloud features.
Future work may focus on improving the computation method for similarity, such as using deep learning methods. Additionally, to obtain more accurate results, we may need to increase the number of samples.

Author Contributions

Conceptualization, C.W. and H.Y.; methodology, C.W.; software, X.X.; validation, X.Z., L.L., and W.T.; formal analysis, X.L.; investigation, X.X.; resources, X.Z.; data curation, L.L.; writing—original draft preparation, C.W.; writing—review and editing, C.W. and H.Y.; visualization, C.W.; supervision, W.T.; project administration, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Hainan Province Science and Technology Special Fund under Grant ZDYF2022GXJS228, Haikou Science and Technology Plan Project under Grant 2022-015, and Key Laboratory of Ocean Geomatics, Ministry of Natural Resources, China, under Grant 2021A02.

Data Availability Statement

The data can be shared upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Akel, A.; Kremeike, K.; Filin, S.; Sester, M.; Doytsher, Y. Dense DTM generalization aided by roads extracted from LiDAR data. ISPRS WG III/3 III 2005, 4, 54–59. [Google Scholar]
  2. Popescu, S.C.; Wynne, R.H. Seeing the trees in the forest: Using lidar and multispectral data fusion with local filtering and variable window size for estimating tree height. Photogramm. Eng. Remote Sens. 2004, 70, 589–604. [Google Scholar] [CrossRef]
  3. Bortolot, Z.J.; Wynne, R.H. Estimating forest biomass using small footprint LiDAR data: An individual tree-based approach that incorporates training data. ISPRS J. Photogramm. Remote Sens. 2005, 59, 342–360. [Google Scholar] [CrossRef]
  4. Hollaus, M.; Wagner, W.; Eberhöfer, C.; Karel, W. Accuracy of large-scale canopy heights derived from LiDAR data under operational constraints in a complex alpine environment. ISPRS J. Photogramm. Remote Sens. 2006, 60, 323–338. [Google Scholar] [CrossRef]
  5. Brzank, A.; Heipke, C. Classification of lidar data into water and land points in coastal areas. Int. Arch. Photogramm. Remote Sens. 2006, 36, 197–202. [Google Scholar]
  6. Axelsson, P. Processing of laser scanner data—Algorithms and applications. ISPRS J. Photogramm. Remote Sens. 1999, 54, 138–147. [Google Scholar] [CrossRef]
  7. Murakami, H.; Nakagawa, K.; Hasegawa, H.; Shibata, T.; Iwanami, E. Change detection of buildings using an airborne laser scanner. ISPRS J. Photogramm. Remote Sens. 1999, 54, 148–152. [Google Scholar] [CrossRef]
  8. Gomes Pereira, L.; Janssen, L. Suitability of laser data for DTM generation: A case study in the context of road planning and design. ISPRS J. Photogramm. Remote Sens. 1999, 54, 244–253. [Google Scholar] [CrossRef]
  9. Clode, S.; Rottensteiner, F.; Kootsookos, P.J.; Zelniker, E.E. Detection and vectorisation of roads from lidar data. Photogramm. Eng. Remote Sens. 2007, 73, 517–535. [Google Scholar] [CrossRef]
  10. García, F.; Jiménez, F.; Naranjo, J.E.; Zato, J.G.; Aparicio, F.; Armingol, J.M.; de la Escalera, A. Environment perception based on LIDAR sensors for real road applications. Robotica 2011, 30, 185–193. [Google Scholar] [CrossRef]
  11. Yan, Z.; Wang, H.; Ning, Q.; Lu, Y. Robust Image Matching Based on Image Feature and Depth Information Fusion. Machines 2022, 10, 456. [Google Scholar] [CrossRef]
  12. Wang, C.; Ji, M.; Wang, J.; Wen, W.; Li, T.; Sun, Y. An improved DBSCAN method for LiDAR data segmentation with automatic Eps estimation. Sensors 2019, 19, 172. [Google Scholar] [CrossRef]
  13. Hoffman, R.; Jain, A.K. Segmentation and classification of range images. IEEE Trans. Pattern Anal. Mach. Intell. 1987, 5, 608–620. [Google Scholar] [CrossRef] [PubMed]
  14. Grilli, E.; Menna, F.; Remondino, F. A review of point clouds segmentation and classification algorithms. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 339. [Google Scholar] [CrossRef]
  15. Tombari, F.; Salti, S.; Stefano, L.d. Unique Signatures of Histograms for Local Surface Description. In Proceedings of the European Conference on Computer Vision, Crete, Greece, 5–11 September 2010. [Google Scholar]
  16. Tombari, F. How Does a Good Feature Look Like? In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013. [Google Scholar]
  17. Salti, S.; Tombari, F.; Di Stefano, L. SHOT: Unique signatures of histograms for surface and texture description. Comput. Vis. Image Underst. 2014, 125, 251–264. [Google Scholar] [CrossRef]
  18. Frome, A.; Huber, D.; Kolluri, R.; Bülow, T.; Malik, J. Recognizing objects in range data using regional point descriptors. In Computer Vision-ECCV 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 224–237. [Google Scholar]
  19. Rusu, R.; Marton, Z.; Blodow, N.; Beetz, M. Learning Informative Point Classes for the Acquisition of Object Model Maps. In Proceedings of the 2008 10th International Conference on Control, Automation, Robotics and Vision, Hanoi, Vietnam, 17–20 December 2008; pp. 643–650. [Google Scholar]
  20. Rusu, R.B. Semantic 3D object maps for everyday manipulation in human living environments. KI-Künstliche Intell. 2010, 24, 345–348. [Google Scholar] [CrossRef]
  21. Rusu, R.B.; Blodow, N.; Marton, Z.C.; Beetz, M. Aligning point cloud views using persistent feature histograms. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 3384–3391. [Google Scholar]
  22. Rusu, R.B.; Blodow, N.; Beetz, M. Fast point feature histograms (fpfh) for 3D registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009. [Google Scholar]
  23. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  24. Ni, D.; Chui, Y.P.; Qu, Y.; Yang, X.; Qin, J.; Wong, T.-T.; Ho, S.S.; Heng, P.A. Reconstruction of volumetric ultrasound panorama based on improved 3D SIFT. Comput. Med. Imaging Graph. 2009, 33, 559–566. [Google Scholar] [CrossRef] [PubMed]
  25. Flitton, G.T.; Breckon, T.P.; Bouallagu, N.M. Object Recognition using 3D SIFT in Complex CT Volumes. In Proceedings of the British Machine Vision Conference, Aberystwyth, UK, 31 August–3 September 2010. [Google Scholar]
  26. Steder, B.; Rusu, R.B.; Konolige, K.; Burgard, W. Point feature extraction on 3D range scans taking into account object boundaries. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011. [Google Scholar]
  27. Högman, V. Building a 3D Map from RGB-D Sensors. Master’s Thesis, Computer Vision and Active Perception Laboratory Royal Institute of Technology (KTH), Stockholm, Sweden, 2012. [Google Scholar]
  28. Johnson, A.E.; Hebert, M. Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 433–449. [Google Scholar] [CrossRef]
  29. Rodríguez-Garlito, E.C.; Paz-Gallardo, A.; Plaza, A. Monitoring the Spatiotemporal Distribution of Invasive Aquatic Plants in the Guadiana River, Spain. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 228–241. [Google Scholar] [CrossRef]
  30. Zheng, Y.; Peng, J.; Chen, X.; Huang, C.; Chen, P.; Li, S.; Su, Y. Spatial and Temporal Evolution of Ground Subsidence in the Beijing Plain Area Using Long Time Series Interferometry. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 153–165. [Google Scholar] [CrossRef]
  31. Cheng, G.; Xie, X.; Han, J.; Guo, L.; Xia, G.S. Remote Sensing Image Scene Classification Meets Deep Learning: Challenges, Methods, Benchmarks, and Opportunities. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 3735–3756. [Google Scholar] [CrossRef]
  32. Zhao, L.; Ji, S. CNN, RNN, or ViT? An Evaluation of Different Deep Learning Architectures for Spatio-Temporal Representation of Sentinel Time Series. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 44–56. [Google Scholar] [CrossRef]
  33. Li, J.; Sun, Q.; Chen, K.; Cui, H.; Huangfu, K.; Chen, X. 3D large-scale point cloud semantic segmentation using optimal feature description vector network: OFDV-Net. IEEE Access 2020, 8, 226285–226296. [Google Scholar] [CrossRef]
  34. Du, X.; He, S.; Yang, H.; Wang, C. Multi-Field Context Fusion Network for Semantic Segmentation of High-Spatial-Resolution Remote Sensing Images. Remote Sens. 2022, 14, 5830. [Google Scholar] [CrossRef]
  35. Anguelov, D.; Taskarf, B.; Chatalbashev, V.; Koller, D.; Gupta, D.; Heitz, G.; Ng, A. Discriminative learning of markov random fields for segmentation of 3D scan data. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005. [Google Scholar]
  36. Triebel, R.; Kersting, K.; Burgard, W. Robust 3D scan point classification using associative Markov networks. In Proceedings of the Robotics and Automation, 2006. ICRA 2006, Orlando, FL, USA, 15–19 May 2006. [Google Scholar]
  37. Munoz, D.; Bagnell, J.A.; Vandapel, N.; Hebert, M. Contextual classification with functional max-margin markov networks. In Proceedings of the Computer Vision and Pattern Recognition, 2009. CVPR 2009, Miami, FL, USA, 20–25 June 2009. [Google Scholar]
  38. Munoz, D.; Vandapel, N.; Hebert, M. Directional Associative Markov Network for 3-D Point Cloud Classification; Carnegie Mellon University: Pittsburgh, PA, USA, 2008. [Google Scholar]
  39. Munoz, D.; Vandapel, N.; Hebert, M. Onboard contextual classification of 3-D point clouds with learned high-order markov random fields. In Proceedings of the Robotics and Automation, 2009. ICRA’09, Kobe, Japan, 12–17 May 2009. [Google Scholar]
  40. Charles, R.Q.; Su, H.; Kaichun, M.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  41. Alexandre, L.A. 3D descriptors for object and category recognition: A comparative evaluation. In Workshop on Color-Depth Camera Fusion in Robotics at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Portugal; Citeseer: University Park, PA, USA, 2012. [Google Scholar]
  42. Singh, P.; Bose, S.S. A quantum-clustering optimization method for COVID-19 CT scan image segmentation. Expert Syst. Appl. 2021, 185, 115637. [Google Scholar] [CrossRef]
  43. Singh, P. A type-2 neutrosophic-entropy-fusion based multiple thresholding method for the brain tumor tissue structures segmentation. Appl. Soft Comput. 2021, 103, 107119. [Google Scholar] [CrossRef]
  44. Liao, L.; Tang, S.; Liao, J.; Li, X.; Wang, W.; Li, Y.; Guo, R. A Supervoxel-Based Random Forest Method for Robust and Effective Airborne LiDAR Point Cloud Classification. Remote Sens. 2022, 14, 1516. [Google Scholar] [CrossRef]
  45. Point Cloud Library. Point Feature Histograms (PFH) Descriptors. 2023. Available online: https://pcl.readthedocs.io/projects/tutorials/en/master/pfh_estimation.html (accessed on 30 April 2023).
  46. Point Cloud Library. Fast Point Feature Histograms (FPFH) Descriptors. 2023. Available online: https://pcl.readthedocs.io/projects/tutorials/en/master/fpfh_estimation.html (accessed on 30 April 2023).
Figure 1. The technical approach of this paper.
Figure 2. PFH influence region [45].
Figure 3. Fast PFH influence region [46].
Figure 4. The basic shape of the point cloud features.
Figure 5. The PFH description of basic point cloud features.
Figure 6. Computation of the angular relationship between the normal of a point P_s in the point cloud and the normal at the point cloud's middle point.
Figure 7. The improved PFH description of basic point cloud features.
Figure 8. Segmentation results for experiments.
Figure 9. Sample database (partial). (a) Tree; (b) street lamp; (c) building.
Figure 10. Test database (partial).
Figure 11. Feature descriptor database (PFH). The category items in the picture from left to right are Class, Name, Overall-Descriptor, and Detailed-Descriptor.
Figure 12. The identification results of streetlights by histograms.
Figure 13. The identification results of single trees by histograms.
Table 1. Point set sampling. (Image table: the original and sampled point sets are shown for a street lamp, a tree, and a building.)
Table 2. PFH description of partial samples. (Image table: the improved PFH and improved FPFH descriptor histograms are shown for samples S1–S3 (street lamps), T1–T3 (trees), and B1–B3 (building facades).)
Table 3. Identification results.

Descriptor | Class       | Total | Original Method Correct | Original Method Accuracy | Improved Method Correct | Improved Method Accuracy
PFH        | Street lamp | 83    | 65  | 78.31% | 74  | 89.10%
PFH        | Tree        | 190   | 143 | 75.26% | 167 | 87.90%
PFH        | Building    | 13    | 8   | 61.54% | 10  | 84.60%
PFH        | Average     | -     | -   | 71.70% | -   | 87.20%
FPFH       | Street lamp | 71    | 51  | 71.83% | 58  | 81.70%
FPFH       | Tree        | 177   | 125 | 70.62% | 140 | 79.10%
FPFH       | Building    | 10    | 6   | 60.00% | 8   | 80.00%
FPFH       | Average     | -     | -   | 67.48% | -   | 80.27%

