
Machine Learning for LiDAR Point Cloud Analysis

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 March 2023) | Viewed by 13420

Special Issue Editors

Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong, China
Interests: LiDAR; 3D scene perception and analysis; environmental remote sensing; sensor fusion

National Key Laboratory of Science and Technology on Multi-spectral Information Processing, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
Interests: computer vision; robot target recognition; 3D reconstruction of large scene; machine learning
Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
Interests: LiDAR remote sensing; forest inventory; point cloud processing

Special Issue Information

Dear Colleagues,

LiDAR, as an active remote sensing technology, can automatically and rapidly capture the 3D world in the form of point clouds. Recent developments in LiDAR sensors and platforms (including satellite, aerial, UAV, vehicle-borne, backpack, handheld, and static terrestrial) have greatly promoted the application of LiDAR in fields such as 3D real-scene modeling, digital twins, agriculture and forestry monitoring, AR, autonomous driving, powerline inspection, and remote sensing archaeology. LiDAR point cloud analysis and processing, including point cloud registration and mapping, filtering, segmentation and classification, 3D modeling, and visualization, is a fundamental prerequisite for rigorously applying LiDAR point clouds in these fields. Diverse data-driven, model-driven, and hybrid algorithms have been developed to analyze and explore LiDAR point clouds. The latest techniques in machine learning and deep learning have enabled us to extract semantic information from LiDAR point clouds more intelligently and effectively, further expanding their application scope.

This Special Issue invites contributions that focus on LiDAR point cloud analysis using machine learning (including deep learning) techniques. We are particularly interested in original papers that present innovative techniques and algorithms for generating, handling, and analyzing LiDAR point clouds; address challenges in dealing with point cloud data in emerging remote sensing applications; or open up new applications for LiDAR point clouds. We also look forward to new algorithms, techniques, and applications across the many fields in which LiDAR point clouds are used.

Dr. Wei Yao
Prof. Dr. Wenbing Tao
Dr. Jie Shao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • LiDAR point cloud acquisition from various platforms
  • machine learning (or deep learning) for LiDAR point cloud processing
  • point cloud registration and filtering, fusion of multisource LiDAR point clouds
  • feature extraction, object detection, semantic labeling, and change detection
  • indoor modeling, BIM, and semantic urban GeoBIM from LiDAR point clouds
  • 3D real scene, digital twins from LiDAR point clouds
  • object classification and recognition from LiDAR point clouds
  • industrial applications with large-scale aerial LiDAR point clouds
  • high-definition map construction from mobile LiDAR point clouds for autonomous driving
  • agriculture and forestry monitoring based on LiDAR point clouds

Published Papers (7 papers)


Research


23 pages, 18889 KiB  
Article
Uncertainty Modelling of Laser Scanning Point Clouds Using Machine-Learning Methods
by Jan Hartmann and Hamza Alkhatib
Remote Sens. 2023, 15(9), 2349; https://doi.org/10.3390/rs15092349 - 29 Apr 2023
Cited by 1 | Viewed by 1504
Abstract
Terrestrial laser scanners (TLSs) are a standard method for 3D point cloud acquisition due to their high data rates and resolutions. In certain applications, such as deformation analysis, modelling uncertainties in the 3D point cloud is crucial. This study models the systematic deviations in laser scan distance measurements as a function of various influencing factors using machine-learning methods. A reference point cloud is recorded using a laser tracker (Leica AT 960) and a handheld scanner (Leica LAS-XL) to investigate the uncertainties of the Z+F Imager 5016 in laboratory conditions. From 49 TLS scans, a wide range of data are obtained, covering various influencing factors. The processes of data preparation, feature engineering, validation, regression, prediction, and result analysis are presented. The results of traditional machine-learning methods (multiple linear and nonlinear regression) are compared with eXtreme gradient boosted trees (XGBoost). Thereby, it is demonstrated that it is possible to model the systematic deviations of the distance measurement with a coefficient of determination of 0.73, making it possible to calibrate the distance measurement to improve the laser scan measurement. An independent TLS scan is used to demonstrate the calibration results. Full article
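The calibration idea summarized in the abstract (regress systematic distance deviations on influencing factors, then subtract the prediction) can be sketched with a multiple linear regression on synthetic data. The factors, coefficients, and noise level below are invented for illustration and are not the authors' data:

```python
import numpy as np

# Illustrative sketch only: synthetic influencing factors and an assumed
# deviation model, showing the general regress-then-calibrate workflow.
rng = np.random.default_rng(0)
n = 1000
scan_range = rng.uniform(1.0, 50.0, n)         # measured range [m] (synthetic)
incidence = rng.uniform(0.0, 1.2, n)           # incidence angle [rad] (synthetic)
true_dev = 0.0005 * scan_range + 0.002 * incidence  # assumed systematic deviation [m]
dev = true_dev + rng.normal(0.0, 0.0002, n)    # observed deviation with noise

X = np.column_stack([np.ones(n), scan_range, incidence])  # design matrix
coef, *_ = np.linalg.lstsq(X, dev, rcond=None)            # least-squares fit

pred = X @ coef
r2 = 1.0 - np.sum((dev - pred) ** 2) / np.sum((dev - dev.mean()) ** 2)
calibrated = dev - pred    # residual deviation after calibration
```

The paper additionally compares such traditional regressors against XGBoost, which can capture nonlinear factor interactions that a linear design matrix cannot.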
(This article belongs to the Special Issue Machine Learning for LiDAR Point Cloud Analysis)

22 pages, 19830 KiB  
Article
D-Net: A Density-Based Convolutional Neural Network for Mobile LiDAR Point Clouds Classification in Urban Areas
by Mahdiye Zaboli, Heidar Rastiveis, Benyamin Hosseiny, Danesh Shokri, Wayne A. Sarasua and Saeid Homayouni
Remote Sens. 2023, 15(9), 2317; https://doi.org/10.3390/rs15092317 - 27 Apr 2023
Cited by 2 | Viewed by 1558
Abstract
The 3D semantic segmentation of a LiDAR point cloud is essential for various complex infrastructure analyses such as roadway monitoring, digital twin, or even smart city development. Different geometric and radiometric descriptors or diverse combinations of point descriptors can extract objects from LiDAR data through classification. However, the irregular structure of the point cloud poses a typical descriptor learning problem: how should each point and its surroundings be considered in an appropriate structure for descriptor extraction? In recent years, convolutional neural networks (CNNs) have received much attention for automatic segmentation and classification. Previous studies demonstrated deep learning models’ high potential and robust performance for classifying complicated point clouds, as well as their permutation invariance. Nevertheless, such algorithms still extract descriptors from independent points without investigating the deep descriptor relationship between the center point and its neighbors. This paper proposes a robust and efficient CNN-based framework named D-Net for automatically classifying a mobile laser scanning (MLS) point cloud in urban areas. Initially, the point cloud is converted into a regular voxelized structure during a preprocessing step. This helps to overcome the challenge of irregularity and inhomogeneity. A density value is assigned to each voxel that describes the point distribution within the voxel’s location. Then, by training the designed CNN classifier, each point will receive the label of its corresponding voxel. The performance of the proposed D-Net method was tested using a point cloud dataset in an urban area. Our results demonstrated a relatively high level of performance with an overall accuracy (OA) of about 98% and precision, recall, and F1 scores of over 92%. Full article
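The voxelization step described in the abstract (a regular grid where each voxel carries a density value) can be sketched as follows. This is an illustrative reconstruction, not the authors' D-Net code, and the voxel size is an assumed parameter:

```python
import numpy as np

def voxel_density(points, voxel_size=0.5):
    """Assign each point the occupancy count of its voxel.

    Illustrative preprocessing sketch inspired by the abstract, not the
    authors' D-Net implementation; voxel_size is an assumed parameter.
    """
    # Discretize coordinates into integer voxel indices.
    idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(np.int64)
    # Group points by voxel and count occupants per voxel.
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    return counts[inverse.ravel()]   # density value per point

pts = np.array([[0.0, 0.0, 0.0],
                [0.1, 0.1, 0.0],    # falls in the same voxel as the first point
                [1.0, 1.0, 1.0]])
dens = voxel_density(pts)
```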
(This article belongs to the Special Issue Machine Learning for LiDAR Point Cloud Analysis)

17 pages, 4489 KiB  
Article
MSSF: A Novel Mutual Structure Shift Feature for Removing Incorrect Keypoint Correspondences between Images
by Juan Liu, Kun Sun, San Jiang, Kunqian Li and Wenbing Tao
Remote Sens. 2023, 15(4), 926; https://doi.org/10.3390/rs15040926 - 08 Feb 2023
Cited by 4 | Viewed by 1516
Abstract
Removing incorrect keypoint correspondences between two images is a fundamental yet challenging task in computer vision. A popular pipeline first computes a feature vector for each correspondence and then trains a binary classifier using these features. In this paper, we propose a novel robust feature to better fulfill the above task. The basic observation is that the relative order of neighboring points around a correct match should be consistent from one view to another, while it may change a lot for an incorrect match. To this end, the feature is designed to measure the bidirectional relative ranking difference for the neighbors of a reference correspondence. To reduce the negative effect of incorrect correspondences in the neighborhood when computing the feature, we propose to combine spatially nearest neighbors with geometrically “good” neighbors. We also design an iterative neighbor weighting strategy, which considers both goodness and correctness of a correspondence, to enhance correct correspondences and suppress incorrect correspondences. As the relative order of neighbors encodes structure information between them, we name the proposed feature the Mutual Structure Shift Feature (MSSF). Finally, we use the proposed features to train a random forest classifier in a supervised manner. Extensive experiments on both raw matching quality and downstream tasks are conducted to verify the performance of the proposed method. Full article
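The core observation behind MSSF, that the neighbour ordering around a correct match stays stable across views, can be illustrated with a toy rank-shift measure. This is a simplified one-directional sketch, not the full bidirectional feature or the iterative neighbor weighting described in the paper:

```python
import numpy as np

def rank_shift(i, pts1, pts2, k=4):
    """Toy sketch of the observation behind MSSF, not the full feature:
    for correspondence i, rank its k nearest neighbours by distance in
    each view and measure how much the ranking changes. A correct match
    keeps the neighbour order stable; an incorrect one shuffles it."""
    d1 = np.linalg.norm(pts1 - pts1[i], axis=1)
    d2 = np.linalg.norm(pts2 - pts2[i], axis=1)
    nbrs = np.argsort(d1)[1:k + 1]          # k nearest in view 1 (skip self)
    r1 = np.argsort(np.argsort(d1[nbrs]))   # neighbour ranks in view 1
    r2 = np.argsort(np.argsort(d2[nbrs]))   # ranks of the same points in view 2
    return np.abs(r1 - r2).mean()           # average ranking shift
```

In the paper, per-correspondence features of this kind are stacked and fed to a random forest classifier that separates inliers from outliers.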
(This article belongs to the Special Issue Machine Learning for LiDAR Point Cloud Analysis)

22 pages, 28258 KiB  
Article
Multiscale Feature Fusion for the Multistage Denoising of Airborne Single Photon LiDAR
by Shuming Si, Han Hu, Yulin Ding, Xuekun Yuan, Ying Jiang, Yigao Jin, Xuming Ge, Yeting Zhang, Jie Chen and Xiaocui Guo
Remote Sens. 2023, 15(1), 269; https://doi.org/10.3390/rs15010269 - 02 Jan 2023
Cited by 2 | Viewed by 2364
Abstract
Compared with the existing modes of LiDAR, single-photon LiDAR (SPL) can acquire terrain data more efficiently. However, influenced by the photon-sensitive detectors, the collected point cloud data contain a large number of noisy points. Most of the existing denoising techniques are based on the sparsity assumption of point cloud noise, which does not hold for SPL point clouds, so the existing denoising methods cannot effectively remove the noisy points from SPL point clouds. To solve the above problems, we proposed a novel multistage denoising strategy with fused multiscale features. The multiscale features were fused to enrich contextual information of the point cloud at different scales. In addition, we utilized multistage denoising to solve the problem that a single-round denoising could not effectively remove enough noise points in some areas. Interestingly, the multiscale features also prevent an increase in false-alarm ratio during multistage denoising. The experimental results indicate that the proposed denoising approach achieved 97.58%, 99.59%, 95.70%, and 77.92% F1-scores in the urban, suburban, mountain, and water areas, respectively, and it outperformed the existing denoising methods such as Statistical Outlier Removal. The proposed approach significantly improved the denoising precision of airborne point clouds from single-photon LiDAR, especially in water areas and dense urban areas. Full article
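For reference, the Statistical Outlier Removal baseline named in the abstract can be sketched as follows (a brute-force k-NN variant with illustrative parameter values, not the paper's proposed method):

```python
import numpy as np

def statistical_outlier_removal(points, k=8, std_ratio=2.0):
    """Brute-force sketch of Statistical Outlier Removal; k and
    std_ratio are illustrative defaults. A point is kept if its mean
    distance to its k nearest neighbours lies within
    mean + std_ratio * std of that statistic over the whole cloud."""
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    dist.sort(axis=1)                          # column 0 is the self-distance
    knn_mean = dist[:, 1:k + 1].mean(axis=1)   # mean distance to k neighbours
    threshold = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= threshold]
```

Note that this filter rests on the very sparsity assumption that, per the abstract, does not hold for SPL noise, which is the limitation the multistage, multiscale approach is designed to overcome.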
(This article belongs to the Special Issue Machine Learning for LiDAR Point Cloud Analysis)

25 pages, 42407 KiB  
Article
Power Pylon Reconstruction from Airborne LiDAR Data Based on Component Segmentation and Model Matching
by Yiya Qiao, Xiaohuan Xi, Sheng Nie, Pu Wang, Hao Guo and Cheng Wang
Remote Sens. 2022, 14(19), 4905; https://doi.org/10.3390/rs14194905 - 30 Sep 2022
Cited by 1 | Viewed by 1864
Abstract
In recent years, with the rapid growth of State Grid digitization, it has become necessary to perform three-dimensional (3D) reconstruction of power elements with high efficiency and precision to achieve full coverage when simulating important transmission lines. Limited by the performance of acquisition equipment and the environment, the actual scanned point cloud usually has problems such as noise interference and data loss, presenting a great challenge for 3D reconstruction. This study proposes a model-driven 3D reconstruction method based on Airborne LiDAR point cloud data. Firstly, power pylon redirection is realized based on the Principal Component Analysis (PCA) algorithm. Secondly, the vertical and horizontal distribution characteristics of the power pylon point cloud and the graphical characteristics of the overall two-dimensional (2D) orthographic projection are analyzed to determine segmentation positions and the key segmentation position of the power pylon. The 2D alpha shape algorithm is adopted to obtain the pylon body contour points, and then the pylon feature points are extracted and corrected. Based on feature points, the components of original pylon and model pylon are registered, and the distance between the original point cloud and the model point cloud is calculated at the same time. Finally, the model with the highest matching degree is regarded as the reconstructed model of the pylon. The main advantages of the proposed method include: (1) identifying the key segmentation position according to the graphical characteristics; (2) for some pylons with much missing data, the complete model can be accurately reconstructed. The average RMSE (Root-Mean-Square Error) of all power pylon components in this study was 15.4 cm. The experimental results reveal that the effects of power pylon structure segmentation and reconstruction are satisfactory, which provides method and model support for digital management and security analysis of transmission lines. 
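The PCA-based redirection in the first step can be sketched as follows; this is a generic PCA alignment shown for illustration, and the paper's actual redirection procedure may differ in detail:

```python
import numpy as np

def pca_redirect(points):
    """Generic PCA alignment (illustrative; the paper's redirection may
    differ): rotate the cloud so its principal axes, ordered by
    decreasing variance, align with the coordinate axes."""
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    order = np.argsort(eigvals)[::-1]     # largest variance first
    return centered @ eigvecs[:, order]   # project onto principal axes
```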
(This article belongs to the Special Issue Machine Learning for LiDAR Point Cloud Analysis)

Other


17 pages, 2628 KiB  
Technical Note
Geometric Prior-Guided Self-Supervised Learning for Multi-View Stereo
by Liman Liu, Fenghao Zhang, Wanjuan Su, Yuhang Qi and Wenbing Tao
Remote Sens. 2023, 15(8), 2109; https://doi.org/10.3390/rs15082109 - 17 Apr 2023
Viewed by 1495
Abstract
Recently, self-supervised multi-view stereo (MVS) methods, which are dependent primarily on optimizing networks using photometric consistency, have made clear progress. However, the difference in lighting between different views and reflective objects in the scene can make photometric consistency unreliable. To address this issue, a geometric prior-guided multi-view stereo (GP-MVS) for self-supervised learning is proposed, which exploits the geometric prior from the input data to obtain high-quality depth pseudo-labels. Specifically, two types of pseudo-labels for self-supervised MVS are proposed, based on the structure-from-motion (SfM) and traditional MVS methods. One converts the sparse points of SfM into sparse depth maps and combines the depth maps with spatial smoothness constraints to obtain a sparse prior loss. The other generates initial depth maps for semi-dense depth pseudo-labels using the traditional MVS, and applies a geometric consistency check to filter the wrong depth in the initial depth maps. We conducted extensive experiments on the DTU and Tanks and Temples datasets, which demonstrate that our method achieves state-of-the-art performance compared to existing unsupervised/self-supervised approaches, and even performs on par with traditional and supervised approaches. Full article
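The first pseudo-label idea, converting sparse SfM points into sparse depth maps, can be sketched as below. The projection assumes points already expressed in the camera frame, and all names and parameters are illustrative rather than taken from GP-MVS:

```python
import numpy as np

def sparse_depth_map(points_cam, K, hw):
    """Illustrative sketch (not GP-MVS code): splat SfM points, assumed
    already in the camera frame, through intrinsics K into a sparse
    depth map; pixels with no projected point stay 0."""
    h, w = hw
    depth = np.zeros((h, w))
    z = points_cam[:, 2]
    uv = (K @ points_cam.T).T                     # homogeneous pixel coords
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    # Keep points in front of the camera that land inside the image.
    ok = (z > 0) & (0 <= u) & (u < w) & (0 <= v) & (v < h)
    depth[v[ok], u[ok]] = z[ok]
    return depth
```

In the paper, such sparse labels are combined with spatial smoothness constraints to form a sparse prior loss for the self-supervised network.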
(This article belongs to the Special Issue Machine Learning for LiDAR Point Cloud Analysis)

14 pages, 754 KiB  
Technical Note
MSPR-Net: A Multi-Scale Features Based Point Cloud Registration Network
by Jinjin Yu, Fenghao Zhang, Zhi Chen and Liman Liu
Remote Sens. 2022, 14(19), 4874; https://doi.org/10.3390/rs14194874 - 29 Sep 2022
Cited by 2 | Viewed by 1866
Abstract
Point-cloud registration is a fundamental task in computer vision. However, most point clouds are partially overlapping, corrupted by noise and composed of indistinguishable surfaces, especially for complexly distributed outdoor LiDAR point clouds, which makes registration challenging. In this paper, we propose a multi-scale features-based point cloud registration network named MSPR-Net for large-scale outdoor LiDAR point cloud registration. The main motivation of the proposed MSPR-Net is that the features of two keypoints from a true correspondence must match at different scales. From this point of view, we first utilize a multi-scale backbone to extract the multi-scale features of the keypoints. Next, we propose a bilateral outlier removal strategy to remove the potential outliers in the keypoints based on the multi-scale features. Finally, a coarse-to-fine registration scheme is applied to exploit information in both feature and spatial space. Extensive experiments conducted on two large-scale outdoor LiDAR point cloud datasets demonstrate that MSPR-Net achieves state-of-the-art performance. Full article
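Once outliers are removed, correspondence-based pipelines like this typically estimate the rigid transform with an SVD-based fit (the Kabsch algorithm). The sketch below shows that generic building block, not MSPR-Net itself:

```python
import numpy as np

def kabsch(src, dst):
    """SVD-based rigid fit of matched points (the Kabsch algorithm), a
    generic building block of correspondence-based registration; shown
    for illustration, it is not MSPR-Net itself."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # cross-covariance of matches
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs                            # so that dst ≈ src @ R.T + t
    return R, t
```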
(This article belongs to the Special Issue Machine Learning for LiDAR Point Cloud Analysis)
