
Advances in the Application of Lidar

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: 20 June 2024 | Viewed by 5848

Special Issue Editors

Department of Urban and Regional Planning, University of Illinois at Urbana Champaign, Champaign, IL, USA
Interests: landscape mapping; object-based image analysis using LiDAR; machine learning algorithms

Special Issue Information

Dear Colleagues,

LiDAR (light detection and ranging; also written LIDAR, LiDAR, or LADAR) has advanced rapidly since the 1960s. It is an active remote sensing technology that measures distance from the speed of light and the time a laser pulse takes to travel between the sensor and the target object. The most common information provided by LiDAR is the elevation and structural profile of the terrain surface. Applications of LiDAR include flood risk assessment, terrain modeling, and tree height measurement, among others. There are also advanced approaches that fuse LiDAR data with high spatial and spectral resolution imagery for ground cover classification and target recognition. In the last few decades, remote sensing technology has evolved dramatically, with better-quality LiDAR products as well as the rapid development of machine and deep learning algorithms. These advances lead the remote sensing community to further explore new applications of LiDAR data.
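As a concrete illustration of the time-of-flight principle described above, the short Python sketch below converts a measured round-trip pulse time into a one-way range; the function name and example value are ours, for illustration only, and are not tied to any particular sensor API.

```python
# Minimal sketch of pulsed-LiDAR ranging: the laser pulse travels to the
# target and back, so the one-way range is c * t / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_round_trip_time(t_seconds: float) -> float:
    """Return the sensor-to-target range in metres for a round-trip pulse time."""
    return C * t_seconds / 2.0

# Example: an echo received 6.67 microseconds after emission corresponds to a
# target roughly 1 km away.
print(range_from_round_trip_time(6.67e-6))  # ~999.8 m
```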

This Special Issue invites studies that provide insights into state-of-the-art LiDAR remote sensing and its applications at local, regional, or global scales. Topics may range from advances in the physical principles or data processing of LiDAR to applications in agriculture and forestry, urban environments, change detection, and the geosciences. In addition, multisource data fusion with LiDAR, classification algorithm development, and accuracy assessment are all welcome.

Suggested themes and article types for submissions include:

  • LiDAR data fusion technologies and algorithm development
  • LiDAR applications in forestry: e.g., individual tree mapping, canopy mapping, etc.
  • LiDAR applications in agriculture: e.g., crop planning, yield forecasting, etc.
  • LiDAR applications in urban areas: e.g., road extraction, building extraction, etc.
  • LiDAR applications in disaster management: e.g., post-flooding mapping, etc.
  • LiDAR applications in geoscience: e.g., surface hydrology, fluvial landforms, etc.

Dr. Fang Fang
Dr. Yaqian He
Dr. Qinghua Xie
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • LiDAR remote sensing
  • change detection
  • remote sensing
  • classification
  • machine learning
  • LiDAR in forestry

Published Papers (5 papers)


Research

25 pages, 15955 KiB  
Article
Combining Cylindrical Voxel and Mask R-CNN for Automatic Detection of Water Leakages in Shield Tunnel Point Clouds
by Qiong Chen, Zhizhong Kang, Zhen Cao, Xiaowei Xie, Bowen Guan, Yuxi Pan and Jia Chang
Remote Sens. 2024, 16(5), 896; https://doi.org/10.3390/rs16050896 - 03 Mar 2024
Viewed by 667
Abstract
Water leakages can affect the safety and durability of shield tunnels, so rapid and accurate identification and diagnosis are urgently needed. However, current leakage detection methods are mostly based on mobile LiDAR data, making it challenging to detect leakage damage in both mobile and terrestrial LiDAR data simultaneously, and the detection results are not intuitive. Therefore, an integrated cylindrical voxel and Mask R-CNN method for water leakage inspection is presented in this paper. This method includes the following three steps: (1) a 3D cylindrical-voxel data organization structure is constructed to transform the tunnel point cloud from disordered to ordered and achieve the projection of the 3D point cloud to a 2D image; (2) automated leakage segmentation and localization is carried out via Mask R-CNN; (3) the segmentation results of water leakage are mapped back to the 3D point cloud based on the cylindrical-voxel structure of the shield tunnel point cloud, achieving the expression of water leakage damage in 3D space. The proposed approach can efficiently detect water leakage not only in mobile laser point cloud data but also in terrestrial laser point cloud data, especially in processing the curved parts of the tunnel. Additionally, it achieves the visualization of water leakage in shield tunnels in 3D space, making the water leakage results more intuitive. Experimental validation is conducted based on MLS and TLS point cloud data collected in Nanjing and Suzhou, respectively. Compared with the current commonly used detection method, which combines cylindrical projection and Mask R-CNN, the proposed method can achieve water leakage detection and 3D visualization in different tunnel scenarios, and its water leakage detection accuracy is improved by nearly 10%.
(This article belongs to the Special Issue Advances in the Application of Lidar)
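
The cylindrical unwrapping described in this abstract can be sketched in a few lines; the code below is a generic illustration under our own assumptions (tunnel axis roughly along x, a fixed image resolution, last-echo-wins rasterization), not the authors' implementation.

```python
import numpy as np

def unwrap_tunnel_to_image(points, intensity, n_theta=1024, n_axial=2048):
    """Project a tunnel point cloud onto a 2D image by cylindrical unwrapping.

    `points` is an (N, 3) array of x, y, z coordinates with the tunnel axis
    roughly aligned to x; `intensity` is an (N,) array of return intensities.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.arctan2(z, y)  # angle around the tunnel axis
    # Map angle to image rows and axial position to image columns.
    row = ((theta + np.pi) / (2 * np.pi) * (n_theta - 1)).astype(int)
    col = ((x - x.min()) / (x.max() - x.min() + 1e-9) * (n_axial - 1)).astype(int)
    image = np.zeros((n_theta, n_axial), dtype=np.float32)
    image[row, col] = intensity  # last echo per cell wins in this simple sketch
    return image

# A 2D instance-segmentation model (e.g., Mask R-CNN) can then be run on
# `image`, and its masks mapped back to 3D via the same (row, col) indexing.
```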

31 pages, 15712 KiB  
Article
UAS Quality Control and Crop Three-Dimensional Characterization Framework Using Multi-Temporal LiDAR Data
by Nadeem Fareed, Anup Kumar Das, Joao Paulo Flores, Jitin Jose Mathew, Taofeek Mukaila, Izaya Numata and Ubaid Ur Rehman Janjua
Remote Sens. 2024, 16(4), 699; https://doi.org/10.3390/rs16040699 - 16 Feb 2024
Viewed by 1163
Abstract
Information on a crop’s three-dimensional (3D) structure is important for plant phenotyping and precision agriculture (PA). Currently, light detection and ranging (LiDAR) has proven to be the most effective tool for crop 3D characterization in constrained, e.g., indoor, environments using terrestrial laser scanners (TLSs). In recent years, affordable laser scanners onboard unmanned aerial systems (UASs) have become available for commercial applications. UAS laser scanners (ULSs) have recently been introduced, and their operational procedures are not well investigated, particularly in an agricultural context for multi-temporal point clouds. To acquire seamless, high-quality point clouds, assessment of ULS operational parameters, e.g., flight altitude, pulse repetition rate (PRR), and the number of return laser echoes, becomes a non-trivial concern. This article therefore aims to investigate DJI Zenmuse L1 operational practices in an agricultural context using traditional point density and multi-temporal canopy height modeling (CHM) techniques, in comparison with more advanced simulated full waveform (WF) analysis. Several pre-designed ULS flights were conducted over an experimental research site in Fargo, North Dakota, USA, on three dates. Flight altitudes ranging from 50 m to 60 m above ground level (AGL), along with scanning modes (repetitive/non-repetitive), frequency modes (160/250 kHz), and return echo modes (1n), (2n), and (3n), were assessed over diverse crop environments, e.g., dry corn, green corn, sunflower, soybean, and sugar beet, near harvest yet with changing phenological stages. Our results showed that the return echo mode (2n) captures the canopy height better than the (1n) and (3n) modes, whereas (1n) provides the highest canopy penetration at 250 kHz compared with 160 kHz. Overall, the multi-temporal CHM heights were well correlated with the in situ height measurements, with an R2 of 0.99–1.00 and a root mean square error (RMSE) of 0.04–0.09 m. Among all the crops, the multi-temporal CHM of the soybeans showed the lowest height correlation, with an R2 of 0.59–0.75 and an RMSE of 0.05–0.07 m. We showed that the weaker height correlation for the soybeans occurred due to the selective height underestimation of short crops influenced by crop phenology. The results showed that the return echo mode, PRR, flight altitude, and multi-temporal CHM analysis alone could not completely decipher the ULS operational practices and the phenological impact on acquired point clouds. For the first time in an agricultural context, we investigated and showed that crop phenology has a meaningful impact on acquired multi-temporal ULS point clouds, compared with ULS operational practices, as revealed by WF analyses. Nonetheless, the present study establishes a state-of-the-art benchmark framework for ULS operational parameter optimization and 3D crop characterization using ULS multi-temporal simulated WF datasets.
(This article belongs to the Special Issue Advances in the Application of Lidar)
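
Canopy height modelling and the R²/RMSE agreement statistics quoted above follow a standard recipe; the sketch below shows that generic recipe (raster differencing plus comparison against field-measured heights) and is not the authors' processing chain.

```python
import numpy as np

def canopy_height_model(dsm, dtm):
    """Canopy height as the difference between surface and terrain rasters."""
    return np.clip(dsm - dtm, 0.0, None)  # clamp small negative residuals to ground

def height_agreement(chm_heights, field_heights):
    """R^2 and RMSE between CHM-derived and in situ plant heights."""
    chm = np.asarray(chm_heights, dtype=float)
    field = np.asarray(field_heights, dtype=float)
    residuals = chm - field
    rmse = float(np.sqrt(np.mean(residuals ** 2)))
    r2 = 1.0 - np.sum(residuals ** 2) / np.sum((field - field.mean()) ** 2)
    return r2, rmse
```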

19 pages, 8091 KiB  
Article
Scene Classification Method Based on Multi-Scale Convolutional Neural Network with Long Short-Term Memory and Whale Optimization Algorithm
by Yingying Ran, Xiaobin Xu, Minzhou Luo, Jian Yang and Ziheng Chen
Remote Sens. 2024, 16(1), 174; https://doi.org/10.3390/rs16010174 - 31 Dec 2023
Viewed by 902
Abstract
Indoor mobile robots can be localized by using scene classification methods. Recently, two-dimensional (2D) LiDAR has achieved good results in semantic classification with target categories such as room and corridor. However, it is difficult to achieve the classification of different rooms owing to the lack of feature extraction methods for complex environments. To address this issue, a scene classification method based on a multi-scale convolutional neural network (CNN) with long short-term memory (LSTM) and a whale optimization algorithm (WOA) is proposed. Firstly, the distance data obtained from the original LiDAR are converted into a data sequence. Secondly, a scene classification method integrating a multi-scale CNN and LSTM is constructed. Finally, WOA is used to tune critical training parameters and optimize network performance. Actual scene data containing eight rooms were collected to conduct ablation experiments, demonstrating the performance of the proposed algorithm, which reaches 98.87% classification accuracy. Furthermore, experiments with the FR079 public dataset demonstrate that, compared with advanced algorithms, the proposed algorithm achieves the highest classification accuracy of 94.35%. The proposed method can provide technical support for the precise positioning of robots.
(This article belongs to the Special Issue Advances in the Application of Lidar)
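
The multi-scale CNN + LSTM idea can be sketched as below; the kernel sizes, channel counts, and hidden size are illustrative guesses rather than the published architecture, and the whale-optimization step that tunes training parameters is omitted.

```python
import torch
import torch.nn as nn

class MultiScaleCNNLSTM(nn.Module):
    """Hypothetical multi-scale CNN + LSTM classifier for 1D LiDAR range scans."""

    def __init__(self, n_classes=8):
        super().__init__()
        # Parallel 1D convolutions with different receptive fields ("multi-scale").
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv1d(1, 16, k, padding=k // 2), nn.ReLU(), nn.MaxPool1d(2))
            for k in (3, 5, 7)
        ])
        self.lstm = nn.LSTM(input_size=48, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, scan):                 # scan: (batch, n_beams)
        x = scan.unsqueeze(1)                # (batch, 1, n_beams)
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # (batch, 48, n_beams // 2)
        seq = feats.transpose(1, 2)          # treat pooled positions as a sequence
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])         # class logits (e.g., one per room)

# Usage sketch: logits = MultiScaleCNNLSTM()(torch.randn(4, 360))
```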

20 pages, 4470 KiB  
Article
LiDAR-Generated Images Derived Keypoints Assisted Point Cloud Registration Scheme in Odometry Estimation
by Haizhou Zhang, Xianjia Yu, Sier Ha and Tomi Westerlund
Remote Sens. 2023, 15(20), 5074; https://doi.org/10.3390/rs15205074 - 23 Oct 2023
Viewed by 1149
Abstract
Keypoint detection and description play a pivotal role in various robotics and autonomous applications, including Visual Odometry (VO), visual navigation, and Simultaneous Localization And Mapping (SLAM). While a myriad of keypoint detectors and descriptors have been extensively studied in conventional camera images, the effectiveness of these techniques in the context of LiDAR-generated images, i.e., reflectivity and range images, has not been assessed. These images have gained attention due to their resilience in adverse conditions, such as rain or fog. Additionally, they contain significant textural information that supplements the geometric information provided by LiDAR point clouds in the point cloud registration phase, especially when relying solely on LiDAR sensors. This addresses the challenge of drift encountered in LiDAR Odometry (LO) in geometrically identical scenarios or where not all of the raw point cloud is informative and may even be misleading. This paper aims to analyze the applicability of conventional image keypoint extractors and descriptors on LiDAR-generated images via a comprehensive quantitative investigation. Moreover, we propose a novel approach to enhance the robustness and reliability of LO. After extracting keypoints, we proceed to downsample the point cloud, subsequently integrating it into the point cloud registration phase for the purpose of odometry estimation. Our experiments demonstrate that the proposed approach has comparable accuracy but reduced computational overhead, a higher odometry publishing rate, and even superior performance in scenarios prone to drift when using the raw point cloud. This, in turn, lays a foundation for subsequent investigations into the integration of LiDAR-generated images with LO.
(This article belongs to the Special Issue Advances in the Application of Lidar)
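
A minimal version of the keypoint-guided downsampling step can look like the sketch below; ORB stands in for whichever detector is being evaluated, and `point_index_img` (a per-pixel lookup into the point cloud) is our assumed data layout, not the paper's.

```python
import cv2
import numpy as np

def keypoint_guided_downsample(reflectivity_img, point_index_img, cloud, n_features=500):
    """Keep only the 3D points that project to 2D keypoints on a LiDAR image.

    `reflectivity_img` is an 8-bit grayscale LiDAR-generated image,
    `point_index_img` maps each pixel to a row index in `cloud` (an (N, 3)
    array), with -1 marking pixels that have no return.
    """
    orb = cv2.ORB_create(nfeatures=n_features)   # stand-in keypoint detector
    keypoints = orb.detect(reflectivity_img, None)
    rows = np.array([int(round(kp.pt[1])) for kp in keypoints], dtype=int)
    cols = np.array([int(round(kp.pt[0])) for kp in keypoints], dtype=int)
    indices = point_index_img[rows, cols]
    indices = indices[indices >= 0]              # drop pixels without a return
    return cloud[np.unique(indices)]             # reduced cloud for registration
```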

12 pages, 3827 KiB  
Communication
Three-Dimensional Mapping of Habitats Using Remote-Sensing Data and Machine-Learning Algorithms
by Meisam Amani, Fatemeh Foroughnia, Armin Moghimi, Sahel Mahdavi and Shuanggen Jin
Remote Sens. 2023, 15(17), 4135; https://doi.org/10.3390/rs15174135 - 23 Aug 2023
Cited by 1 | Viewed by 1309
Abstract
Progress toward habitat protection goals can effectively be monitored using satellite imagery and machine-learning (ML) models at various spatial and temporal scales. In this regard, habitat types and landscape structures can be discriminated using remote-sensing (RS) datasets. However, most existing research on three-dimensional (3D) habitat mapping primarily relies on same- or cross-sensor features, such as those derived from multibeam Light Detection And Ranging (LiDAR), hydrographic LiDAR, and aerial images, often overlooking the potential benefits of multi-sensor data integration. To address this gap, this study introduced a novel approach to creating 3D habitat maps by using high-resolution multispectral images and a LiDAR-derived Digital Surface Model (DSM) coupled with an object-based Random Forest (RF) algorithm. LiDAR-derived products were also used to improve the accuracy of the habitat classification, especially for habitat classes with similar spectral characteristics but different heights. Two study areas in the United Kingdom (UK) were chosen to explore the accuracy of the developed models. The overall accuracies for the two study areas were high (91% and 82%), which is indicative of the high potential of the developed RS method for 3D habitat mapping. Overall, it was observed that a combination of high-resolution multispectral imagery and LiDAR data could help separate different habitat types and provide reliable 3D information.
(This article belongs to the Special Issue Advances in the Application of Lidar)
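
The role of the LiDAR-derived DSM in the classification can be illustrated with a simple per-object random forest; the feature layout and names below are our assumptions for illustration, not the study's exact workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_habitat_classifier(spectral_means, dsm_mean_height, labels):
    """Fit an object-based RF on spectral statistics plus a LiDAR height feature.

    `spectral_means` is (n_objects, n_bands) of per-segment band means,
    `dsm_mean_height` is (n_objects,) of mean object height from the DSM, and
    `labels` is (n_objects,) of habitat classes. The height feature is what
    helps separate spectrally similar classes that differ in height.
    """
    X = np.column_stack([spectral_means, dsm_mean_height])
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X, labels)
    return clf
```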

Planned Papers

The list below represents only planned manuscripts; some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.
