Article
Peer-Review Record

Classification and Segmentation of Mining Area Objects in Large-Scale Sparse Lidar Point Cloud Using a Novel Rotated Density Network

ISPRS Int. J. Geo-Inf. 2020, 9(3), 182; https://doi.org/10.3390/ijgi9030182
by Yueguan Yan 1, Haixu Yan 1,*, Junting Guo 2 and Huayang Dai 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 23 February 2020 / Revised: 4 March 2020 / Accepted: 22 March 2020 / Published: 24 March 2020
(This article belongs to the Special Issue Deep Learning and Computer Vision for GeoInformation Sciences)

Round 1

Reviewer 1 Report

The article deals with the very interesting and contemporary topic of semantic classification of objects registered in LiDAR point clouds. Unfortunately, I am disappointed with its incompleteness, which does not allow me to reproduce the experiments described in it.

Below are some comments, not necessarily in order of relevance:

  • Figure 2 is incomprehensible to me.
  • The authors provide only a laconic description of the technical infrastructure used in the experiment, which confirms merely that such devices are able to perform the presented calculations. It says nothing about the complexity of the operation (and its time efficiency) compared to other known (classic) solutions.
  • The conclusions are very concise and should be expanded.
  • The quality of the article (which may be very interesting) suffers from the lack of more detailed descriptions of the experiment, the selection of data for network training and validation, etc. The article leaves the reader unsatisfied on these points.
  • Some of the references have a marginal connection with the subject of the article.

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 2 Report

The work presents a methodology to pre-process point clouds and then use 3D convolutional neural networks to segment them. This is an interesting topic on which several works have been published. I think the work is interesting, but some parts should be modified to improve the quality of the manuscript.

Some editing issues have been detected during the review, such as:

- 2nd paragraph of Section 1.1. Background: "For engineering survey, high, medium, low, and sparse density point cloud is defined as point-density is < 2 pts/m3, (2,7] pts/m3, (7,10] pts/m3 and >10 pts/m3, respectively". The order is wrong: high density should be >10 pts/m3, medium density (7,10] pts/m3, etc.
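For clarity, the corrected ordering the reviewer asks for can be sketched as a small helper (the function name and density labels are illustrative, not from the manuscript):

```python
def classify_density(pts_per_m3: float) -> str:
    """Classify point-cloud density using the engineering-survey
    thresholds quoted in the review, in the corrected order."""
    if pts_per_m3 > 10:
        return "high"      # > 10 pts/m3
    if pts_per_m3 > 7:
        return "medium"    # (7, 10] pts/m3
    if pts_per_m3 > 2:
        return "low"       # (2, 7] pts/m3
    return "sparse"        # < 2 pts/m3
```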

- At the end of the 2nd paragraph, "Two major constraints to use the parse point cloud are summarized as:". I think the authors are missing an 's' there (sparse point cloud).

Also, I have some technical questions:

-What is the difference between the presented GT-Boxes and a Voxel structure?

- The height of each GT-box is defined as one-fifth of the height of the sparse point cloud, and the width as one-hundredth of the boundary of the sparse point cloud. Why? Do the dimensions of the GT-box depend on what we want to segment? How do these GT-box dimensions work with point clouds of different dimensions?
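The sizing rule as paraphrased above can be sketched as follows (function and parameter names are hypothetical; the manuscript may define "boundary" differently):

```python
def gt_box_dims(cloud_height: float, cloud_boundary: float) -> tuple:
    """GT-box dimensions as described in the review:
    height = 1/5 of the sparse point cloud's height,
    width  = 1/100 of its boundary length."""
    return cloud_height / 5.0, cloud_boundary / 100.0
```

As the question notes, these fixed fractions tie the box size to the extent of the whole cloud, so the boxes scale with the scene rather than with the objects to be segmented.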

- I do not understand Section 2.2.1 Rotation unit. What does "When rotating the point cloud of an object along with the main structure axis (Z-axis in Figure 6), the point-density distribution changes dramatically for the structural feature (red points in Figure 6)" mean? The explanation in this section should be improved.

- In Section 3.2. Implementation details, the authors state that 25,000 samples are used for training and the rest (33,200 − 25,000) are used as test sets. How are the samples selected for training? Is the same area, with different densities, used both for training and testing? How does this affect the results?

Also, please add other information, such as how the point cloud was classified before using it for training, and how increasing or decreasing the amount of data can affect the results.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

The authors have responded to all my previous questions and suggestions. The article has now been improved, with more information than before.
