Peer-Review Record

Deep Spatial Graph Convolution Network with Adaptive Spectral Aggregated Residuals for Multispectral Point Cloud Classification

Remote Sens. 2023, 15(18), 4417; https://doi.org/10.3390/rs15184417
by Qingwang Wang 1,2, Zifeng Zhang 1,2, Xueqian Chen 1,2, Zhifeng Wang 3, Jian Song 1,2 and Tao Shen 1,2,*
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Submission received: 30 July 2023 / Revised: 30 August 2023 / Accepted: 5 September 2023 / Published: 7 September 2023

Round 1

Reviewer 1 Report

The manuscript presents a method for classifying multispectral point clouds by integrating spatial and spectral information. Overall, the topic is interesting and holds practical value. The manuscript is fluently presented and easy to follow. The results are promising. I have only minor suggestions that could help improve its form:

In Section 2.2, the process involves multiple weightings. Please further illustrate how the weighting values are defined and how they influence the results.

Regarding the experiments, Equations (8)-(12) correspond to widely known evaluation metrics. I suggest considering their removal.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

This paper proposes a new network, DSGCN-ASR, for multispectral point cloud classification. It performs deep convolution over the spatial graph and adaptively adds spectral aggregated residuals, achieving efficient joint use of spatial-spectral information for finer multispectral point cloud classification. In general, the idea seems feasible.
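As I understand the described operation, one layer could be sketched roughly as follows. This is a minimal NumPy sketch under my own assumptions about the symbols: the names `adj_spatial`, `spectral_agg`, and the scalar residual weight `alpha` are illustrative, not taken from the paper.

```python
import numpy as np

def dsgcn_asr_layer(x, adj_spatial, spectral_agg, weight, alpha=0.5):
    """One illustrative layer: spatial graph convolution plus an
    adaptively weighted spectral aggregated residual.

    x            : (N, F) node features
    adj_spatial  : (N, N) row-normalized spatial adjacency matrix
    spectral_agg : (N, F) features aggregated over the spectral graph
    weight       : (F, F) learnable projection matrix
    alpha        : residual weight (a learned scalar in practice)
    """
    h = adj_spatial @ x @ weight          # convolution over the spatial graph
    h = np.maximum(h, 0.0)                # ReLU nonlinearity
    return h + alpha * spectral_agg       # add the spectral aggregated residual

# toy example: 4 points, 3 features
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))
adj = np.full((4, 4), 0.25)               # fully connected, row-normalized
spec = rng.standard_normal((4, 3))
w = np.eye(3)
out = dsgcn_asr_layer(x, adj, spec, w)
print(out.shape)
```

In this reading, stacking several such layers deepens the spatial convolution while the residual keeps spectral information flowing, which is the joint spatial-spectral use the paper claims.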

Below are detailed comments:

1. The abstract needs further polishing; e.g., the description of current classification methods for multispectral point clouds should be more concise.

2. In the introduction, the authors should add relevant literature on multispectral point cloud classification beyond graph-based methods, e.g., CNN-, PointNet-, and 3D FCN-based methods, and even traditional segmentation-based point cloud classification methods.

3. I suggest that the specific calculation process of Algorithm 1 be placed in Section 2.1.

4. Please clarify in detail how A_spatial and W differ between formulas (2) and (4).

5. The loss function should be addressed and given in Section 2.2.

6. In Section 2.2, please further explain how DSGCN-ASR makes significant contributions to tackling the insufficient capability of shallow graph neural networks to fit the nonlinearity of multispectral point clouds in complex remote sensing scenes.

7. In the experiments, I suggest supplementing the results on the impact of the parameters α and β in formulas (5) and (6).

The quality of the English should be refined; there are many grammatical errors.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Introduction

The introduction adequately covers the problems of analyzing, classifying, and visualizing data obtained with LiDAR.

The tasks are formulated correctly and remain relevant today.

Methodology.

The method for constructing spatial and spectral graphs is presented in detail.

The parameters calculated by the CNN that are necessary for correcting the processed data are described in sufficient detail for a complete understanding.

The general idea of the work is described in detail and sufficiently developed.

Experiment.

The experiments do not make it fully clear which dataset the CNN was trained on. If training was carried out on the presented data, then for the integrity of the experiment a separate dataset, not used in training, should have been held out.

The comparison of the obtained results was objective enough to reveal the advantages of the proposed method for processing LiDAR data on the datasets presented in the article.

In general, the efficiency and accuracy of the results with respect to real objects are not clear; perhaps it was necessary to compare the obtained results with high-resolution multispectral images of the underlying surface obtained from other sources (for example, from UAVs).

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

The authors have revised the manuscript according to my comments point by point. I have no further comments.
