Peer-Review Record

Dim and Small Space-Target Detection and Centroid Positioning Based on Motion Feature Learning

Remote Sens. 2023, 15(9), 2455; https://doi.org/10.3390/rs15092455
by Shengping Su 1,2, Wenlong Niu 1,2,*, Yanzhao Li 1,2, Chunxu Ren 1,2, Xiaodong Peng 1,2, Wei Zheng 1 and Zhen Yang 1
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Reviewer 4:
Submission received: 27 February 2023 / Revised: 29 April 2023 / Accepted: 4 May 2023 / Published: 7 May 2023

Round 1

Reviewer 1 Report

This paper proposes a space-target detection framework comprising a space-target detection network and a k-means-clustering-based target centroid positioning method, which performs well in environments with low-signal-to-noise-ratio (SNR) targets and complex backgrounds.
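For readers unfamiliar with the clustering step named above, the following is a minimal sketch of intensity-based k-means centroid positioning (an illustration only, not the authors' implementation; the k = 2 intensity-clustering formulation, function names, and toy patch are assumptions):

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D k-means on pixel intensities (quantile initialization)."""
    centers = np.quantile(values, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # Assign each intensity to the nearest cluster center.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # Recompute each center as the mean of its assigned intensities.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def centroid_from_patch(patch):
    """Cluster a candidate patch's intensities into background/target with
    k-means (k=2), then return the intensity-weighted centroid (row, col)
    of the brighter (target) cluster."""
    values = patch.ravel().astype(float)
    labels, centers = kmeans_1d(values, k=2)
    mask = (labels == np.argmax(centers)).reshape(patch.shape)
    ys, xs = np.nonzero(mask)
    w = patch[ys, xs].astype(float)
    return float((ys * w).sum() / w.sum()), float((xs * w).sum() / w.sum())

# Toy patch: a two-pixel "target" on a dark background.
patch = np.zeros((7, 7))
patch[3, 3] = patch[3, 4] = 10.0
cy, cx = centroid_from_patch(patch)  # → (3.0, 3.5)
```

In practice such a routine would run on each patch produced by the detection network; the paper's actual clustering features and choice of k may differ.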

A description of the experimental data to be detected should be added to the abstract and introduction, in order to give readers a more intuitive understanding of the proposed framework.

 

Detailed comments follow.

1. Line 152: "RCNN" — please spell out the full term at first use, and do the same for the other abbreviations in the paper.

 

2. Line 166: DeepStreaks achieved a detection rate of 96–98% while maintaining a false-positive rate of less than 1% for streak-like objects. Since this system performs quite well, please add relevant comparative experimental results.

 

3. Section 2.1, Detection Methods: there is no literature review of deep neural networks or k-means clustering as used for target detection. Please add the relevant literature to support the feasibility of the deep neural networks and k-means clustering used in the proposed framework.

 

4. Line 260: "n = 32" — how is this parameter determined? What size are the targets to be detected in the image? Is a target likely to be split?

 

5. Line 545: the text in Figure 13 is not easy to read; please improve the image quality.

 

6. Line 602: to ensure that the experimental results are comparable, would it be better for the two image sequences containing two and three targets to share the same environment or background?

 

 

7. Line 620: Figures 20 and 21 — check that all sub-graphs are positioned correctly. (a) is the ground truth of the target, so the two sub-graphs should contain two and three targets, respectively. The same applies to Figure 20(b).

 

8. Line 867, References: supplement and update the references with works from the last two years.

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The paper proposes a 3D-CNN-based two-stage space-target detector with a proposed k-means centroid estimation method. The topic is interesting; however, the experiments are conducted mostly, if not entirely, on somewhat synthetic data, and the lack of experiments on real-world data is the biggest issue.

 

1. In Section 2, add a couple of example frames to illustrate point-like and streak-like detection, in case readers are not familiar with the topic.

 

2. Most experimental results are based on synthetic data, yet no similar paper is cited to justify the chosen synthetic-data generation approach. If the input is video, it is not clear how motion is simulated in the synthetic data. Do the assumptions include all targets having the same speed within one clip? The experiments are missing the case of different targets moving at different speeds in the same clip.

 

3. The proposed network takes a video clip (C×T×H×W) as input; it is not clear whether the output is T×N or just N, where N is the number of bounding boxes. If it is N, this seems very inefficient, and some related work on using spatio-temporal input for object detection is missing from the paper.

 

4. It is not clear what the input of the k-means step is. Citations are needed for applying k-means to separate foreground from background. How is k selected?

 

5. Figure 12(b) seems to have more noise than (c) and (d).

 

6. The comparison among different methods does not seem fair if some of the methods use single image frames as input. It would be best to state which methods use image frames and which use clips.

 

7. Is there any explanation for Table 8? Why does performance worsen as target speed increases?

 

8. The biggest concern is the lack of experimental results on real-world data. Including some pilot results on real-world data could give more evidence for the proposed methods.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

What is the distance between the target and the sensor? Are you referring to the detection of faint GEO targets? If so, please clarify.

If so, is a CCD the appropriate recording device?

The major problem is the background noise, which encompasses target exposure under variable illumination. This is a challenging project, and a description of the background noise is missing from the paper.

What is the computational efficiency of the proposed technique?

The authors' efforts are deeply appreciated. However, the authors chose to introduce a space-target technique that relies on simulations based on oversimplified assumptions, deprived of any real conditions and physical constraints. Therefore, translating their study into a real space scenario would require additional knowledge.

In summary, the authors should clearly address the limiting factors of their study, along with a future plan aimed at addressing them.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 4 Report

Very interesting work with relevance to Space Situational Awareness (SSA) applications. A more detailed explanation of the nature, size, and validity of the dataset used in the study is required to understand the scope and accuracy of the work presented. It is not clear whether the presented work would be applicable to other types of images of interest.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

All my concerns have been addressed.
