Article
Peer-Review Record

Sentinel-2 Cloud Removal Considering Ground Changes by Fusing Multitemporal SAR and Optical Images

Remote Sens. 2021, 13(19), 3998; https://doi.org/10.3390/rs13193998
by Jianhao Gao 1,*,†, Yang Yi 2,†, Tang Wei 3 and Guanhao Zhang 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 27 July 2021 / Revised: 23 September 2021 / Accepted: 29 September 2021 / Published: 7 October 2021
(This article belongs to the Section AI Remote Sensing)

Round 1

Reviewer 1 Report

The manuscript presents an interesting method for fusing multitemporal optical and SAR images with a deep neural network to obtain a cloud removal result that reflects changes in ground information. Unlike other deep learning methods, the proposed method can operate without a training dataset. The authors present a clear explanation of their motivation and novelty. Furthermore, the proposed method outperforms other multitemporal-based methods.

There are still some issues to be addressed.

 

Line 10: ‘Existing reconstruction methods can hardly fail to reflect the real-time information.’ The original sentence seems to express the exact opposite of what the whole paper argues; it appears that ‘fail to’ should be removed.

 

Line 57: ‘Accuracy and authority of multitemoporal methods’ results can be guaranteed.’  ‘Multitemoporal’ should be ‘multitemporal’

 

Consider adding a brief explanation of the evaluation indexes, with an indication of whether a higher or lower score is desirable for each.
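For instance (assuming the indexes include PSNR, RMSE, and SAM, which are common in cloud-removal studies; the actual set used in the paper may differ), a minimal sketch of how each index and its preferred direction could be stated is:

import numpy as np

def psnr(reference, estimate, data_range=1.0):
    # Peak signal-to-noise ratio: HIGHER is better.
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def rmse(reference, estimate):
    # Root-mean-square error: LOWER is better.
    return float(np.sqrt(np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)))

def sam_degrees(reference, estimate, eps=1e-12):
    # Mean spectral angle mapper (SAM) in degrees: LOWER is better.
    # Expects arrays shaped (rows, cols, bands).
    ref = reference.reshape(-1, reference.shape[-1]).astype(np.float64)
    est = estimate.reshape(-1, estimate.shape[-1]).astype(np.float64)
    cos = np.sum(ref * est, axis=1) / (
        np.linalg.norm(ref, axis=1) * np.linalg.norm(est, axis=1) + eps)
    return float(np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0)))))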

 

In Table 1, the acquisition periods of O1 and O2 in Real data exp 2 do not correspond to S1 and S2. Please check them.

 

Figure 7 and Figure 8 seem to share the same explanation. Please check them or re-explain Figure 8.

 

Regarding the references: the reference list should follow the citation style of Remote Sensing; the authors appear to have used a different citation style.


Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The paper proposes a fair technique to remove clouds using multitemporal SAR and optical imagery.

The paper is hard to follow, especially the methodology.

For example, it is not exactly clear what the authors mean by local or global information in the paper. The symbol ⊕ in Eq. 5 is not properly defined. Why perform a local reconstruction first, followed by a global reconstruction?

Similarly, the results lack proper ablation studies with respect to the local and global terms, as well as the total variation term.

1> While the authors have carried out an extensive literature survey, it would be good if they could also compare with, cite, and differentiate their work from some of the seminal works in this domain, such as:

A> Cloud removal in Sentinel-2 imagery using a deep residual neural network and SAR-optical data fusion. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7386944/

B> Cloud-Gan: Cloud Removal for Sentinel-2 Imagery Using a Cyclic Consistent Generative Adversarial Networks https://ieeexplore.ieee.org/document/8519033

C> Cloud Removal in Satellite Images Using Spatiotemporal Generative Networks https://openaccess.thecvf.com/content_WACV_2020/papers/Sarukkai_Cloud_Removal_from_Satellite_Images_using_Spatiotemporal_Generator_Networks_WACV_2020_paper.pdf

 

Some Minor Revisions:

1> Line 97: ‘where’ -> ‘which’

2> Eq. 2 should be O2 instead of O1.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

This work tackles the problem of restoring realistic images by removing clouds from Sentinel-2 images, exploiting multitemporal Sentinel-1 and Sentinel-2 images. The authors propose a novel deep neural network (DNN) approach that has virtually “no need” of a large training dataset to be effective. The Introduction (1) and Methods (2) chapters are well organized and described. On the other hand, the layout of the images in the Experiments (3) chapter is not easily understandable, and the ablation study reported in chapter (4) is small. I would have appreciated a more in-depth analysis to better understand the effectiveness of the proposed method.
Another downside of this presentation, in my opinion, is a consideration reported in the Conclusion (5) chapter. The authors underline that the proposed method outperforms previous ones without needing a large training dataset, yet as future work they would like to collect a large training dataset to train the method. This statement casts some doubt on the robustness and efficacy of the proposed method, since it suggests that better results could be obtained with a large dataset.

Moreover, I do not like how the results are presented, both in the tables and in the figures. A better explanation should be provided when commenting on the tables, and the figure/table captions should be improved as well.

The use of English language should be improved; the paper requires extensive editing. For example, in line 89 the sentence "[32] first utilized the contemporary SAR image" could be rewritten in a more suitable form.

The same concept applies to several other sentences:

line 106:

"to remove cloud of Sentinel-2 images with multitemporal Sentinel-1 and Sentinel-2 images" should be rewritten as: "to remove clouds from images produced by Sentinel-2 satellite by exploiting multitemporal images coming from Sentinel-1 and Sentinel-2 satellites."

line 111:

"We finally obtain the output of network as the cloud removal result after several times of optimization"

line 116:

"We propose a novel method for cloud removal of Sentinel-2 images based on deep neural network with multitemporal Sentinel-1 and Sentinel-2 images"

line 266 contains an awkward word repetition which could be improved:

"Two areas are selected and magnified in Figure 3 and 4. From Figure 3 and 4,"

line 338 should be corrected as well:

"in Table 3. We mark the highest scores of the indexes in bald and the second highest scores with underline"

And many more.

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

The authors have answered all my comments.

I think the manuscript is good to be published in the revised form.

Reviewer 3 Report

The revised version of the paper is a clear improvement with respect to the original manuscript. I appreciate the changes made, which address most of the concerns I raised. The only thing that could still be improved is the use of English, with some sentences that could be rewritten to also avoid unnecessary repetition; for example, in Section 4.1 the sentence:

"We select an area from Figure 6 and display them in Figure 7. It can be viewed from Figure 7 that global loss function contributes ...."

 could be written as:

"We select an area from Figure 6 and display them in Figure 7. As it can be noticed, the global loss function contributes...." 
