Letter
Peer-Review Record

Uncertainty-Based Human-in-the-Loop Deep Learning for Land Cover Segmentation

Remote Sens. 2020, 12(22), 3836; https://doi.org/10.3390/rs12223836
by Carlos García Rodríguez 1,*, Jordi Vitrià 1 and Oscar Mora 2
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 19 October 2020 / Revised: 17 November 2020 / Accepted: 18 November 2020 / Published: 23 November 2020

Round 1

Reviewer 1 Report

The paper deals with increasing the accuracy of AI-based segmentation models by means of an expert human-in-the-loop approach. The problem faced by the authors is quite interesting and well known in the remote sensing community. Indeed, the use of AI-based segmentation approaches is currently limited by their low level of accuracy compared to the practical needs of agencies and companies. As a consequence, the solution proposed by the authors is quite interesting, as it is based on a trade-off between using AI-based approaches and limiting the intervention of human experts. Nonetheless, the authors should address some comments and improve some parts of the paper before it can be accepted for publication.
The comments are reported in the attached PDF.

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

In this work, a method for the semantic segmentation of satellite images is presented. The main intended contribution is the addition of a human agent at the end of the pipeline to correct the network's output predictions. The authors argue that it is not possible to compare their results with other works due to differences in dataset sizes.

The article has some methodological flaws, mainly the use of the ground-truth labels to replace the human component in the "human-in-the-loop" paradigm. This corrupts the results, since a human can act in different ways depending on the situation, and assumptions like this falsify the true performance of the method. The correct approach would be to use a group of real humans to evaluate this performance in terms of both quality and time saved.
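
The simulation the reviewer objects to can be sketched roughly as follows. This is a hypothetical illustration, not the authors' actual code: the `oracle_correction` function, its threshold value, and the toy arrays are all assumptions made here purely to show how a ground-truth "oracle" stands in for the human annotator.

```python
import numpy as np

def oracle_correction(probs, preds, ground_truth, threshold=0.9):
    """Replace low-confidence predictions with ground-truth labels.

    probs: (H, W) maximum softmax probability per pixel
    preds: (H, W) predicted class per pixel
    ground_truth: (H, W) reference labels (stands in for the human reviewer)
    Returns the corrected map and the fraction of pixels "sent for review".
    """
    uncertain = probs < threshold                      # pixels flagged as uncertain
    corrected = np.where(uncertain, ground_truth, preds)
    return corrected, uncertain.mean()

# Toy example: a 2x2 image with one low-confidence pixel
probs = np.array([[0.95, 0.97], [0.60, 0.99]])
preds = np.array([[1, 2], [3, 2]])
gt    = np.array([[1, 2], [1, 2]])
corrected, frac = oracle_correction(probs, preds, gt)
# The single uncertain pixel (prob 0.60) is overwritten with its
# ground-truth label, so the "corrected" map matches gt exactly --
# which is precisely why the reviewer argues this setup inflates
# the apparent performance of the method.
```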

Nevertheless, I offer some comments on the article:

  1. Some aspects of the introduction should be referenced/better explained:
    • line 13, "getting 80% accuracy...is not enough"
    • line 15, "These institutions work with images provided by...": which institutions? Are there any example works?
    • line 21, since Sentinel-2 images are used in this paper, provide a reference describing the acquisition properties and the dataset.
    • line 27, these works need a brief description. What techniques do they use to solve the problem? In what aspects are they related to this work?
    • line 38, referencing LeCun is fine, but please support the use of deep learning with other, more specific references, or move this part to the related work.
    • line 40, "The acceptable accuracy...has been set by a set of cartographic professionals at a minimum of 90%": which professionals? Where? Is there any reference for this statement?
  2. Related work:
    • Table 1 is missing a column with the number of images in each dataset.
  3. The first part of the methodology is clear; however, Section 3.3 should be described in more detail. The contribution of the proposed human-in-the-loop paradigm to improving classification is not clear enough, since it is not comparable with other works on the same dataset that do not use human corrections. If these human corrections were applied to other methodologies, what would the results be?
  4. Data:
    • The Sentinel-2 bands used should be described.
  5. A real human should be used in the human-in-the-loop paradigm to obtain real performance metrics, especially since this is the main contribution of the work.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

This paper is interesting in that it develops a human-in-the-loop deep learning methodology to improve the accuracy of existing satellite land cover prediction. Based on the method presented, the authors have demonstrated a case in which the accuracy can be improved to 90%.

Overall, the content presented by the authors is rather short, but the authors indicate that this paper is not an 'article' but a 'letter'. Therefore, I think this paper is acceptable.


Author Response

Thank you very much for your attention in reviewing the article.

Round 2

Reviewer 1 Report

Thank you for having considered my comments. I believe the paper has improved in clarity and now is ready for publication. 

Author Response

Many thanks for your comments that helped us improve our manuscript.

Reviewer 2 Report

After this new review, it is still hard for me to find a real contribution in this paper. I can understand that the authors encountered some difficulties in obtaining results, but since this work is not comparable with others, the main claimed contribution (human in the loop) cannot be tested correctly in depth, and synthetic techniques are used to replace the human agent. I consider that the remaining content is not sufficient for publication.

Author Response

Many thanks for your comments that helped us improve our manuscript.
