Article
Peer-Review Record

End-to-End Neural Interpolation of Satellite-Derived Sea Surface Suspended Sediment Concentrations

Remote Sens. 2022, 14(16), 4024; https://doi.org/10.3390/rs14164024
by Jean-Marie Vient 1,2,*, Ronan Fablet 2, Frédéric Jourdin 3 and Christophe Delacourt 1
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 13 July 2022 / Revised: 12 August 2022 / Accepted: 15 August 2022 / Published: 18 August 2022
(This article belongs to the Special Issue AI for Marine, Ocean and Climate Change Monitoring)

Round 1

Reviewer 1 Report

In this study, a method was used to produce gap-free regional suspended matter distribution products, and 4DVarNet was effectively validated. However, a few minor questions remain for the authors to address.

1. Why produce only one regional fusion product? The final result has too many uncertain factors and depends on the relationship between MARS and the regional suspension model.

2. Could a product covering a larger study area be produced for verification? The larger the area, the more broadly applicable MARS would be.

Author Response

In this study, a method was used to produce gap-free regional suspended matter distribution products, and 4DVarNet was effectively validated. However, a few minor questions remain for the authors to address.



COMMENT 1. Why produce only one regional fusion product? The final result has too many uncertain factors and depends on the relationship between MARS and the regional suspension model.

 

ANSWER 1. Here we do not perform a fusion of the MARS model outputs and the MODIS satellite images. These two data sets enter two different experiments, named OSSE and OSE. Thank you for this comment, because our article was clearly not explicit enough in describing this approach. In particular, we forgot to define the term OSE in the Introduction and did not specify exactly what we do with the data at the beginning of Section 2. As a consequence, we updated the end of the Introduction (notably introducing the OSE) and rewrote the introduction of Section 2.



COMMENT 2. Could a product covering a larger study area be produced for verification? The larger the area, the more broadly applicable MARS would be.

 

ANSWER 2. Thank you for this comment; we indeed forgot to mention that the MARS model has been deployed over a much wider area than the area of interest in the present article. The MARS domain extends from 41N to 55N and from 18W to 9.5E (see Mengual et al 2019), which is about 15 times our area of interest in terms of surface. As a consequence, we added this crucial information at the end of Section 2.2. We also added a sentence at the end of Section 2.3 to remind the reader that the area of interest (a smaller extract from the wider MARS simulations) is a well-known area, rich in physical and biological processes (the Bay of Biscay; e.g. Borja et al 2019). This area therefore constitutes a fine testing ground in terms of the spatial and temporal variability of turbidity.

Reviewer 2 Report

An end-to-end deep learning scheme, namely 4DVarNet, was proposed for the data assimilation of suspended sediment concentrations in the Bay of Biscay. This study follows previous work by the same group using a "data-driven interpolation method". 4DVarNet was found to outperform other data-driven schemes such as OI and DINEOF.

 

I think there are several points that could be improved.

1.     Section 2.3. The introduction of NAP is not clear. There are many abbreviations that need to be spelled out in full, such as IFREMER. I think this part could be simplified by only introducing the algorithms used to estimate NAP. Generally speaking, there are large differences between total suspended matter (SPM) and Chl-a. How necessary is the subtraction between these two parameters? Figure 1 shows the spatial distribution of SPM, not NAP. The unit of SSSC is g/L, not NTU; however, NAP and Chl-a were converted to NTU as described on Line 141. More explanation and a clearer description of these data are expected.

2.     How many data points were used for the comparison as shown in these tables? 

3.     Retrieval of fine-scale turbidity patterns from satellite data might be one of the advantages of 4DVarNet. It is not convincing to establish this from only one figure (Figure 4). Can more examples be shown? If the advantage is real, I think this point could be highlighted in the abstract.

4.     DINEOF has been widely used for filling gaps in satellite data. In order to highlight the advantage of the proposed method, I think the comparisons and discussions could be strengthened.

5.     I think this manuscript could be accepted as a technical note after revision.

6.     The writing structure of this manuscript could be simplified. 

Author Response

An end-to-end deep learning scheme, namely 4DVarNet, was proposed for the data assimilation of suspended sediment concentrations in the Bay of Biscay. This study follows previous work by the same group using a "data-driven interpolation method". 4DVarNet was found to outperform other data-driven schemes such as OI and DINEOF.

I think there are several points that could be improved.



COMMENT 1.     Section 2.3. The introduction of NAP is not clear. There are many abbreviations that need to be spelled out in full, such as IFREMER. I think this part could be simplified by only introducing the algorithms used to estimate NAP. Generally speaking, there are large differences between total suspended matter (SPM) and Chl-a. How necessary is the subtraction between these two parameters? Figure 1 shows the spatial distribution of SPM, not NAP. The unit of SSSC is g/L, not NTU; however, NAP and Chl-a were converted to NTU as described on Line 141. More explanation and a clearer description of these data are expected.

 

ANSWER 1.     Thank you for this relevant comment; indeed, the entire Section 2.3 was not clear enough, so it has been almost entirely rewritten, including simplifications and a clearer definition of NAP, with its unit in mg/L. For clarity, we also added a new equation (Eq. 1). As a consequence, a large part of the former Section 2.2 has been rewritten as well. For information, Chl-a is indeed converted to living SPM before being subtracted from total SPM in order to obtain NAP in mg/L. We also simplified the reference to NTU since, in fact, it is the total SPM that is converted to NTU via a statistical relationship (obtained from Nechad et al 2010); a purely illustrative sketch of this derivation is given below.
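
As a rough illustration of the two steps mentioned above, and only under our own assumptions: the coefficients below are hypothetical placeholders, not the actual Chl-a to living-SPM conversion or the SPM to NTU statistical law (after Gohin et al and Nechad et al 2010) used in the manuscript.

import numpy as np

# CAUTION: all coefficients below are hypothetical placeholders for illustration.
CHLA_TO_LIVING_SPM = 0.1   # hypothetical factor converting Chl-a to living SPM (mg/L)
SPM_TO_NTU_SLOPE = 1.0     # hypothetical slope of the SPM -> NTU statistical law
SPM_TO_NTU_OFFSET = 0.0    # hypothetical offset of that law

def nap_from_spm_and_chla(total_spm_mg_l, chla):
    """Non-algal particles (mg/L): total SPM minus the living, Chl-a-derived fraction."""
    living_spm = CHLA_TO_LIVING_SPM * np.asarray(chla)
    return np.clip(np.asarray(total_spm_mg_l) - living_spm, a_min=0.0, a_max=None)

def total_spm_to_ntu(total_spm_mg_l):
    """Turbidity (NTU) from total SPM through a linear statistical law (placeholder)."""
    return SPM_TO_NTU_SLOPE * np.asarray(total_spm_mg_l) + SPM_TO_NTU_OFFSET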



COMMENT 2.     How many data points were used for the comparison as shown in these tables? 

 

ANSWER 2.

Concerning the data points available for comparison, we have a total of about 2.0E6 points (2 000 000). This is because the average number of points per reconstructed image is 6800 and the evaluation is made over one full year (except for a few empty images). More precisely, we work with a threshold of 500 observation points per day: when a given daily image has more than these 500 points, we evaluate the reconstruction by masking and withholding more than half of them (see Section 3.2 for the sampling strategy), as sketched below. Following your comment, we created a new Section 3.3 (Performance metrics) where we included this information on the total number of points.
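
The following is an illustrative sketch of this daily masking strategy; the exact splitting logic is an assumption on our part, not the code used in the study.

import numpy as np

# Assumed logic for illustration only: for a daily image with more than 500
# valid observation pixels, strictly more than half of them are withheld from
# the input and kept aside to evaluate the reconstruction.
MIN_OBS_PER_DAY = 500

def split_daily_observations(obs_mask, rng):
    """Split a boolean mask of valid pixels into (input_mask, eval_mask)."""
    obs_mask = np.asarray(obs_mask, dtype=bool)
    valid_idx = np.flatnonzero(obs_mask)
    if valid_idx.size <= MIN_OBS_PER_DAY:
        # Too few observations: keep everything as input, nothing for evaluation.
        return obs_mask, np.zeros_like(obs_mask)
    n_withheld = valid_idx.size // 2 + 1  # strictly more than half
    withheld = rng.choice(valid_idx, size=n_withheld, replace=False)
    eval_mask = np.zeros_like(obs_mask)
    eval_mask.flat[withheld] = True
    return obs_mask & ~eval_mask, eval_mask

# Example usage with a synthetic daily observation mask
rng = np.random.default_rng(0)
daily_mask = rng.random((128, 128)) > 0.9
input_mask, eval_mask = split_daily_observations(daily_mask, rng)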



COMMENT 3.     Retrieval of fine-scale turbidity patterns from satellite data might be one of the advantages of 4DVarNet. It is not convincing to establish this from only one figure (Figure 4). Can more examples be shown? If the advantage is real, I think this point could be highlighted in the abstract.

 

ANSWER 3.     Thank you for underlining the fact that the fine-scale resolution of our architecture was not sufficiently highlighted in the manuscript. We added a reference to an application of a similar architecture to surface satellite data (SST and SSH), which showed a similar ability to reconstruct fine-scale patterns (Fablet et al 2022, ref. 50). Moreover, as described in Section 3.1, our validation process for the 4DVarNet method is based on the gradient of the variational cost, in line with the ODE/CDE-based 4DVar formulation; the main goal of that two-parameter evaluation is precisely to achieve a better reconstruction of fine-resolution processes. Thanks to your comment, we added the phrase "and improve the high spatial resolution of patterns in the reconstruction processes" at the end of a sentence in the abstract in order to emphasize this aspect, and the following sentence in Section 5.3: "A similar behavior is observed when applying this method (4DVarNet) to sea surface heights and sea surface temperatures from satellites (Fablet et al 2022, preprint)".
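
For readers less familiar with the variational formulation referred to above, here is a schematic sketch (our own simplified illustration, not the authors' implementation) of a 4DVar-like cost whose gradient drives the iterative updates in 4DVarNet-style schemes; the operator phi and the weights are assumptions for illustration only.

import torch

# Schematic 4DVar-like variational cost: an observation term over the observed
# pixels plus a prior term given by a (learned) dynamical operator phi.
# y is assumed to be filled with finite values (e.g. zeros) where unobserved.
def variational_cost(x, y, obs_mask, phi, lambda_obs=1.0, lambda_prior=1.0):
    # x: state to reconstruct, y: observations, obs_mask: 1 where y is observed
    obs_term = torch.sum(obs_mask * (x - y) ** 2)
    prior_term = torch.sum((x - phi(x)) ** 2)
    return lambda_obs * obs_term + lambda_prior * prior_term

# Minimal usage with an identity prior for illustration: the gradient of the
# cost with respect to x is what a trainable solver uses at each iteration.
x = torch.zeros(1, 10, 64, 64, requires_grad=True)
y = torch.randn(1, 10, 64, 64)
mask = (torch.rand(1, 10, 64, 64) > 0.8).float()
cost = variational_cost(x, y, mask, phi=lambda z: z)
grad_x, = torch.autograd.grad(cost, x)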



COMMENT 4.     DINEOF has been widely used for filling gaps in satellite data. In order to highlight the advantage of the proposed method, I think the comparisons and discussions could be strengthened.

 

ANSWER 4.     In fact, following our OSE experiments, we particularly emphasize here the interest of using DINEOF because, contrary to 4DVarNet, it does not require strong expertise, and it performs particularly well in areas containing both coastal and open-sea domains (see also Barth et al 2020 on this subject). Thanks to your comment, we also added the following information in Section 5.2 (Comparison of interpolation methods): "DINEOF performs much better than OI, particularly with real satellite data. This is what the present article demonstrates through the results obtained with the OSE experiments (using real data) compared with what was expected (similar performance between OI and DINEOF) from the OSSE experiments (using simulated data). Furthermore, it has been well demonstrated (Barth et al 2020) that DINEOF is well suited to complex areas comprising both coastal and open-sea domains. Given that DINEOF is quite simple to implement and does not require strong expertise, this method should definitely be considered as a baseline scheme for routine operational ocean colour products similar to those studied in this article." The advantage of 4DVarNet lies only in some gains in accuracy, which come at the cost of the expertise needed to carry out the learning phases of this machine learning process. A minimal sketch of the gap-filling principle behind DINEOF is given below.
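
For completeness, here is a simplified illustration of the EOF-based iterative gap-filling idea that DINEOF relies on; this is our own minimal sketch, not the reference implementation of Barth et al, and the number of modes and stopping rule are assumptions.

import numpy as np

# Missing values are initialized, then repeatedly replaced by a truncated-SVD
# (EOF) reconstruction until the update becomes small.
def dineof_like_fill(data, n_modes=5, n_iter=50, tol=1e-4):
    """data: 2D array (time x space) with NaNs at missing points."""
    data = np.asarray(data, dtype=float)
    gaps = np.isnan(data)
    if not gaps.any():
        return data
    filled = np.where(gaps, np.nanmean(data), data)  # crude first guess
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        recon = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes, :]
        delta = np.max(np.abs(recon[gaps] - filled[gaps]))
        filled[gaps] = recon[gaps]
        if delta < tol:
            break
    return filled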



COMMENT 5.     I think this manuscript could be accepted as a technical note after revision.

 

ANSWER 5.     NA (thank you for this review).



COMMENT 6.     The writing structure of this manuscript could be simplified. 

 

ANSWER 6.     Thank you for your feedback. We checked and revised sentences (and remaining errors) throughout the article. It appeared in particular that Sections 2.2 and 2.3 were poorly written; these two sections were almost completely rewritten.

Reviewer 3 Report

The reviewer would like to thank the authors for this thoughtful manuscript. This work has good potential. The authors are requested to put in some additional efforts to improve the quality of this manuscript. 

 

Fig. 1

The authors are requested to provide high-resolution versions of these images.

 

Shading Differences 

The authors are requested to explain how they resolve the issue of differential shadowing, which may contribute to misclassification of sediment concentration.

 

Training and Evaluation Framework

The authors are requested to discuss the effectiveness of performance metrics and cite the following articles discussing the use of these metrics for comparing predicted and satellite observed measurements. 

i) Gascoin et al, Theia Snow collection: high-resolution operational snow cover maps from Sentinel-2 and Landsat-8 data, ESSD, 2019

ii) Muhuri et al, Performance Assessment of Optical Satellite-Based Operational Snow Cover Monitoring Algorithms in Forested Landscapes, IEEE JSTARS, 2021

 

Random Point Sampling Strategy 

How do the authors ensure that they sample sufficient points from each sediment category? Do you perform some kind of bin-filling approach to equalize the number of samples from each category?

 

Author Response

The reviewer would like to thank the authors for this thoughtful manuscript. This work has good potential. The authors are requested to put in some additional efforts to improve the quality of this manuscript. 



COMMENT 1:

Fig. 1

The authors are requested to provide high-resolution versions of these images.

 

ANSWER 1: The figure was updated to a higher resolution. Please note that, because of the spatial resolution of the reconstruction, the MARS-MUSTANG SSSC fields are on a 128*128 grid, while the MODIS grid is composed of 256*256 points. This difference may explain the "pixelated" appearance of the second map in the figure; the small sketch below makes the grid mismatch concrete.
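
To illustrate the resolution mismatch (our own example, not the authors' processing chain): a 256x256 MODIS-like field can be brought onto a 128x128 grid by 2x2 block averaging, while the naive reverse upsampling repeats each coarse pixel, which produces the pixelated look.

import numpy as np

field_256 = np.random.default_rng(0).random((256, 256))
field_128 = field_256.reshape(128, 2, 128, 2).mean(axis=(1, 3))  # downsample by block averaging
field_back = np.kron(field_128, np.ones((2, 2)))                 # nearest-neighbour upsample
print(field_128.shape, field_back.shape)  # (128, 128) (256, 256)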



COMMENT 2:

Shading Differences 

The authors are requested to explain how they resolve the issue of differential shadowing, which may contribute to misclassification of sediment concentration.

 

ANSWER 2:

The database of satellite images was processed by Francis Gohin (Gohin et al 2005), and all image pixels containing clouds or cloud shadows have largely been removed. Thanks to your comment, we added the following information at the beginning of Section 2.3 (MODIS real satellite data): "All clouds and cloud shadows in the raw satellite images were flagged with a low detection threshold so as to remove all questionable signals. Also, atmospheric over-corrections are taken into account using the reflectance at 412 nm (Gohin et al 2005)."



COMMENT 3:

Training and Evaluation Framework

The authors are requested to discuss the effectiveness of performance metrics and cite the following articles discussing the use of these metrics for comparing predicted and satellite observed measurements.

i) Gascoin et al, Theia Snow collection: high-resolution operational snow cover maps from Sentinel-2 and Landsat-8 data, ESSD, 2019
ii) Muhuri et al, Performance Assessment of Optical Satellite-Based Operational Snow Cover Monitoring Algorithms in Forested Landscapes, IEEE JSTARS, 2021

 

ANSWER 3: In terms of metrics, we chose to validate our results with the standard RMSE on log10 values of concentrations for two main reasons:

  • First, the statistical distribution of particle concentrations typically follows a lognormal probability distribution (Eleveld et al 2008), so that log10 values follow a Gaussian distribution. Then, provided biases are negligible (in all experiments the bias was found to be equal to or smaller than 0.01), the RMSE is comparable to a standard deviation and thus completely characterizes the statistical distribution;
  • Second, the evaluation on log10 concentrations emphasizes the validation of low concentrations, which are important in the determination of water transparency, a main goal of our studies.

Thanks to your comment, we created Section 3.3 (Performance metrics) and added the above information to it; a minimal sketch of this log10 bias and RMSE computation is given below. However, in further studies, for instance in order to evaluate extreme turbidity values that do not follow the main Gaussian distribution, it could be interesting to introduce the further classification-based metrics used in the articles you refer us to.
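
The following is a minimal sketch of the metric described above (our own illustration of the computation, not the evaluation code used in the study): bias and RMSE computed on log10 concentrations, which is natural when concentrations are approximately lognormally distributed.

import numpy as np

def log10_bias_and_rmse(predicted, observed):
    """predicted, observed: arrays of strictly positive concentrations (e.g. mg/L)."""
    err = np.log10(np.asarray(predicted)) - np.log10(np.asarray(observed))
    bias = np.mean(err)
    rmse = np.sqrt(np.mean(err ** 2))
    return bias, rmse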



COMMENT 4:

Random Point Sampling Strategy 

How do the authors ensure that they sample sufficient points from each sediment category? Do you perform some kind of bin-filling approach to equalize the number of samples from each category?



ANSWER 4:

For the sampling strategy, we work with a threshold of 500 observation points per day. As stated in our previous response, this means that when a daily image has more than these 500 points, we evaluate the reconstruction by masking and withholding more than half of them (see Section 3.2 for the sampling strategy), with an average of 6800 points per day over the reconstructed year. We do not sample by sediment category; each reconstructed day has at least 250 points available for the evaluation.

Round 2

Reviewer 2 Report

I agree with the acceptance of this article.
