Article
Peer-Review Record

Statistical Evaluation of the Performance of Gridded Daily Precipitation Products from Reanalysis Data, Satellite Estimates, and Merged Analyses over Global Land

Remote Sens. 2023, 15(18), 4602; https://doi.org/10.3390/rs15184602
by Weihua Cao 1, Suping Nie 2,3,4,*, Lijuan Ma 5 and Liang Zhao 6
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 5 July 2023 / Revised: 15 September 2023 / Accepted: 16 September 2023 / Published: 19 September 2023
(This article belongs to the Special Issue Remote Sensing of Floods: Progress, Challenges and Opportunities)

Round 1

Reviewer 1 Report

In the last 10 years, we have seen significant changes in the climate, namely an increase in temperature and precipitation.

Therefore, it is advisable to update the research data using data for 2010-2020.

For comparison, I also suggest using GNSS meteorology data (this is a proposal).

Author Response

Thank you for your comments. For a detailed response to your comments, please refer to the attached document.

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors presented a well-designed and clearly described study. The conclusions and findings covered in the article will be useful methodological material when implementing global precipitation grids in complex analysis and data-processing chains.

Only minor criticisms can be offered from my side:

- line 23 – it looks like the first letter in the sentence is not capitalized – please check;

- line 33 – it looks like the capital “S” in “Statistical” has to be decapitalized – please check;

- lines 44 and 52 – please check for a probable misprint – “estimates” instead of “estimations”;

- please adjust the layout so that the Table 1 caption and the table itself appear together on one page in the printable version of the article, as the caption is currently separated from the table by a page break;

- please check whether the phrase “(The value in the upper right corner of each subfigure is the global mean value.)” is needed in the Figure 3 caption, as no numbers/digits are presented in the upper right corners of the subfigures;

- please relocate Figures 3–5 so that the first citation of each figure appears before the figure itself in the corresponding sections;

- whitespace is missing before “mm/d” in lines 302, 304, 399, 402, and 403;

- finally, it is strongly advised (from my side) to incorporate in the paper a set of appendices presenting Figures 1–6 at higher resolution, or to publish additional supporting materials containing the raw gridded maps and time-series data that were used to compile the figures.

Author Response

Thank you for your comments. For a detailed response to your comments, please refer to the attached document.

Author Response File: Author Response.pdf

Reviewer 3 Report

Manuscript ID: 2517601 

Statistical Evaluation of the Performance of Gridded Daily Precipitation Products from Reanalysis Data, Satellite Estimates, and Merged Analyses over Global Land

 

This study analyzes various operational precipitation data products over land from January 2003 to December 2016. It also compares these products with the rain-gauge dataset from CPC-U. Finally, it is found that the BCC Merged Estimation of Precipitation (BMEP) compares well with CPC-U.

 

Major Comments

The authors have done a good job of presenting the study as well as the findings, but it is not clear what the novelty of the work is, as many of the datasets are already in use and have broadly similar performance with only slight variation.

 

Specific Comments

Line 78-81: Long sentence and grammatical mistakes. Many long sentences have been used throughout the manuscript.

 

Line 85-87: The timely and reliable nature of BMEP makes it invaluable for accurate and up-to-date flood prediction, mitigation, and emergency response efforts.

What is meant by timely and reliable nature?

 

Figure 1: BMEP is closest to CPC-U output but the results are still higher than CPC-U. Is there a reason for that?

 

Figure 2: Why BMEP and ERA-I are similar in all areas except in South America and Eurasia?

 

Figure 3: There should be a discussion on why and how ERAI performs very similar to BMEP.

 

Section 3.4: What is the meaning of spatial bias? It should be explained a little better, so that people in allied fields can understand the manuscript.

 

Section 3.5: Spatial correlation coefficients should be explained a little better, so that people in allied fields can understand the manuscript.
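For context on this comment: a spatial (pattern) correlation coefficient is conventionally the Pearson correlation computed across grid cells at a fixed time, usually with cos(latitude) weighting to account for grid-cell area. Whether the manuscript weights by latitude is not stated in this record, so the following is only an illustrative sketch of the common definition, with hypothetical variable names:

```python
import numpy as np

def pattern_correlation(field_a, field_b, lats):
    """Area-weighted Pearson correlation across grid cells at one time step.

    field_a, field_b: 2-D arrays with shape (lat, lon), e.g. daily precipitation
    from a gridded product and from the reference analysis.
    lats: 1-D array of latitudes in degrees for the rows of the fields.
    """
    # cos(latitude) weights, broadcast over longitude and normalized to sum to 1
    w = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(field_a)
    w = w / w.sum()

    # weighted means, covariance, and variances over all grid cells
    mean_a = (w * field_a).sum()
    mean_b = (w * field_b).sum()
    cov = (w * (field_a - mean_a) * (field_b - mean_b)).sum()
    var_a = (w * (field_a - mean_a) ** 2).sum()
    var_b = (w * (field_b - mean_b) ** 2).sum()
    return cov / np.sqrt(var_a * var_b)
```

A value of 1 indicates that the two fields share the same spatial pattern (up to an affine rescaling), regardless of any overall wet or dry bias; the reviewer's point is that this distinction from the (temporal) correlation deserves an explicit sentence in the text.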

Comments for author File: Comments.pdf

The sentences can be made shorter to improve the quality of the English and reduce grammatical mistakes. The manuscript should be reviewed by a proficient English writer.

Author Response

Thank you for your comments. For a detailed response to your comments, please refer to the attached document.

Author Response File: Author Response.pdf

Reviewer 4 Report

Review of "Statistical evaluation of the performance of gridded daily precipitation products from reanalysis data, satellite estimates, and merged analyses over global land" by Weihua et al.

The manuscript conducts a systematic comparison of different available global gridded precipitation data sets. These data sets are all aimed at a user community in meteorological forecasting, climatology, public infrastructure, civil protection, and general understanding of the hydrological cycle. Although the comparison itself is technically straightforward and no in-depth insights are gained, it definitely deserves publication, as such a systematic comparison provides important background information to the user community with respect to the characterisation of the data sets' capabilities. I have only minor points of criticism. In a few places, the introduction of the presented comparison parameters should be a bit more detailed (issue 1). In some other places, the advertisement of the data set regarded as best by the authors (BMEP) leads to overly positive wording, which I suggest making a bit more neutral (issue 2). The results themselves are convincing enough. Issue 3 is the missing explanation for observed discrepancies, which should be possible from the data evaluation.

I recommend publication after minor revision.


Issues in detail:

Lines 79-90, "recently developed ... BMEP": Is a 2016 publication "recent"? Moreover, the following lines read like an advertisement of BMEP. Up to here, you gave the impression that it is only one of several datasets to be compared. Either be honest and state, before the introduction of all data sets, that it is the purpose of this paper to illustrate the usefulness of BMEP, or be more neutral in wording here, before you have even started the comparison.

line 101, "newly developed": It is more than 7 years old. Please adjust the wording.

line 107: Please tell the reader where one can obtain the versions of the data sets you have used. Please mention download webpages and/or DOIs for all of them.

line 147: It is necessary to add information on the accuracy of the validation data set. The accuracy strongly depends on rain-gauge density and quality in different parts of the world. This uncertainty should be shown in a map presentation or summarized in words.

lines 196-198: This early quality statement "on behalf" of BMEP seems oversimplified. The values of GPCP-1DD and BMEP over the Amazon seem very comparable; differences between them seem to be in the range of 10%. On the other hand, GPCP is closer to CPC-U over Central Africa and North America, and relative differences there are rather in the 25-100% range! The color bar distorts the impression, as large relative differences in the "blue" areas seem small (still "blue"). This is corroborated by the next figure's bias graphs! Please adjust the discussion. It is more obvious, though, that ERA-I and CFSR, and in part JRA-55, overestimate subtropical and tropical rainfall maxima all over the world. Please make clear how much data overlap there is between the CPC-U data set and the BMEP data set.

line 199, Fig. 1: Please add a statement on the number in the upper right of each image.

line 203: Please repeat in the text... Are these daily values?

line 227, Fig. 2: For these 3x5 figures it would be nice to have thinner coastlines, maybe also for Fig. 1. As it is, important values around Indonesia can hardly be seen.

line 247, Fig. 3: Enlarge to the full width of the page to make details more visible. Thinner lines, a larger image, and better resolution would much improve the figure! In addition, please state the time resolution of the data shown in the caption.

line 266, "exceptional consistency": This is difficult to justify with Fig. 3. The difference between green, red, yellow, and sometimes blue is hard to detect. The impression relies on the order of plotting. Please improve Fig. 3.

lines 270-272: Please comment in the manuscript on where this bias comes from. I guess it is mainly the tropical bias?

lines 273-280: It has to be noted that the downloadable data for CFSR end in 2010, if I see it correctly. This would mean that an end of validity of the setup after that date is no surprise. Please discuss, and give more details on the data set versions used and their accessibility.

line 281, Spatial Bias chapter: Please introduce the concept of "spatial bias" better. Mention Figure 2 early, because you will discuss spatial averages of these quantities over time, won't you? I was a bit lost at first.
Alternatively, think about skipping the whole chapter 3.4. You state yourself that there is no additional information compared to the above analysis, so maybe it is not really needed. I really have doubts about its value. Opposing large biases distributed over the grid/world, e.g., due to small-scale "noise" or variability in one data set, could lead to small mean spatial biases. What is it good for then?
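The cancellation effect this comment warns about is easy to demonstrate numerically: large but opposing local biases can average out to a near-zero spatial mean bias while the typical error magnitude stays large. A minimal sketch with synthetic values (not taken from the manuscript):

```python
import numpy as np

# Synthetic grid of daily-precipitation biases (mm/d): a strong wet bias over
# half the domain and an equally strong dry bias over the other half.
bias = np.concatenate([np.full(50, 3.0), np.full(50, -3.0)])

mean_bias = bias.mean()              # opposing errors cancel -> 0.0 mm/d
mean_abs_bias = np.abs(bias).mean()  # actual error magnitude -> 3.0 mm/d
```

Here the mean spatial bias suggests a near-perfect product while every grid cell is off by 3 mm/d, which is exactly why a mean-absolute or RMS measure (or the grid-cell maps of Figure 2) carries the information the spatial mean hides.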


line 321: Please again help the reader by mentioning "correlation as visible in the temporal average in Fig. 2".

line 330: Why? Please discuss the reasons for this annual cycle.

line 353: Again, please give reasons for the results presented in the above chapter.

line 365: Maybe, once more, reference Fig2.

line 414: Please spend a few additional sentences on what a "hit" and a "false alarm" are and how the three discussed scores are constructed.
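For readers of this record: scores of this kind are built from a 2x2 contingency table of event detection above a precipitation threshold. Which three scores the manuscript uses is not stated here, so the sketch below assumes the common trio of probability of detection (POD), false alarm ratio (FAR), and equitable threat score (ETS); the function name and threshold are hypothetical:

```python
import numpy as np

def categorical_scores(obs, est, threshold=1.0):
    """POD, FAR, and ETS for precipitation detection at a given threshold (mm/d).

    A 'hit' is an event (>= threshold) in both the product and the reference;
    a 'false alarm' is an event in the product but not in the reference.
    """
    o = obs >= threshold
    e = est >= threshold
    hits = np.sum(e & o)
    false_alarms = np.sum(e & ~o)
    misses = np.sum(~e & o)
    correct_negatives = np.sum(~e & ~o)
    n = hits + false_alarms + misses + correct_negatives

    pod = hits / (hits + misses)                  # fraction of observed events detected
    far = false_alarms / (hits + false_alarms)    # fraction of detections that are false
    hits_random = (hits + misses) * (hits + false_alarms) / n
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return pod, far, ets
```

A perfect product gives POD = 1, FAR = 0, ETS = 1; ETS discounts the hits expected by chance, which is why it is often preferred for summarizing detection skill in a single number.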

line 453, "GPCP-1DD": Please mention that it shows the smallest overall biases together with BMEP.

line 455, "CFSR": You have to mention again that a large part of its problems stems from the post-2010 period, though.

line 462, "outperforming": As the CPC data set is the basis of the comparison, it cannot outperform any of the others. Please adjust the wording.


Author Response

Thank you for your comments. For a detailed response to your comments, please refer to the attached document.

Author Response File: Author Response.pdf
