Peer-Review Record

A Study on the Optimal Deep Learning Model for Dam Inflow Prediction

Water 2022, 14(17), 2766; https://doi.org/10.3390/w14172766
by Beom-Jin Kim, You-Tae Lee and Byung-Hyun Kim *
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 18 July 2022 / Revised: 31 August 2022 / Accepted: 1 September 2022 / Published: 5 September 2022

Round 1

Reviewer 1 Report

In this study, the authors employed four machine learning models to predict the inflow of four dams in South Korea under drought and typhoon conditions. In general, I found this paper to be well written, and it can be of interest to a wide range of international readers. I have a few comments for the authors to further improve its readability.

1. The authors used the water balance equation to calculate the inflow of two dams. In reality, however, this approach can lead to discontinuous or negative inflow values due to the backwater effect of reservoirs. More details should be given on whether this problem exists in the case study and how the authors dealt with it (a short sketch of the calculation follows this list).

2. It is suggested that the SFM model be moved to the Study Methods section. In particular, the calibration and validation strategy of the SFM needs to be presented, because this is required to interpret Table 15. The authors state that RNN outperforms the SFM, but the information given in Table 15 does not fully support this statement. For example, the SFM simulates less inflow than both the RNN and the observations for all typhoon events at Imha Dam, which could be a systematic error of the SFM that is easily corrected through model calibration (see the bias-correction sketch after this list).

3. Some figures lack legends, for example Fig. 3.
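
To illustrate comment 1, here is a minimal sketch of back-calculating inflow from the water balance; the variable names, the hourly time step, and the clipping remedy are assumptions for illustration, not details taken from the manuscript.

```python
import numpy as np

def water_balance_inflow(storage, outflow, dt=3600.0):
    """Back-calculate inflow from I_t = (S_t - S_{t-1}) / dt + O_t.

    storage : reservoir storage series [m^3]
    outflow : released outflow series [m^3/s]
    dt      : time step [s] (hourly here, an assumed value)
    """
    d_storage = np.diff(storage) / dt     # storage change rate [m^3/s]
    inflow = d_storage + outflow[1:]      # back-calculated inflow
    # Backwater effects or gauge noise can push the estimate negative;
    # clipping to zero is one common (assumed) remedy.
    return np.clip(inflow, 0.0, None)
```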
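And for comment 2, a crude sketch of the kind of one-parameter recalibration that could remove a systematic under-simulation; this is a hypothetical illustration, not the SFM's actual calibration procedure.

```python
import numpy as np

def multiplicative_bias_correction(simulated, observed):
    """Fit a single scaling factor on calibration events and apply it.

    The factor exceeds 1 when the model systematically under-simulates,
    as the reviewer suspects for the SFM's typhoon-event inflows.
    """
    factor = np.sum(observed) / np.sum(simulated)
    return simulated * factor
```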

In general, I found the paper interesting and worthy of publication after addressing the comments above.

Author Response

Please see the attachment.

Thank you

 

Author Response File: Author Response.pdf

Reviewer 2 Report


Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Thank you

 

Author Response File: Author Response.pdf

Reviewer 3 Report

This manuscript presents a study on the optimal deep learning model for predicting the dam inflow of the Andong Dam and Imha Dam in Korea. The study provides a feasible tool for dam inflow prediction and complies with the aims and scope of this journal. However, I would like the authors to respond to the following comments before it can be accepted for publication:

1. The main findings are not clear in the Abstract; please rewrite it to quantitatively show the main objectives and findings of the study.

2. In the Introduction, the authors should comment on the advances and progress in the related topics, rather than just giving a list of several studies.

3. The necessity of the study is not clear. What does the study clarify about coping with weather conditions that vary from time to time and about predicting the dam inflow?

4. Figure 2: what are the small black dots and the black line boundary in the map?

5. Table 2: the heading “classification” is improper; please rename it.

6. There are too many equations in the manuscript; please retain only the important ones rather than the simple, commonly known ones.

7. Why did the authors compare only some quartile values and maximum values of inflow, rather than the full flow processes?

8. Some figures and tables show the same information and could be deleted.

9. The tables and figures are not readable; please improve them.

10. Please give an explanation or discussion of the suitability of these various deep learning techniques for different scenarios (e.g., drought, flood, and typhoon).

11. Please give a detailed comparison of the prediction improvement of the deep learning models over the current SFM model.

Author Response

Please see the attachment.

Thank you

 

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

I thank the authors for the revision and response, although my comments are not fully addressed. The authors state that the inflow is 'measured' by K-water while also stating that it is calculated from the water balance. The authors did not provide a sufficient explanation of why the calibrated SFM, which underestimates the flood peak for all typhoon events, can be considered adequate for comparison with machine learning methods.

These issues could be minor and therefore I leave the decision to the editor.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

1.     Why were other hyperparameters of the deep learning models, such as the learning rate, dropout rate, and number of hidden units, not tuned by grid search and trial and error? Why was the number of hidden layers fixed at 3? Given these questions, I think the authors have a flawed understanding and application of deep learning algorithms, resulting in unreliable research results (see the first sketch after these comments).

2.     The dataset is usually divided into three subsets: calibration, validation, and testing. The calibration set is used to optimize several deep learning models (with different combinations of hyperparameters), the validation set is used to select the best of these models, and the testing set is finally used to evaluate the selected model. When the amount of data is limited, however, a cross-validation strategy is usually applied jointly to the calibration and validation data to determine the optimal model. In other words, for 5-fold cross-validation, the combined calibration and validation data should be divided equally into 5 parts; each part is used in turn as the validation set while the rest is used as the calibration set (see the second sketch after these comments). Please read more of the cross-validation literature to ensure accurate use of cross-validation strategies; otherwise, cross-validation should not be invoked in this manuscript, as it may mislead readers.

 

3.     How were the predictors of each deep learning model determined? Did you use only precipitation for forecasting the dam inflow? You should explain how the predictors were determined. For example, you can use …, Pt-i, …, Pt-2, Pt-1, and Pt to predict Qt+1 (the symbols P and Q denote the precipitation and the dam inflow, respectively). Readers might want to know how you determine the lag parameter i in Pt-i (see the third sketch after these comments). Besides, using only precipitation as the predictor source may lose some significant information.
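
For comment 1, a minimal sketch of a plain grid search over those hyperparameters; the search ranges and the helpers build_model / evaluate_on_validation are hypothetical placeholders, not the authors' code.

```python
from itertools import product

# Hypothetical search space; the ranges are illustrative only.
grid = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "dropout_rate":  [0.0, 0.2, 0.5],
    "hidden_units":  [32, 64, 128],
    "num_layers":    [1, 2, 3],   # rather than fixing 3 layers a priori
}

best_score, best_params = float("inf"), None
for values in product(*grid.values()):
    params = dict(zip(grid, values))
    model = build_model(**params)            # assumed model constructor
    score = evaluate_on_validation(model)    # assumed validation-error routine
    if score < best_score:
        best_score, best_params = score, params
```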
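For comment 2, a minimal sketch of the 5-fold scheme the reviewer describes, using scikit-learn's KFold; disabling shuffling to keep the temporal order of inflow data is an added assumption, not part of the reviewer's text.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(X, y, make_model, n_splits=5):
    """Mean validation RMSE of one hyperparameter candidate across 5 folds.

    X, y : combined calibration + validation data (test set held out separately)
    """
    kf = KFold(n_splits=n_splits, shuffle=False)   # keep temporal order
    scores = []
    for cal_idx, val_idx in kf.split(X):
        model = make_model()                       # fresh model per fold
        model.fit(X[cal_idx], y[cal_idx])
        pred = model.predict(X[val_idx])
        scores.append(np.sqrt(np.mean((pred - y[val_idx]) ** 2)))
    # Compare this mean across candidates, then retrain the best on all folds.
    return np.mean(scores)
```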
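And for comment 3, a minimal sketch of building the lagged predictor matrix (Pt-i, …, Pt) -> Qt+1; the lag order i is a hypothetical choice the authors would need to justify, e.g., via autocorrelation or mutual-information analysis.

```python
import numpy as np

def make_lagged_features(P, Q, i):
    """Build samples (P_{t-i}, ..., P_{t-1}, P_t) -> Q_{t+1}."""
    X, y = [], []
    for t in range(i, len(P) - 1):
        X.append(P[t - i : t + 1])   # i + 1 lagged precipitation values
        y.append(Q[t + 1])           # next-step dam inflow target
    return np.asarray(X), np.asarray(y)

# Example with an assumed lag of 6 steps:
# X, y = make_lagged_features(P, Q, i=6)
```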

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
