Article
Peer-Review Record

The Suitability of UAV-Derived DSMs and the Impact of DEM Resolutions on Rockfall Numerical Simulations: A Case Study of the Bouanane Active Scarp, Tétouan, Northern Morocco

Remote Sens. 2022, 14(24), 6205; https://doi.org/10.3390/rs14246205
by Ali Bounab 1, Younes El Kharim 1 and Rachid El Hamdouni 2,*
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 6 October 2022 / Revised: 2 December 2022 / Accepted: 4 December 2022 / Published: 7 December 2022

Round 1

Reviewer 1 Report

The paper presents the impact of digital elevation model (DEM) resolution on rockfall simulations through a case study. In particular, DEMs with three different resolutions are implemented in the simulations, which show that the results obtained with the high-resolution model are the most consistent with the observations from a previous rockfall event.

I congratulate the authors on the work; the paper is well written and the discussion is very clear. I have no particular comments. I only suggest reviewing the manuscript to correct some minor shortcomings (e.g. "planting trees … etc." or the format of the letters -a, -b, -c for sub-figures) and enlarging the text in some figures, which is poorly legible (e.g. Figures 2, 3 and 5).

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

A nice paper that I recommend for publication. I think I would be more worried if the rockfall models had underestimated the runout distance and velocity. It was interesting that you could judge the age of impact scars on trees, and the paper strikes a good balance between field inspection and the remotely sensed data.

Well done.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

I firstly have a set of general comments, all of which can be regarded as major.
1. I have to wonder about the motivation and real benefit of the presented work. It must have been clear to the authors from the beginning that a DEM with a resolution of 5 m or 30 m cannot provide good results for simulating the runout of boulders with a diameter of around 1.5 m. I think there is potential for a good scientific publication in the work done, but I would see it elsewhere. I would, for example, recommend that the authors try to find an optimal DEM resolution for their kind of simulations, i.e. test various resolutions of the UAV-derived DEM (e.g. 0.2 m, 0.5 m, 1 m, 2 m, 5 m) and evaluate the quality of the simulations based on them with respect to the required processing time (a resampling sketch is given after these comments).
2. The whole study is based on a visual interpretation of the results; there is no statistical evaluation or other kind of objective evaluation.
3. The quality of the presentation must be improved. Some figures are not readable, and improper terminology is used in some cases.
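For illustration only, a minimal sketch of how such a multi-resolution test could be set up, assuming GDAL's Python bindings, a projected CRS in metres, and hypothetical file names (none of this is taken from the paper):

```python
# Sketch: derive several test resolutions from one UAV DEM with GDAL, so the rockfall
# simulation can be repeated per resolution and compared against processing time.
# "uav_dem_full_res.tif" and the output names are hypothetical placeholders.
from osgeo import gdal

target_resolutions = [0.2, 0.5, 1.0, 2.0, 5.0]   # cell sizes in metres, as suggested above

for res in target_resolutions:
    out_path = f"dem_{str(res).replace('.', 'p')}m.tif"
    gdal.Warp(
        out_path,
        "uav_dem_full_res.tif",
        xRes=res,
        yRes=res,
        resampleAlg="average",   # averaging is a common choice when coarsening an elevation grid
    )
    print(f"wrote {out_path}")
```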

Specific comments:
P1L42: I have never heard of laser scanning from a satellite for DEM creation. In the work of Tarquini et al. (2012), which you cite in this sentence, the authors analyzed the quality of an Italian DEM created from vector data containing contour lines and elevation points.
In relation to airborne LiDAR data, I would like to note that many European countries have a DEM created by aerial LiDAR covering large regions or even the whole country. See e.g. https://planlaufterrain.com/LiDAR-Data-and-FAQ/ for some specific information. A 1 m spatial resolution is standard for this type of product; therefore, a UAV is definitely not necessary to produce a DEM at this resolution (as you write in the Conclusion section).
P2L53: You use the term DTM for the first time here. Please provide its full meaning.
P2L56: Why don't you provide references in [ ], as is standard for MDPI papers and as you do in other sections of your manuscript?
P2L85: Please, what do you mean by "N50° to N90°"? Do you mean the orientation of the faults with respect to north, or something else? I would recommend rewriting this sentence. The same applies to line 94.
P3 Figure 1b: the aerial image is really not legible. Could you enlarge it and also try to increase its quality?
P4L158: The authors have to provide information about the quality of the DEMs used. First, provide the official information about the height errors of the 5 m and 30 m resolution DEMs; it should be available in their metadata. Then, you can use the elevation points provided by the Urban Agency of Tetouan to evaluate the accuracy of all the DEMs used, or use the UAV-derived DEM to evaluate the accuracy of the other two DEMs.
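As an illustration of the kind of accuracy check asked for here, a minimal sketch assuming rasterio, a hypothetical check-point file and column names, and point coordinates in the DEM's CRS:

```python
# Sketch: check the vertical accuracy of a DEM against independent elevation points
# (e.g. points such as those provided by the Urban Agency of Tetouan). File and column
# names are hypothetical placeholders.
import numpy as np
import pandas as pd
import rasterio

points = pd.read_csv("check_points.csv")            # columns: x, y, z (reference elevation)
with rasterio.open("dem_1m.tif") as dem:
    sampled = np.array([v[0] for v in dem.sample(zip(points.x, points.y))])

errors = sampled - points.z.to_numpy()               # DEM height minus reference height
print(f"mean error: {errors.mean():.2f} m")
print(f"std dev:    {errors.std(ddof=1):.2f} m")
print(f"RMSE:       {np.sqrt((errors**2).mean()):.2f} m")
```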
P4L167: GPS stands for Global Positioning System, not Ground Positioning System. The GPS receiver built into the DJI Phantom 4 definitely does not have the stated positioning accuracy of 0.1/0.3 m in the horizontal/vertical direction. How did you achieve this result? Did you georeference via ground control points when processing the images, or was your Phantom 4 equipped with an advanced GNSS module allowing a differential positioning solution (such as RTK or PPK)? You have probably confused the positioning accuracy with the accuracy of the drone's ability to hover.
P4L169: 12.4 Mpix is the total number of effective pixels on the sensor of the DJI Phantom 4 camera; it is not "a pixel resolution".
P5L170: Please use proper terminology when describing the processing of the RGB images to derive a raster DEM. Also provide the most important information related to image collection (size of the area, flight height above the ground, image overlap, etc.) and processing (software, steps, etc.). In a scientific paper you definitely cannot use terms such as "filtering 'bad' data points"; you have to use proper terminology.
P5L184: Please provide more information about the elevation points you used to prepare a digital terrain model from your digital surface model, i.e. their number, their spatial distribution in the area and, above all, their accuracy. Did you also use some elevation points for an independent validation of your final digital terrain model? You base your study on this digital terrain model, so you should evaluate its quality. This relates to my comment on P4L158.
P6L197: The authors state that the RocPro3D software was used for their simulations. Since they mention the use of digital terrain models in this software and also the existence of a dense pine canopy in their area, I wonder whether the simulation software can account for this type of vegetation cover, which can definitely slow down, or at least influence, the trajectory of a boulder during its fall. Can you comment on this in your manuscript?
P8 Figure 5: Increase the size of this figure; it is not readable in its current form.
P9L297: The paragraph starting with "By looking at the ..." appears twice in the text.
P10 Figure 8: You must improve the way you present this result. First, I recommend zooming into the area with the boulder trajectories; the current scale of the image does not allow the reader to see any detail. In the text you mention three boulders that you studied, so I would expect to see three individual trajectories representing the ground truth (or at least what is taken to be the ground truth).
It is also not clear to me how you generated the various modelled trajectories: in which simulation parameters do they differ? Moreover, besides a verbal description of the results, you have to provide a statistical evaluation; otherwise your assessment is based only on a visual analysis (which can be subjective).
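One possible objective measure, sketched below with shapely and placeholder coordinates (not the authors' data), is the horizontal distance from each simulated vertex to the digitized reference trajectory:

```python
# Sketch: a distance-based comparison between a simulated trajectory and the mapped
# (reference) trajectory, instead of a purely visual check. Coordinates are hypothetical
# placeholders; both trajectories are assumed to share the same projected CRS.
import numpy as np
from shapely.geometry import LineString, Point

reference = LineString([(0, 0), (40, 12), (85, 20), (130, 23)])   # digitized real trajectory
simulated = [(0, 1), (42, 15), (88, 26), (128, 30)]               # vertices of one modelled run

# Horizontal distance from each simulated vertex to the nearest point on the reference line.
distances = np.array([reference.distance(Point(x, y)) for x, y in simulated])

print(f"mean horizontal deviation:   {distances.mean():.1f} m")
print(f"median horizontal deviation: {np.median(distances):.1f} m")
print(f"max horizontal deviation:    {distances.max():.1f} m")
```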
P11 Section 4.4: I wonder whether the whole of Section 4.4 brings any valuable information to the reader. If I am not wrong, you can only present velocity/energy results based on the different DEMs, but you are not able to objectively assess which of them are closest to reality. Can you comment on this?
P14L457: The figure caption mentions Figure 11e; however, there is no such figure (only a to d).
P15L462: I think you have mixed up the terms low- and high-resolution DEM. In your case, high resolution is 1 m and low resolution is 5/30 m. Or do you really want to say that the 5/30 m DEMs are "suitable for rockfall simulations especially at the small scale"?
P15L465: I think that your sentence "The results of this paper suggest that at minimum, a 1m resolution DEM is required in order to produce reasonably accurate simulation models." is an overstatement. You do not provide any objective proof of this statement anywhere in the paper. Here I would like to recall my general comment no. 1 and the recommendation in it.
P15 Conclusion: I think some parts of your Discussion section actually belong in the Conclusion section, as they only summarize your results. I therefore recommend revising your Discussion and Conclusion sections.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

I want to thank the authors for the revised version of their manuscript. I agree that the quality of the document has improved. On the other hand, the authors unfortunately haven't addressed some of my comments, although in their cover letter they always write that they did. Even though I am not happy about it, I have to ask the authors again to address all my comments properly.

In your cover letter, you write that you are not able to test various resolutions of the UAV-derived DEM because your simulation software cannot work with such data. Since I recommended that you also test various resolutions exceeding the 1 m you used, I do not understand your answer. Still, if you do not want to perform the recommended analysis, please at least invest some time in better defining the motivation, goals and specific benefits of your work in the Introduction section. Otherwise, I do not consider its contribution clear and substantial enough for a scientific publication in the Remote Sensing journal.

Specific comments:
P2L84: Although you wrote in your cover letter that you revised the sentences, you haven't answered my question from the first review: what do you mean by "N50° to N90°"? Do you mean the orientation of the faults with respect to north, or something else? Please make this clear in your manuscript. Typically, writing N50° is used to specify latitude, i.e. a geographic coordinate (which is probably different from what you want to express).
P3 Figure 1b: the problem with the image is definitely not due to MS Word; its size is simply too small for it to be readable. Therefore I asked you to enlarge it, and I am asking again.
P5L189: Although the authors write in their cover letter that "More information regarding the spatial distribution of GCPs is presented in the text", I do not see it anywhere. Can you please tell me where exactly (in which figure) I can see the spatial distribution of the GCPs used in the area, and mention this information directly in the text on page 5, in the paragraph introducing the use of GCPs?
Moreover, you write that the position of the GCPs was estimated by "differential GPS". There are many differential techniques with varying positioning quality. Please be more specific in this regard and provide more relevant information.
There are also some other questions that need to be answered, or at least discussed. You write that you used GCPs first to georeference the sparse point cloud and optimize parameters (task 1). Then you used 186 GCPs to obtain the DEM from the DSM in vegetated areas (task 2). Finally, you used GCPs to evaluate the quality of all three tested DEMs (task 3). I wonder: did you use the same set of GCPs for tasks 1 and 2 and then for task 3? If you did, your evaluation of the DEMs in step 3 is not fair, because you used the same data set both for creating the DEM from UAV imagery and for its evaluation. Please clarify this situation (the proper approach would be to use some points for steps 1 and 2 and an independent set of points for step 3).
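A minimal sketch of the independent-split idea described above, assuming a hypothetical GCP file and an arbitrary 70/30 split (not taken from the paper):

```python
# Sketch: split the surveyed points into two independent sets, one for georeferencing and
# DSM-to-DTM processing (tasks 1-2) and one held out purely for the accuracy assessment
# (task 3), so validation points never enter the model they evaluate.
# File name, column names and the split ratio are hypothetical.
import pandas as pd

gcps = pd.read_csv("gcps.csv")                                            # columns: id, x, y, z
shuffled = gcps.sample(frac=1.0, random_state=42).reset_index(drop=True)  # reproducible shuffle

n_val = int(0.3 * len(shuffled))                  # hold out ~30% for validation
validation_points = shuffled.iloc[:n_val]         # task 3 only (independent accuracy check)
processing_points = shuffled.iloc[n_val:]         # tasks 1-2 (georeferencing, DSM-to-DTM)

processing_points.to_csv("gcps_processing.csv", index=False)
validation_points.to_csv("gcps_validation.csv", index=False)
```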
P5L197: Please state in the manuscript how you smoothed the DEM after subtracting the canopy from it to obtain the final product.
P6L204: Information about the quality of the tested DEMs should be in Section 3.3, not in Section 3.4, "3D trajectory simulation". Please always keep information in the sections where it belongs.
Besides providing the histogram in Figure 3d, please provide the corresponding values of the basic statistical parameters (mean error, standard deviation, RMSE) for the individual DEMs. From the histogram it is clear that the errors of the individual DEMs differ significantly; however, it is necessary to also quantify them in the text.
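For reference, the usual definitions of these statistics, with the height error taken per check point:

```latex
\[
\mathrm{ME} = \frac{1}{n}\sum_{i=1}^{n}\varepsilon_i, \qquad
\mathrm{SD} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(\varepsilon_i-\mathrm{ME}\right)^{2}}, \qquad
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\varepsilon_i^{2}},
\]
where $\varepsilon_i = z_i^{\mathrm{DEM}} - z_i^{\mathrm{ref}}$ is the height error of the DEM at the $i$-th check point and $n$ is the number of check points.
```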
P6L202: I am afraid that you haven't understood my comment, so I repeat it. I understand that you excluded forest vegetation from your UAV-imagery-based DSM and prepared a DEM representing only the ground. However, my question focuses on a completely different topic: can the simulation software account for dense vegetation cover (trees), which can definitely slow down, or at least influence, the trajectory of a boulder during its fall? What I mean is that a boulder can easily hit a tree trunk and therefore be stopped by it, or at least lose part of its energy and change its trajectory. Is this possibility included in the simulation process?
P6L220: Please provide appropriate references for the mentioned approaches, i.e. ANOVA and the post hoc test.
P10L334: It is not clear to me how exactly you computed the "horizontal distance to real trajectory" values shown in Figure 9. You have to describe this explicitly in the section devoted to the methodology of your work, probably in 3.5. You already mention "horizontal spatial error" there; however, it is not clear how exactly it was computed. Optimally, provide a formula. Also, please be consistent in your terms: in 3.5 you write "horizontal spatial error", in 4.4 "horizontal error", and in the Figure 9 caption "horizontal distance to real trajectory". This is confusing for the reader. An important question related to the interpretation of these results: how did you decide that the range 0-2 m means a "good error range"? Why 0-2 m and not something different? What is the basis of your statement?
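One common way such a value could be defined (a plausible definition for illustration, not necessarily the authors' own):

```latex
\[
d_i \;=\; \min_{\mathbf{q}\,\in\,T_{\mathrm{ref}}} \left\lVert \mathbf{p}_i - \mathbf{q} \right\rVert_{2},
\]
where $\mathbf{p}_i = (x_i, y_i)$ is the horizontal position of the $i$-th vertex of a simulated trajectory, $T_{\mathrm{ref}}$ is the set of horizontal positions along the digitized real trajectory, and $\lVert\cdot\rVert_2$ is the Euclidean norm in the horizontal plane.
```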
P10 Figure 8: Thank you for improving the figure. I think that you should zoom in once more: as the parts showing the trajectories are the only important thing in the figure, they should fill the full width of the panels. Figure 8a in particular can be significantly improved in this respect.
Figure 8d: As you have three reference trajectories, I would expect to see three boxplots for each DEM, not just one. I think this would be more logical than your current setup, where the results for the various reference trajectories are pooled into a single boxplot. I would also suggest increasing the size of the boxplots, as they are not easily readable (a plotting sketch is given after this comment).
You haven't responded to my original question: how did you generate the various modelled trajectories, i.e. in which simulation parameters do they differ? Please answer it in the manuscript.
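A minimal matplotlib sketch of the grouped layout suggested for Figure 8d, using random placeholder errors rather than the paper's data:

```python
# Sketch: one boxplot per (DEM, reference trajectory) pair instead of one pooled box per DEM.
# The error samples below are random placeholders standing in for computed horizontal deviations.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
dems = ["1 m DEM", "5 m DEM", "30 m DEM"]
trajectories = ["Boulder 1", "Boulder 2", "Boulder 3"]

data, positions, labels = [], [], []
for i, dem in enumerate(dems):
    for j, traj in enumerate(trajectories):
        data.append(rng.normal(loc=2 + 3 * i, scale=1 + i, size=50))  # placeholder errors [m]
        positions.append(i * (len(trajectories) + 1) + j)             # leave a gap between DEM groups
        labels.append(f"{dem}\n{traj}")

fig, ax = plt.subplots(figsize=(8, 4))
ax.boxplot(data, positions=positions)
ax.set_xticks(positions)
ax.set_xticklabels(labels, fontsize=7)
ax.set_ylabel("Horizontal deviation from reference trajectory [m]")
plt.tight_layout()
plt.show()
```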
P12L369: Please make it explicitly clear in the manuscript what the tested null hypotheses in your statistical tests were. To perform ANOVA, some assumptions need to be met: the values need to follow a normal distribution and have comparable variances (homoscedasticity). Were these assumptions met in the case of your data? How did you verify this?
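For illustration, a minimal sketch of how these assumptions could be checked with SciPy, using random placeholder samples in place of the paper's velocity/energy values:

```python
# Sketch: checking ANOVA assumptions (normality, homoscedasticity) before the test itself.
# group_a/b/c stand for samples obtained with the three DEMs; the values are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(10, 2, 100)   # e.g. 1 m DEM
group_b = rng.normal(12, 2, 100)   # e.g. 5 m DEM
group_c = rng.normal(15, 2, 100)   # e.g. 30 m DEM

# Normality of each group (Shapiro-Wilk): H0 = the sample comes from a normal distribution.
for name, g in [("1 m", group_a), ("5 m", group_b), ("30 m", group_c)]:
    stat, p = stats.shapiro(g)
    print(f"Shapiro-Wilk {name}: p = {p:.3f}")

# Homoscedasticity (Levene): H0 = the groups have equal variances.
stat, p = stats.levene(group_a, group_b, group_c)
print(f"Levene: p = {p:.3f}")

# One-way ANOVA: H0 = all group means are equal.
stat, p = stats.f_oneway(group_a, group_b, group_c)
print(f"ANOVA: p = {p:.3f}")

# If normality or homoscedasticity is violated, a non-parametric alternative such as
# Kruskal-Wallis (stats.kruskal) may be more appropriate.
```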
P15 Conclusion: In my original review, I recommended a revision of your Discussion and Conclusion sections, because I thought that some parts of your Discussion section actually belong in the Conclusion section, as they only summarize your results. In your cover letter you wrote that "The conclusion and discussion were revised according to your recommendations." However, I do not see any changes in the Discussion section at all (apart from a few really cosmetic ones), and there is only one sentence added in the Conclusion section. I therefore ask you again to take my recommendation into account, or to explain why you haven't done so.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
