Article
Peer-Review Record

Predicting Equivalent Water Thickness in Wheat Using UAV Mounted Multispectral Sensor through Deep Learning Techniques

Remote Sens. 2021, 13(21), 4476; https://doi.org/10.3390/rs13214476
by Adama Traore 1, Syed Tahir Ata-Ul-Karim 2, Aiwang Duan 1, Mukesh Kumar Soothar 1, Seydou Traore 3 and Ben Zhao 1,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 6 October 2021 / Revised: 30 October 2021 / Accepted: 3 November 2021 / Published: 8 November 2021
(This article belongs to the Special Issue Proximal and Remote Sensing for Precision Crop Management)

Round 1

Reviewer 1 Report

Dear authors,

I believe your text is interesting and important. However, some issues arise.

(1) You note (line 470 and 535) that farmers per se will be able to use your results and recommended models to manage irrigation... Are you sure? I am not...

(2) You used the UAV with multispectral sensors. That's perfect, but we know that such sensors are relatively expensive. Did (or could) you discuss some opportunities to use standard visual sensors?

(3) Why were different wheat cultivars used? Is this important for understanding of your results?

 

References — please, add doi where they are applicable!

 

Besides, there are a lot of typos and rather complicated phrases. Please, check your text very carefully. For instance:

line 48

did not > does not

[5],[6],[7] > [5–7]

 — also check similar problems (e.g. line 53)

line 53

The use of UAV platform not only allow the non-destructive and timely data acquisition but also capture the variability on field scale > The use of UAV platform does not only allow the non-destructive and timely data acquisition but captures the variability on field scale as well

line 59

(2001, 2002) > [#,#]

— also lines 507, 522

line 62

1555-1750 > 1555–1750

line 74

is due > was due

line 106

studies — please, add a reference

lines 116–121

we investigated the feasibility of different ML algorithms such as deep neural networks (DNN), artificial neural network (ANN), boosted regression tree (BRT), and support vector machines (SVMs) to predict wheat EWT using multispectral band images and VIs. Especially to: (i) using a DT to select best VIs to predict EWT; (ii) determine the relationship between EWT and vegetations indices (VIs) using the different ML models.

— please, rewrite these sentences in more explicit manner

line 144

An of 0.36 >  An area of 0.36 [right?]

line 146

experiment > experiments

line 165

the weight of fresh mass and fresh [??? masses] of leaf,

lines 181–182

Pix4 Dmapper or pix4D mapper — what is correct?

lines 215–216

The independent variables were determined by the value of y. — Are you sure?

lines 219–220

The SVM models in the case of a regression problem > In the case of a regression problem, the SVM models

lines 234–235

Owing to no requirement of data transformation and outlier elimination are the major reasons of wide use of BRT to produce the desired results. — please, try to rewrite (may be: Wide use of BRT to produce the desired results is mainly determined by absence of requirement for data transformation and outlier elimination...)

line 308

root means > root mean

lines 334–335

From the results > Our results show

line 422

map ... show > map ... shows

lines 429–430

tool with which to model > tool to model

 

Author Response

Responses to Reviewer 1

 

[ comment] I believe your text is interesting and important. However, some issues arise.

Response: Thank you very much for your comments. We have tried our best to carefully address all of the issues you raised. We believe the manuscript has been substantially improved from both a scientific and an English language point of view. However, if further improvement is required, please let us know.

[ comment] You note (line 470 and 535) that farmers per se will be able to use your results and recommended models to manage irrigation... Are you sure? I am not...

Response: We believe that the newly developed models will assist RS scientists and/or progressive farmers with larger farm sizes to generate these valuable maps of their fields for irrigation management by identifying the areas with lower equivalent water thickness, despite the complex nature of ML and DL models. Learning to operate UAVs, the processing techniques, and the machine learning models may become an integral component of this technology in the future. Of course, this will require extensive training of farmers on these tools and models. Besides, the newly developed models will also assist experts in the fields of technology, agriculture, programming, policy making, and extension in making recommendations that can guide smallholder farmers to accomplish water management. However, further investigation with different regions and wheat cultivars is recommended to test the applicability of the newly developed models for crop water diagnosis.

[ comment] You used the UAV with multispectral sensors. That's perfect, but we know that such sensors are relatively expensive. Did (or could) you discuss some opportunities to use standard visual sensors?

Response: A discussion paragraph has been added on page 26 (lines 599-628). We agree that multispectral sensors are relatively expensive. However, we used one for its high-resolution images. The multispectral camera can capture images in the Red, Green, NIR, Blue, and Red Edge regions. The option of using a standard visual camera, such as an RGB camera, has been added to the discussion section. An RGB camera can be used as a low-cost substitute for the multispectral camera to diagnose EWT using machine learning approaches.

 

[ comment] Why were different wheat cultivars used? Is this important for understanding of your results?

Response: Different wheat cultivars were used to test the applicability of the newly developed EWTcanopy model to other wheat cultivars.

[ comment] References — please, add doi where they are applicable!

Response: We have added DOIs for the references where applicable. DOIs were not included in the previous version of the manuscript because their addition was not mandatory in the Remote Sensing manuscript-preparation template.

[ comment] Besides, there are a lot of typos and rather complicated phrases. Please, check your text very carefully. For instance:

Response: The manuscript has been edited by MDPI English editing service.

[ comment] line 48: did not > does not.

Response: “Did not” has been replaced by “does not” in the revised manuscript on line 49.

[ comment] line 48:  [5],[6],[7] > [5–7]

Response: The correction has been made as per your suggestion on line 49.

[ comment] — also check similar problems (e.g. line 53)

Response: All similar problems have been checked and corrections have been made in the revised manuscript.

[ comment] line 53: The use of UAV platform not only allow the non-destructive and timely data acquisition but also capture the variability on field scale > The use of UAV platform does not only allow the non-destructive and timely data acquisition but captures the variability on field scale as well.

Response: The sentence has been revised on lines 54-55 to: “The use of UAV platform does not only allow the non-destructive and timely data acquisition but captures the variability on field scale as well.”

 

[ comment] line 59: (2001, 2002) > [#,#]

Response: Revised accordingly on line 60.

[ comment] — also lines 507, 522

Response: Revised accordingly on lines 561 and 571. All similar problems have been checked and corrections have been made in the revised manuscript.

 

[ comment] line 62: 1555-1750 > 1555–1750

Response: Revised accordingly on line 63.

[ comment] line 74: is due > was due: 

Response: “is due” has been replaced by “was explained” in the revised manuscript on line 75.

[ comment] line 106: studies — please, add a reference:

Response: References have been added on line 121.

[ comment] lines 116–121: we investigated the feasibility of different ML algorithms such as deep neural networks (DNN), artificial neural network (ANN), boosted regression tree (BRT), and support vector machines (SVMs) to predict wheat EWT using multispectral band images and VIs. Especially to: (i) using a DT to select best VIs to predict EWT; (ii) determine the relationship between EWT and vegetations indices (VIs) using the different ML models.

— please, rewrite these sentences in more explicit manner.

Response: The sentence has been rephrased on lines 132-138 as: Therefore, the objective of the present study was to investigate the feasibility of different ML algorithms, such as DNN, ANN, boosted regression tree (BRT), and support vector machines (SVMs) models, to predict wheat EWT using multispectral single-band images and VIs. The key idea behind using several ML methods is to take advantage of each method’s capability to predict EWT. We thus investigated an FS algorithm-based DT to determine the best VIs for use as input parameters in the different machine learning models.

[ comment] line 144:   An of 0.36 > An area of 0.36 [right?]

Response: Thank you very much for pointing this out. Revised accordingly on line 162 as follows: “Areas of 0.36 m2”.

[ comment] line 146: experiment > experiments

Response: Revised accordingly; we added the missing letter on line 163.

[ comment] line 165: the weight of fresh mass and fresh [??? masses] of leaf:

Response: Revised accordingly on lines 184-185: “We used the weight of the fresh and dry masses of the leaf, stem and ear for the calculation of EWT.”
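For context, EWT is conventionally derived from exactly these measurements: the water mass (fresh minus dry mass) divided by the sampled area. A minimal sketch of that calculation (the function name and units are illustrative, and the 0.36 m2 sample area and example masses are assumptions, not values from the manuscript):

```python
def equivalent_water_thickness(fresh_mass_g, dry_mass_g, area_m2):
    """Equivalent water thickness in g/m^2: water mass per unit sampled area.

    Since 1 g of water occupies ~1 cm^3, dividing the result by 10^4
    converts it to g/cm^2, numerically equal to a water depth in cm.
    """
    if dry_mass_g > fresh_mass_g:
        raise ValueError("dry mass cannot exceed fresh mass")
    return (fresh_mass_g - dry_mass_g) / area_m2

# Hypothetical sample: 120 g fresh and 40 g dry biomass over 0.36 m^2
ewt = equivalent_water_thickness(120.0, 40.0, 0.36)  # ~222.2 g/m^2
```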

[ comment] lines 181–182: Pix4 Dmapper or pix4D mapper — what is correct?

Response: The correct term is Pix4D mapper. We have corrected it in the revised manuscript on lines 202-203.

[ comment] lines 215–216: The independent variables were determined by the value of y. — Are you sure?

Response: Thank you, it’s a good question. Revised accordingly on line 269. It was a typographical error; the independent variables are represented by the x1, x2, …, xn values.

[ comment] lines 219–220: The SVM models in the case of a regression problem > In the case of a regression problem, the SVM models.

Response: We have rephrased the sentence considering your suggestion on lines 273-274.

[ comment] lines 234–235: Owing to no requirement of data transformation and outlier elimination are the major reasons of wide use of BRT to produce the desired results. — please, try to rewrite (may be: Wide use of BRT to produce the desired results is mainly determined by absence of requirement for data transformation and outlier elimination...)

Response: Thanks for your suggestion. The sentence has been rephrased in the revised manuscript as per your suggestion on lines 292-293 as: “BRT is commonly used because it does not require data transformation or outlier elimination”.

 

[ comment] line 308: root means > root mean

Response: “root means” has been replaced by “root mean” in the revised manuscript on line 361.

[ comment] lines 334–335: From the results > Our results show

Response:  The sentence has been rephrased as per your suggestion on line 388.

[ comment] line 422: map ... show > map ... shows

Response: The sentence has been rephrased in the revised manuscript as per your suggestion on line 476.

[ comment] lines 429–430: tool with which to model > tool to model

Response: The sentence has been rephrased in the revised manuscript as per your suggestion on line 483.

 

Reviewer 2 Report

Review - “Predicting equivalent water thickness in wheat using UAV mounted multispectral sensor through deep learning techniques”

The manuscript describes a method to investigate the performance of vegetation indices together with machine learning techniques in retrieving Equivalent Water Thickness, for wheat crop under different water regimes and nitrogen levels.

There are already a lot of works describing methodologies to estimate wheat bio-physiological parameters from UAV and satellite remote sensing images. The data were collected over one year of experimentation (from March to May 2020); however, I believe this kind of work should be performed over at least two years of experimentation. The use of vegetation indices to estimate crop parameters (together with ML) has already been discussed in many papers. What are the innovation and originality of the research? I believe this manuscript does not present any novelty to the research domain, and thus does not have the potential for being published by Remote Sensing.

Comments

Lines 57 - 58: I believe this sentence can be deleted and replaced by a more specific one introducing the EWT in line 65. That way, you introduce at the beginning of the paragraph a general presentation of the water reflectance band, and then specify it for the detection of the EWT.

Lines 69 – 78: The paragraph should be re-written, describing in more detail the spectral bands most sensitive to EWT.

Lines 93 - 94: The sentence can be deleted.

Line 104: The sentence is not clear. It should be re-written.

Line 127: As I understood, the data collection was limited to one year of experimentation, and more specifically to three months. I surely think that this kind of experimentation requires at least two years of data.

Line 145: Missed subject?

Line 200: I don’t believe that the use of the NDVI index alone is enough to separate vegetation from other land covers. The bibliography is rich in combinations of indices used together with the NDVI, and in specified thresholds to separate vegetation from other classes.

Lines 205 – 206: On what basis did you conclude that the integration of NDVI with the Otsu algorithm yielded efficient results in separating canopy and soil?

Comments for author File: Comments.docx

Author Response

Responses to Reviewer 2

The manuscript describes a method to investigate the performance of vegetation indices together with machine learning techniques in retrieving Equivalent Water Thickness, for wheat crop under different water regimes and nitrogen levels.

[comment] There are already a lot of works describing methodologies to estimate wheat bio-physiological parameters from UAV and satellite remote sensing images. The data were collected over one year of experimentation (from March to May 2020); however, I believe this kind of work should be performed over at least two years of experimentation. The use of vegetation indices to estimate crop parameters (together with ML) has already been discussed in many papers. What are the innovation and originality of the research? I believe this manuscript does not present any novelty to the research domain, and thus does not have the potential for being published by Remote Sensing.

Response: We agree with the reviewer that a lot of work describing methodologies to estimate wheat bio-physiological parameters from UAV and satellite remote sensing images has already been done. This study was conducted under nitrogen-water colimitation (with varied nitrogen and irrigation levels). The assessment of EWT under nitrogen-water colimitation is a novel aspect of this study. The information has been added to the revised manuscript. Additionally, this study aimed to investigate ML methods such as SVM, ANN, BRT and DNN and to compare their performance in terms of estimating EWT, which has not yet been explored, particularly under nitrogen-water colimitation. Most of the previous studies were conducted to assess nitrogen content, chlorophyll content, plant biomass, etc.

Moreover, to the best of our knowledge, no attempt has been made to assess EWT using DNN. Previous studies assessing crop biophysical parameters generally used VIs already reported to have good correlation using linear regression models. Since the neural network (NN) was proposed, experts have been exploring the possibility of training deep neural networks (DNN) with many hidden layers, like the human neural system. However, the success of the ANN was limited to 1 or 2 hidden layers, and the results after training deeper networks were usually worse. The difficulty of training DNNs lies in vanishing gradients with the increment of the hidden layer number (depth of network). The DNN-MLP uses activation functions that smooth the issue of vanishing or exploding gradients and speed up training. In addition, deep architectures often have an advantage over ANN architectures and traditional ML models (SVMs, BRT, etc.) when dealing with complex learning problems. The reviews of the existing literature on ML in agriculture are outdated. Our study provides a comprehensive ML comparison that can capture the relationships between typical DNN models and EWT. Most reported studies retrieved EWT in the region of 900–2500 nm. The hypothesis is that, if there is a relation between N, chlorophyll content and water content, the indices developed in the range of 500–800 nm to assess these crop bio-parameters may be used to predict EWT. The relationships between VIs derived from spectral reflectance and EWT are known to be nonlinear. As a result, nonlinear machine learning methods have the potential to improve the estimation accuracy. The results of this study can significantly improve the estimation of EWT using UAV remote sensing.
The application of machine learning methods, especially DNN, offers a new opportunity to better use remote sensing data for monitoring and guiding precision irrigation. More studies are needed to further improve these machine learning-based models by combining UAV data and other related climatic parameters for applications in precision irrigation scheduling. The employment of such techniques may allow farmers a more efficient method to manage irrigation scheduling at large scale.

[comment]  Lines 57 - 58: I believe this sentence can be deleted and replaced by a more specific one introducing the EWT in the line 65. Like that you are introducing in the beginning of the paragraph a general presentation of the water reflectance band, then you are going to specify it for the detection of the EWT.

Response: Considering your suggestion, we have removed the sentence.

[comment]  Lines 69 – 78: The paragraph should be re-written, describing in a more detailed way the most sensitive spectral bands to EWT in spatial.

Response: The paragraph has been re-written on lines 58-79 and more details have been added as per your suggestions. If it is still not clear or further details are required, please let us know.

 

[comment] Lines 93 - 94: The sentence can be deleted.

Response: The sentence has been deleted from revised manuscript.

[comment] Line 104: The sentence is not clear. It should be re-written.

Response: The sentence has been rephrased in the revised manuscript for clarity of expression on line 121.

[comment] Line 127: As I understood, the data collection was limited to one year of experimentation, and more specifically to three months. I surely think that this kind of experimentation requires at least two years of data.

Response: Yes, the data collection was limited to one year of experimentation. To ensure the reliability of the results, 3 experiments were conducted in this study. Out of the 3 experiments, one was conducted under controlled conditions. Several studies conducted under controlled conditions used one year of experimentation. Besides, 2 field experiments were conducted to obtain reliable results. We also used different wheat cultivars in the field and controlled-condition experiments to test the reliability of the newly developed models.

However, we agree with your concern that data acquired from experiments repeated in time and space (years and locations) may give more reliable results. Considering your suggestions, we will design our future studies with at least two years of data.

Recent studies published in Remote Sensing also used one year data for this kind of experiment.

For example:

Hussain, S.; Gao, K.; Din, M.; Gao, Y.; Shi, Z. Assessment of UAV-Onboard Multispectral Sensor for Non-Destructive Site-Specific Rapeseed Crop Phenotype Variable at Different Phenological Stages and Resolutions. Remote Sens. 2020, 12, 397, doi:https://doi.org/10.3390/rs12030397.

Brown, L.A.; Ogutu, B.O.; Dash, J. Estimating Forest Leaf Area Index and Canopy Chlorophyll Content with Sentinel-2: An Evaluation of Two Hybrid Retrieval Algorithms. Remote Sens. 2019, 11, 1752. https://doi.org/10.3390/rs11151752

[comment] Line 145: Missed subject?

Response: The sentence has been rephrased on lines 163-164 as: “However, in Exps. 2-3, the samples were collected from three different locations and the average value was then used.”

[comment] Line 200: I don’t believe that the use of the NDVI index is enough to separate vegetation from other land covers. The bibliography is rich by a combination of indices together with the NDVI, and specified thresholds to limit vegetation from other classes.

Response: NDVI has been widely used in several studies to separate canopy from soil (Baluja et al., 2012a; Bosilj et al., 2018; Corti et al., 2017; Poblete et al., 2017). The threshold values of NDVI obtained using Otsu segmentation differ depending on whether multispectral or hyperspectral data, and UAV or satellite remote sensing data, are used. It is important to note that the threshold values change during the growth stages and differ from one crop to another.

 

[comment] Lines 205 – 206: On what basis did you conclude that the integration of NDVI with the Otsu algorithm yielded efficient results in separating canopy and soil?

Response: Thanks for your question. The NDVI and Otsu segmentation method successfully separated the sample areas of the wheat canopy from the soil. The objective was to minimize the soil background effect on VI values (Baluja et al., 2012b; Corti et al., 2017).
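To illustrate the segmentation step discussed above: Otsu's method picks the NDVI threshold that maximizes the between-class variance of the pixel histogram, and pixels above the threshold are kept as canopy. This is a self-contained sketch under stated assumptions (synthetic NDVI values, pure-Python histogram Otsu), not the authors' actual Pix4D/image-processing pipeline:

```python
import random

def otsu_threshold(values, bins=256):
    """Return the threshold that maximizes between-class variance (Otsu, 1979)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    # Sum of bin-center values over all pixels.
    total_sum = sum((lo + (i + 0.5) * width) * h for i, h in enumerate(hist))
    best_t, best_var = lo, -1.0
    w_bg, sum_bg = 0, 0.0  # running weight and value-sum of the "soil" class
    for i, h in enumerate(hist):
        w_bg += h
        if w_bg == 0 or w_bg == total:
            continue
        sum_bg += (lo + (i + 0.5) * width) * h
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / (total - w_bg)
        var_between = w_bg * (total - w_bg) * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, lo + (i + 1) * width
    return best_t

# Hypothetical NDVI distribution: bare soil near 0.15, wheat canopy near 0.75.
random.seed(42)
ndvi = [random.gauss(0.15, 0.03) for _ in range(500)] + \
       [random.gauss(0.75, 0.05) for _ in range(500)]
t = otsu_threshold(ndvi)
canopy_mask = [v > t for v in ndvi]  # True = canopy pixel, False = soil
```

With two well-separated modes the threshold lands in the gap between them, which is why the method works well for canopy/soil masking when the crop is clearly greener than the background.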

 

 

References

Baluja, J., Diago, M. P., Balda, P., & Tardaguila, J. (2012a). Assessment of vineyard water status variability by thermal and multispectral imagery using an unmanned aerial vehicle (UAV). Irrig. Sci., 30, 511–522. https://doi.org/10.1007/s00271-012-0382-9

Baluja, J., Diago, M. P., Balda, P., & Tardaguila, J. (2012b). Assessment of vineyard water status variability by thermal and multispectral imagery using an unmanned aerial vehicle (UAV). Irrig. Sci., 30, 511–522. https://doi.org/10.1007/s00271-012-0382-9

Bosilj, P., Duckett, T., & Cielniak, G. (2018). Connected attribute morphology for unified vegetation segmentation and classification in precision agriculture. Comput. Ind., 98, 226–240. https://doi.org/10.1016/j.compind.2018.02.003

Corti, M., Marino, P., Cavalli, D., & Cabassi, G. (2017). Hyperspectral imaging of spinach canopy under combined water and nitrogen stress to estimate biomass, water, and nitrogen content. Biosyst. Eng., 158, 38–50. https://doi.org/10.1016/j.biosystemseng.2017.03.006

Poblete, T., Ortega-Farías, S., & Bardeen, M. (2017). Artificial Neural Network to Predict Vine Water Status Spatial Variability Using Multispectral Information Obtained from an Unmanned Aerial Vehicle (UAV). Sensors, 17(11), 2488. https://doi.org/10.3390/s17112488

 

 

Reviewer 3 Report

I believe this is an excellent study with strong contribution to agriculture and vegetation remote sensing. The application of machine learning and deep learning models for vegetation health monitoring is incredibly important. I recommend that this article be published after the authors address minor revisions. Extensive type editing is required to correct grammatical and general English language errors. A small sample of these errors is addressed below, but the entire paper requires a thorough review of grammar and general English. Thank you for conducting fascinating and applicable research.

Abstract and Introduction- Differentiate between machine learning and Deep learning. Distinguishing between the two methods would add another element of comparison. Yes, the neural networks you describe in the article are machine learning models, but they are also considered deep-learning models, which is a subset of machine learning. I believe it would add another element, and another level of comparison, to the article.

Lines 47-48- Run-on sentence needs touch up grammatically.

Lines 54-56- Are you asserting this, or proposing this is true?

Line 73-76- not a clear sentence. Redundant?  

Line 145 “an ___ of”. What is meant to go there? Area?

Line 155 Was the sUAS imagery acquired before destructive sampling each time? I am not sure what you mean by concurrence, though for accurate modeling the imagery should be gathered just before sampling. Concurrently indicates at the same time; could you be more specific?

Iine 172- could you be more specific on the model of sUAS?

Line 173- what was side-lap and front lap set to? 80%? 85%

Line 174- Mentioning the name of the sensor (Micasense Red Edge -MX) in the text would be beneficial here.

Section 2.3- Good description of the simple Radiometric calibration methods. Were ground control points used for georectification? If so, how many?

Line 179- perhaps remove “Besides.”

Section 2.4- The inclusion of brief background on the methods is very good. However, how do you then apply the pixel values to get the estimates? I believe more explanation would help the reader make the jump from your materials sections to the results. You describe fairly well for the deep learning, but there seems to be some detail missing in between. Perhaps a figure might help make the connection from what you describe in Figure 2 to section 3, the results.

Line 218- do you have some examples in the literature of SVM being applied? The same for all sections. examples may be helpful. I agree that there are many, but examples would give concrete support to your assertion.

Table 7- change “R2” to “R²” (superscript 2)

Line 450- et? Please correct

Section 4.4- you describe many advantages, and you even describe some future research that can be conducted, but you do not describe any limitations of ML in the work you share. I would love to read some insights you have from using ML on sUAS data. What limitations can you share?

Also, I think a very important advantage is missed: sUAS or UAV are a personal remote sensing device- temporal and spatial resolutions are more in the remote sensing scientists’ control than ever before. Using your prescribed methods, a RS scientist or farmer is able to generate these valuable maps on demand. Despite the complex nature of ML and DL models, is this really something farmers could accomplish?

There are several other grammatical and English-related errors. Please review thoroughly.

Author Response

Responses to Reviewer 3

[ comment] I believe this is an excellent study with strong contribution to agriculture and vegetation remote sensing. The application of machine learning and deep learning models for vegetation health monitoring is incredibly important. I recommend that this article be published after the authors address minor revisions. Extensive type editing is required to correct grammatical and general English language errors. A small sample of these errors are addressed below, but the entire paper requires a thorough review of grammar and general English. Thank you for conducting fascinating and applicable research.

Response: Thank you very much for your comments about the significance of our study. The revised manuscript has been checked by MDPI editing services to address the issue of grammatical and general English language errors. We hope that the manuscript has been improved and meets the MDPI standards after this revision.

[Comment] Abstract and Introduction- Differentiate between machine learning and Deep learning. Distinguishing between the two methods would add another element of comparison. Yes, the neural networks you describe in the article are machine learning models, but they are also considered deep-learning models, which is a subset of machine learning. I believe it would add another element, and another level of comparison, to the article.

Response: Thank you for pointing this out. We added a paragraph on lines 94-109 which defines NN and DNN as follows: Machine learning (ML) is a way to implement artificial intelligence without the need for explicit programming, while deep learning is a subset of machine learning techniques which provides ways for computers, given data, to make intuitive and intelligent decisions using a neural network stacked layer-wise. ML algorithms such as neural networks (NN) are known to learn the underlying relationship between the input and the output data (Khan et al., 2020). NNs are among the best learning algorithms, with exemplary performance in regression problems (Colombo et al., 2008; Li et al., 2018; Zha et al., 2020). The success of NN captured the attention of researchers, who explored the possibility of training NNs with several hidden layers. However, the success of the NN was limited to one or two hidden layers. For instance, the difficulty of training NNs such as the artificial neural network (ANN) with several hidden layers lies in vanishing gradients with the increment of the hidden layer number (depth of network). The introduction of the deep neural network (DNN) smoothed the issue of vanishing or exploding gradients and sped up training (Wang et al., 2020). An attractive feature of DNN is its ability to exploit new activation functions and learning algorithms. In addition, deep architectures often have an advantage over ANN architectures when dealing with complex learning problems (Khan et al., 2020).
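The vanishing-gradient argument above can be made concrete with a toy calculation: with sigmoid activations, backpropagation multiplies the gradient by the derivative σ'(z) ≤ 0.25 at every layer, so even in the best case the signal decays geometrically with depth. This is an illustrative sketch, not part of the manuscript's models:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_derivative(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def best_case_gradient(depth):
    """Upper bound on the gradient magnitude surviving `depth` sigmoid
    layers: sigma'(z) peaks at 0.25 (attained at z = 0)."""
    return sigmoid_derivative(0.0) ** depth

# The bound shrinks geometrically with depth: 0.25 at depth 1, 0.0625 at
# depth 2, and under 1e-6 by depth 10 -- which is why shallow ANNs trained
# well while deeper sigmoid networks stalled until improved activation
# functions and training schemes arrived.
```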

[Comment] Lines 47-48- Run-on sentence needs touch up grammatically.

Response: The correction regarding the grammatical error has been made as per your suggestion on lines 48-49.

[Comment] Lines 54-56- Are you asserting this, or proposing this is true?

Response: We are proposing that the UAV is a tool to capture the variability on field scale by allowing non-destructive and timely data acquisition.

[Comment] Lines 73-76- not a clear sentence. Redundant?  

Response: The sentence has been rephrased for clarity of expression on lines 74-77.

[Comment] Line 145 “an ___ of”. What is meant to go there? Area?

Response: Revised accordingly on line 162 as follows: “Areas of 0.36 m2”.

[Comment] Line 155 Was the UAS imagery acquired before destructive sampling each time? I am not sure what you mean by concurrence, though for accurate modeling the imagery should be gathered just before sampling. Concurrently indicates at the same time; could you be more specific?

Response: The destructive sampling was performed immediately after the UAV flight (30 min maximum). The information has been added on line 175 in the revised manuscript.

[Comment] Iine 172- could you be more specific on the model of sUAS?

Response: A multi-rotor UAV Spreading Wings S900 (DJI-Innovations Inc., Shenzhen, China) was used. The information has been added on line 192 in revised manuscript.

[Comment] Line 173- what was side-lap and front lap set to? 80%? 85%

Response: In this study, the flights were designed to acquire 80% forward overlap (between pictures along the same flight line) and 85% side overlap (between pictures on adjacent flight lines) (https://support.pix4d.com/hc/en-us/articles/203756125-How-to-verify-that-there-is-enough-overlap-between-the-images). It is necessary to capture a large number of overlapped images that must be mosaicked together to produce a single, accurate ortho-mosaicked image. This ensures that every part of the field is taken into account during the UAV data acquisition and processing.

[Comment] Line 174- Mentioning the name of the sensor (Micasense Red Edge -MX) in the text would be beneficial here.

Response: The name of sensor has been added in sub-section 2.3 as per your suggestion on lines 195-196.

[Comment] Section 2.3- Good description of the simple Radiometric calibration methods. Were ground control points used for georectification? If so, how many?

Response: We used 4 to 6 ground control points (GCPs): 4 GCPs in Exp. 1 and 6 GCPs in Exps. 2 and 3. The number of GCPs was determined by the size of the experimental area. Moreover, Pix4D mapper has the ability to automatically compute accurate GCPs after matching the mutual tie-point positions of the images captured by the camera, minimizing the error probability in orthomosaic image generation.

[Comment] Line 179- perhaps remove “Besides.”

Response: Thank you for your comment; we have removed the word “Besides” on line 201.

[Comment] Section 2.4- The inclusion of brief background on the methods is very good. However, how do you then apply the pixel values to get the estimates? I believe more explanation would help the reader make the jump from your materials sections to the results. You describe this fairly well for the deep learning, but there seems to be some detail missing in between. Perhaps a figure might help make the connection from what you describe in Figure 2 to section 3, the results.

Response: Considering your suggestion, a figure describing an example of a supervised machine learning model in MATLAB has been added on line 258. Additionally, a subsection 2.4.1 describing the ML regression models in MATLAB has been added on lines 232-255.
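As a rough illustration of the supervised-regression workflow that the new subsection describes (vegetation indices as predictors, a held-out set for validation), here is a minimal numpy sketch. The study itself used MATLAB's regression models; the least-squares fit and all data below are synthetic stand-ins:

```python
import numpy as np

# Synthetic stand-in: 5 hypothetical VIs (X) predicting a water-status
# response (y), with a simple hold-out split and validation RMSE.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(120, 5))
y = 1.5 * X[:, 0] - 0.8 * X[:, 2] + 0.1 * rng.normal(size=120)

split = 90                                      # 90 training / 30 test plots
Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]

A = np.column_stack([Xtr, np.ones(len(Xtr))])   # add intercept column
coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)  # fit on training data only

pred = np.column_stack([Xte, np.ones(len(Xte))]) @ coef
rmse = float(np.sqrt(np.mean((pred - yte) ** 2)))  # validation RMSE
```

The key point the figure conveys is the same as here: the model is fitted on one subset of plot-level pixel statistics and evaluated on plots it never saw.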

[Comment] Line 218- do you have some examples in the literature of SVM being applied? The same for all sections. examples may be helpful. I agree that there are many, but examples would give concrete support to your assertion.

Response: We added some examples of SVM being applied; we also included examples in the other sections on lines 277-280.
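For readers unfamiliar with SVM regression, a minimal sketch of a linear epsilon-insensitive SVR trained by subgradient descent is shown below. This is only an illustration of the technique the text cites, not the authors' MATLAB implementation, and every hyperparameter is an assumption:

```python
import numpy as np

def fit_linear_svr(X, y, C=10.0, eps=0.1, lr=0.01, epochs=3000):
    """Linear eps-SVR: minimize 0.5||w||^2 + C * mean eps-insensitive loss."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        err = y - (X @ w + b)
        out = np.abs(err) > eps                # samples outside the eps-tube
        # subgradient: regularizer pulls w to 0, tube violations pull the fit
        gw = w - C * (np.sign(err[out]) @ X[out]) / n
        gb = -C * np.sign(err[out]).sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

X = np.linspace(0, 1, 50).reshape(-1, 1)
y = 2.0 * X[:, 0] + 1.0                        # noiseless toy relation
w, b = fit_linear_svr(X, y)
max_err = float(np.max(np.abs(X @ w + b - y)))
```

The epsilon-tube is what distinguishes SVR from ordinary least squares: residuals smaller than eps cost nothing, so the fit is driven only by the points near or outside the tube (the support vectors).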

[Comment] Table 7- change R2 to R²

Response: The suggested correction has been made in Table 8 (Table 7 was renumbered to Table 8) on line 494.

[Comment] Line 450- et? Please correct

Response: The word “et” has been deleted on line 504.

[Comment] Section 4.4- you describe many advantages, and you even describe some future research that can be conducted, but you do not describe any limitations of ML in the work you share. I would love to read some insights you have from using ML on sUAS data. What limitations can you share?

Response: The major limitations of ML are the massive and inclusive/unbiased data required for training, the time the algorithms need to learn and develop enough to fulfill their purpose with accuracy, and the ability to accurately interpret the results generated by the ML algorithms. As far as UAVs are concerned, harnessing their full potential requires diligent consideration of their underlying principles and limitations.

We added some limitations to the discussion on lines 599-628, as follows: Although the use of UAVs combined with ML enabled the prediction of EWT, several limitations prevent their wider use. The multispectral camera was relatively expensive. However, these costs could be reduced by using low-cost RGB imaging instead of a multispectral camera. Sánchez-Sastre et al. (2019) successfully used RGB VIs to estimate chlorophyll content in sugar beet leaves, and the same results may be achievable for EWT. Hence, further work is needed to assess the applicability of RGB imaging for assessing EWT using RGB VIs and ML models. Regarding the main limitations of this ML method, it should be noted that an average farmer may require training on how to operate the UAV platform and process the data using the ML methods, which may be costly. This may prohibit the adoption of UAV technologies by individual farmers with only small agricultural fields, which may in turn affect the adoption of the UAV remote sensing technology reported in the literature (Tsouros et al., 2019). We propose that scientists and experts should assist farmers in the field. Crop biophysical parameters are also known to be influenced by UAV flight height, as reported by Oniga et al. (2018); further research should be undertaken to determine the influence of flight height on the retrieval of EWT. Another drawback of commercial UAV technology is the short flight time, which ranges from 20 min to 1 h, so only a very restricted area can be covered in each flight. Furthermore, UAVs cannot be used effectively on very windy or rainy days, meaning flights must be postponed. Finally, feature selection (FS) is essential for optimizing the accuracy of the model and for enhancing model interpretability; however, in the FS method used here, all 67 VIs were blindly fed into the FS algorithm to select the 5 best VIs for EWT modeling. Other FS methods need to be investigated in order to assess the effects of other input variables on EWT assessment.
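The "67 VIs in, 5 VIs out" selection step could, for instance, be a simple filter-style ranking; the sketch below uses absolute Pearson correlation with the target as the ranking criterion, which is an assumption for illustration and may differ from the FS algorithm used in the paper:

```python
import numpy as np

def top_k_by_correlation(X, y, k=5):
    """Rank candidate VIs by |Pearson r| with the target; keep the k best."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))
    return np.argsort(-np.abs(r))[:k]          # indices of the k best VIs

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 67))                  # 67 hypothetical VIs
y = 2.0 * X[:, 3] - 1.5 * X[:, 40] + 0.05 * rng.normal(size=200)
best = top_k_by_correlation(X, y)
# the two informative indices (3 and 40) should rank among the selected five
```

Wrapper methods (e.g. recursive elimination against a fitted model) weigh combinations of VIs rather than each one in isolation, which is one direction the "other FS methods" remark points to.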

 

[Comment] Also, I think a very important advantage is missed: sUAS or UAV are a personal remote sensing device- temporal and spatial resolutions are more in the remote sensing scientists’ control than ever before. Using your prescribed methods, a RS scientist or farmer is able to generate these valuable maps on demand. Despite the complex nature of ML and DL models, is this really something farmers could accomplish?

Response: Thank you for highlighting one of the very important advantages: sUAS or UAVs have emerged as personal remote sensing devices and have given remote sensing scientists more control over temporal and spatial resolutions than ever before. An RS scientist and/or a progressive farmer with a larger farm is able to generate these valuable maps on demand using the methods prescribed in this study, despite the complex nature of ML and DL models. Besides, the newly developed models might also assist policy makers and agricultural extensionists in making recommendations that can guide smallholder farmers to accomplish water management. However, further investigations with different regions and wheat cultivars are recommended to test the applicability of the newly developed models for crop water diagnosis.

 

There are several other grammatical and English-related errors. Please review thoroughly.

Response: The revised manuscript has been checked by MDPI editing service to address the issue of grammatical and general English language related errors.

References

Colombo R, Meroni M, Marchesi A, Busetto L, Rossini M, Giardino C, Panigada C, 2008. Estimation of leaf and canopy water content in poplar plantations by means of hyperspectral indices and inverse modeling. Remote Sensing of Environment. 112: 1820–1834. https://doi.org/10.1016/j.rse.2007.09.005

Khan A, Sohail A, Zahoora U, Qureshi A S, 2020. A survey of the recent architectures of deep convolutional neural networks. Artificial Intelligence Review. 53: 5455–5516. https://doi.org/10.1007/s10462-020-09825-6

Li S, Ding X, Kuang Q, Ata-Ul-Karim S T, Cheng T, Xiaojun L, Yongchao T, Zhu Y, Cao W, Cao Q, 2018. Potential of UAV-based active sensing for monitoring rice leaf nitrogen status. Frontiers in Plant Science. 9: 1834. https://doi.org/10.3389/fpls.2018.01834

Oniga V, Breaban A, Statescu F, 2018. Determining the optimum number of ground control points for obtaining high precision results based on UAS images. Proceedings. 2: 1–11. https://doi.org/10.3390/ecrs-2-05165

Sánchez-Sastre L F, Alte da Veiga N M S, Ruiz-Potosme N M, Carrión-Prieto P, Marcos-Robles J L, Navas-Gracia L M, Martín-Ramos P, 2019. Assessment of RGB vegetation indices to estimate chlorophyll content in sugar beet leaves in the final cultivation stage. AgriEngineering. 2: 128–149. https://doi.org/10.3390/agriengineering2010009

Tsouros D C, Bibi S, Sarigiannidis P G, 2019. A review on UAV-based applications for precision agriculture. Information. 10: 349. https://doi.org/10.3390/info10110349

Wang N, Wang Y, Er M J, 2020. Review on deep learning techniques for marine object recognition: architectures and algorithms. Control Engineering Practice. 1–18. https://doi.org/10.1016/j.conengprac.2020.104458

Zha H, Miao Y, Wang T, Li Y, Zhang J, Sun W, 2020. Sensing-based rice nitrogen nutrition index prediction with machine learning. Remote Sensing. 12: 1–22.

Round 2

Reviewer 2 Report

The authors took into consideration the suggested comments and modifications. I believe after the applied corrections, the paper can be published.
