Article
Peer-Review Record

Deep Learning YOLO-Based Solution for Grape Bunch Detection and Assessment of Biophysical Lesions

Agronomy 2023, 13(4), 1120; https://doi.org/10.3390/agronomy13041120
by Isabel Pinheiro 1,2, Germano Moreira 1,3, Daniel Queirós da Silva 1,2, Sandro Magalhães 1,4, António Valente 1,2, Paulo Moura Oliveira 2, Mário Cunha 1,3 and Filipe Santos 1,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 14 March 2023 / Revised: 7 April 2023 / Accepted: 12 April 2023 / Published: 14 April 2023
(This article belongs to the Section Precision and Digital Agriculture)

Round 1

Reviewer 1 Report

Please check the attachment.

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

This paper focuses on grape bunch detection and damage classification using transfer-learned YOLO-based deep learning approaches. Overall, the topic is of interest to a general audience, the paper reads well, the results are promising, and the models are applicable to general use. That said, there are some minor comments concerning the data and results.

First, since one of the main advantages of YOLO is its efficiency and real-time inference, how much speed do the three models gain or sacrifice compared with state-of-the-art models?

Second, although the authors showcase a few failure cases, the analysis of the models' limitations is still inadequate. For scenarios such as occlusion and densely arranged objects, which are common in practical applications but may be underrepresented in public datasets, how well do the three models cope? Is there a specific type of complexity on which any of the three models tends to fail? At what degree of occlusion or overlap does performance start to degrade? A further evaluation of the results is therefore recommended.

Moreover, how well would these models work on images taken with a different sensor, under different illumination conditions, and at different shooting distances and angles? Since most of the images came from the same phone camera, did the models overfit to this data source?

Also, regarding some of the difficulties discussed by the authors, incorporating spectral information such as NIR or red-edge bands may be helpful, and would remain practical given the wide use of NIR spectrometry. A more extended discussion of the models' applications would also be beneficial.

Finally, the colors of the bounding boxes in some figures are hard to read; a more contrasting color scheme is suggested.
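One simple way such a latency comparison could be reported is sketched below. This is only an illustrative sketch, assuming the ultralytics YOLO package; the weights file grape_bunch.pt and the image paths are hypothetical placeholders, not artifacts from the paper.

    # Minimal per-image latency benchmark sketch, assuming the "ultralytics" package.
    # "grape_bunch.pt" and the image paths are hypothetical placeholders.
    import time
    from ultralytics import YOLO

    model = YOLO("grape_bunch.pt")                      # hypothetical trained weights
    images = ["vineyard_001.jpg", "vineyard_002.jpg"]   # hypothetical test images

    # Warm-up run so model loading / device initialisation is not timed.
    model.predict(images[0], verbose=False)

    times = []
    for path in images:
        start = time.perf_counter()
        model.predict(path, verbose=False)
        times.append(time.perf_counter() - start)

    mean_s = sum(times) / len(times)
    print(f"mean latency: {mean_s * 1000:.1f} ms (~{1 / mean_s:.1f} FPS)")

Reporting mean latency (or FPS) per model on the same hardware would let readers weigh the detection accuracy of each YOLO variant against its real-time suitability.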

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
