Article
Peer-Review Record

Enhanced CNN Classification Capability for Small Rice Disease Datasets Using Progressive WGAN-GP: Algorithms and Applications

Remote Sens. 2023, 15(7), 1789; https://doi.org/10.3390/rs15071789
by Yang Lu 1,*, Xianpeng Tao 1, Nianyin Zeng 2, Jiaojiao Du 1 and Rou Shang 3
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 25 February 2023 / Revised: 25 March 2023 / Accepted: 25 March 2023 / Published: 27 March 2023
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

Round 1

Reviewer 1 Report

This is a GAN-based paper on rice diseases. The topic is good. I have a few suggestions:

1. The abstract can be enhanced by including more results.

2. The drawbacks of GANs must be mentioned, along with why a GAN is preferred over other augmentation techniques; see the sketch after this list.

3. The latest papers from 2022 are missing from the manuscript. Some are:

A high-quality rice leaf disease image data augmentation method based on a dual GAN

A novel GCL hybrid classification model for paddy diseases.

GAN based image augmentation for increased CNN performance in Paddy leaf disease classification

4. Is there a specific reason for using four classes?

5. The first reference above uses a dual GAN; please also compare your results with the dual GAN.
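As context for point 2: the gradient penalty is what distinguishes WGAN-GP from a vanilla GAN and is the usual argument for preferring it when training data are scarce. Below is a minimal PyTorch sketch of the standard penalty term from Gulrajani et al. (2017), not the authors' implementation; `critic`, `real`, and `fake` are assumed to be the discriminator network and batches of real and generated images.

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Standard WGAN-GP term: penalises the critic when the norm of its
    gradient deviates from 1 on points interpolated between real and
    generated samples."""
    # One random interpolation coefficient per image in the batch.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    score = critic(interp)
    # Gradient of the critic's output w.r.t. the interpolated images.
    grads = torch.autograd.grad(
        outputs=score, inputs=interp,
        grad_outputs=torch.ones_like(score),
        create_graph=True)[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```

This term is added to the critic loss at each step; the resulting soft Lipschitz constraint is what gives WGAN-GP its more stable gradients compared with a standard GAN.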

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report


Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

This paper presents a deep-learning-based classification model for rice diseases, designed in particular for data-scarcity conditions. To address this issue, a novel progressive WGAN network is used to generate an artificial dataset for training the classification model. Three small-scale, publicly available datasets are used to test and validate the models. The results demonstrate the strengths of this work. However, to improve the paper, the following tests should be done:

1. Generalisability across datasets, i.e., train on one dataset and test on another (so the distributions of the training and test data differ).

2. Robustness to noise and other perturbations.

3. A thorough analysis of the strengths of the work when only a small fraction of the datasets is used (e.g., 5%, 10%, 20%, …, 90%, 100%) [e.g., https://www.mdpi.com/2072-4292/12/19/3137].

4. A comparison with data augmentation (e.g., random upsampling) and SMOTE; a sketch covering points 3 and 4 follows this list.

5. A computational-time analysis for the generation of the training data.

6. A little more justification is needed for the use of Leaky ReLU, e.g., https://ieeexplore.ieee.org/abstract/document/9455411.
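A minimal sketch of how points 3 and 4 could be run together, assuming flattened image features `X`, labels `y`, and a hypothetical `train_and_eval` helper standing in for the authors' CNN pipeline (none of these names come from the manuscript):

```python
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import RandomOverSampler, SMOTE

def fraction_ablation(X, y, train_and_eval,
                      fractions=(0.05, 0.10, 0.20, 0.50, 1.0)):
    """Train on stratified subsets of the data, with and without simple
    augmentation baselines, and collect the resulting test metrics."""
    results = {}
    for f in fractions:
        if f < 1.0:
            # Stratified subsample so class proportions are preserved.
            X_f, _, y_f, _ = train_test_split(
                X, y, train_size=f, stratify=y, random_state=0)
        else:
            X_f, y_f = X, y
        # Baselines to compare against WGAN-GP-generated samples; note
        # that SMOTE needs at least k_neighbors + 1 samples per class.
        X_up, y_up = RandomOverSampler(random_state=0).fit_resample(X_f, y_f)
        X_sm, y_sm = SMOTE(random_state=0).fit_resample(X_f, y_f)
        results[f] = {
            "no augmentation": train_and_eval(X_f, y_f),
            "random upsampling": train_and_eval(X_up, y_up),
            "SMOTE": train_and_eval(X_sm, y_sm),
        }
    return results
```

Both oversamplers only balance the classes up to the majority-class size, so on an already balanced subset they change nothing; that caveat is worth reporting alongside the results.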

On a side note, it is unclear whether the manuscript fits Remote Sensing: not a single paper from this journal is cited, and the scope also looks like a tangential match.


Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Everything is OK. However, the generalization of the model and the use of independent data are still not clear.

1. There are two works of mine that include rice diseases:

I) https://doi.org/10.1016/j.compag.2019.104948

II) https://www.mdpi.com/2073-8994/13/3/511

I think the number of classes is not enough to evaluate the performance. Please consider at least five classes, excluding the healthy data. Consider some examples that have overlapping (symmetric) features: how does the model handle them? Present some examples. For imbalanced and overlapping features, you can also investigate one class versus the other classes, as in my work (https://link.springer.com/chapter/10.1007/978-3-030-65390-3_23).

2. Consider that the amount of test data is not the same across classes. An unequal number of test samples biases the results toward certain classes, and the reported results then do not make sense. Table 2 shows that the number of test samples is not the same; see the sketch after this list.

3. Provide a table of the hyper-parameters.
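On point 2, one way to remove the bias is to hold out the same number of test images from every class before training. A minimal NumPy sketch, assuming `X` and `y` are arrays and every class has at least `n_test_per_class` samples (all names are illustrative, not from the manuscript):

```python
import numpy as np

def balanced_test_split(X, y, n_test_per_class, seed=0):
    """Hold out exactly n_test_per_class samples from each class so that
    no class dominates the reported test metrics."""
    rng = np.random.default_rng(seed)
    test_idx = []
    for c in np.unique(y):
        class_idx = np.flatnonzero(y == c)
        # Raises if a class has fewer than n_test_per_class samples.
        test_idx.extend(rng.choice(class_idx, size=n_test_per_class,
                                   replace=False))
    test_idx = np.asarray(test_idx)
    train_mask = np.ones(len(y), dtype=bool)
    train_mask[test_idx] = False
    return X[train_mask], y[train_mask], X[test_idx], y[test_idx]
```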

Comments for author File: Comments.pdf

Author Response

Thanks for your time and efforts in handling my paper.

Reviewer 3 Report

The authors have addressed most of my comments. Based on that, the revised version can be accepted.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
