Article
Peer-Review Record

Texture and Materials Image Classification Based on Wavelet Pooling Layer in CNN

Appl. Sci. 2022, 12(7), 3592; https://doi.org/10.3390/app12073592
by Juan Manuel Fortuna-Cervantes 1, Marco Tulio Ramírez-Torres 2,*, Marcela Mejía-Carlos 1, José Salomé Murguía 3,4, José Martinez-Carranza 5, Carlos Soubervielle-Montalvo 6 and César Arturo Guerra-García 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 8 March 2022 / Revised: 25 March 2022 / Accepted: 30 March 2022 / Published: 1 April 2022
(This article belongs to the Special Issue Recent Advances in Deep Learning for Image Analysis)

Round 1

Reviewer 1 Report

The authors propose a new pooling method in CNNs for classification tasks. While the method is useful, some issues still need to be addressed.

Please shorten the length of the paper as much as possible, focusing on the new knowledge and findings for the reader. Much of the content in Section 3 can be deleted, as it covers existing knowledge.

Please consider reducing the number of pooling methods, providing the strongest cases.

The novelty was not well pointed out in the introduction; please discuss the gap in this field before presenting the proposed method.

Author Response

Thank you for the valuable suggestions. Below are the comments from the reviewer, and we respond to each one.

  1. Please shorten the length of the paper as much as possible, focusing on the new knowledge and findings for the reader. Much of the content in Section 3 can be deleted, as it covers existing knowledge.

R: We agree with the reviewer that it is not necessary to describe well-known definitions. We have therefore focused only on describing the wavelet analysis used to develop the wavelet pooling method, which has reduced the length of the article.
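For readers unfamiliar with the idea, the core of wavelet pooling can be illustrated with the Haar basis: each non-overlapping 2x2 block is replaced by its approximation (LL) coefficient, halving the spatial size just like max pooling but keeping a low-pass summary instead of the maximum. The following NumPy sketch is illustrative only and is not the authors' implementation:

```python
import numpy as np

def haar_ll_pool(x):
    """2x2 Haar wavelet pooling: keep only the approximation (LL) subband.

    x: 2-D array with even dimensions. For the Haar basis, the LL
    coefficient of each non-overlapping 2x2 block [[a, b], [c, d]]
    is (a + b + c + d) / 2, so the spatial size is halved like in
    max pooling, but the output is a low-pass summary.
    """
    a = x[0::2, 0::2]
    b = x[0::2, 1::2]
    c = x[1::2, 0::2]
    d = x[1::2, 1::2]
    return (a + b + c + d) / 2.0

img = np.arange(16, dtype=float).reshape(4, 4)
pooled = haar_ll_pool(img)
print(pooled.shape)  # (2, 2)
print(pooled[0, 0])  # (0 + 1 + 4 + 5) / 2 = 5.0
```

In a CNN, this operation would be applied per channel inside a custom pooling layer; the detail subbands (LH, HL, HH) are discarded in this simplified version.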

  2. Please consider reducing the number of pooling methods, providing the strongest cases.

R: Thank you for this suggestion. We have modified the section "5.1. Model Training Results and Analysis" to describe the strongest cases in our research. However, we should clarify that this study involves all possible configurations of the proposed method, since the proposal is validated on several datasets.

  3. The novelty was not well pointed out in the introduction; please discuss the gap in this field before presenting the proposed method.

R: Thank you for pointing this out. We have revised the introduction to better describe the objective. We added sentences (page 2, paragraph 2, line 37) emphasizing that the objective of this paper is to propose a classification system with a new pooling method, called Discrete Wavelet Transform Pooling (DWTP), focused on the recognition of textures and materials in images. We also briefly mention the advantages and novelty of the proposed method, the rationale for implementing it, and the datasets with which our system is validated.

Reviewer 2 Report

The author(s) did good work classifying texture and material images. However, revising the following points may improve the article's outcome.

  1. What type of training method did the author(s) use, and why?
  2. How were the training, validation, and testing datasets divided?
  3. Evaluating the performance metrics requires showing the training and running time per algorithm.
  4. Tables 4, 5, and 6 show a high Test-loss rate, while Table 3 shows the inverse. Please explain why.
  5. The author(s) should provide a table comparing their findings against recent related ones.
  6. Does assessing the classification method(s) require ROC analysis?

I can accept the article after amendment.

Author Response

Thank you for the valuable suggestions. Below are the comments, and we respond to each one.

  1. What type of training method did the author(s) use, and why?

R: We agree with the reviewer's point of view. We have rewritten this part of the paper (page 6, paragraph 1, line 212) to describe the methodology used to train our CNN architecture with Keras.

  2. How were the training, validation, and testing datasets divided?

R: Thank you for this suggestion. We have modified the section "5.1. Model Training Results and Analysis" to describe the strongest cases in our research. However, we should clarify that this study involves all possible configurations of the proposed method, since the proposal is validated on several datasets.

  3. Evaluating the performance metrics requires showing the training and running time per algorithm.

R: Yes, because it allows us to demonstrate the contribution of the method. We analyze the learning curves to determine whether the method is overfitting or underfitting the problem, and this analysis is tied to the reported metrics; our objective is to find the best classification performance. The running time per algorithm (in epochs) also lets us visualize and diagnose whether the algorithm is predicting correctly.
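As a toy illustration of the learning-curve diagnosis described above, the following sketch (a hypothetical heuristic, not the authors' procedure) compares per-epoch training and validation losses:

```python
def diagnose(train_loss, val_loss, tol=0.1):
    """Crude learning-curve diagnosis from per-epoch loss lists.

    Overfitting: validation loss turns upward (or diverges from
    training loss) while training loss keeps falling.
    Underfitting: training loss stays high and nearly flat.
    The threshold `tol` is an arbitrary illustrative value.
    """
    gap = val_loss[-1] - train_loss[-1]
    val_rising = val_loss[-1] > min(val_loss) + tol
    if val_rising or gap > tol:
        return "overfitting"
    if train_loss[-1] > tol and abs(train_loss[0] - train_loss[-1]) < tol:
        return "underfitting"
    return "good fit"

# Validation loss climbs while training loss falls: classic overfitting.
print(diagnose([1.0, 0.5, 0.2, 0.1], [1.0, 0.6, 0.7, 0.9]))  # overfitting
```

In practice, such per-epoch losses come directly from the `history` object returned by Keras's `model.fit()`.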

  4. Tables 4, 5, and 6 show a high Test-loss rate, while Table 3 shows the inverse. Please explain why.

R: Thank you for this suggestion. We have added this point to the "6. Discussion" section in the new version, as we believe it is an essential result of our research. This section now contains a paragraph (page 14, paragraph 1, line 403) explaining why.

  5. The author(s) should provide a table comparing their findings against recent related ones.

R: In response, we believe it is necessary to compare with state-of-the-art techniques focused on texture classification in the fields of deep learning and wavelets. Hence, a comparative table has been added in section "6. Discussion," describing the results obtained for the DTD dataset.

  6. Does assessing the classification method(s) require ROC analysis?

R: That is a good question. In this case, ROC analysis is not necessary. The classes in each dataset are balanced, and each set has representative images. The metrics we evaluated also indicate the overall model performance, and the classification_report() method from the scikit-learn library reports the classification performance of each class. This classification report is supported by the multiple confusion matrices, allowing us to see how each class's predicted values correlate with the real ones.
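For illustration, the per-class view provided by scikit-learn's classification_report and confusion matrix can be mimicked with a few lines of NumPy; the labels below are hypothetical:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = true class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_recall(cm):
    """Diagonal over row sums: fraction of each class correctly recovered."""
    return cm.diagonal() / cm.sum(axis=1)

# Hypothetical true and predicted labels for a 3-class problem.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)
print(cm)
print(per_class_recall(cm))  # [0.5, 1.0, 0.5]
```

On balanced classes, this per-class recall, together with the full confusion matrix, conveys much of what a ROC analysis would add.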

Round 2

Reviewer 1 Report

The authors have addressed the reviewer's comments well, and the reviewer agrees to accept the paper.
