Article
Peer-Review Record

Filter Pruning with Convolutional Approximation Small Model Framework

Computation 2023, 11(9), 176; https://doi.org/10.3390/computation11090176
by Monthon Intraraprasit * and Orachat Chitsobhuk *
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3:
Reviewer 4:
Submission received: 21 June 2023 / Revised: 4 August 2023 / Accepted: 16 August 2023 / Published: 5 September 2023

Round 1

Reviewer 1 Report

This paper designs a new framework for CNN compression. Given the growing size of large models, this work has strong motivation. The CASM proposed in the paper is very simple: it tries to reduce the size of some kernels in the model while keeping the accuracy close to that of the original trained model. The evaluation is conducted on several typical benchmarks, and the results are impressive.

However, I suggest the authors add some explanation in Sec. 2 or Sec. 3 to highlight the major differences from existing pruning techniques. Kernel-based compression is well known, and your method seems very similar to existing pruning works, so I hope you can explain the contribution that distinguishes this work from the prior works.

 

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

This paper proposes a pruning method based on the convolutional approximation small model, overall the paper is well presented. I recommend that the limitations of the proposed method should be discussed in the paper.

The English is good enough for understanding, although there are some grammar mistakes.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

Contributions:

This paper proposes a Convolutional Approximation Small Model (CASM) framework. My comments are given below:

 

  1. In the experiments, the performance of the original networks (without pruning) should be listed for comparison.
  2. Directly using a lightweight CNN model would be easier than the proposed method.
  3. (Page 5) As shown in Fig. 2, I think the weak filters may differ for various training sets. If so, the determination of weak filters may be unstable.
  4. (Page 3) How do you define alpha in eq.(1)?
  5. (Line 128 on page 3) Equation 1 should be revised as “equation (1)”. Please check the usage throughout this paper.
  6. (Line 106 on page 3) The abbreviation should be LGAH rather than LGAP.
  7. (Page 4) In eq. (2) a symbol should replace the objective function.
  8. (Page 6) Equation (3) is cross-correlation. 
  9. (Pages 5 to 6) The training parameters can be summarized in a table. It will improve the clarity.
  10. (Page 7) Table 1 should be discussed explicitly. 
  11. (Pages 9 and 10) The border lines should be removed for Figs. 3 and 4.
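On point 8 above: in most deep-learning frameworks, the layer called "convolution" in fact computes cross-correlation, because the kernel is slid over the input without being flipped. A minimal NumPy sketch of the distinction (an illustrative aside, not the authors' code; the 1-D signals and kernel here are invented for the example):

```python
import numpy as np

def cross_correlate(x, k):
    """Valid-mode 1-D cross-correlation: slide k over x WITHOUT flipping it."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

def convolve(x, k):
    """Valid-mode 1-D convolution: cross-correlation with a flipped kernel."""
    return cross_correlate(x, k[::-1])

x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, 0.0, -1.0])
# Cross-correlation: [1*1 + 2*0 + 3*(-1), 2*1 + 3*0 + 4*(-1)] = [-2, -2]
# Convolution (kernel flipped to [-1, 0, 1]): [2, 2]
```

For a learned kernel the two formulations are equivalent up to a flip of the weights, which is why the terminology is commonly conflated in the CNN literature.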

The quality of the English language can be further improved.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 4 Report

In general, the paper is well written.

Comparison with other approaches should be thorough.

The authors speak of "accuracy drop" but do not introduce its meaning nor the modalities of its calculation.

What are top-1 and top-5?
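For reference, top-1 and top-5 are the standard image-classification accuracy metrics: a prediction is top-1 correct when the highest-scoring class equals the true label, and top-k correct when the true label appears among the k highest-scoring classes. A minimal sketch, assuming NumPy score arrays (the scores and labels below are invented for illustration):

```python
import numpy as np

def top_k_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k highest scores.

    scores: (n_samples, n_classes) array of class scores
    labels: (n_samples,) array of true class indices
    """
    top_k = np.argsort(scores, axis=1)[:, -k:]  # k best class indices per sample
    hits = [labels[i] in top_k[i] for i in range(len(labels))]
    return float(np.mean(hits))

scores = np.array([[0.1, 0.6, 0.3],
                   [0.5, 0.2, 0.3]])
labels = np.array([1, 2])
# Sample 0: top class is 1 (correct); sample 1: top class is 0 (wrong),
# but the true class 2 is the second-best score.
```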

In general, many acronyms are used without explanation, making reading difficult.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 3 Report

The authors have improved the quality of this paper. I think it can be accepted for publication.

Reviewer 4 Report

The new version responds to the change requests.
