Article
Peer-Review Record

Improving Adversarial Robustness of CNNs via Maximum Margin

Appl. Sci. 2022, 12(15), 7927; https://doi.org/10.3390/app12157927
by Jiaping Wu, Zhaoqiang Xia and Xiaoyi Feng *
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 16 June 2022 / Revised: 2 August 2022 / Accepted: 3 August 2022 / Published: 8 August 2022
(This article belongs to the Special Issue Recent Advances in Cybersecurity and Computer Networks)

Round 1

Reviewer 1 Report

It is better to refer to any formula by its number rather than with phrases such as "as:", "below", or "above". Usually, the caption of a table appears above the table, and the caption of a figure appears below the figure.

Author Response

Dear Reviewers,

Thank you very much for the time you spent reviewing the manuscript and for your very encouraging comments on its merits.

We appreciate the detailed and constructive comments. We have carefully revised the manuscript by incorporating all the suggestions.

 

Comments: 

"It is better to refer to any formula by its number rather than with phrases such as "as:", "below", or "above". Usually, the caption of a table appears above the table, and the caption of a figure appears below the figure."

Response:

Thank you for the detailed review. We have carefully and thoroughly proofread the manuscript to correct all table captions and formula references.

 

We hope this revised manuscript has addressed your concerns, and we look forward to hearing from you.

Sincerely,

The Authors

Reviewer 2 Report

 

Summary: In this paper, the authors propose Adversarial Training with Support Vector Machine (AT-SVM) to improve standard AT by inserting an SVM auxiliary classifier to learn a larger margin. The authors select examples close to the decision boundary through the SVM auxiliary classifier and train only on these more important examples.
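The selection step summarized above can be illustrated with a minimal sketch. This is not the authors' code: the function names (`multiclass_margin`, `margin_mask`), the threshold, and the toy scores are all hypothetical stand-ins for the idea of keeping only the examples that a max-margin classifier places near the decision boundary.

```python
# Illustrative sketch (not the paper's implementation): keep only the
# examples whose max-margin score gap is small, i.e. those near the
# decision boundary, so adversarial training can focus on them.

def multiclass_margin(scores, label):
    """Margin of the correct class's score over the best wrong class."""
    correct = scores[label]
    best_other = max(s for i, s in enumerate(scores) if i != label)
    return correct - best_other

def margin_mask(batch_scores, labels, threshold):
    """1 for examples with margin below the threshold (near the boundary,
    so they are kept for training), else 0."""
    return [1 if multiclass_margin(s, y) < threshold else 0
            for s, y in zip(batch_scores, labels)]

scores = [[2.0, 0.5, 0.1],   # confidently correct -> large margin
          [1.1, 1.0, 0.2],   # near the boundary   -> small margin
          [0.3, 0.2, 1.9]]   # confidently correct -> large margin
labels = [0, 0, 2]
print(margin_mask(scores, labels, threshold=0.5))  # -> [0, 1, 0]
```

Only the second example survives the mask, since its correct-class score barely exceeds the runner-up.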

 

Comments:

1. The intuition of using the auxiliary SVM classifier to provide the mask is not well explained. If the SVM classifier can indeed solve the robustness issue, then why not directly replace the last linear layer in the network with an SVM classifier?

 

2. I appreciate the authors' extensive experiments validating the performance of the proposed method; however, the evaluation methods used in the paper seem a bit weak. Many studies have suggested that defenses can suffer from obfuscated gradients and give a false sense of security when evaluated only under the PGD attack. Therefore, I would suggest that the authors further evaluate the proposed method under stronger attack baselines such as AutoAttack (which includes a black-box attack) or RayS (a hard-label attack):

 

Croce, Francesco, and Matthias Hein. "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks." International Conference on Machine Learning (ICML), PMLR, 2020.


Chen, Jinghui, and Quanquan Gu. "RayS: A ray searching method for hard-label adversarial attack." Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), 2020.
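The evaluation principle behind the reviewer's suggestion can be sketched in a few lines. The model and the two "attacks" below are toy stand-ins, not the real AutoAttack or RayS implementations: the point is only that an input counts as robust when it withstands *every* attack in an ensemble, which guards against any single attack giving a false sense of security.

```python
# Hypothetical sketch of ensemble robustness evaluation: an example is
# robust only if no attack in the ensemble flips the model's prediction.

def robust_accuracy(model, attacks, inputs, labels):
    robust = 0
    for x, y in zip(inputs, labels):
        if all(model(attack(model, x, y)) == y for attack in attacks):
            robust += 1
    return robust / len(inputs)

# Toy 1-D threshold classifier and two toy perturbations within eps = 0.3.
model = lambda x: 0 if x < 0.5 else 1
attack_up = lambda m, x, y: x + 0.3    # push the input toward class 1
attack_down = lambda m, x, y: x - 0.3  # push the input toward class 0

inputs = [0.1, 0.4, 0.9]
labels = [0, 0, 1]
# Only the two examples far from the threshold survive both attacks.
print(robust_accuracy(model, [attack_up, attack_down], inputs, labels))
```

Under a single attack the middle example might appear robust; the ensemble reveals that it is not.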

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

The authors have proposed a novel adversarial training technique using an auxiliary Support Vector Machine classifier. The paper is interesting and well constructed. The authors formulated the research problem based on a review of related work that includes relevant and up-to-date studies. The original approach was then proposed and presented in detail. The proposed method was verified experimentally, and the experiments include a comparison of the proposed method with selected state-of-the-art approaches.

Considering the above remarks, the paper can be accepted after improving the English grammar and style.

 

Author Response

Dear Reviewers,

Thank you very much for the time you spent reviewing the manuscript and for your very encouraging comments on its merits.

We appreciate the detailed and constructive comments. We have carefully revised the manuscript by incorporating all the suggestions.

 

Comments: 

"The authors have proposed a novel adversarial training technique using an auxiliary Support Vector Machine classifier. The paper is interesting and well constructed. The authors formulated the research problem based on a review of related work that includes relevant and up-to-date studies. The original approach was then proposed and presented in detail. The proposed method was verified experimentally, and the experiments include a comparison of the proposed method with selected state-of-the-art approaches.

Considering the above remarks, the paper can be accepted after improving the English grammar and style."

Response:

Thank you for the detailed review. We have carefully and thoroughly proofread the manuscript to correct grammatical errors and typos.

We hope this revised manuscript has addressed your concerns, and we look forward to hearing from you.

Sincerely,

The Authors

Round 2

Reviewer 2 Report

I have read the authors' response, and I still have a few concerns about the experimental results.

 

First, the added AutoAttack experiment should be merged with Table 1 for a clear comparison. Based on the AutoAttack result, the improvement over standard adversarial training is actually minimal, yet methods such as TRADES (which the authors should perhaps also compare against) have been shown to be more robust than standard adversarial training on AutoAttack benchmarks. The authors need to justify the effectiveness of the proposed method.
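For context, the TRADES baseline the reviewer mentions (Zhang et al., ICML 2019) trains on a clean cross-entropy term plus a robustness term weighted by a coefficient beta. The sketch below is an illustrative single-example version of that objective, not code from the paper under review; the variable names and the toy logits are hypothetical.

```python
import math

# Hedged sketch of the TRADES objective: clean cross-entropy plus
# beta times the KL divergence between the model's predictions on
# the clean input and on the adversarial input.

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def trades_loss(clean_logits, adv_logits, label, beta):
    p = softmax(clean_logits)  # prediction on the clean input
    q = softmax(adv_logits)    # prediction on the adversarial input
    ce = -math.log(p[label])   # natural (clean) cross-entropy
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return ce + beta * kl      # accuracy/robustness trade-off via beta

loss = trades_loss([3.0, 0.5, 0.1], [1.0, 0.9, 0.8], label=0, beta=6.0)
print(round(loss, 4))
```

When the adversarial prediction matches the clean one, the KL term vanishes and the loss reduces to plain cross-entropy; larger beta trades clean accuracy for robustness.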

 

Second, I would still recommend that the authors try the RayS attack, as currently no hard-label or gradient-independent attack has been tested.

 

Therefore, I would still recommend minor revision.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
