
Adversarial Machine Learning in Sensors: Attacks, Defenses and Outlooks

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (26 May 2023) | Viewed by 2453

Special Issue Editor


Dr. Yisroel Mirsky
Guest Editor
Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
Interests: offensive AI; adversarial machine learning; deepfakes; intrusion detection

Special Issue Information

Dear Colleagues,

Machine learning has become a critical part of society, used in forecasting, autonomous vehicles, critical infrastructure, healthcare, and even surveillance. However, machine learning algorithms are vulnerable to attacks at both training and test time, which can compromise their confidentiality, integrity, and availability. Although many defences have been proposed, the issue remains largely unsolved, as even state-of-the-art defences can be evaded by attackers.
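
To make the test-time threat concrete, the minimal sketch below crafts an adversarial example with the fast gradient sign method (FGSM), one of the simplest evasion attacks; the classifier, inputs, and the 8/255 perturbation budget are illustrative assumptions, not requirements of this Special Issue.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8/255):
    """Craft an adversarial example with the fast gradient sign method.

    A single gradient step pushes each input pixel in the direction that
    most increases the classifier's loss, bounded by epsilon in the
    L-infinity norm. `model` is any differentiable classifier; `x` is a
    batch of images in [0, 1] and `y` their true labels.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the gradient, then clip back to valid pixels.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```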

In this Special Issue, we welcome papers that explore or identify vulnerabilities in machine learning, as well as papers that propose robust defences. Special attention will be given to papers that explore the issue of evadable defences and offer insight into how such defences can give rise to more resilient attackers.

Dr. Yisroel Mirsky
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • adversarial machine learning
  • sensor security
  • adversarial examples
  • cyber–physical systems
  • security of AI
  • robust sensor systems

Published Papers (1 paper)


Research

24 pages, 3811 KiB  
Article
Evaluation of GAN-Based Model for Adversarial Training
by Weimin Zhao, Qusay H. Mahmoud and Sanaa Alwidian
Sensors 2023, 23(5), 2697; https://doi.org/10.3390/s23052697 - 01 Mar 2023
Cited by 1 | Viewed by 2114
Abstract
Deep learning has been successfully utilized in many applications, but it is vulnerable to adversarial samples. To address this vulnerability, a generative adversarial network (GAN) has been used to train a robust classifier. This paper presents a novel GAN model and its implementation to defend against L∞ and L2 constraint gradient-based adversarial attacks. The proposed model is inspired by some of the related work, but it includes multiple new designs such as a dual generator architecture, four new generator input formulations, and two unique implementations with L∞ and L2 norm constraint vector outputs. The new formulations and parameter settings of GAN are proposed and evaluated to address the limitations of adversarial training and defensive GAN training strategies, such as gradient masking and training complexity. Furthermore, the training epoch parameter has been evaluated to determine its effect on the overall training results. The experimental results indicate that the optimal formulation of GAN adversarial training must utilize more gradient information from the target classifier. The results also demonstrate that GANs can overcome gradient masking and produce effective perturbations to augment the data. The model can defend against PGD L2 128/255 norm perturbation with over 60% accuracy and PGD L∞ 8/255 norm perturbation with around 45% accuracy. The results also reveal that robustness can be transferred between the constraints of the proposed model. In addition, a robustness–accuracy tradeoff was discovered, along with overfitting and the generalization capabilities of the generator and classifier. These limitations and ideas for future work are discussed.
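
For readers unfamiliar with the attack evaluated above, the following is a minimal sketch of projected gradient descent (PGD) under the L∞ 8/255 constraint reported in the abstract; the step size, step count, and random start are common defaults assumed here for illustration, not necessarily the authors' exact settings.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, epsilon=8/255, alpha=2/255, steps=10):
    """Projected gradient descent under an L-infinity constraint.

    Repeats a small signed-gradient ascent step `steps` times, projecting
    the perturbation back into the epsilon-ball around the clean input
    after every step. PGD is the standard multi-step strengthening of
    single-step attacks, and robust accuracy against it is the metric
    reported for the defence above.
    """
    # Random start inside the epsilon-ball.
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # Ascend the loss, project back into the epsilon-ball,
        # and keep the perturbed image in the valid pixel range.
        delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon)
        delta = ((x + delta).clamp(0.0, 1.0) - x).detach()
    return (x + delta).detach()
```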