Topic Editors

Dr. Feiran Huang
College of Cyber Security/College of Information Science and Technology, Jinan University, Guangzhou 510632, China
Dr. Shuyuan Lin
College of Information Science and Technology / College of Cyber Security, Jinan University, Guangzhou 510632, China
Dr. Xiaoming Zhang
School of Cyber Science and Technology, Beihang University, Beijing 100191, China
Dr. Yang Lu
School of Informatics, Xiamen University, Xiamen 361005, China

Adversarial Machine Learning: Theories and Applications

Abstract submission deadline
closed (31 January 2024)
Manuscript submission deadline
31 March 2024

Topic Information

Dear Colleagues,

Adversarial Machine Learning has emerged as a critical and rapidly growing research area at the intersection of machine learning, cybersecurity, and artificial intelligence. It studies the vulnerabilities of machine learning models to adversarial attacks and the defenses against them. In recent years, machine learning has achieved remarkable success in various applications, including computer vision, natural language processing, speech recognition, and autonomous systems. However, as these models are increasingly deployed in safety-critical systems, there is growing concern about their susceptibility to adversarial attacks. Adversarial attacks aim to deceive machine learning models into making incorrect predictions or decisions by introducing carefully crafted perturbations. These perturbations are often imperceptible to the human eye but can cause significant changes in model outputs. The vulnerability of machine learning models to adversarial attacks has raised fundamental questions about their robustness, reliability, and safety in real-world scenarios.

This multidisciplinary Topic aims to explore recent advancements and applications of Adversarial Machine Learning. Adversarial Machine Learning poses significant challenges in various domains, including computer vision, natural language processing, and more. Adversarial attacks can lead to severe consequences, such as misclassified images, manipulated data, or compromised model integrity. The development of intelligent defense techniques is therefore crucial to safeguarding the integrity and reliability of machine learning models in real-world applications.

We invite researchers to submit original works that shed light on the theories and practical applications of Adversarial Machine Learning. We encourage submissions that contribute novel insights, methodologies, or empirical findings to this rapidly evolving field. The topics of interest include, but are not limited to, the following:
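As a concrete illustration (not part of the call itself), the classic fast gradient sign method (FGSM) crafts such a perturbation by stepping in the direction of the loss gradient with respect to the input. The sketch below applies it to a toy logistic-regression "model" with made-up weights; all values are hypothetical placeholders:

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps):
    """Fast gradient sign method against a logistic-regression model.

    Returns x_adv = x + eps * sign(d loss / d x): a perturbation that is
    small in the L-infinity norm, yet chosen to maximally increase the
    cross-entropy loss at x.
    """
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # predicted P(y = 1)
    grad_x = (p - y) * w           # gradient of cross-entropy w.r.t. the input
    return x + eps * np.sign(grad_x)

def predict(x):
    """Model confidence P(y = 1) for input x."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Toy "trained" binary classifier (hypothetical weights).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.6, -0.4])  # clean input, confidently classified as class 1
y = 1.0

x_adv = fgsm_attack(x, y, w, b, eps=0.5)

print(predict(x))     # high confidence on the clean input
print(predict(x_adv)) # confidence collapses after a small, bounded perturbation
```

Even in this two-dimensional toy, a perturbation bounded by eps per coordinate is enough to push the model from a confident correct prediction toward the decision boundary, which is the core phenomenon the Topic addresses.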

  • Interpretable/explainable adversarial machine learning
  • Adversarial attacks in computer vision and pattern recognition
  • Adversarial challenges in natural language processing
  • Adversarial scene understanding: object segmentation, motion segmentation, and visual tracking in video/image sequences by machine learning
  • Adversarial correspondence learning: enhancing robustness in image matching
  • Adversarial robustness in deep learning
  • Embedding adversarial learning
  • Violence/anomaly detection
  • Robustness estimation or benchmarking of machine learning models
  • Privacy and security concerns in adversarial machine learning
  • Real-world applications and case studies of adversarial machine learning

Dr. Feiran Huang
Dr. Shuyuan Lin
Dr. Xiaoming Zhang
Dr. Yang Lu
Topic Editors


Keywords

  • adversarial attacks
  • machine learning
  • robust estimation
  • computer vision
  • natural language processing
  • deep learning
  • privacy preservation
  • correspondence learning

Participating Journals

Journal Name                                Impact Factor   CiteScore   Launched Year   First Decision (median)   APC
Applied Sciences                            2.7             4.5         2011            16.9 Days                 CHF 2400
Machine Learning and Knowledge Extraction   3.9             8.5         2019            19.9 Days                 CHF 1800
(journal name missing)                      2.4             3.5         2013            16.9 Days                 CHF 2600
Remote Sensing                              5.0             7.9         2009            23 Days                   CHF 2700

… is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with a preprint platform and has built a direct connection between MDPI journals and the platform. Authors are encouraged to enjoy the benefits of posting a preprint there prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (1 paper)

11 pages, 945 KiB  
Improving Adversarial Robustness via Distillation-Based Purification
Appl. Sci. 2023, 13(20), 11313; 15 Oct 2023
Viewed by 869
Despite the impressive performance of deep neural networks on many vision tasks, they are known to be vulnerable to intentionally added noise in input images. To combat these adversarial examples (AEs), improving the adversarial robustness of models has emerged as an important research topic, and research has been conducted in various directions, including adversarial training, image denoising, and adversarial purification. This paper focuses on adversarial purification, a kind of pre-processing that removes noise before AEs enter a classification model. The advantage of adversarial purification is that it can improve robustness without affecting the model's nature, whereas other defense techniques, such as adversarial training, suffer from a decrease in model accuracy. Our proposed purification framework utilizes a Convolutional Autoencoder as a base model to capture the features of images and their spatial structure. We further aim to improve the adversarial robustness of our purification model by distilling knowledge from teacher models. To this end, we train two Convolutional Autoencoders (teachers), one with adversarial training and the other with normal training. Then, through ensemble knowledge distillation, we transfer their ability to denoise and restore original images to the student model (the purification model). Our extensive experiments confirm that our student model achieves high purification performance (i.e., how accurately a pre-trained classification model classifies purified images). The ablation study confirms the positive effect of ensemble knowledge distillation from two teachers on performance.
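The ensemble-distillation objective described in the abstract can be sketched schematically. The following is an illustrative reduction, not the authors' code: the convolutional autoencoders are replaced by tiny linear maps so the sketch stays self-contained, and the student purifier is fitted to both the clean images and the averaged teacher outputs; all sizes and constants are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_linear_ae(d, seed):
    """Stand-in for a trained denoising autoencoder: a near-identity linear map."""
    r = np.random.default_rng(seed)
    return r.normal(scale=0.1, size=(d, d)) + np.eye(d)

d = 8
teacher_adv = make_linear_ae(d, 1)  # teacher trained with adversarial training
teacher_std = make_linear_ae(d, 2)  # teacher trained normally
student = np.zeros((d, d))          # purification model to be learned

x_clean = rng.normal(size=(64, d))
x_noisy = x_clean + 0.3 * rng.normal(size=x_clean.shape)  # stand-in for AEs

def distill_loss(student, x_noisy, x_clean, alpha=0.5):
    """Ensemble-distillation objective for the purifier (schematic).

    The student is pulled toward the clean images (reconstruction term)
    and toward the averaged outputs of the two teachers (distillation term).
    """
    s_out = x_noisy @ student
    t_out = 0.5 * (x_noisy @ teacher_adv + x_noisy @ teacher_std)
    rec = np.mean((s_out - x_clean) ** 2)
    kd = np.mean((s_out - t_out) ** 2)
    return (1 - alpha) * rec + alpha * kd

# A few steps of plain gradient descent on the quadratic objective.
lr = 0.1
n = x_noisy.shape[0]
for _ in range(200):
    s_out = x_noisy @ student
    t_out = 0.5 * (x_noisy @ teacher_adv + x_noisy @ teacher_std)
    # Gradient of distill_loss (alpha = 0.5) with respect to the student map.
    grad = x_noisy.T @ (0.5 * (s_out - x_clean) + 0.5 * (s_out - t_out)) * (2.0 / (n * d))
    student -= lr * grad
```

In the paper's setting, the linear maps become convolutional encoder-decoder networks and the optimization is done with a deep-learning framework, but the shape of the loss — reconstruction plus agreement with an ensemble of teachers — is the same.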
(This article belongs to the Topic Adversarial Machine Learning: Theories and Applications)
