Adversarial Attacks and Cyber Security: Trends and Challenges

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 30 September 2024

Special Issue Editors


Dr. Weina Niu
Guest Editor
Institute for Cyber Security, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Interests: software security; network security

Prof. Dr. Song Li
Guest Editor
State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou 310058, China
Interests: network security; system security; program analysis

Dr. Xin Liu
Guest Editor
School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
Interests: program security; threat detection; AI security

Special Issue Information

Dear Colleagues,

Security incidents, including adversarial attacks and vulnerability exploitation, present significant challenges to cyberspace security. To counter and defend against these threats, many researchers are leveraging cutting-edge technologies for automated, intelligent analysis and detection. Methods drawn from information theory, graph theory, and artificial intelligence are extensively applied in security research. Despite these advances, the emergence of adversarial attacks and new security threats still leaves many problems unresolved, such as network attack detection, threat intelligence extraction, and the analysis of malicious behavior.

Therefore, this Special Issue intends to explore new approaches and perspectives on adversarial attacks and cyber security. It will focus on (but is not limited to) the following topics:

  • Intelligent network attack detection;
  • Cyber threat intelligence and analysis;
  • System or mobile malware identification;
  • System or network attack attribution;
  • Vulnerability mining and analysis;
  • AI security and attacks;
  • Data and privacy security;
  • Detection and evasion.

Dr. Weina Niu
Prof. Dr. Song Li
Dr. Xin Liu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go directly to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • adversarial attacks
  • network security
  • system security
  • data security
  • AI security

Published Papers (2 papers)


Research

46 pages, 27802 KiB  
Article
The Noise Blowing-Up Strategy Creates High Quality High Resolution Adversarial Images against Convolutional Neural Networks
by Ali Osman Topal, Enea Mancellari, Franck Leprévost, Elmir Avdusinovic and Thomas Gillet
Appl. Sci. 2024, 14(8), 3493; https://doi.org/10.3390/app14083493 - 21 Apr 2024
Abstract
Convolutional neural networks (CNNs) serve as powerful tools in computer vision tasks, with extensive applications in daily life. However, they are susceptible to adversarial attacks. Still, attacks can be positive for at least two reasons. Firstly, revealing CNNs' vulnerabilities prompts efforts to enhance their robustness. Secondly, adversarial images can also be employed to shield privacy-sensitive information from CNN-based threat models that aim to extract such data from images. For such applications, the construction of high-resolution adversarial images is mandatory in practice. This paper first quantifies the speed, adversity, and visual-quality challenges involved in the effective construction of high-resolution adversarial images; second, it provides the operational design of a new strategy, called here the noise blowing-up strategy, which works for any attack, any scenario, any CNN, and any clean image; third, it validates the strategy via an extensive series of experiments. We performed experiments with 100 high-resolution clean images, exposing them to seven different attacks against 10 CNNs. Our method achieved an overall average success rate of 75% in the targeted scenario and 64% in the untargeted scenario. We revisited the failed cases: a slight modification of our method led to success rates larger than 98.9%. As of today, the noise blowing-up strategy is the first generic approach that successfully addresses all three challenges of speed, adversity, and visual quality, and it therefore effectively constructs high-resolution adversarial images that meet high-quality requirements.
(This article belongs to the Special Issue Adversarial Attacks and Cyber Security: Trends and Challenges)
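
To make the idea concrete, here is a minimal sketch of how a noise blowing-up step could be organized, assuming the strategy amounts to crafting adversarial noise at the CNN's native input resolution and then upscaling ("blowing up") only the noise onto the high-resolution clean image. The PGD-style inner attack, the bilinear interpolation, and all function names are illustrative assumptions made for this sketch, not the authors' exact method (the paper's strategy is designed to work with any attack):

```python
import torch
import torch.nn.functional as F

def noise_blow_up(model, hr_image, target, eps=8/255, steps=10, lr_size=224):
    """Sketch: craft adversarial noise at the CNN's input resolution,
    then upscale ("blow up") only the noise onto the high-res image.
    hr_image: (1, 3, H, W) tensor in [0, 1]; target: (1,) class index.
    The PGD-style inner loop is an illustrative stand-in attack."""
    # Work at the CNN's native (assumed square) input resolution.
    lr_clean = F.interpolate(hr_image, size=(lr_size, lr_size),
                             mode="bilinear", align_corners=False)
    noise = torch.zeros_like(lr_clean, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(lr_clean + noise), target)
        loss.backward()
        with torch.no_grad():
            # Targeted attack: step so the target class becomes more likely.
            noise -= (1.5 * eps / steps) * noise.grad.sign()
            noise.clamp_(-eps, eps)  # keep the noise imperceptibly small
        noise.grad.zero_()
    # Blow the low-resolution noise up to the original resolution and
    # add it to the untouched high-resolution clean image.
    hr_noise = F.interpolate(noise.detach(), size=hr_image.shape[-2:],
                             mode="bilinear", align_corners=False)
    return (hr_image + hr_noise).clamp(0, 1)
```

The design point this sketch highlights is that the expensive optimization runs entirely at low resolution, so speed is governed by the CNN's input size rather than the image's resolution; only a cheap interpolation ever touches the high-resolution image.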

17 pages, 3142 KiB  
Article
Segment Shards: Cross-Prompt Adversarial Attacks against the Segment Anything Model
by Shize Huang, Qianhui Fan, Zhaoxin Zhang, Xiaowen Liu, Guanqun Song and Jinzhe Qin
Appl. Sci. 2024, 14(8), 3312; https://doi.org/10.3390/app14083312 - 15 Apr 2024
Abstract
Foundation models play an increasingly pivotal role in the field of deep neural networks. Given that deep neural networks are widely used in real-world systems and are generally susceptible to adversarial attacks, securing foundation models has become a key research issue. However, research on adversarial attacks against the Segment Anything Model (SAM), a visual foundation model, is still in its infancy. In this paper, we propose the prompt batch attack (PBA), which can effectively attack SAM, making it unable to capture valid objects or even causing it to generate fake shards. Extensive experiments were conducted to compare adversarial attack performance among optimizing without prompts, optimizing all prompts, and optimizing batches of prompts as in PBA. Numerical results on multiple datasets show that the cross-prompt attack success rate (ASR) of the PBA method is on average 17.83% higher, and its overall ASR is 20.84% higher. This demonstrates that PBA possesses the best attack capability as well as the highest cross-prompt transferability. Additionally, we introduce a metric to evaluate the cross-prompt transferability of adversarial attacks, effectively fostering research on cross-prompt attacks. Our work unveils the pivotal role of the batched-prompts technique in cross-prompt adversarial attacks, marking an early exploration of this area against SAM.
(This article belongs to the Special Issue Adversarial Attacks and Cyber Security: Trends and Challenges)
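
As a rough illustration of the batched-prompts idea, the following sketch optimizes a single image perturbation against rotating batches of point prompts, so that the perturbation transfers to prompts it was not optimized on. The `seg_model` interface, the mask-suppression loss, and the batching schedule are all assumptions made for this sketch, not the paper's exact formulation:

```python
import torch

def prompt_batch_attack(seg_model, image, prompt_batches,
                        eps=8/255, alpha=2/255, steps=40):
    """PBA-style sketch: optimize one perturbation against a *batch* of
    prompts per step (rather than no prompt, or all prompts at once).
    `seg_model(image, prompts) -> mask logits` is a hypothetical
    interface standing in for SAM's real promptable-prediction API."""
    delta = torch.zeros_like(image, requires_grad=True)
    for step in range(steps):
        prompts = prompt_batches[step % len(prompt_batches)]  # cycle batches
        mask_logits = seg_model(image + delta, prompts)
        # Suppress every predicted mask: drive all mask logits toward
        # "no object", which also tends to leave fragmented shards.
        loss = mask_logits.sigmoid().mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()                 # descend on mask confidence
            delta.clamp_(-eps, eps)                            # L-inf budget
            delta.copy_((image + delta).clamp(0, 1) - image)   # keep pixels valid
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```

Cycling through prompt batches, rather than fixing one prompt, is what gives the perturbation its cross-prompt transferability in this reading of the method.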
