Machine Learning Integration with Cyber Security II

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Cybersecurity".

Deadline for manuscript submissions: closed (20 June 2023) | Viewed by 8730

Special Issue Editor


Guest Editor
Department of Computing and Cyber Security, Texas A&M University-San Antonio, San Antonio, TX 78224, USA
Interests: software engineering; software-defined networking; software testing; cyber security

Special Issue Information

Dear Colleagues,

With the continuous expansion of machine learning algorithms into security and decision-making systems, attacks against these algorithms are expanding as well. Adversarial attacks have recently been observed on social network platforms and in network traffic analysis, email spam detection, financial services, and many other areas. In this context, we invite papers on adversarial attacks and adversarial machine learning in all fields and applications, including, but not limited to:

  1. Adversarial machine learning in text and natural language processing;
  2. Adversarial machine learning in images and image processing;
  3. Adversarial attacks;
  4. Adversarial defense mechanisms;
  5. Misinformation in social networks and adversarial machine learning;
  6. Adversarial machine learning models;
  7. Social bots and trolls.
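As a deliberately simplified illustration of topic 1 above, the sketch below shows a homoglyph-style evasion against a toy keyword-based spam "detector". The keyword list, messages, and detector are all hypothetical examples for illustration, not drawn from any paper in this issue.

```python
# Toy keyword detector standing in for a text classifier; the keyword
# list and messages below are illustrative assumptions.
SPAM_KEYWORDS = {"free", "winner", "prize"}

def is_spam(text: str) -> bool:
    """Flag a message if any token matches a known spam keyword."""
    return any(token in SPAM_KEYWORDS for token in text.lower().split())

msg = "you are a winner claim your free prize"

# Adversarial rewrite: replace Latin 'i'/'e' with visually identical
# Cyrillic homoglyphs, so tokens no longer string-match the keywords.
adv = msg.replace("i", "\u0456").replace("e", "\u0435")

print(is_spam(msg))  # True  (original message is flagged)
print(is_spam(adv))  # False (homoglyph version slips through)
```

The same idea scales up against learned text classifiers: small, human-invisible input changes push a sample across the model's decision boundary.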

Dr. Izzat Alsmadi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • adversarial machine learning
  • adversarial attacks
  • social bots
  • social trolls

Published Papers (1 paper)


34 pages, 2856 KiB  
Review
Adversarial Machine Learning Attacks against Intrusion Detection Systems: A Survey on Strategies and Defense
by Afnan Alotaibi and Murad A. Rassam
Future Internet 2023, 15(2), 62; https://doi.org/10.3390/fi15020062 - 31 Jan 2023
Cited by 17 | Viewed by 7743
Abstract
Concerns about cybersecurity and attack methods have risen in the information age. Many techniques are used to detect or deter attacks, such as intrusion detection systems (IDSs), which help achieve security goals such as detecting malicious attacks before they enter the system and classifying them as malicious activities. However, IDS approaches have shortcomings in misclassifying novel attacks or adapting to emerging environments, affecting their accuracy and increasing false alarms. To solve this problem, researchers have recommended using machine learning approaches as engines for IDSs to increase their efficacy. Machine-learning techniques are supposed to automatically detect the main distinctions between normal and malicious data, even novel attacks, with high accuracy. However, carefully designed adversarial input perturbations during the training or testing phases can significantly affect their predictions and classifications. Adversarial machine learning (AML) poses many cybersecurity threats in numerous sectors that use machine-learning-based classification systems, such as deceiving an IDS into misclassifying network packets. Thus, this paper presents a survey of adversarial machine-learning strategies and defenses. It starts by highlighting various types of adversarial attacks that can affect the IDS and then presents the defense strategies to decrease or eliminate the influence of these attacks. Finally, the gaps in the existing literature and future research directions are presented.
(This article belongs to the Special Issue Machine Learning Integration with Cyber Security II)
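The evasion attack the survey describes — a small, carefully chosen input perturbation that flips a classifier's decision — can be sketched in a few lines. Below is a minimal FGSM-style example against a hand-weighted logistic "IDS" score; the weights, feature vector, and epsilon are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical trained weights of a logistic-regression IDS score.
w = np.array([1.5, -2.0, 0.7, -0.3, 1.1])
b = 0.0

def p_malicious(x):
    """Probability that feature vector x represents malicious traffic."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A flow the model confidently flags as malicious.
x = w / np.linalg.norm(w)       # score = ||w||_2, clearly positive
p_before = p_malicious(x)

# FGSM-style evasion: for a linear model the gradient of the score
# w.r.t. x is w itself, so the attacker steps against sign(w),
# bounded per-feature by eps.
eps = 0.6
x_adv = x - eps * np.sign(w)
p_after = p_malicious(x_adv)

print(p_before > 0.5, p_after < 0.5)  # the perturbation flips the label
```

For a linear model this attack is exact; for deep models the same sign-of-gradient step is an approximation, which is why the survey's defense strategies (e.g., adversarial training) target the gradient information an attacker can exploit.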
